2305.03849
Polynomial superpotential for Grassmannian $Gr(k,n)$ from a limit of vertex function
In this note we discuss an integral representation for the vertex function of the cotangent bundle over the Grassmannian, $X=T^{*} Gr(k,n)$. This integral representation can be used to compute the $\hbar\to \infty$ limit of the vertex function, where $\hbar$ denotes the equivariant parameter of a torus acting on $X$ by dilating the cotangent fibers. We show that in this limit the integral turns into the standard mirror integral representation for the $A$-series of the Grassmannian $Gr(k,n)$ with the Laurent polynomial Landau-Ginzburg superpotential of Eguchi, Hori and Xiong. We also observe some Dwork type congruences for the coefficients of the $A$-series.
Andrey Smirnov, Alexander Varchenko
2023-05-05T21:08:26Z
http://arxiv.org/abs/2305.03849v1
# Polynomial Superpotential for Grassmannian \(\operatorname{Gr}(k,n)\) from a Limit of Vertex Function

###### Abstract.

In this note we discuss an integral representation for the vertex function of the cotangent bundle over the Grassmannian, \(X=T^{*}\operatorname{Gr}(k,n)\). This integral representation can be used to compute the \(\hbar\to\infty\) limit of the vertex function, where \(\hbar\) denotes the equivariant parameter of a torus acting on \(X\) by dilating the cotangent fibers. We show that in this limit the integral turns into the standard mirror integral representation for the \(A\)-series of the Grassmannian \(\operatorname{Gr}(k,n)\) with the Laurent polynomial Landau-Ginzburg superpotential of Eguchi, Hori and Xiong. We also observe some Dwork type congruences for the coefficients of the \(A\)-series.

\({}^{*}\)E-mail: [email protected] \({}^{\diamond}\)E-mail: [email protected]

_Key words_: Superpotentials; Vertex Functions; \(J\)-functions; Landau-Ginzburg model.

_2020 Mathematics Subject Classification_: 14G33 (11D79, 32G34, 33C05, 33E30)

## 1. Introduction

Vertex functions were introduced in [1] as generating functions counting rational quasimaps to Nakajima varieties. In this respect, a vertex function is a "quasimap" analog of Givental's \(J\)-function in quantum cohomology. In this paper we consider the cohomological vertex function for the cotangent bundle over the Grassmannian \(X=T^{*}\operatorname{Gr}(k,n)\). By definition, this function is a power series in the quantum parameter \(z\) with coefficients in the equivariant cohomology: \[\mathsf{Vertex}(z)\in H_{T}^{\bullet}(X)[[z]]\] where \(T\) is a torus acting on \(X\), see Section 2. Let \(\mathsf{V}(z)\) denote the coefficient of the fundamental class in the vertex function \[\mathsf{V}(z):=\left\langle\mathsf{Vertex}(z),[X]\right\rangle\] where \(\langle-,-\rangle\) stands for the standard pairing in the equivariant cohomology. The power series \(\mathsf{V}(z)\) is the analog of the so-called \(A\)-series in quantum cohomology. The coefficients of \(\mathsf{V}(z)\) depend non-trivially on the equivariant parameter \(\hbar\), which corresponds to the torus acting on \(X\) by dilating the cotangent fibers.
In this note we describe the following result (Theorem 4.1):

**Theorem 1.1**.: _In the non-equivariant specialization one has the following limit:_ \[\lim_{\hbar\to\infty}\mathsf{V}(z/\hbar^{n})=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint e^{\frac{1}{\epsilon}S(x,z)}\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}} \tag{1.1}\] _where the integral is defined by (4.4) and \(S(x,z)\) is the polynomial superpotential (4.1)._

###### Contents

* 1 Introduction
* 2 The vertex function of \(T^{*}\operatorname{Gr}(k,n)\)
* 3 Integral representation of \(\mathsf{V}(z)\)
* 4 The limit \(\hbar\to\infty\)
* 5 Dwork congruences

**Example**.: Let us illustrate Theorem 1.1 in the simplest case \(X=T^{*}\operatorname{Gr}(1,2)\cong T^{*}\mathbb{P}^{1}\), so that \(k=1\) and \(n=2\). In the non-equivariant specialization, the coefficient of the vertex function at the fundamental class is a Gauss hypergeometric series. We denote this coefficient by \(\mathsf{V}(z)\). Explicitly we have \[\mathsf{V}(z)=\sum_{d=0}^{\infty}\,\frac{(\hbar)_{d}^{2}}{(d!)^{2}\epsilon^{2d}}\,z^{d},\quad\text{where}\quad(\hbar)_{d}=\hbar(\hbar+\epsilon)(\hbar+2\epsilon)\ldots(\hbar+(d-1)\epsilon). \tag{1.3}\] Since \(\lim\limits_{\hbar\to\infty}\,(\hbar)_{d}/\hbar^{d}=1\), we obtain: \[\lim\limits_{\hbar\to\infty}\,\mathsf{V}(z/\hbar^{2})=\sum_{d=0}^{\infty}\,\frac{z^{d}}{(d!)^{2}\epsilon^{2d}}.\] Let \(S(x,z)=x+z/x\), then \[\oint\frac{dx}{x}\,S(x,z)^{d}:=[S(x,z)^{d}]_{0}=\left\{\begin{array}{ll}\dfrac{d!}{(d/2)!\,(d/2)!}\,z^{d/2},&d\ \ \text{is even},\\ 0,&d\ \ \text{is odd},\end{array}\right.\] where \([S(x,z)^{d}]_{0}\) denotes the constant term in \(x\) of the Laurent polynomial \(S(x,z)^{d}\). Combining all these together, we can write \[\lim\limits_{\hbar\to\infty}\,\mathsf{V}(z/\hbar^{2})=\sum_{d=0}^{\infty}\,\frac{1}{d!\,\epsilon^{d}}\oint\frac{dx}{x}S(x,z)^{d}=\oint\frac{dx}{x}\,e^{\frac{S(x,z)}{\epsilon}}.\] This formula is the statement of Theorem 1.1 in this case.

A more straightforward way to compute this limit is to note that the hypergeometric function (1.3) has the integral representation: \[\mathsf{V}(z)=\oint\limits_{|x|=\varepsilon}\,\frac{dx}{x}\,\Big{(}1-x\Big{)}^{-\frac{\hbar}{\epsilon}}\Big{(}1-\frac{z}{x}\Big{)}^{-\frac{\hbar}{\epsilon}} \tag{1.4}\] where \(\varepsilon\) is any positive real number such that \(|z|<\varepsilon<1\). We note that the change of variables \(z\to z/\hbar^{2}\), \(x\to x/\hbar\), together with the change of contour \(\varepsilon\to\varepsilon/\hbar\), does not affect this condition for large \(|\hbar|\). Thus, for large \(|\hbar|\) we have: \[\mathsf{V}(z/\hbar^{2})=\oint\limits_{|x|=\varepsilon}\,\frac{dx}{x}\,\Big{(}1-\frac{x}{\hbar}\Big{)}^{-\frac{\hbar}{\epsilon}}\Big{(}1-\frac{z}{x\hbar}\Big{)}^{-\frac{\hbar}{\epsilon}}\] which allows one to compute the limit using elementary tools: \[\lim\limits_{\hbar\to\infty}\,\Big{(}1-\frac{x}{\hbar}\Big{)}^{-\frac{\hbar}{\epsilon}}\Big{(}1-\frac{z}{x\hbar}\Big{)}^{-\frac{\hbar}{\epsilon}}=e^{\frac{S(x,z)}{\epsilon}}.\]

### Exposition of the material

In Section 2 we recall a combinatorial formula for the vertex functions generalizing the series (1.3) to the case of \(X=T^{*}\operatorname{Gr}(k,n)\). In Section 3 we describe the analog of the integral representation (1.4) for this case. In Section 4 we use this integral representation to compute the \(\hbar\to\infty\) limit of \(\mathsf{V}(z)\) similarly to the example above.
In our previous paper [14], we showed that certain truncations of \(\mathsf{V}(z)\) with parameters specialized to \(\mathbb{Q}_{p}\) satisfy some Dwork type congruence relations. In Section 5 we show that a similar structure exists in the limit \(\hbar\to\infty\).

### Acknowledgements

We thank Thomas Lam for very useful comments. Work of A. Smirnov is partially supported by NSF grant DMS - 2054527 and by the RSF under grant 19-11-00062. Work of A. Varchenko is partially supported by NSF grant DMS - 1954266.

## 2. The vertex function of \(T^{*}\operatorname{Gr}(k,n)\)

For \(X=T^{*}\operatorname{Gr}(k,n)\) we consider the following explicit power series: \[\langle\mathsf{Vertex}(z),[1,\ldots,k]\rangle:=\sum_{d=0}^{\infty}\,c_{d}(u_{1},\ldots,u_{n},\hbar)\,z^{d} \tag{2.1}\] with the coefficients \(c_{d}(u_{1},\ldots,u_{n},\hbar)\in\mathbb{Q}(u_{1},\ldots,u_{n},\hbar,\epsilon)\) given by: \[c_{d}(u_{1},\ldots,u_{n},\hbar)=\sum_{d_{1},\ldots,d_{k}:\atop d_{1}+\cdots+d_{k}=d}\Big{(}\prod_{i,j=1}^{k}\frac{(\epsilon-u_{i}+u_{j})_{d_{i}-d_{j}}}{(\hbar-u_{i}+u_{j})_{d_{i}-d_{j}}}\Big{)}\Big{(}\prod_{j=1}^{n}\prod_{i=1}^{k}\,\frac{(\hbar+u_{j}-u_{i})_{d_{i}}}{(\epsilon+u_{j}-u_{i})_{d_{i}}}\Big{)}, \tag{2.2}\] where \((x)_{d}\) denotes the Pochhammer symbol with step \(\epsilon\): \[(x)_{d}=\left\{\begin{array}{rl}x(x+\epsilon)\ldots(x+(d-1)\epsilon),&d>0\\ 1,&d=0\\ \frac{1}{(x-\epsilon)(x-2\epsilon)\ldots(x+d\epsilon)},&d<0\end{array}\right.\] The degree \(d\) coefficient of this series counts (equivariantly) the number of degree \(d\) rational curves in \(X\). More precisely, it is given by the equivariant integral \[c_{d}(u_{1},\ldots,u_{n},\hbar)=\int\limits_{[\mathsf{QM}_{d}(X,\infty)]^{\mathrm{vir}}}\omega^{\mathrm{vir}} \tag{2.3}\] over the virtual fundamental class on the moduli space \(\mathsf{QM}_{d}(X,\infty)\) of quasimaps from \(\mathbb{P}^{1}\) to \(X\) which send \(\infty\in\mathbb{P}^{1}\) to a prescribed torus fixed point \([1,\ldots,k]\in X\), see Section 7.2 of [1] for definitions. Using equivariant localization, the integral (2.3) reduces to a sum over the torus fixed points on \(\mathsf{QM}_{d}(X,\infty)\), which gives the sum (2.2). We refer to Section 4.5 of [13] where this computation is done in some detail. The parameters \(u_{1},\ldots,u_{n},\hbar,\epsilon\) are the equivariant parameters of the torus \(T=(\mathbb{C}^{\times})^{n}\times\mathbb{C}^{\times}_{\hbar}\times\mathbb{C}^{\times}_{\epsilon}\) acting on the moduli space \(\mathsf{QM}_{d}(X,\infty)\) in the following way:

* \((\mathbb{C}^{\times})^{n}\) acts on \(\mathbb{C}^{n}\) in the natural way, scaling the coordinates with weights \(u_{1},\ldots,u_{n}\).
* The set of torus fixed points \(X^{(\mathbb{C}^{\times})^{n}}\) corresponds to \(k\)-subspaces in \(\mathbb{C}^{n}\) spanned by any set of \(k\) coordinate lines. The fixed point \([1,\ldots,k]\in X^{(\mathbb{C}^{\times})^{n}}\) corresponds to the \(k\)-subspace spanned by the first \(k\) coordinate lines.
* \(\mathbb{C}^{\times}_{\hbar}\) acts on \(X\) by scaling the cotangent fibers with weight \(\hbar\).
* \(\mathbb{C}^{\times}_{\epsilon}\) acts on the source of the quasimaps \(C\cong\mathbb{P}^{1}\) fixing the points \(0,\infty\in\mathbb{P}^{1}\). The parameter \(\epsilon\) denotes the corresponding weight of the tangent space \(T_{0}\,C\).
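As a sanity check on formula (2.2), the following minimal sketch (our own illustration; the function names are ours, not from the paper) evaluates the coefficients \(c_{d}\) symbolically and, for \(X=T^{*}\operatorname{Gr}(1,2)\cong T^{*}\mathbb{P}^{1}\) with \(u_{1}=u_{2}=0\), compares them with the hypergeometric coefficients \((\hbar)_{d}^{2}/((d!)^{2}\epsilon^{2d})\) of the series (1.3).

```python
# Sketch (not from the paper): check the coefficients (2.2) for
# X = T*Gr(1,2) against the hypergeometric series (1.3).
import itertools
import sympy as sp

hbar, eps = sp.symbols('hbar epsilon')

def poch(x, d):
    # Pochhammer symbol with step epsilon, as defined after (2.2)
    if d == 0:
        return sp.Integer(1)
    if d > 0:
        return sp.prod([x + m * eps for m in range(d)])
    return 1 / sp.prod([x - m * eps for m in range(1, -d + 1)])

def c_d(k, n, d, u):
    # the coefficient (2.2) of z^d in the vertex function of T*Gr(k,n)
    total = sp.Integer(0)
    for ds in itertools.product(range(d + 1), repeat=k):
        if sum(ds) != d:
            continue
        term = sp.Integer(1)
        for i, j in itertools.product(range(k), repeat=2):
            term *= poch(eps - u[i] + u[j], ds[i] - ds[j]) \
                  / poch(hbar - u[i] + u[j], ds[i] - ds[j])
        for j, i in itertools.product(range(n), range(k)):
            term *= poch(hbar + u[j] - u[i], ds[i]) \
                  / poch(eps + u[j] - u[i], ds[i])
        total += term
    return sp.cancel(total)

u = [sp.Integer(0), sp.Integer(0)]
for d in range(5):
    expected = poch(hbar, d)**2 / (sp.factorial(d) * eps**d)**2
    assert sp.simplify(c_d(1, 2, d, u) - expected) == 0
print("(2.2) matches the coefficients of (1.3) for k=1, n=2")
```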
The full vertex function is a power series with coefficients in equivariant cohomology: \[\mathsf{Vertex}(z)\in H^{\bullet}_{T}(X)_{loc}[[z]]\] where \(loc\) denotes the equivariant localization with respect to the torus \(\mathbb{C}^{\times}_{\epsilon}\). Using equivariant localization, we can expand \(\mathsf{Vertex}(z)\) in the basis of \(H^{\bullet}_{T}(X)_{loc}\) given by the classes of torus fixed points. The power series (2.1) gives the coefficient of \(\mathsf{Vertex}(z)\) at the "first" torus fixed point \([1,\ldots,k]\). Other coefficients have the same structure and can be obtained from (2.1) by permutations of the parameters \(u_{i}\).

In this paper we consider the specialization of the equivariant parameters: \[u_{1}=0,\ldots,u_{n}=0 \tag{2.4}\] which corresponds to the non-equivariant limit when the action of the torus \((\mathbb{C}^{\times})^{n}\) is "turned off". The coefficients of \(\mathsf{Vertex}(z)\) at the torus fixed points all reduce to the same function (simply because without the \((\mathbb{C}^{\times})^{n}\)-action these points are indistinguishable), which corresponds to the coefficient of the vertex function at the fundamental class: \[\mathsf{V}(z):=\left\langle\mathsf{Vertex}(z),[X]\right\rangle\Bigr{|}_{u_{1}=0,\ldots,u_{n}=0} \tag{2.5}\] Thus, \(\mathsf{V}(z)\) can be obtained by specializing the coefficients of the power series (2.1) at (2.4). We note that this specialization is non-trivial: already in the case of \(T^{*}\operatorname{Gr}(2,4)\) the terms in the sum (2.2) have poles at \(u_{i}=u_{j}\). The total sum (2.2) is, however, non-singular, since the vertex function is an integral equivariant cohomology class (we recall that only the \(\mathbb{C}^{\times}_{\epsilon}\)-localization is required to define it).

## 3. Integral representation of \(\mathsf{V}(z)\)

In this section we describe an integral representation for the function (2.5) \[\mathsf{V}(z)=\int_{\gamma}\Phi(x,z)\,dx\] which has its origin in \(3D\)-mirror symmetry; we refer to Section 3 of [15] for more details. Assume that \(n\geqslant 2k\). Let \(\mathsf{v}_{i}\), \(i=1,\ldots,n-1\), be integers defined by: \[\mathsf{v}_{i}=\left\{\begin{array}{ll}i,&i<k,\\ k,&k\leqslant i\leqslant n-k,\\ n-i,&n-k<i.\end{array}\right.\] We denote \(\omega=\hbar/\epsilon\) and define the _superpotential_ function: \[\Phi(x,z)=\Bigl{(}\prod_{i=1}^{n-1}\prod_{j=1}^{\mathsf{v}_{i}}\,x_{i,j}\Bigr{)}^{-1+\omega}\Bigl{(}\prod_{m=1}^{n-1}\,\prod_{1\leqslant i<j\leqslant\mathsf{v}_{m}}(x_{m,j}-x_{m,i})\Bigr{)}^{2\omega}\times\Bigl{(}\prod_{i=1}^{n-2}\prod_{a=1}^{\mathsf{v}_{i}}\prod_{b=1}^{\mathsf{v}_{i+1}}(x_{i,a}-x_{i+1,b})\Bigr{)}^{-\omega}\Bigl{(}\prod_{i=1}^{k}(z_{1}-x_{k,i})(z_{2}-x_{n-k,i})\Bigr{)}^{-\omega}. \tag{3.1}\] We note that this function is an example of the _master functions_ in the theory of integral representations of the trigonometric Knizhnik-Zamolodchikov equations. In particular, (3.1) corresponds to the KZ equation associated with the weight subspace of weight \([1,\ldots,1]\) in the tensor product of the \(k\)-th and \((n-k)\)-th fundamental representations of \(\mathfrak{gl}_{n}\), see [11, 12]. The dimension vector \(\mathsf{v}_{i}\) and the variables \(x_{i,j}\) have a convenient combinatorial visualization. Let us consider a \(k\times(n-k)\) rectangle rotated counterclockwise by \(45^{\circ}\), see Fig. 1. Note that in this picture the number of boxes in the \(i\)-th vertical column is exactly \(\mathsf{v}_{i}\).

Figure 1: Set of variables \(x_{i,j}\) for \(k=4\) and \(n=8\).
In this way, we may assign the variables \(x_{i,j}\) to the boxes in this picture. We will order them as in Fig. 1. Note that the total number of variables \(x_{i,j}\) equals \(\dim\operatorname{Gr}(k,n)=k(n-k)\). To a box \((i,j)\) in Fig. 1 we assign a weight \[m_{i,j}=(|i-k|+2j-1)\in\mathbb{N} \tag{3.2}\] This function ranges from \(m_{k,1}=1\) to \(m_{n-k,k}=n-1\). The definition of \(m_{i,j}\) is clear from Fig. 2. We have a partial ordering on the boxes \((i,j)\) corresponding to: \[m_{k,1}<m_{k-1,1}=m_{k+1,1}<\cdots<m_{n-k,k} \tag{3.3}\]

Figure 2: The values of the weight function \(m_{i,j}\).

For a small real number \(0<\varepsilon\ll 1\) let us define the torus given by the following equations: \[\gamma_{k,n}\subset\mathbb{C}^{k(n-k)},\quad|x_{i,j}|=m_{i,j}\varepsilon \tag{3.4}\] where \(i,j\) run through all possible values.

**Proposition 3.1** ([33]).: _Assume that \(|z_{1}|<\varepsilon\) and \((n-1)\varepsilon<|z_{2}|\). Then the superpotential (3.1) has a single-valued branch on the torus \(\gamma_{k,n}\), which is distinguished in the proof and which will be used in the paper._

Proof.: Let us denote \[L(x_{i,a},x_{j,b})=\left\{\begin{array}{ll}(1-x_{i,a}/x_{j,b})^{-\omega},&m_{i,a}<m_{j,b},\ i\neq j,\\ (x_{j,b}/x_{i,a}-1)^{-\omega},&m_{i,a}>m_{j,b},\ i\neq j.\end{array}\right. \tag{3.5}\] Each of the ratios \(x_{i,a}/x_{j,b}\), \(x_{j,b}/x_{i,a}\) appearing here, restricted to \(\gamma_{k,n}\), has absolute value less than \(1\). We replace \((1-x_{i,a}/x_{j,b})^{-\omega}\) on \(\gamma_{k,n}\) with \(\sum_{m=0}^{\infty}\binom{-\omega}{m}(-x_{i,a}/x_{j,b})^{m}\) and replace \((x_{j,b}/x_{i,a}-1)^{-\omega}\) with \(e^{-\pi\sqrt{-1}\omega}\sum_{m=0}^{\infty}\binom{-\omega}{m}(-x_{j,b}/x_{i,a})^{m}\). Next, we denote \(L(z_{1},x_{k,a})=(1-z_{1}/x_{k,a})^{-\omega}\) and \(L(z_{2},x_{n-k,a})=(1-x_{n-k,a}/z_{2})^{-\omega}\). On \(\gamma_{k,n}\) we have \(|x_{k,i}|\geqslant\varepsilon\) and \(|x_{n-k,i}|\leqslant|x_{n-k,k}|=(n-1)\varepsilon\), therefore \(|z_{1}/x_{k,i}|<1\) and \(|x_{n-k,i}/z_{2}|<1\). We replace on \(\gamma_{k,n}\) the factor \((1-z_{1}/x_{k,a})^{-\omega}\) with \(\sum_{m=0}^{\infty}\binom{-\omega}{m}(-z_{1}/x_{k,a})^{m}\) and the factor \((1-x_{n-k,a}/z_{2})^{-\omega}\) with \(\sum_{m=0}^{\infty}\binom{-\omega}{m}(-x_{n-k,a}/z_{2})^{m}\). Finally, we denote \(\Delta(x_{m,i},x_{m,j})=(1-x_{m,i}/x_{m,j})^{2\omega}\) for \(1\leqslant i<j\leqslant\mathsf{v}_{m}\). On \(\gamma_{k,n}\) we have \(|x_{m,i}/x_{m,j}|<1\). We replace on \(\gamma_{k,n}\) the factor \(\Delta(x_{m,i},x_{m,j})\) with \(\sum_{s=0}^{\infty}\binom{2\omega}{s}(-x_{m,i}/x_{m,j})^{s}\). In these notations we have: \[\Phi(x,z)=\frac{\Big{(}\prod\limits_{i=1}^{n-1}\prod\limits_{a<b}\Delta(x_{i,a},x_{i,b})\Big{)}\Big{(}\prod\limits_{i=1}^{n-2}\prod\limits_{a=1}^{\mathsf{v}_{i}}\prod\limits_{b=1}^{\mathsf{v}_{i+1}}L(x_{i,a},x_{i+1,b})\Big{)}\Big{(}\prod\limits_{i=1}^{k}L(z_{1},x_{k,i})L(z_{2},x_{n-k,i})\Big{)}}{\prod\limits_{i=1}^{n-1}\prod\limits_{j=1}^{\mathsf{v}_{i}}x_{i,j}}, \tag{3.6}\] and for each factor a single-valued branch is chosen by replacing that factor with the corresponding power series. The product of those power series distinguishes a single-valued branch of \(\Phi(x,z)\) on \(\gamma_{k,n}\).
**Example**.: For \(X=T^{*}\operatorname{Gr}(2,4)\) we have \[\Phi(x,z)=(x_{1,1}x_{2,1}x_{2,2}x_{3,1})^{-1}\times(1-x_{2,1}/x_{2,2})^{2\omega}\big{(}(x_{2,1}/x_{1,1}-1)(1-x_{2,1}/x_{3,1})(z_{1}/x_{2,1}-1)(1-x_{2,1}/z_{2})\big{)}^{-\omega}\times\big{(}(1-x_{1,1}/x_{2,2})(x_{3,1}/x_{2,2}-1)(z_{1}/x_{2,2}-1)(1-x_{2,2}/z_{2})\big{)}^{-\omega}.\]

From the previous proposition, the integral of \(\Phi(x,z)\) over \(\gamma_{k,n}\) is an analytic function of \(z=z_{1}/z_{2}\) in a disc around \(z=0\).

**Theorem 3.2** ([14]).: _The function (2.5) has the following integral representation_ \[\mathsf{V}(z)=\frac{\alpha}{(2\pi\sqrt{-1})^{k(n-k)}}\oint\limits_{\gamma_{k,n}}\Phi(x,z)\,\bigwedge_{i,j}dx_{i,j} \tag{3.7}\] _where \(\Phi(x,z)\) is the branch of the superpotential function (3.1) on the torus \(\gamma_{k,n}\) chosen in Proposition 3.1, and \(\alpha=e^{\pi\sqrt{-1}N\omega}\) is a normalization constant, where \(N\) is the number of factors in (3.6) having the form \((x_{j,b}/x_{i,a}-1)^{-\omega}\)._

**Definition 3.3**.: _Let \(\gamma^{\prime}_{k,n}\) be another contour defined by \(|x_{i,j}|=R_{i,j}\) for \(R_{i,j}\in\mathbb{R}\) such that \(|z_{1}|<R_{1,1}\), \(R_{n-k,k}<|z_{2}|\) and the conditions_ \[m_{i,j}<m_{a,b}\Longrightarrow R_{i,j}<R_{a,b} \tag{3.8}\] _are satisfied for all pairs of indices \((i,j)\) and \((a,b)\). Then we say that \(\gamma^{\prime}_{k,n}\) is homologous to \(\gamma_{k,n}\) and write \(\gamma^{\prime}_{k,n}\sim\gamma_{k,n}\)._

Note that (3.7) remains invariant if we replace \(\gamma_{k,n}\) by a homologous \(\gamma^{\prime}_{k,n}\). This is simply because the evaluation of the integral over \(\gamma^{\prime}_{k,n}\) again reduces to computing the residues at \(x_{i,j}=0\), and the residues are computed in the same order as for \(\gamma_{k,n}\). This implies that the result remains the same.

### Relation to \(3D\)-mirror symmetry

Let us explain the origin of the superpotential function (3.1). The factors of (3.1) correspond to the edges of the quiver which describes the \(3D\)-_mirror variety_ \(X^{!}\). For \(X=T^{*}\operatorname{Gr}(k,n)\) the quiver of \(X^{!}\) is given in Sections 3.2-3.3 of [14]; the correspondence between the factors of (3.1) and the edges of this quiver is also explained there. For general Nakajima quiver varieties, the superpotential function (3.1) is constructed by the same procedure if the quiver for the \(3D\)-mirror variety \(X^{!}\) is known. For the Nakajima quiver varieties of type \(A\), which include cotangent bundles over partial flag varieties as special cases, a conjectural description of the \(3D\)-mirrors was given by physicists; it is explained for instance in [10]. We expect that the results of this note and of [14] have straightforward generalizations to those cases.

The \(3D\)-mirror symmetry conjecture is formulated on the level of K-theory rather than cohomology. Recall that the quantum difference equations [13] are the K-theoretic generalizations of the quantum differential equations in quantum cohomology. The \(3D\)-mirror symmetry conjecture claims that the quantum difference equations for \(X\) and \(X^{!}\) are equivalent. The K-theoretic vertex functions of \(X\) and \(X^{!}\) provide two different bases of solutions to this common system of \(q\)-difference equations. For cotangent bundles over Grassmannians this conjecture was proved by Dinkins in [4] and for full flag varieties in [4].
For the hypertoric varieties this result is obtained in [10]. An alternative definition of \(3D\)-mirror symmetry postulates the equality of the elliptic stable envelopes [1] of \(X\) and \(X^{!}\). This idea was first proposed in [11] and later examined for various cases of \(X\) in [13, 14, 15, 16, 17]. It was shown in [11, 12] that the elliptic stable envelope of \(X\) determines the corresponding quantum difference equation of \(X\) and vice versa. This established an equivalence between the two definitions of \(3D\)-mirror symmetry.

Theorem 1.1 says that the mirror description of the \(J\)-function for \(\operatorname{Gr}(k,n)\) arises as a double limit of \(3D\)-mirror symmetry. In the first limit one considers the cohomological limit of the K-theoretic vertex functions for \(T^{*}\operatorname{Gr}(k,n)\). In this limit, the \(3D\)-mirror symmetry description of these functions [4] degenerates to the integral representation (3.7). In the second limit \(\hbar\to\infty\) we obtain Theorem 1.1.

## 4. The limit \(\hbar\to\infty\)

### Polynomial superpotential

Let \(\Gamma\) be an oriented graph with vertices given by the boxes inside the \(k\times(n-k)\) Young diagram, plus two extra vertices corresponding to \(z_{1}\) and \(z_{2}\), see Fig. 3. The edges of the graph are defined as follows: every two adjacent boxes are connected by an edge, and each edge is oriented in the direction of decrease of the weight function \(m_{i,j}\) defined by (3.2). Two additional edges go from \(x_{k,1}\) to \(z_{1}\) and from \(z_{2}\) to \(x_{n-k,k}\), see Fig. 3. Given an edge \(e\) of \(\Gamma\) we denote by \(h(e)\) and \(t(e)\) the corresponding head and tail. We define the following Laurent polynomial: \[S(x,z)=\sum_{e\in\operatorname{edges}(\Gamma)}\,\frac{x_{h(e)}}{x_{t(e)}}\,. \tag{4.1}\]

**Example**.: For \(k=1\) we obtain: \[S(x,z)=\frac{z_{1}}{x_{1,1}}+\frac{x_{1,1}}{x_{2,1}}+\frac{x_{2,1}}{x_{3,1}}+\cdots+\frac{x_{n-2,1}}{x_{n-1,1}}+\frac{x_{n-1,1}}{z_{2}}\,. \tag{4.2}\] Substituting \(z_{1}=q\), \(z_{2}=1\), and introducing new variables by \[x_{1,1}=a_{1}a_{2}\dots a_{n-1},\ \ x_{2,1}=a_{1}a_{2}\dots a_{n-2},\ \ \dots,\ \ x_{n-1,1}=a_{1}\,,\] we arrive at the standard Givental superpotential of projective space \[S(x)=a_{1}+a_{2}+\cdots+a_{n-1}+\frac{q}{a_{1}\cdots a_{n-1}}\,.\]

**Example**.: For \(X=T^{*}\operatorname{Gr}(2,4)\) we obtain \[S(x,z)=\frac{z_{1}}{x_{2,1}}+\frac{x_{2,1}}{x_{1,1}}+\frac{x_{2,1}}{x_{3,1}}+\frac{x_{3,1}}{x_{2,2}}+\frac{x_{1,1}}{x_{2,2}}+\frac{x_{2,2}}{z_{2}}\,. \tag{4.3}\]

### Exponential integral

For \(S(x,z)\) defined by (4.1) we consider the power series: \[\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint e^{\frac{1}{\epsilon}S(x,z)}\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}}\ \ :=\ \ \sum_{d=0}^{\infty}\,\frac{\big{[}S(x,z)^{d}\big{]}_{0}}{d!\,\epsilon^{d}} \tag{4.4}\] where \(\big{[}S(x,z)^{d}\big{]}_{0}\) denotes the constant term of the Laurent polynomial \(S(x,z)^{d}\) in the variables \(x=(x_{i,j})\). We assume that \([S(x,z)^{0}]_{0}=1\). From the structure of the superpotential (4.1) it is easy to see that \(\left[S(x,z)^{d}\right]_{0}\) is a monomial in \(z=z_{1}/z_{2}\), and thus (4.4) is a power series in \(z\).

**Example**.: For \(X=T^{*}\mathbb{P}^{n-1}\) the superpotential is given by (4.2). In this case, an elementary computation shows that \(\left[S(x,z)^{d}\right]_{0}\) is non-vanishing only if the degree \(d\) is of the form \(d=nm\) for some \(m\in\mathbb{N}\).
In this case we have \[\left[S(x,z)^{mn}\right]_{0}=\frac{(nm)!}{(m!)^{n}}\,(z_{1}/z_{2})^{m}\,.\] We thus conclude that \[\oint e^{\frac{1}{\epsilon}S(x,z)}\,\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}}=\sum_{m=0}^{\infty}\,\frac{z^{m}}{(m!)^{n}\epsilon^{mn}}\,.\]

**Example**.: For \(X=T^{*}\operatorname{Gr}(2,4)\) the superpotential is given by (4.3). In this case \(\left[S(x,z)^{d}\right]_{0}\) is non-vanishing only if \(d=4m\), in which case \[\left[S(x,z)^{4m}\right]_{0}=\frac{(2m)!\,(4m)!}{(m!)^{6}}\,z^{m}\,.\] Thus, we obtain: \[\oint e^{\frac{1}{\epsilon}S(x,z)}\,\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}}=\sum_{m=0}^{\infty}\,\frac{(2m)!}{(m!)^{6}\epsilon^{4m}}\,z^{m}.\]

### The vertex function in the \(\hbar\to\infty\) limit

**Theorem 4.1**.: _Let \(\mathsf{V}(z)\) be the function (2.5). Then:_ \[\lim_{\hbar\to\infty}\,\mathsf{V}(z/\hbar^{n})=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint e^{\frac{1}{\epsilon}S(x,z)}\,\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}}\] _where the integral is defined by (4.4) and \(S(x,z)\) is the polynomial superpotential (4.1)._

Proof.: By Theorem 3.2 we have: \[\mathsf{V}(z)=\frac{\alpha}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma_{k,n}}\,\Phi(x,z)\,\bigwedge_{i,j}dx_{i,j}\] where the contour \(\gamma_{k,n}\) is defined by (3.4) and \(\Phi(x,z)\) is the branch of the superpotential (3.6) distinguished by Proposition 3.1. It will be convenient to define: \[\tilde{L}(x_{i,a},x_{j,b})=\left\{\begin{array}{ll}(1-x_{i,a}/x_{j,b})^{-\omega},&m_{i,a}<m_{j,b},\\ (1-x_{j,b}/x_{i,a})^{-\omega},&m_{i,a}>m_{j,b},\end{array}\right. \tag{4.5}\] which differs from (3.5) by a factor: \[\tilde{L}(x_{i,a},x_{j,b})=\left\{\begin{array}{ll}L(x_{i,a},x_{j,b}),&m_{i,a}<m_{j,b},\\ e^{-\pi\sqrt{-1}\hbar/\epsilon}L(x_{i,a},x_{j,b}),&m_{i,a}>m_{j,b}.\end{array}\right.\] Recall that \(\alpha=e^{\pi\sqrt{-1}N\hbar/\epsilon}\) where \(N\) is the total number of factors in \(\Phi(x,z)\) for which \(\tilde{L}(x_{i,a},x_{j,b})/L(x_{i,a},x_{j,b})=e^{-\pi\sqrt{-1}\hbar/\epsilon}\). Thus, in these notations we have \[\mathsf{V}(z)=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma_{k,n}}\,\tilde{\Phi}(x,z)\,\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}} \tag{4.6}\] where \[\tilde{\Phi}(x,z)=\Big{(}\prod_{i=1}^{n-1}\prod_{a<b}\Delta(x_{i,a},x_{i,b})\Big{)}\Big{(}\prod_{i=1}^{n-2}\prod_{a=1}^{\mathsf{v}_{i}}\prod_{b=1}^{\mathsf{v}_{i+1}}\tilde{L}(x_{i,a},x_{i+1,b})\Big{)}\Big{(}\prod_{i=1}^{k}\tilde{L}(z_{1},x_{k,i})\tilde{L}(z_{2},x_{n-k,i})\Big{)}.\] In this integral we rescale the variables by \(z_{1}\to z_{1}\), \(z_{2}\to z_{2}\hbar^{n}\); since in our notation \(z=z_{1}/z_{2}\), this is equivalent to the substitution \(z\to z/\hbar^{n}\) in the left-hand side of (4.6). Let \(\gamma^{\prime}_{k,n}(\hbar)\) be the contour defined by the conditions \[|x_{i,j}|=m_{i,j}\varepsilon|\hbar|^{m_{i,j}}\,.\] Assuming that \(|\hbar|>1\) we have \[m_{i,j}<m_{a,b}\quad\Longrightarrow\quad m_{i,j}\varepsilon|\hbar|^{m_{i,j}}<m_{a,b}\varepsilon|\hbar|^{m_{a,b}}\] for all pairs \((i,j)\) and \((a,b)\). By the assumption of Proposition 3.1, we have \(|z_{1}|<|x_{1,1}|\) and \(|x_{n-k,k}|<|z_{2}\hbar^{n}|\) on \(\gamma^{\prime}_{k,n}(\hbar)\). Therefore \(\gamma^{\prime}_{k,n}(\hbar)\sim\gamma_{k,n}\) in the sense of Definition 3.3.
Thus \[\mathsf{V}(z/\hbar^{n})=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma^{\prime}_{k,n}(\hbar)}\tilde{\Phi}(x,z)\,\bigwedge_{i,j}\frac{dx_{i,j}}{x_{i,j}} \tag{4.7}\] Now, in this integral we change the variables of integration by \(x_{i,j}=y_{i,j}\hbar^{m_{i,j}}\). The contour \(\gamma^{\prime}_{k,n}(\hbar)\) in the variables \(y_{i,j}\) is given by \(|y_{i,j}|=m_{i,j}\varepsilon\), i.e., in the coordinates \(y_{i,j}\) we integrate over the original contour \(\gamma_{k,n}\). Overall we obtain: \[\mathsf{V}(z/\hbar^{n})=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma_{k,n}}\tilde{\Phi}(y,z)\,\bigwedge_{i,j}\frac{dy_{i,j}}{y_{i,j}} \tag{4.8}\] where \[\tilde{\Phi}(y,z)=\Big{(}\prod_{i=1}^{n-1}\prod_{a<b}\Delta(y_{i,a}\hbar^{m_{i,a}},y_{i,b}\hbar^{m_{i,b}})\Big{)}\Big{(}\prod_{i=1}^{n-2}\prod_{a=1}^{\mathsf{v}_{i}}\prod_{b=1}^{\mathsf{v}_{i+1}}\tilde{L}(y_{i,a}\hbar^{m_{i,a}},y_{i+1,b}\hbar^{m_{i+1,b}})\Big{)}\times\Big{(}\prod_{i=1}^{k}\tilde{L}(z_{1},y_{k,i}\hbar^{m_{k,i}})\tilde{L}(z_{2}\hbar^{n},y_{n-k,i}\hbar^{m_{n-k,i}})\Big{)}. \tag{4.9}\] We have: \[\tilde{L}(y_{i,a}\hbar^{m_{i,a}},y_{i+1,b}\hbar^{m_{i+1,b}})=\left\{\begin{array}{ll}\Big{(}1-(y_{i,a}/y_{i+1,b})/\hbar^{m_{i+1,b}-m_{i,a}}\Big{)}^{-\hbar/\epsilon},&m_{i,a}<m_{i+1,b},\\ \Big{(}1-(y_{i+1,b}/y_{i,a})/\hbar^{m_{i,a}-m_{i+1,b}}\Big{)}^{-\hbar/\epsilon},&m_{i,a}>m_{i+1,b}.\end{array}\right.\] Note that the powers of \(\hbar\) appearing in these factors are positive integers. Thus, we compute \[\lim_{\hbar\to\infty}\,\tilde{L}(y_{i,a}\hbar^{m_{i,a}},y_{i+1,b}\hbar^{m_{i+1,b}})=\left\{\begin{array}{ll}e^{\frac{1}{\epsilon}\frac{y_{i,a}}{y_{i+1,b}}},&m_{i,a}=m_{i+1,b}-1,\\ e^{\frac{1}{\epsilon}\frac{y_{i+1,b}}{y_{i,a}}},&m_{i,a}=m_{i+1,b}+1,\\ 1,&\text{otherwise}.\end{array}\right.\] We also have \[\tilde{L}(z_{1},y_{k,i}\hbar^{m_{k,i}})=\Big{(}1-(z_{1}/y_{k,i})/\hbar^{m_{k,i}}\Big{)}^{-\hbar/\epsilon},\qquad\tilde{L}(z_{2}\hbar^{n},y_{n-k,i}\hbar^{m_{n-k,i}})=\Big{(}1-y_{n-k,i}\hbar^{m_{n-k,i}-n}/z_{2}\Big{)}^{-\hbar/\epsilon},\] with \(m_{k,i}=2i-1\) and \(m_{n-k,i}=n-2k+2i-1\), therefore \[\lim_{\hbar\to\infty}\,\tilde{L}(z_{1},y_{k,i}\hbar^{m_{k,i}})=\left\{\begin{array}{ll}e^{\frac{1}{\epsilon}\frac{z_{1}}{y_{k,1}}},&i=1,\\ 1,&i\neq 1,\end{array}\right.\] and \[\lim_{\hbar\to\infty}\,\tilde{L}(z_{2}\hbar^{n},y_{n-k,i}\hbar^{m_{n-k,i}})=\left\{\begin{array}{ll}e^{\frac{1}{\epsilon}\frac{y_{n-k,k}}{z_{2}}},&i=k,\\ 1,&i\neq k.\end{array}\right.\] Finally, \[\Delta(y_{i,a}\hbar^{m_{i,a}},y_{i,b}\hbar^{m_{i,b}})=(1-(y_{i,a}/y_{i,b})/\hbar^{m_{i,b}-m_{i,a}})^{2\hbar/\epsilon},\] and since \(m_{i,b}-m_{i,a}\geqslant 2\) for \(b>a\) we have \[\lim_{\hbar\to\infty}\,\Delta(y_{i,a}\hbar^{m_{i,a}},y_{i,b}\hbar^{m_{i,b}})=1.\] In summary, the limit \(\hbar\to\infty\) of a factor in (4.9) is non-trivial only if the factor corresponds to an edge of the graph \(\Gamma\), and we obtain: \[\lim_{\hbar\to\infty}\,\tilde{\Phi}(y,z)=\prod_{e\in\mathrm{edges}(\Gamma)}e^{\frac{1}{\epsilon}\,\frac{y_{h(e)}}{y_{t(e)}}}=e^{\frac{S(y,z)}{\epsilon}} \tag{4.10}\] Finally, the point-wise limit (4.10) is uniform on the compact set \(\gamma_{k,n}\), so the limit commutes with the integration, and from (4.8) we obtain \[\lim_{\hbar\to\infty}\mathsf{V}(z/\hbar^{n})=\lim_{\hbar\to\infty}\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma_{k,n}}\tilde{\Phi}(y,z)\,\bigwedge_{i,j}\frac{dy_{i,j}}{y_{i,j}}=\frac{1}{(2\pi\sqrt{-1})^{k(n-k)}}\oint_{\gamma_{k,n}}e^{\frac{S(y,z)}{\epsilon}}\,\bigwedge_{i,j}\frac{dy_{i,j}}{y_{i,j}}\,,\]
which completes the proof.

## 5. Dwork congruences

Let \(p\) be a prime number. Consider the power series \[\mathsf{F}(z)=\sum_{d=0}^{\infty}\,\left[S(x,z)^{d}\right]_{0}\in\mathbb{Z}[[z]]\,. \tag{5.1}\] Note that the coefficients of this series differ from the coefficients of the \(A\)-series in (4.4) by a factor \(d!\,\epsilon^{d}\), i.e., these two series are related by the Borel integral transform. Let us consider a system of polynomial truncations of the function \(\mathsf{F}(z)\): \[\mathsf{F}_{s}(z)=\sum_{d=0}^{p^{s}-1}\,\left[S(x,z)^{d}\right]_{0}\in\mathbb{Z}[z],\qquad s=0,1,2,\ldots\,,\] where as before we assume that \([S(x,z)^{0}]_{0}=1\).

**Theorem 5.1**.: _We have the following congruences:_ \[\frac{\mathsf{F}(z)}{\mathsf{F}(z^{p})}\equiv\frac{\mathsf{F}_{s}(z)}{\mathsf{F}_{s-1}(z^{p})}\quad\mathrm{mod}\ p^{s}\,,\qquad s=1,2,\ldots\,.\] _In particular, the polynomials \(\mathsf{F}_{s}(z)\) satisfy the Dwork type congruences:_ \[\frac{\mathsf{F}_{s+1}(z)}{\mathsf{F}_{s}(z^{p})}\equiv\frac{\mathsf{F}_{s}(z)}{\mathsf{F}_{s-1}(z^{p})}\quad\mathrm{mod}\ p^{s},\qquad s=1,2,\ldots\,.\]

Proof.: The proof of these congruences is based on the properties of the Newton polytope \[\Delta(k,n)=\mathsf{N}(S(x,z))\subset\mathbb{R}^{k(n-k)}\] of the Laurent polynomial (4.1) in the variables \(x=(x_{i,j})\). Let \(f_{i,j}\) with \(i=1,\ldots,k\), \(j=1,\ldots,n-k\) denote the standard basis in \(\mathbb{R}^{k(n-k)}\). The vectors \(f_{i,j}\) correspond to the boxes of the \(k\times(n-k)\)-diagram in Fig. 3. From (4.1), we see that \(\Delta(k,n)\) is the convex hull of the vectors: \[f_{1,1},\ \ f_{i,j+1}-f_{i-1,j+1},\ \ i=2,\ldots,k,\ \ j=0,\ldots,n-k-1,\] \[-f_{k,n-k},\ \ f_{i,j+1}-f_{i,j},\ i=1,\ldots,k,\ \ j=1,\ldots,n-k-1.\] This polytope has been considered in many publications; in particular, it is known to be _reflexive_, see Theorem 3.1.3 in [1]. We recall that the origin \((0,\ldots,0)\) is the only interior integral point of a reflexive polytope, see for instance [13] for an overview. Now the proof of Theorem 5.1 follows from Theorem 1.1 in [13], after simple modifications. Let \(S(x,1)\) denote the superpotential (4.1) with \(z_{1}=z_{2}=1\). Clearly, this Laurent polynomial has the same Newton polytope \(\mathsf{N}(S(x,1))=\mathsf{N}(S(x,z))=\Delta(k,n)\). Consider \[\mathsf{M}(\xi)=\sum_{d=0}^{\infty}\,[S(x,1)^{d}]_{0}\,\xi^{d},\qquad\mathsf{M}_{s}(\xi)=\sum_{d=0}^{p^{s}-1}\,[S(x,1)^{d}]_{0}\,\xi^{d}.\] Since \((0,\ldots,0)\) is the only interior integral point of \(\mathsf{N}(S(x,1))\), we may apply Theorem 1.1 in [13] and conclude that these functions satisfy the congruences: \[\frac{\mathsf{M}(\xi)}{\mathsf{M}(\xi^{p})}\equiv\frac{\mathsf{M}_{s}(\xi)}{\mathsf{M}_{s-1}(\xi^{p})}\,,\qquad\frac{\mathsf{M}_{s+1}(\xi)}{\mathsf{M}_{s}(\xi^{p})}\equiv\frac{\mathsf{M}_{s}(\xi)}{\mathsf{M}_{s-1}(\xi^{p})}\quad\text{mod }p^{s}\,,\qquad s=1,2,\ldots\,. \tag{5.2}\] From the structure of the superpotential \(S(x,z)\) it is clear that \[[S(x,z)^{d}]_{0}=[S(x,1)^{d}]_{0}\,z^{\frac{d}{n}}\,.\] In particular, this coefficient equals zero unless \(n\) divides \(d\). From this we find that \[\mathsf{F}(z)=\sum_{d=0}^{\infty}\,\left[S(x,1)^{d}\right]_{0}z^{\frac{d}{n}}=\mathsf{M}(z^{\frac{1}{n}})\] and similarly \(\mathsf{F}_{s}(z)=\mathsf{M}_{s}(z^{\frac{1}{n}})\). Now Theorem 5.1 follows from (5.2) after the substitution \(\xi\to z^{\frac{1}{n}}\).
For further discussion of Dwork type congruences for vertex functions and solutions of KZ equations we refer to [11, 12, 13, 14]. **Remark**.: Theorem 5.1 implies an infinite factorization: \[\mathsf{F}(z)=\prod_{i=0}^{\infty}\,\frac{\mathsf{F}_{s}(z^{p^{i}})}{\mathsf{ F}_{s-1}(z^{p^{i+1}})}\mod p^{s},\] cf. Theorem 5.3 in [11].
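The congruences above are straightforward to probe numerically. The sketch below (our own illustration, not code from the paper) brute-forces the constant term \([S(x,z)^{4}]_{0}\) for \(X=T^{*}\operatorname{Gr}(2,4)\) from (4.3), checks it against the closed form \((2m)!(4m)!/(m!)^{6}\) of Section 4 at \(m=1\), and then tests the \(s=1\) case of Theorem 5.1, i.e., \(\mathsf{F}(z)\equiv\mathsf{F}_{1}(z)\,\mathsf{F}(z^{p})\) mod \(p\), coefficient by coefficient.

```python
# Sketch (ours): verify [S^4]_0 for T*Gr(2,4) and the s=1 Dwork congruence.
from math import factorial
from itertools import product

# terms of S(x,z) in (4.3) as exponent vectors over (x11, x21, x22, x31):
terms = [
    (0, -1, 0, 0),   # z1/x21
    (-1, 1, 0, 0),   # x21/x11
    (0, 1, 0, -1),   # x21/x31
    (0, 0, -1, 1),   # x31/x22
    (1, 0, -1, 0),   # x11/x22
    (0, 0, 1, 0),    # x22/z2
]

def const_term(d):
    # [S(x,z)^d]_0 as an integer: count degree-d products of terms
    # whose x-exponents cancel (the z-monomial is then z^(d/4))
    count = 0
    for combo in product(range(len(terms)), repeat=d):
        if all(sum(terms[t][v] for t in combo) == 0 for v in range(4)):
            count += 1
    return count

a = lambda m: factorial(2 * m) * factorial(4 * m) // factorial(m) ** 6
assert const_term(4) == a(1) == 48   # coefficient of z in F(z)

# Dwork congruence for s = 1: F(z) = F_1(z) * F(z^p) mod p.
# With p = 5, F_1 truncates at d <= 4, i.e. F_1(z) = 1 + 48 z.
p, M = 5, 12
F = [a(m) for m in range(M + 1)]            # F(z) = sum a_m z^m
F1 = [a(m) if 4 * m < p else 0 for m in range(M + 1)]
rhs = [sum(F1[r] * F[i] for r in range(M + 1) for i in range(M + 1)
           if r + p * i == j) for j in range(M + 1)]
assert all((F[j] - rhs[j]) % p == 0 for j in range(M + 1))
print("Dwork congruence F(z) = F_1(z) F(z^p) mod", p, "holds up to z^", M)
```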
2307.15852
Dimensionless Policies based on the Buckingham $π$ Theorem: Is This a Good Way to Generalize Numerical Results?
The answer to the question posed in the title is yes if the context (the list of variables defining the motion control problem) is dimensionally similar. This article explores the use of the Buckingham $\pi$ theorem as a tool to encode the control policies of physical systems into a more generic form of knowledge that can be reused in various situations. This approach can be interpreted as enforcing invariance to the scaling of the fundamental units in an algorithm learning a control policy. First, we show, by restating the solution to a motion control problem using dimensionless variables, that (1) the policy mapping involves a reduced number of parameters and (2) control policies generated numerically for a specific system can be transferred exactly to a subset of dimensionally similar systems by scaling the input and output variables appropriately. Those two generic theoretical results are then demonstrated, with numerically generated optimal controllers, for the classic motion control problem of swinging up a torque-limited inverted pendulum and positioning a vehicle in slippery conditions. We also discuss the concept of regime, a region in the space of context variables, that can help to relax the similarity condition. Furthermore, we discuss how applying dimensional scaling of the input and output of a context-specific black-box policy is equivalent to substituting new system parameters in an analytical equation under some conditions, using a linear quadratic regulator (LQR) and a computed torque controller as examples. It remains to be seen how practical this approach can be to generalize policies for more complex high-dimensional problems, but the early results show that it is a promising transfer learning tool for numerical approaches like dynamic programming and reinforcement learning.
Alexandre Girard
2023-07-29T00:51:26Z
http://arxiv.org/abs/2307.15852v2
# Dimensionless Policies based on the Buckingham \(\pi\) Theorem: Is This a Good Way to Generalize Numerical Results?

###### Abstract

Yes, if the context (the list of variables defining the motion control problem) is dimensionally similar. Here we show that by modifying the problem formulation using dimensionless variables, we can reuse the optimal control law generated numerically for a specific system on a subspace of dimensionally similar systems. This is demonstrated, with numerically generated optimal controllers, for the classic motion control problem of swinging up a torque-limited inverted pendulum. We also discuss the concept of regime, a region in the space of context variables, that can help relax the condition on dimensional similarity. Furthermore, we discuss how applying dimensional scaling to the input and output of a context-specific policy is equivalent to substituting the new system parameters in an analytical equation for dimensionally similar systems. It remains to be seen if this approach can also help generalize policies for more complex high-dimensional problems.

## I Introduction

The state-of-the-art toolbox of control engineers and roboticists includes many numerical algorithms and data-driven schemes. Many control approaches now include a type of mathematical optimization that has no closed-form solution and is thus solved numerically [1]. Also, reinforcement learning (RL), which can be seen as data-driven offline optimization, is becoming a viable and very promising option for solving some motion control problems [2]. All in all, numerical tools are very useful and have been used to solve many hard control problems. However, they have a major drawback compared to simpler analytical approaches: the parameters of the problem do not appear explicitly in the solutions, which makes it much harder to generalize and reuse them.

Analytical solutions to control problems have the useful property of allowing the solution to be adjusted to different system parameters by simply substituting the new values in the equation. Most numerical solutions, including RL, usually act as black boxes with respect to the parameters. For instance, if we use a reinforcement learning approach to find a good feedback law for a given task with a robot, the solution will be specific to this system. If the RL-generated feedback law is transferred to a robot with a longer arm, there is a good chance it will not behave as intended. With an analytical feedback-law solution, we would simply update the value of the length variable in the equation to adjust it, while with an RL solution we would have to redo all the training, generally implying multiple hours of data collection and/or computation. It would be a great asset to have the ability to adjust black-box numerical solutions with respect to some problem parameters.

In this paper, we explore the concept of dimensionless policies, a more generic form of knowledge, as a means to generalize numerical solutions to motion control problems. First, in section II, we use dimensional analysis (i.e., the Buckingham \(\pi\) theorem) to show that motion control problems with dimensionally similar context variables must share the same feedback law solution when expressed in a dimensionless form, and discuss the implications. Then, in section III, this is demonstrated for the classical motion control problem of swinging up an inverted pendulum, using numerically generated optimal feedback laws with a dynamic programming algorithm.
Also, in section IV, we illustrate with two examples that the proposed dimensional scaling is equivalent to changing parameters in an analytical solution.

A very promising application of the concept of dimensionless policies is conducting reinforcement learning with various systems sharing a database. The idea of leveraging dimensionless numbers has recently attracted research interest [3], and here in this paper we investigate this idea in the context of learning generic policies. Data efficiency is critical for reinforcement learning, and the ability to use data from slightly different systems could help in this regard [4][5][6]. For instance, it would be interesting to use the data of all the vehicles on the road, even if they are of various dimensions and dynamic characteristics, to learn appropriate manoeuvres in situations that happen very rarely. We think the presented concept of dimensionless policy is potentially a key enabler for this pooling of experience and skills.

Fig. 1: Shared dimensionless policy for various systems

## II Dimensionless Policies

In the following section, we develop the concept of dimensionless policies based on the Buckingham \(\pi\) theorem and show that multiple motion control problems will have the same policy solution when restated in a dimensionless form, if they have a dimensionally similar context.

### _Context variables in the policy mapping_

Here, we will call a feedback law a mapping, noted \(f\) and specific to a given system, from a vector space representing the state \(x\) of the dynamic system to a vector space representing the control inputs \(u\) of the system: \[u=f\left(x\right) \tag{1}\] Under some assumptions, mainly a fully observable system, an additive cost, and an infinite time horizon, the optimal feedback law is guaranteed to be in this state-feedback form [7]. We will only consider this case in the following analysis. To consider the question of how this system-specific feedback law can be transferred to a different context, it is useful to think about a higher-dimensional mapping, that we will call a policy and note \(\pi\), which also takes as additional input arguments a vector of variables \(c\) describing the context: \[u=\pi\left(x,c\right) \tag{2}\] where the context \(c\) is the vector of all variables that would affect the feedback law solution. We can think about the policy \(\pi\) as the solution to a motion control problem, and \(c\) as a vector of parameters in the problem definition. The policy \(\pi\) outputs the control action as a function of the state \(x\), but also of the problem parameters. The policy \(\pi\) thus contains the feedback laws for all possible contexts. For example, in section III a case study is conducted considering the optimal feedback law for swinging up a torque-limited inverted pendulum. For this example, the context variables are the pendulum mass \(m\), the gravitational constant \(g\), the length \(l\), but also what we will call task parameters: a parameter in the cost function \(q\) and a constraint \(\tau_{max}\) on the maximum input torque, see Fig. 2.

Fig. 2: For the pendulum swing-up control problem, the context includes 5 variables: the system parameters \(m\), \(g\), and \(l\), a cost function parameter \(q\), and a constraint parameter \(\tau_{max}\).

For a given state of the system, the torque policy might be different if any of the context variables changes value, for instance if the pendulum is heavier, more torque-limited, etc. We will use a subscript letter to refer to a specific context; for instance, we will note \(f_{a}\) the feedback law solution to a motion problem defined by an instance of context variables \(c_{a}\).
The feedback law \(f_{a}\) is thus a slice of the global policy when the context variables are fixed at values \(c_{a}\): \[f_{a}\left(x\right)=\pi\left(x,c=c_{a}\right) \tag{3}\] as illustrated in Fig. 3.

Fig. 3: A feedback law \(f\) is a slice of the higher-dimensional policy mapping \(\pi\) for a specific context.

Then we can formalize the goal of generalizing a feedback law to a different context: if a feedback law \(f_{a}\) is known for a context \(a\) described by variables \(c_{a}\), can this knowledge help find an equivalent good feedback policy in a different context \(c_{b}\)? \[\pi\left(x,c=c_{a}\right)=f_{a}\left(x\right)\quad\Rightarrow\quad\pi\left(x,c=c_{b}\right)=? \tag{4}\] Using the Buckingham \(\pi\) theorem [8], we will show that if the contexts are dimensionally similar, then both feedback laws must be equal when restated in dimensionless form. It is important to note that for the following dimensional analysis to hold, we must include in the context vector \(c\) all the variables, system parameters and task parameters, that would affect the policy solution \(\pi\) to the control problem: \[\underbrace{\begin{bmatrix}u_{1}\\ \vdots\\ u_{k}\end{bmatrix}}_{\text{inputs}}=\pi\left(\underbrace{\begin{bmatrix}x_{1}\\ \vdots\\ x_{n}\end{bmatrix}}_{\text{states}},\underbrace{\begin{bmatrix}c_{1}\\ \vdots\\ c_{m}\end{bmatrix}}_{\text{context (system and task parameters)}}\right) \tag{5}\]

### _Dimensional analysis of the policy mapping_

For a system with \(k\) control inputs, we can treat the augmented policy as \(k\) mappings from states and context variables to each scalar control input \(u_{j}\): \[u_{j}=\pi_{j}\left(x_{1},\dots,x_{n},c_{1},\dots,c_{m}\right) \tag{6}\] where eq. (6) is the \(j\)th line of the policy in vector form described by eq. (5). Then, if the state vector is defined by \(n\) variables, and the context is defined by \(m\) system plus task parameters, each mapping \(\pi_{j}\) involves \(1+n+m\) variables. Here, we will assume that the policy is physically meaningful, in the sense of the requirement for applying the Buckingham \(\pi\) theorem [8]. This means, for example, that a policy that computes a force based on position and velocity measurements would be in this framework, but not a policy for playing chess, for instance. Applying the Buckingham \(\pi\) theorem to this relationship tells us that if \(d\) dimensions are involved in all those variables, then eq. (6) can be restated as an equivalent relationship between \(p\) dimensionless \(\Pi\) groups, where \(p\geq(1+n+m)-d\). Assuming \(d\) dimensions are involved in the \(m\) context variables, and that we are in the usual situation where the maximum reduction is possible, i.e. \(p=(1+n+m)-d\), we can pick \(d\) context variables \(\{c_{1},c_{2},\ldots,c_{d}\}\) as the basis (the repeated variables) to scale all other variables into dimensionless \(\Pi\) groups.
We will note a dimensionless \(\Pi\) group as the base variable with a \({}^{*}\): \[u_{j}^{*}=u_{j}\left[c_{1}\right]^{e_{1j}}\left[c_{2}\right]^{e_{2j}}\ldots\left[c_{d}\right]^{e_{dj}},\qquad j=\{1,\ldots,k\} \tag{7}\] \[x_{i}^{*}=x_{i}\left[c_{1}\right]^{e_{1i}}\left[c_{2}\right]^{e_{2i}}\ldots\left[c_{d}\right]^{e_{di}},\qquad i=\{1,\ldots,n\} \tag{8}\] \[c_{i}^{*}=c_{i}\left[c_{1}\right]^{e_{1i}}\left[c_{2}\right]^{e_{2i}}\ldots\left[c_{d}\right]^{e_{di}},\qquad i=\{d+1,\ldots,m\} \tag{9}\] where the exponents \(e\) are rational numbers selected to make all equations dimensionless. Then, the Buckingham \(\pi\) theorem tells us that the relationship described by eq. (6) can be restated as the following relationship between dimensionless variables: \[u_{j}^{*}=\pi_{j}^{*}\left(x_{1}^{*},\ldots,x_{n}^{*},c_{d+1}^{*},\ldots,c_{m}^{*}\right) \tag{10}\] involving \(d\) fewer dimensionless variables. If we apply the same procedure to all control inputs, we can then assemble the \(k\) mappings back into a vector form: \[\underbrace{\begin{bmatrix}u_{1}^{*}\\ \vdots\\ u_{k}^{*}\end{bmatrix}}_{u^{*}}=\pi^{*}\left(\underbrace{\begin{bmatrix}x_{1}^{*}\\ \vdots\\ x_{n}^{*}\end{bmatrix}}_{\text{states }x^{*}},\underbrace{\begin{bmatrix}c_{d+1}^{*}\\ \vdots\\ c_{m}^{*}\end{bmatrix}}_{\text{context }c^{*}}\right) \tag{11}\] that we will sometimes write in compact form as: \[u^{*}=\pi^{*}(x^{*},c^{*}) \tag{12}\] One interesting perk of this dimensional analysis is that we can remove \(d\) variables from the context (typically \(d\) would be 2 or 3 for controlling a physical system involving time, force and length). The global problem of learning \(\pi(x,c)\), i.e., the feedback policy for all possible contexts, is thus simplified in a dimensionless form. Also, an even more interesting feature, that we can use for transferring feedback laws between systems, is that a global policy \(\pi(x,c)\) will have an equivalent dimensionless form for multiple contexts \(c\). As illustrated in Fig. 4, the dimensionless context \(c^{*}\) lives in a lower-dimensional space (of dimension \(m-d\)); thus multiple context vectors \(c\) will correspond to the same dimensionless vector \(c^{*}\).

Fig. 4: Dimensionally similar contexts example with dimensions \(m=2\) and \(d=1\): \(c_{a}\) is dimensionally similar to \(c_{b}\) but not to \(c_{c}\) or \(c_{d}\).

For a given motion control problem, if the dimensionless contexts are equal, then the dimensionless feedback laws should be exactly equivalent: \[\text{if}\quad c_{a}^{*}=c_{b}^{*}\quad\text{then}\quad f_{a}^{*}(x^{*})=f_{b}^{*}(x^{*})\ \forall x^{*} \tag{13}\] where the dimensionless feedback laws are defined as slices of the dimensionless policy for specific contexts: \[f_{a}^{*}(x^{*})=\pi^{*}(x^{*},c^{*}=c_{a}^{*}) \tag{14}\] \[f_{b}^{*}(x^{*})=\pi^{*}(x^{*},c^{*}=c_{b}^{*}) \tag{15}\] This is simply based on the fact that the dimensionless policy, i.e., eq. (12), gives the same outputs for the same inputs. This result means that the knowledge of the policy for a specific context \(c\) can actually be generalized to the subspace of all contexts for which the dimensionless context \(c^{*}\) is equal. For instance, let's imagine we have a global policy for a spherical submarine that depends only on the velocity and the radius. In dimensionless form, we would find that the policy depends only on the Reynolds number, and thus would be equivalent for all pairs of velocity and radius that correspond to the same Reynolds number.
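As a computational aside (our own sketch, not part of the paper), the \(\Pi\) groups of eqs. (7)-(9) can be found mechanically as the nullspace of the dimension matrix of the problem. The example below anticipates the pendulum problem of Section III; the variable ordering, with the context variables \(m\), \(g\), \(l\) first so that they act as the repeated variables, is our choice (the angle \(\theta\) is omitted since it is already dimensionless).

```python
# Minimal sketch (ours): the Buckingham pi theorem as linear algebra.
# Each nullspace vector of the dimension matrix D gives the exponents
# of one dimensionless Pi group.
import sympy as sp

variables = ['m', 'g', 'l', 'tau', 'theta_dot', 'q', 'tau_max']
# rows = exponents of the base dimensions M, L, T for each variable
D = sp.Matrix([
    [1, 0, 0, 1, 0, 1, 1],       # M
    [0, 1, 1, 2, 0, 2, 2],       # L
    [0, -2, 0, -2, -1, -2, -2],  # T
])

for vec in D.nullspace():
    group = ' * '.join(f'{v}^({e})' for v, e in zip(variables, vec) if e != 0)
    print('Pi group:', group)

# Output (up to ordering): tau/(m*g*l), theta_dot*(l/g)^(1/2),
# q/(m*g*l) and tau_max/(m*g*l) -- i.e., scalings built from the
# repeated variables m*g*l and omega = (g/l)^(1/2) used in Section III.
```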
### _Transferring policies between contexts_

In order to exploit this property, it is useful to define transformation matrices based on the scalar equations (7), (8) and (9): \[u^{*}=\left[T_{u}(c)\right]\ u \tag{16}\] \[x^{*}=\left[T_{x}(c)\right]\ x \tag{17}\] \[c^{*}=\left[T_{c}(c)\right]\ c \tag{18}\] where \(T_{u}\) and \(T_{x}\) are square diagonal matrices, in which each diagonal term is a product of the first \(d\) context variables (\(\{c_{1},c_{2},\ldots,c_{d}\}\)) raised to rational powers (found by applying the Buckingham \(\pi\) theorem). Equations (16) and (17) are invertible (unless a context variable is equal to zero) and can be used to go back-and-forth between dimensional and dimensionless state and input variables. The matrix \(T_{c}\), however, has \(d\) fewer rows than columns and eq. (18) is not invertible: for a given context \(c\) there is only one dimensionless context \(c^{*}\), but a dimensionless context \(c^{*}\) corresponds to multiple dimensional contexts \(c\).

Fig. 4: Dimensionally similar contexts example with dimensions \(m=2\) and \(d=1\): \(c_{a}\) is dimensionally similar to \(c_{b}\) but not to \(c_{c}\) or \(c_{d}\).

Using the transformation matrices, if a dimensional feedback law \(f_{a}\) for a context \(c_{a}\) is known: \[f_{a}(x)=\pi\left(x,c=c_{a}\right) \tag{19}\] its representation in dimensionless form: \[f_{a}^{*}(x^{*})=\pi^{*}\left(x^{*},c^{*}=c_{a}^{*}\right) \tag{20}\] can be found by scaling the input and output of \(f_{a}\) with \(T_{u}\) and \(T_{x}\): \[f_{a}^{*}(x^{*})=T_{u}(c_{a})\underbrace{f_{a}\left(\underbrace{T_{x}^{-1}(c_{a})\ x^{*}}_{x}\right)}_{u} \tag{21}\] Conversely, if we know a dimensionless feedback law \(f_{b}^{*}\), matrices \(T_{u}\) and \(T_{x}\) can be used to scale it back to a specific context \(c_{b}\): \[f_{b}(x)=T_{u}^{-1}(c_{b})\underbrace{f_{b}^{*}\left(\underbrace{T_{x}(c_{b})\ x}_{x^{*}}\right)}_{u^{*}} \tag{22}\] Thus, eq. (21) and eq. (22) can be used to take any context-specific feedback law, find its dimensionless form, and scale it back to a new context, as illustrated in Fig. 5. In general, there is no guarantee that the behaviour of the scaled feedback law in the new context will be similar to the behaviour of the feedback law in the original context; this holds only if the contexts are dimensionally similar, i.e., if the dimensionless contexts \(c^{*}\) are equal.

Let's suppose an optimal policy solution \(f_{a}\) is known for a specific context \(c_{a}\); then the scaled policy: \[f_{b}(x)=\left[T_{u}^{-1}(c_{b})T_{u}(c_{a})\right]\,f_{a}\left(\left[T_{x}^{-1}(c_{a})T_{x}(c_{b})\right]\,x\right) \tag{23}\] will be the optimal solution to the same motion control problem for a context \(c_{b}\) if \[c_{b}^{*}=T_{c}(c_{b})\ c_{b}=T_{c}(c_{a})\ c_{a}=c_{a}^{*} \tag{24}\] In some sense, this similar context condition means that the motion problem with parameters \(c_{a}\) and the motion problem with parameters \(c_{b}\) are actually the exact same problem up to scaling factors. It thus makes sense that the two solutions should also be equivalent up to scaling factors. In Section III, we show examples of this result with numerical solutions to dimensionally similar pendulum swing-up problems. Furthermore, we demonstrate that in some situations, the equality conditions of eq. (24) can be relaxed into inequality conditions, using the concept of regimes.

## III Optimal pendulum swing-up task

In this paper, we will use the classical pendulum swing-up task to test the ideas of dimensionless policies.
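Before specializing to the pendulum, here is a minimal sketch of the transfer recipe of eqs. (21)-(23); the helpers `T_u` and `T_x` are assumed to be supplied by the user and to return the diagonal scaling matrices of eqs. (16)-(17) for a given context (hypothetical names, not code from this paper):

```python
import numpy as np

def transfer_policy(f_a, c_a, c_b, T_u, T_x):
    """Scale a feedback law f_a, known for context c_a, to a new context
    c_b using eq. (23). The result is optimal for c_b only when the two
    contexts are dimensionally similar, i.e., c_a* == c_b* (eq. (24))."""
    def f_b(x):
        x_scaled = np.linalg.solve(T_x(c_a), T_x(c_b) @ x)  # [T_x^-1(c_a) T_x(c_b)] x
        u_a = np.atleast_1d(f_a(x_scaled))                  # evaluate the known law
        return np.linalg.solve(T_u(c_b), T_u(c_a) @ u_a)    # [T_u^-1(c_b) T_u(c_a)] u
    return f_b
```

For the pendulum studied next, \(T_u\) and \(T_x\) would be built from eqs. (34)-(35).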
The motion control problem is formally defined here as finding a feedback law for controlling the dynamic system described by the differential equation: \[ml^{2}\ddot{\theta}-mgl\sin\theta=\tau \tag{25}\] that minimizes the quadratic cost function given by: \[J=\int{(q^{2}\theta^{2}+0\,\dot{\theta}^{2}+1\,\tau^{2})dt} \tag{26}\] subject to input constraints given by: \[-\tau_{max}\leq\tau\leq\tau_{max} \tag{27}\] Note that here, 1) the cost function parameter \(q\) is included with a power of two to have units of torque, 2) the weight on velocity was set to zero for simplicity, and 3) the weight multiplying the torque is set to one without loss of generality, as only the relative values of the weights impact the optimal solution. Thus, assuming there are no hidden variables and that equations (25), (26) and (27) fully describe the problem, the solution, i.e., the optimal policy for all contexts, should be of the form: \[\underbrace{\tau}_{\text{inputs}}=\pi\left(\underbrace{\theta,\dot{\theta}}_{\text{states}},\underbrace{m,g,l}_{\text{system parameters}},\underbrace{q,\tau_{max}}_{\text{task parameters}}\right) \tag{28}\] involving the variables listed in Table I. Before conducting the dimensional analysis, it is interesting to note that while there are 3 system parameters \(m\), \(g\) and \(l\), they only appear independently in two groups in the dynamic equation. We can thus consider only two system parameters. For convenience we selected \(mgl\), corresponding to the maximum static gravitational torque (i.e., when the pendulum is horizontal), and the natural frequency \(\omega\), as listed in Table II.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Variable** & **Description** & **Units** & **Dimensions** \\ \hline \hline \multicolumn{4}{c}{**Control inputs**} \\ \hline \hline \(\tau\) & Actuator torque & \(Nm\) & [\(ML^{2}T^{-2}\)] \\ \hline \hline \multicolumn{4}{c}{**State variables**} \\ \hline \hline \(\theta\) & Joint angle & \(rad\) & [] \\ \hline \(\dot{\theta}\) & Joint angular velocity & \(rad/sec\) & [\(T^{-1}\)] \\ \hline \hline \multicolumn{4}{c}{**System parameters**} \\ \hline \hline \(m\) & Pendulum mass & \(kg\) & [\(M\)] \\ \hline \(g\) & Gravity & \(m/s^{2}\) & [\(LT^{-2}\)] \\ \hline \(l\) & Pendulum length & \(m\) & [\(L\)] \\ \hline \hline \multicolumn{4}{c}{**Problem parameters**} \\ \hline \hline \(q\) & Weight parameter & \(Nm\) & [\(ML^{2}T^{-2}\)] \\ \hline \(\tau_{max}\) & Maximum torque & \(Nm\) & [\(ML^{2}T^{-2}\)] \\ \hline \hline \end{tabular} \end{table} TABLE I: Pendulum swing-up optimal policy variables

Fig. 5: Isolating the dimensionless knowledge in a policy

### _Dimensional analysis_

Here we have one control input, two states, two system parameters and two task parameters, for a total of \(1+(n=2)+(m=4)=7\) variables. In those variables, only \(d=2\) independent dimensions ([\(ML^{2}T^{-2}\)] and [\(T^{-1}\)]) are present.
Using \(c_{1}=mgl\) and \(c_{2}=\omega\) as the repeating variables leads to the following dimensionless groups: \[\Pi_{1}=\tau^{*}=\frac{\tau}{mgl}\qquad\frac{[ML^{2}T^{-2}]}{[M][LT^{-2}][L]} \tag{29}\] \[\Pi_{2}=\theta^{*}=\theta\qquad[] \tag{30}\] \[\Pi_{3}=\dot{\theta}^{*}=\frac{\dot{\theta}}{\omega}\qquad\frac{[T^{-1}]}{[T^{-1}]} \tag{31}\] \[\Pi_{4}=\tau^{*}_{max}=\frac{\tau_{max}}{mgl}\qquad\frac{[ML^{2}T^{-2}]}{[M][LT^{-2}][L]} \tag{32}\] \[\Pi_{5}=q^{*}=\frac{q}{mgl}\qquad\frac{[ML^{2}T^{-2}]}{[M][LT^{-2}][L]} \tag{33}\] All 3 torque variables (\(\tau\), \(q\) and \(\tau_{max}\)) are scaled by the maximum gravitational torque, and the pendulum velocity variable is scaled by the pendulum natural frequency. The transformation matrices are then: \[\tau^{*}=\underbrace{[1/mgl]}_{T_{u}}\tau \tag{34}\] \[\begin{bmatrix}\theta^{*}\\ \dot{\theta}^{*}\end{bmatrix}=\underbrace{\begin{bmatrix}1&0\\ 0&1/\omega\end{bmatrix}}_{T_{x}}\begin{bmatrix}\theta\\ \dot{\theta}\end{bmatrix} \tag{35}\] \[\underbrace{\begin{bmatrix}q^{*}\\ \tau^{*}_{max}\end{bmatrix}}_{c^{*}}=\underbrace{\begin{bmatrix}0&0&1/mgl&0\\ 0&0&0&1/mgl\end{bmatrix}}_{T_{c}}\underbrace{\begin{bmatrix}mgl\\ \omega\\ q\\ \tau_{max}\end{bmatrix}}_{c} \tag{36}\] According to the theorem, any policy that is only based on the variables included in our analysis can be restated as a relationship between the 5 dimensionless \(\Pi\) groups: \[\tau^{*}=\pi^{*}\left(\theta^{*},\dot{\theta}^{*},q^{*},\tau^{*}_{max}\right) \tag{37}\] The dimensional analysis conducted in Sec. II tells us that, for dimensionally similar swing-up problems (which means here equal ratios \(q^{*}\) and \(\tau^{*}_{max}\)), the optimal feedback laws should be equivalent in their dimensionless form. In other words, suppose we have an optimal policy \(f_{a}\) found in a specific context \(c_{a}=[m_{a},g_{a},l_{a},q_{a},\tau_{max,a}]\), and an optimal policy \(f_{b}\) for a second context \(c_{b}=[m_{b},g_{b},l_{b},q_{b},\tau_{max,b}]\). Then both dimensionless forms will be equal, \(f^{*}_{a}=f^{*}_{b}\), if \(q^{*}_{a}=q^{*}_{b}\) and \(\tau^{*}_{max,a}=\tau^{*}_{max,b}\). Furthermore, we can find \(f_{b}\) using \(f_{a}\), or vice-versa, using the scaling formula given by eq. (23) if this condition is met. However, if \(q^{*}_{a}\neq q^{*}_{b}\) or \(\tau^{*}_{max,a}\neq\tau^{*}_{max,b}\), then \(f_{a}\) doesn't give us information on \(f_{b}\) without additional assumptions.

### _Numerical results_

Here, we use a numerical algorithm (we give the details of the methodology in Section III-E) to compute numerical solutions to the motion control problem defined by eqs. (25), (26) and (27). The numerical recipe produces feedback laws in the form of look-up tables, based on a discretized grid of the state-space. The optimal (up to discretization errors) feedback laws are computed for the 9 contexts listed in Table III. In those 9 contexts, there are 3 sub-groups of 3 dimensionally similar contexts. Each sub-group also includes the same 3 pendulums, illustrated in Fig. 1: a regular one, a twice-longer one and a twice-heavier one. Contexts 1, 2 and 3 describe a task where the torque is limited to half the static maximum torque. Contexts 4, 5 and 6 describe a task where the cost highly penalizes applying large torques. Contexts 7, 8 and 9 describe a task where the cost highly penalizes position errors. Figures 6 to 14 illustrate that for each sub-group with equal dimensionless context, the dimensional feedback laws generated numerically look very similar.
They are similar up to a scaling of their axes, if we neglect slight differences due to discretization errors. Furthermore, when we compute the dimensionless versions of the feedback laws \(f^{*}\), using eq. (21), the dimensionless versions are actually equal within each similar sub-group. This is the result predicted by the dimensional analysis of Section II. In terms of how to use this in a practical scenario, we see that if we computed the feedback law given by Fig. 6(a), we can get the feedback law given by Fig. 7(a) or Fig. 8(a) directly by scaling the original policy with eq. (23), using the appropriate context variables, without having to recompute. In some sense, we got back the ability to adjust the feedback law to a new context without recomputing it.

\begin{table} \begin{tabular}{l l l l} \hline \hline **Variable** & **Description** & **Units** & **Dimensions** \\ \hline \hline \(mgl\) & Maximum gravitational torque & \(Nm\) & [\(ML^{2}T^{-2}\)] \\ \hline \(\omega=\sqrt{\frac{g}{l}}\) & Natural frequency & \(sec^{-1}\) & [\(T^{-1}\)] \\ \hline \hline \end{tabular} \end{table} TABLE II: Pendulum system parameters

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(m\) & \(g\) & \(l\) & \(q\) & \(\tau_{max}\) \\ \hline \hline \multicolumn{6}{c}{**Problems with \(\tau^{*}_{max}=0.5\) and \(q^{*}=0.1\)**} \\ \hline \hline Context no 1 : & 1.0 & 10.0 & 1.0 & 1.0 & 5.0 \\ Context no 2 : & 1.0 & 10.0 & 2.0 & 2.0 & 10.0 \\ Context no 3 : & 2.0 & 10.0 & 1.0 & 2.0 & 10.0 \\ \hline \hline \multicolumn{6}{c}{**Problems with \(\tau^{*}_{max}=1.0\) and \(q^{*}=0.05\)**} \\ \hline \hline Context no 4 : & 1.0 & 10.0 & 1.0 & 0.5 & 10.0 \\ Context no 5 : & 1.0 & 10.0 & 2.0 & 1.0 & 20.0 \\ Context no 6 : & 2.0 & 10.0 & 1.0 & 1.0 & 20.0 \\ \hline \hline \multicolumn{6}{c}{**Problems with \(\tau^{*}_{max}=1.0\) and \(q^{*}=10\)**} \\ \hline \hline Context no 7 : & 1.0 & 10.0 & 1.0 & 100.0 & 10.0 \\ Context no 8 : & 1.0 & 10.0 & 2.0 & 200.0 & 20.0 \\ Context no 9 : & 2.0 & 10.0 & 1.0 & 200.0 & 20.0 \\ \hline \hline \end{tabular} \end{table} TABLE III: Pendulum swing-up problems parameters
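The grouping of Table III can be verified directly from eq. (36); a minimal sketch (context values copied from the table):

```python
import numpy as np

# Contexts from Table III: rows of (m, g, l, q, tau_max).
contexts = np.array([
    [1, 10, 1,   1,  5], [1, 10, 2,   2, 10], [2, 10, 1,   2, 10],
    [1, 10, 1, 0.5, 10], [1, 10, 2,   1, 20], [2, 10, 1,   1, 20],
    [1, 10, 1, 100, 10], [1, 10, 2, 200, 20], [2, 10, 1, 200, 20],
])

for i, (m, g, l, q, tau_max) in enumerate(contexts, start=1):
    mgl = m * g * l  # maximum static gravitational torque
    # Dimensionless context of eq. (36): c* = (q*, tau_max*).
    print(f"context {i}: q* = {q / mgl:g}, tau_max* = {tau_max / mgl:g}")

# Contexts 1-3, 4-6 and 7-9 print identical (q*, tau_max*) pairs,
# confirming the three dimensionally similar sub-groups.
```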
2303.01308
In-the-wild vibrotactile sensation: Perceptual transformation of vibrations from smartphones
Vibrations emitted by smartphones have become a part of our daily lives. The vibrations can add various meanings to the information people obtain from the screen. Hence, it is worth understanding the perceptual transformation of vibration with ordinary devices to evaluate the possibility of enriched vibrotactile communication via smartphones. This study assessed the reproducibility of vibrotactile sensations via smartphone in the in-the-wild environment. To realize improved haptic design to communicate with smartphone users smoothly, we also focused on the moderation effects of the in-the-wild environments on the vibrotactile sensations: the physical specifications of mobile devices, the manner of device operation by users, and the personal traits of the users about the desire for touch. We conducted a Web-based in-the-wild experiment instead of a laboratory experiment to reproduce an environment as close to the daily lives of users as possible. Through a series of analyses, we revealed that users perceive the weight of vibration stimuli to be higher in sensation magnitude than intensity under identical conditions of vibration stimuli. We also showed that it is desirable to consider the moderation effects of the in-the-wild environments for realizing better tactile system design to maximize the impact of vibrotactile stimuli.
Keiko Yamaguchi, Satoshi Takahashi
2023-03-02T14:33:41Z
http://arxiv.org/abs/2303.01308v1
# In-the-wild vibrotactile sensation: Perceptual transformation of vibrations from smartphones ###### Abstract Vibrations emitted by smartphones have become a part of our daily lives. The vibrations can add various meanings to the information people obtain from the screen. Hence, it is worth understanding the perceptual transformation of vibration with ordinary devices to evaluate the possibility of enriched vibrotactile communication via smartphones. This study assessed the reproducibility of vibrotactile sensations via smartphone in the in-the-wild environment. To realize improved haptic design to communicate with smartphone users smoothly, we also focused on the moderation effects of the in-the-wild environments on the vibrotactile sensations: the physical specifications of mobile devices, the manner of device operation by users, and the personal traits of the users about the desire for touch. We conducted a Web-based in-the-wild experiment instead of a laboratory experiment to reproduce an environment as close to the daily lives of users as possible. Through a series of analyses, we revealed that users perceive the weight of vibration stimuli to be higher in sensation magnitude than the intensity under identical conditions of vibration stimuli. We also showed that it is desirable to consider the moderation effects of the in-the-wild environments for realizing better tactile system design to maximize the impact of vibrotactile stimuli. mobile devices, vibrotactile perception, haptic assessment, perceptual transformation, in-the-wild study. ## I Introduction The sensation of vibrotactile stimuli has been studied for a long time [1], and extensive research has clarified the types of sensations that can be designed using different vibration actuators [2],[3]. Recently, vibrotactile perceptions under multimodality have garnered research interest [4],[5]. Smartphones have now become a necessity in our lives and are one of the most familiar vibration actuators. Vibrations emitted by smartphones can impart various meanings to the information that users obtain from the screen [5]. With the widespread use of smartphones, vibratory stimuli have become a part of our daily lives [6]; the usage environment of smartphones is entirely different from the ones assumed in previous studies. Hence, it is worth understanding how people perceive the vibrotactile stimuli from smartphones for better information transmission in everyday use. This study focuses on vibrotactile sensations generated from smartphones under real-life constraints. We assessed the reproducibility of vibrotactile sensations via an in-the-wild experiment. We investigated the perceptual transformation of vibrations using ordinary devices to evaluate the possibility of enriched vibrotactile communication on smartphones. We also examined the moderation effects of the in-the-wild environments on the vibrotactile sensations for realizing better haptic designs for smooth communication with smartphone users. ## II Related works and motivation Vibration is a stimulus that can elicit multiple tactile perceptions. Previous studies elucidated vibrotactile percepts such as roughness and smoothness that can be differentiated by the vibration frequency [7],[8], virtual textures of materials reproduced on a touchscreen [9], and softness that requires consideration of the material of the vibration actuator [10]. Most of these studies focused on designing apparatuses and vibration patterns to elicit certain perceptions in experimental settings [6].
Few studies have clarified how humans perceive vibration stimuli from actuators with specific physical properties under in-the-wild conditions. Vibration stimuli from smartphones are mainly used to convey immaterial information, such as incoming call notifications and task completion. For this purpose, the intensity of stimuli plays an essential role in communicating critical and emergent information to reduce the subjective workload of users [11],[12],[13]. Currently, people possess digital assets such as digital wallets on their smartphones. Since digital assets have no substance, it is difficult to perceive their existence. Weight is an essential factor in the perception of an entity. Researchers utilize vibrotactile feedback for object recognition on touch screens and in virtual spaces [14],[15]. It is reasonable to assume that the perceived weight of digital assets can enhance the perception of their substance; this is an attractive hypothesis owing to its commercial impact [3]. Some studies mapped smartphone vibration perception to the psychological state of users [16],[13], but few directly mapped it to tactile perception. Hence, in this study, we focus on the perceived intensity and weight of vibrotactile stimuli from smartphones. We selected the iPhone as the apparatus in this study because it is one of the most popular vibration actuators today and the same haptic engine, i.e., Core Haptics [17], is employed in iPhone 8 and later models. We conducted a Web-based in-the-wild experiment [18] instead of a laboratory experiment. In our experiment, participants worked on assignments in an environment that was as close as possible to their daily lives. ## III Research questions The primary research question of this study was how people perceive and render the vibrations of their smartphones as vibrotactile stimuli and evaluate the magnitude of vibrotactile sensations: intensity and weight. The intensity of vibrotactile stimuli is a straightforward sensation that can be evaluated by ordinary smartphone users. Meanwhile, the weight of vibration is a more complicated perception than the intensity: it is a perceptual transformation that may occur in the mind of the user. Hence, the following research question was derived: **RQ1**: Do people evaluate different qualitative sensations of the same vibrotactile stimuli on the smartphone similarly or differently? In our daily environment, external and internal factors affect how people perceive vibrotactile stimuli: the physical specifications of devices, the manner of their use, and the personal traits of users. Previous studies showed that the physical specifications of devices influence the qualities of the vibrotactile stimuli [19]. The iPhone has a simple rectangular, common form, unlike the original devices developed in the previous studies; however, the sizes and weights of the models vary. In addition, the manner of operation of the devices affects vibrotactile perception [19]: in particular, the manner of holding the iPhone. We focused on which hand people usually use to hold the iPhone, i.e., whether they use the dominant hand or not. How the brain reacts to vibrotactile stimuli regarding hand dominance has been studied in the field of neuroscience [20, 21]; however, we focused on how people perceive the stimuli and evaluate them as interpretable sensations, so as to derive implications for designing vibrotactile stimuli.
Finally, it was necessary to consider the personal traits of smartphone users, such as the desire for touch when extracting information, when examining how they perceive smartphone vibrations. The "Need for Touch" (NFT) scale [22] indicates individual differences in preference for haptic (touch) information. People with higher NFT scores are likely to wish to touch objects to extract precise information about them or to enjoy haptic perceptions from them. We hypothesized that personal traits about touch may affect the user perception of the vibrotactile stimuli. Overall, we investigated the following research question: **RQ2**: What type of differences in vibrotactile perception are caused by the physical specifications of mobile devices, the manner of operating the devices, and the personal traits of users regarding their desire for touch? ## IV Experiment The aim of the experiment was to measure perceptual responses to vibratory stimuli and to determine the influence of various factors on their perception. We performed magnitude estimation [23] to measure the judgments of vibratory stimuli generated from the iPhone. To evaluate and compare how participants perceived the vibrotactile stimuli from the iPhone in the context of different sensations, we estimated Stevens' power-law equation [24] per sensation by the ordinary least squares method. Stevens' power law is expressed as follows: \[\psi=k\phi^{\alpha} \tag{1}\] where \(\psi\) is the perceived magnitude of the sensation evoked by a vibrotactile stimulus, \(\phi\) is the amplitude of the vibration of the iPhone, and \(k\) and \(\alpha\) are the parameters to be estimated. ### _Apparatus_ We used iPhone 8 or later models running iOS 15 or later, on which Core Haptics works; this enables us to compose haptics based on an Apple Haptic and Audio Pattern (AHAP) file. Although the precise physical specifications of different iPhone models vary, they can be broadly classified into two model groups by size: regular-size or small-size, such as the iPhone SE series. Coincidentally, the proportion of iPhone ownership of each of these groups in Japan is approximately 50% [25]; hence, we classified iPhone 8 or later models into these two model groups to increase the sample size for the experiment. For this classification, we set iPhone 14 and iPhone SE 2nd generation as the sample models in the regular-sized model group and the small-sized model group, respectively, considering their availability to evaluate the vibratory stimuli. We classified individual iPhone models into the closest group in terms of four physical specifications: height, width, thickness, and weight. The classification results are listed in Table I. We developed a mock application implementing the magnitude estimation. It enabled participants to compare vibrations and evaluate the magnitude of perceived sensation on the screen of the iPhone. ### _Stimuli_ We prepared eight single vibratory pulses of 1,000 ms. These vibratory stimuli were controlled using parameters named hapticIntensity [27],[28] in the AHAP file; the parameters were increased from 0.3 to 1.0 in steps of 0.1. We set the vibration whose hapticIntensity parameter was 0.6 as the reference stimulus in the magnitude estimation. We measured the vibration accelerations of these stimuli in three places: lower right corner, lower center, and lower left corner, as shown in Fig. 1.

Fig. 1: Experimental environment to measure iPhone vibration accelerations.
iPhones were held suspended in the middle with a fishing line, and an accelerometer (Ono Sokki; model NP-3414) was attached to the back side via a magnetic base (Ono Sokki; model NP-0102) and wax. Communication between the accelerometer and an FFT analyzer (ACO Co., Ltd.; model SpectraPLUS-SC) on Windows 10 was enabled by using a data acquisition board (ACO Co., Ltd.; model SpectraDAQ-200). The sampling rate for measurement was 24 kHz. Fig. 2 shows the maximum amplitudes per hapticIntensity parameter among the three measurement locations. All vibratory stimuli were observed at 230 Hz. We found that the amplitudes were nonlinear with respect to hapticIntensity and varied among models; meanwhile, the relative amplitudes based on the reference stimulus in the magnitude estimation were approximately the same, as shown in Fig. 3.

Fig. 2: Amplitudes of vibration controlled by hapticIntensity parameters in the AHAP file. (a) Amplitudes at the bottom center of iPhone SE 2nd generation and (b) at the bottom right corner of iPhone 14. Both devices vibrate at 230 Hz.

Fig. 3: (a) Amplitudes are nonlinear with respect to the hapticIntensity parameter. (b) Relative amplitudes with respect to the reference stimulus (hapticIntensity = 0.6, red dot), plotted near the dotted line corresponding to equal relative amplitudes of iPhone 14 and iPhone SE 2nd generation.

### _Participants_ We recruited participants over 18 years old with iPhone 8 or later models via Yahoo! Crowdsourcing [29]. The experiment took 20 min on average, and we offered 300 yen in compensation to each participant. Since anonymized data are handled in this study, our institutions do not require ethical review. ### _Experimental procedure_ The experimental procedure had four phases: 1) briefing and experimental condition assignment, 2) instruction, 3) experiment, and 4) debriefing. #### IV-D1 Briefing and experimental condition assignment At the beginning of this phase, we conducted a briefing session to explain the experiment to the participants and obtained their consent. Then, we inquired about the iPhone models they used and whether they had accessories such as iPhone covers. After gathering this information, we asked them to install the mock application and remove all accessories before operating the smartphone. During the installation process, participants were randomly assigned to one of the experimental conditions for the vibrotactile sensations: intensity or weight. #### IV-D2 Instruction We started this phase by instructing the participants how to hold their iPhone during the experiment, as shown in Fig. 4. Thereafter, we explained the base protocol for evaluating the sensational magnitudes of experimental stimuli. The mock application first presented a screen, as shown in Fig. 5(a). This screen called up a reference vibration when tapped and prompted the participants to perceive the vibration. Then, the next screen, as shown in Fig. 5(b), was displayed; the second screen called up the experimental vibration and asked participants to perceive it. At the end of the base protocol, the third screen was displayed, and here, participants were asked to evaluate the experimental vibration in comparison to the reference stimulus using numbers, as shown in Fig. 5(c). We instructed that the base sensation magnitude of the reference stimulus was regarded as 10. Hence, we asked participants to provide the value "20" if they felt the magnitude of the experimental stimulus was twice the reference stimulus.
The participants assigned to the intensity (weight) condition were asked to rate the subjective intensity (weight) of the vibration as a vibrotactile sensation. We prepared a practice mode to help the participants understand this protocol and operate the application as they wished before the experiment. #### IV-D3 Experiment We prepared an experimental set comprising eight experimental stimuli, and participants randomly evaluated each stimulus in the set according to the base protocol. In the experiment, we offered five experimental sets repeatedly; the participants provided responses about their vibrotactile sensations 40 times. In a postexperiment survey, we asked participants about the way in which they held the iPhone (Fig. 4) during the experiment. We also asked 12 questions to measure the NFT scale [22]. At the end of the experiment phase, we collected the demographics (age, sex, occupation) of the participants and information about their dominant hand and manner of holding the iPhone during daily use. #### IV-D4 Debriefing After measurement using the mock application, we confirmed whether the participants removed all smartphone accessories during the experiment, without imposing any penalty for the use of such accessories. The experiment automatically ended if a participant spent more than 60 min. Also, the mock application proceeded from the instruction stage only if a participant could perceive the vibrotactile stimuli and passed the vibration check correctly. ### _Sanity check_ We presented the reference stimulus as one of the eight experimental stimuli for the instructional manipulation check. We had instructed the participants to regard the sensation magnitude of the reference stimulus as 10 (see Fig. 5(c)). Hence, we excluded all answers of participants who responded with outliers (such as zero) with respect to this reference stimulus as dishonest. We also excluded participants whose variation among the magnitudes per hapticIntensity parameter was beyond three standard deviations of the mean as unreliable. Finally, to remove possible data noise, we excluded participants who answered that they did not remove accessories during the debriefing session of the experiment. Consequently, the data from 167 participants were used in the analysis. The participants included 89 men aged 19-73 (\(M\) = 42.7) and 78 women aged 19-56 (\(M\) = 37.2). Eighty-nine of the participants (49 men and 40 women) were assigned to the intensity condition; the remaining 78 participants were assigned to the weight condition.

Fig. 4: Instructions on holding the iPhone during the experiment.

Fig. 5: Transition of the application screen: the base protocol for measuring levels of perceived sensation based on the magnitude estimation method.

## V Results ### _RQ1_ Fig. 6 shows how participants perceived the magnitude of vibrotactile stimuli in the different sensations. The estimated exponent \(\alpha\) of the power function was 0.65 (\(p<.01\)), which is fairly close to the exponent for a 250-Hz vibration on a finger (\(\alpha=0.60\)) as evaluated in a laboratory experiment [23]. The result showed that the vibrotactile perceptions of the participants in this in-the-wild experiment were close to those perceived under laboratory conditions.

Fig. 6: Perceived magnitude of the vibrotactile sensation (intensity versus weight).
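For reference, an exponent like the one above can be estimated by ordinary least squares on the log-log form of eq. (1), \(\log\psi=\log k+\alpha\log\phi\); a minimal sketch with synthetic magnitudes standing in for the collected responses (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.tile(np.linspace(0.4, 2.0, 8), 40)                 # relative amplitudes (synthetic)
psi = 10.0 * phi**0.65 * rng.lognormal(0.0, 0.2, phi.size)  # noisy magnitude reports

# OLS fit of log(psi) = log(k) + alpha * log(phi).
X = np.column_stack([np.ones_like(phi), np.log(phi)])
(log_k, alpha), *_ = np.linalg.lstsq(X, np.log(psi), rcond=None)
print(f"k = {np.exp(log_k):.2f}, alpha = {alpha:.2f}")      # close to 10 and 0.65
```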
The parallelism test showed that the constant \(k\) of the power-law equation varied significantly between the intensity and weight conditions (\(F(3,5)=32.201\), \(p<.01\), \(\eta^{2}=0.00480\)). Meanwhile, the exponent \(\alpha\) showed different trends, but the difference between the two conditions was insignificant (\(F(3,8)=3.5695\), \(p=0.0589\), \(\eta^{2}=0.000534\)). The results showed that the power laws governing the perceptual transformation of vibration for intensity and weight were statistically equivalent. The context used in the perceptual transformation affected the scaling by a constant \(k\) that multiplied the original power-law relation; however, this effect size \(\eta^{2}\) was very small. ### _Moderator variables for RQ2_ To address RQ2, we investigated how factors in our daily lives, such as the physical specifications of mobile devices, the manner of operating the devices, and the personal traits of users regarding their desire for touch, caused differences in the vibrotactile sensations. We considered three factors as moderator variables for \(k\) and \(\alpha\) in (1): the iPhone model group as the proxy of the physical specifications of devices, the hand dominance as a characteristic of the hand holding the iPhone, and the dichotomous NFT scale to assess individual differences in the processing of haptic information. We estimated the power-law equation (1) per combination of vibrotactile sensations and these factors, and subsequently, we performed multiple comparisons of the parallelism tests. The iPhone model group was a dichotomous variable: regular-size or small-size. We obtained the model information of the iPhone that accessed the mock application during the experiment. Using the same rule as that given in Section IV-A, we classified them into one of the two groups. In all, the iPhones of 78 participants were classified into the regular-size group and those of 89 participants were classified into the small-size group. The variable "hand dominance" indicated the characteristics of the hand holding the iPhone. Results of the postexperiment survey assigned the label "Dominant" to 75 participants whose dominant hand was the same as the hand that primarily supported the iPhone and "Nondominant" to 92 participants whose dominant hand was not used to hold the device. The NFT scale was calculated based on responses to the 12-item questions asked in the postexperiment survey. The entire range of the scale was from -23 to 32 in the sample, and the reliability (Cronbach's \(\alpha\)) of the calculated NFT was .90. As in the original paper [22], a median split determined high and low NFT values: 82 subjects scoring greater than the median of 6 were categorized as high in the dichotomous NFT, and those scoring less than 6 were categorized as low. As a balance check, we confirmed that no significant sample imbalance existed between the vibrotactile sensations and the iPhone model group (\(\chi^{2}(1)=0.238\), \(p=.626\)), hand dominance (\(\chi^{2}(1)=1.579\), \(p=.209\)), and the NFT scale (\(\chi^{2}(1)=2.127\), \(p=.145\)). The chi-square test of independence showed no significant relationship among the factors: the model group versus the hand dominance (\(\chi^{2}(1)=0.000\), \(p=.993\)) and versus the NFT scale (\(\chi^{2}(1)=0.509\), \(p=.476\)), and the hand dominance versus the NFT scale (\(\chi^{2}(1)=0.774\), \(p=.379\)). ### _RQ2_ #### V-C1 iPhone model
Fig. 7 shows the perceived magnitude of vibrotactile stimuli for different combinations of sensations and iPhone model groups. The estimated exponent \(\alpha\) of the power function ranged from 0.646 to 0.664 (all \(\alpha\) values had \(p<.01\)) among the combinations. The parallelism test showed that the constants of the power-law equations \(k\) (\(F(3,5)=71.732\), \(p<.01\), \(\eta^{2}=0.0312\)) and the exponents \(\alpha\) (\(F(3,8)=3.5695\), \(p<.01\), \(\eta^{2}=0.00264\)) varied significantly among the combinations. However, according to the effect sizes \(\eta^{2}\), the physical specifications of the devices primarily affected the differences in the constants, and the impact was medium.

Fig. 7: Perceived magnitude of vibrotactile sensation by the iPhone model group (regular-size versus small-size) and sensation (intensity versus weight).

Table II lists the result of multiple comparisons of the parallelism test with the Bonferroni adjustment to investigate which pair of the combinations was significantly different. We considered not only the statistical significance but also the effect sizes that were at least approximately equal to or greater than "small" (\(\eta^{2}\) = 0.01) to determine the factors that meaningfully affected the vibrotactile perceptions. According to Table II (ii), there is no statistical difference in the perceived magnitudes of intensity and weight within the same model group; if there was any difference, the effect sizes were tiny. Meanwhile, from Table II (i) and (iii), the difference in the model group primarily affected the level of the perceived magnitudes of both intensity and weight: more extensive sensation magnitudes were obtained from the larger and heavier devices for the same stimuli. #### V-C2 Hand dominance Regardless of whether participants were right-handed or left-handed, approximately half of them operated their iPhones using their dominant hand. The remaining participants held their iPhones in their nondominant hands and operated them with the dominant hands. Fig. 8 shows the perceived magnitude of vibrotactile stimuli in different combinations of sensations and hand dominance. The estimated exponent \(\alpha\) of the power function ranged from 0.629 to 0.669 (all \(\alpha\) values were \(p<.01\)) among the combinations; the range of estimated exponents was wider than for the iPhone model groups. The parallelism test showed that the constants of the power-law equations \(k\) (\(F(3,5)=12.186\), \(p<.01\), \(\eta^{2}=0.00545\)) and the exponents \(\alpha\) (\(F(3,8)=17.243\), \(p<.01\), \(\eta^{2}=0.00769\)) varied significantly among the combinations. According to the effect size \(\eta^{2}\), the way people held their iPhone had only a minor influence on vibrotactile perception, but with a relatively larger impact on the exponent than on the constant. Table III lists the result of multiple comparisons of the parallelism test with the Bonferroni adjustment. Compared to the perception of intensity in Table III (ii), people perceived more weight for stronger vibrotactile stimuli on the dominant hand. Moreover, the perceived weight on the nondominant hand was more than on the dominant hand when the stimuli intensified (see Table III (i)). The estimated constant \(k\) in the parallelism test for Table III (iii) showed that the perceived sensational magnitude of weight on the nondominant hand was approximately 3.2% higher than that of intensity on the dominant hand.
This finding implies that the manner of holding the iPhone significantly affected the perceptual transformation of vibration. #### V-C3 Need for Touch Fig. 9 shows the perceived magnitude of vibrotactile stimuli in different combinations of sensations and the dichotomous NFT scale. The estimated exponent \(\alpha\) of the power function ranged from 0.644 to 0.657 (all \(\alpha\) values were \(p<.01\)) among the combinations; the range of the estimated exponents was the narrowest among the three factors. The parallelism test showed that the constants of the power-law equation \(k\) varied significantly among the combinations (\(F(3,5)=11.438\), \(p<.01\), \(\eta^{2}=0.00511\)). Meanwhile, the exponents \(\alpha\) showed different but insignificant trends (\(F(3,8)=2.472\), \(p=0.0598\), \(\eta^{2}=0.00111\)).

Fig. 8: Perceived magnitude of vibrotactile sensation when holding the iPhone in different ways (dominant versus nondominant hand) and sensation (intensity versus weight).

Fig. 9: Perceived magnitude of vibrotactile sensation depending on the Need for Touch (NFT) scale (high versus low) and sensation (intensity versus weight).

Table IV lists the result of multiple comparisons of the parallelism test with the Bonferroni adjustment. There was no significant difference in the scaling exponents, indicating that all the power-law relations in the combinations of sensations and the dichotomous NFT scores were proportionally equivalent. The levels of perceived magnitude among the sensations and the interaction of NFT scores were significantly different in Table IV (ii) and (iii); meanwhile, the intensity and weight perceived by those with low NFT scores were equivalent, and their levels were moderate compared to those with high NFT scores in Fig. 9. These results imply that one should consider personal traits, such as the level of haptic information processing, for the vibrotactile sensational transformation. ## VI Discussion Through a series of analyses, we clarified that people perceive the intensity and weight of smartphone vibrations differently. Under the same vibration stimulus conditions, the perceived weight of vibration stimuli tends to be higher in sensation magnitude than the perceived intensity. More extensive magnitudes of weight were observed from larger and heavier devices for the same stimuli, indicating that the physical specifications of the vibration actuator affect the vibrotactile transformation to weight. It is reasonable to assume that people unconsciously add the weight of the iPhone when translating to the perceived weight. Our results show the importance of considering the personal traits of users, such as the level of haptic information processing, for the vibrotactile sensational transformation: people with high NFT scores are likely to distinguish different sensations from the same vibrotactile stimuli better than people with low NFT scores. For better tactile system design, it is desirable to consider the personal traits of users; very few studies have focused on this aspect. The use of smartphones in daily life cannot be controlled; however, we found that the manner of holding smartphones and the hand used to hold them influence the vibrotactile transformation. Unlike the previous two elements we investigated, the manner of holding smartphones affects differences in the power law to a lesser extent than the perceptual transformation of vibration among sensations.
This result suggests some of the key issues to be considered when designing vibrotactile stimuli meant to convey the weight of digital objects. Developing user interfaces that encourage people to hold their smartphones in specific ways may be essential to maximize the effects of vibrotactile stimuli. Limitations and future works are as follows. The weight and size of smartphones should be treated separately in more detailed analyses. The size or shape of the smartphone can affect how people grasp it and how comfortable it is to use. We simplified the user interface of the mock application and the context to create as many vibrations as possible. The interaction of the user interface design, the context in which the vibrations are applied, and vibrotactile perception should be considered in future works.
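For reference, the parallelism tests of Section V amount to comparing nested log-log regressions across conditions; one way such a test could be sketched (assuming a long-format data frame with columns `phi`, `psi` and `cond`; an illustration, not the authors' exact procedure):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def parallelism_test(df: pd.DataFrame):
    """F-test for equality of exponents alpha (slopes) across the levels
    of `cond` in the log-log form of the power law psi = k * phi**alpha."""
    df = df.assign(lphi=np.log(df["phi"]), lpsi=np.log(df["psi"]))
    common_slope = smf.ols("lpsi ~ lphi + C(cond)", data=df).fit()  # shared alpha
    separate = smf.ols("lpsi ~ lphi * C(cond)", data=df).fit()      # alpha per level
    return anova_lm(common_slope, separate)
```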
2303.06467
A characterization of orthogonal permutative matrices of order 4
Orthogonal matrices which are linear combinations of permutation matrices have attracted enormous attention in quantum information and computation. In this paper, we provide a complete parametric characterization of all complex, real and rational orthogonal permutative matrices of order $4.$ We show that any such matrix can always be expressed as a linear combination of up to four permutation matrices. Finally we determine several matrix spaces generated by linearly independent permutation matrices such that any orthogonal matrix in these spaces is always permutative or direct sum of orthogonal permutative matrices up to permutation of its rows and columns.
Amrita Mandal, Bibhas Adhikari
2023-03-11T17:35:26Z
http://arxiv.org/abs/2303.06467v1
# A characterization of orthogonal permutative matrices of order \(4\)

Amrita Mandal1 and Bibhas Adhikari2

Footnote 1: Department of Mathematics, IIT Kharagpur, Email: [email protected]

Footnote 2: Corresponding author, Department of Mathematics, IIT Kharagpur, Email: [email protected]

**Abstract.** Orthogonal matrices which are linear combinations of permutation matrices have attracted enormous attention in quantum information and computation. In this paper, we provide a complete parametric characterization of all complex, real and rational orthogonal permutative matrices of order \(4.\) We show that any such matrix can always be expressed as a linear combination of up to four permutation matrices. Finally we determine several matrix spaces generated by linearly independent permutation matrices such that any orthogonal matrix in these spaces is always permutative or a direct sum of orthogonal permutative matrices up to permutation of its rows and columns.

**Keywords.** Permutative matrix, Grover matrix, Quantum walks

**AMS subject classification (2020):** 15B10, 15B99

## 1 Introduction

Characterization of orthogonal matrices which can be expressed as linear combinations of permutation matrices is an unsolved problem in the literature. This problem was first considered by Kapoor in [5], who determined a necessary condition for a (real) linear combination of permutation matrices to be an orthogonal matrix. Indeed, he proved that for a set \(S\) of permutations of order \(n,\) if a linear combination \(\sum_{\sigma\in S}\alpha_{\sigma}P_{\sigma},\)\(\alpha_{\sigma}\in\mathbb{R},\) is an orthogonal matrix then \(\sum_{\sigma\in S}\alpha_{\sigma}\in\{1,-1\}.\) Here \(P_{\sigma}\) denotes the permutation matrix associated with \(\sigma,\) that is, the \(ij\) entry of \(P_{\sigma}\) is \(1\) if \(\sigma(i)=j,\) otherwise it is \(0.\) Later, Gibson proved that an orthogonal matrix over any field is a linear combination of permutation matrices if and only if each row and column sum of the matrix is \(\pm 1;\) such a matrix is called a generalized doubly stochastic (gds) matrix corresponding to \(\pm 1\) [3]. However, an explicit parametric representation of all gds matrices has remained to be determined. Parametric representations of orthogonal matrices which are also linear combinations of permutation matrices have recently attracted a lot of attention because such matrices can have a proper quantum circuit representation [6]. Furthermore, orthogonal or unitary parametric matrices that are used as coin operators for discrete-time quantum walks play a crucial role in understanding the underlying quantum dynamics of the walks [15, 14]. Recently, all orthogonal matrices of order \(3\) over the fields of complex, real, and rational numbers that can be written as linear combinations of permutation matrices were characterized in [10]. One-parameter representations of such matrices are also provided. An interesting result of this characterization is that an orthogonal matrix of order \(3\) is a linear combination of permutation matrices if and only if it is a permutative matrix, that is, any row of such a matrix is a permutation of any other row.
We emphasize that the Grover diffusion matrix \(G=\frac{2}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T}-I_{n},\) a standard coin operator for coined quantum walks, is an orthogonal permutative matrix (OPM), where \(\mathbf{1}_{n}\) is the all-one column vector of order \(n\) and \(I_{n}\) denotes the identity matrix of order \(n.\) Thus the characterization of orthogonal permutative matrices of higher order has become of paramount interest, since they can be used as coin operators for high dimensional quantum walks and hence allow a better understanding of the quantum dynamics of generalized Grover diffusion matrices. In the forthcoming paper [8], we investigate the localization property of discrete-time quantum walks on two-dimensional lattices with coin operators given by the OPMs of order 4 which are studied here. In this paper, we pursue the problem of algebraic characterization of the set of all OPMs of order 4 and then investigate whether any orthogonal matrix of order 4 which can be expressed as a linear combination of permutation matrices belongs to this set. First, we derive a symbolic representation of all OPMs of order 4 over the field of complex numbers and we show that such a matrix can always be written as a linear combination of up to four permutation matrices. However, contrary to the OPMs of order 3, we establish that the set of OPMs of order 4 does not form a group under matrix multiplication. Consequently, we determine chains of certain groups of OPMs of order 4. Then we develop a one-parameter representation of all real and rational OPMs of order 4. Further, we produce an example of an orthogonal matrix of order 4 which is a linear combination of permutation matrices but not permutative, and then we attempt to classify all such matrices by performing a search on the linear spaces of matrices generated by sets of linearly independent permutation matrices. We prove that any linear combination of permutation matrices of order 4 can be written as a linear combination of at most six permutative matrices, each of which is a linear combination of four pairwise Hadamard orthogonal permutation matrices, where two matrices are called Hadamard orthogonal if their Hadamard product is the zero matrix. We also prove that there is no orthogonal matrix that can be expressed as a (non-trivial) linear combination of two distinct permutation matrices of order 4. Next, we prove that any orthogonal matrix that can be expressed as a linear combination of three distinct permutation matrices is always a direct sum of OPMs of orders 3 and 1 up to the permutations of its rows and its columns. Then, fixing a maximal set of linearly independent permutation matrices of order 4, we determine direct sums of matrix spaces, which are generated by certain linearly independent permutation matrices, such that any orthogonal matrix \(A\) in these spaces is always permutative or a direct sum of OPMs up to the permutations of their rows and columns. The paper is organised as follows. In Section 2, we provide a complete classification of complex, real and rational OPMs of order 4 and show that any such matrix is always a linear combination of permutation matrices. Then we derive a one-parameter representation for the real and rational OPMs. In Section 3, we derive matrix spaces generated by permutation matrices such that orthogonal matrices in these spaces are always permutative or direct sums of OPMs up to permutation of rows and columns.
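As a quick numerical sanity check (a sketch, not part of the paper's arguments), one can verify that the Grover diffusion matrix of order 4 is indeed orthogonal and permutative:

```python
import numpy as np

def is_permutative(A):
    """Check that every row of A is a permutation of its first row."""
    first = sorted(A[0])
    return all(sorted(row) == first for row in A)

n = 4
G = (2 / n) * np.ones((n, n)) - np.eye(n)  # Grover diffusion matrix
assert np.allclose(G @ G.T, np.eye(n))     # orthogonality
assert is_permutative(G)                   # permutative property
```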
## 2 Orthogonal permutative matrices of order 4

In this section we characterize the set of all OPMs of order 4 over the fields of complex, real and rational numbers. Recall that a matrix is called permutative if any of its rows is a permutation of any other row [9]. Thus, such a matrix of order \(n\) has \(n\) parameters in its symbolic form. Without loss of generality, a permutative matrix of order 4 can be written in the symbolic form \[A(\mathbf{x};P,Q,R)=\begin{bmatrix}\mathbf{x}\\ \mathbf{x}P\\ \mathbf{x}Q\\ \mathbf{x}R\end{bmatrix} \tag{1}\] where \(\mathbf{x}=\begin{bmatrix}x&y&z&w\end{bmatrix}\) is a symbolic row vector with \(x,y,z,w\in\mathbb{C}\) and \(P,Q,R\in\mathcal{P}_{4}\), the group of permutation matrices of order 4 [7]. Let \(\mathcal{OP}_{4}\) denote the set of all OPMs over the field of complex numbers of order 4. Then obviously \(\mathcal{P}_{4}\subset\mathcal{OP}_{4}\). We denote \[1\oplus\mathcal{P}_{3}=\left\{\begin{bmatrix}1&\mathbf{0}^{T}\\ \mathbf{0}&P\end{bmatrix}\in\mathcal{P}_{4}:P\in\mathcal{P}_{3}\right\},\] where \(\mathbf{0}=\begin{bmatrix}0&0&0\end{bmatrix}^{T}\) and \(\mathcal{P}_{3}\) denotes the group of all permutation matrices of order 3. Then the following theorem characterizes all matrices in \(\mathcal{OP}_{4}\). **Theorem 2.1**.: _A matrix \(A\equiv A(\mathbf{x};P,Q,R)\) given by equation (1) is an OPM if and only if \(A\in\mathcal{X}\cup\mathcal{Y}\cup\mathcal{Z}\) where_ \[\mathcal{X} = \left\{\overline{P}M_{x,z}^{\pm},\overline{P}N_{z,x}^{\pm}:x^{2}+z^{2}\mp z=0,x,z\in\mathbb{C}\right\}\] \[\mathcal{Y} = \left\{\overline{P}P_{(23)}M_{x,z}^{\pm}P_{(23)},\overline{P}P_{(23)}N_{z,x}^{\pm}P_{(23)}:x^{2}+z^{2}\mp z=0,x,z\in\mathbb{C}\right\}\] \[\mathcal{Z} = \left\{\overline{P}P_{(24)}M_{x,z}^{\pm}P_{(24)},\overline{P}P_{(24)}N_{z,x}^{\pm}P_{(24)}:x^{2}+z^{2}\mp z=0,x,z\in\mathbb{C}\right\},\text{ with}\] \[M_{x,z}^{\pm}=\begin{bmatrix}A_{x}&B_{z}^{\pm}\\ B_{z}^{\pm}&-A_{x}\end{bmatrix},N_{x,z}^{\pm}=\begin{bmatrix}B_{x}^{\pm}&A_{z}\\ A_{z}&FB_{x}^{\pm}\end{bmatrix},A_{t}=\begin{bmatrix}t&-t\\ -t&t\end{bmatrix},B_{t}^{\pm}=\begin{bmatrix}t&\pm 1-t\\ \pm 1-t&t\end{bmatrix}\] \[F=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},t\in\{x,z\},\text{ and }\overline{P}\in 1\oplus\mathcal{P}_{3}.\] **Proof:** The 'if' part is obvious and easy to check. To prove the 'only if' part, consider the following cases. First assume that the symbolic OPM \(A\) has no repetition of entries in any of the columns. Besides, since rows and columns are orthogonal, none of \(P,Q,R\) are equal to each other. Then \(A\) can assume one of the following forms: \[\mathsf{X}=\begin{bmatrix}x&y&z&w\\ y&x&w&z\\ z&w&y&x\\ w&z&x&y\end{bmatrix},\ \mathsf{Y}=\begin{bmatrix}x&y&z&w\\ y&z&w&x\\ z&w&x&y\\ w&x&y&z\end{bmatrix},\ \mathsf{Z}=\begin{bmatrix}x&y&z&w\\ y&w&x&z\\ z&x&w&y\\ w&z&y&x\end{bmatrix} \tag{2}\] for some \(x,y,z,w\in\mathbb{C}\).
Then \(\mathsf{X}^{T}\mathsf{X}=I_{4}\) provides the polynomial equations \(x^{2}+y^{2}+z^{2}+w^{2}=1\), \(xy+zw=0\) and \(xz+yw+yz+wx=0\), which further imply \((x+y+z+w)^{2}=1.\) Thus the quadruple \((x,y,z,w)\) must be zeros of the system of polynomial equations \[\begin{cases}x+y+z+w=\pm 1\\ xy+zw=0\\ x^{2}+y^{2}+z^{2}+w^{2}=1\end{cases}\quad\Rightarrow\begin{cases}x+y=0\\ x^{2}+z^{2}\mp z=0\\ z+w\mp 1=0\end{cases}\quad\text{or}\begin{cases}z+w=0\\ x^{2}+z^{2}\mp x=0\\ x+y\mp 1=0.\end{cases} \tag{3}\] Therefore, each of the four systems of equations gives rise to the following set of matrices obeying the pattern \(\mathsf{X}\): \[\mathcal{X}_{1}=\left\{\begin{bmatrix}A_{x}&B_{z}^{+}\\ B_{z}^{+}&-A_{x}\end{bmatrix}:x^{2}+z^{2}-z=0\right\},\qquad\mathcal{X}_{2}=\left\{\begin{bmatrix}A_{x}&B_{z}^{-}\\ B_{z}^{-}&-A_{x}\end{bmatrix}:x^{2}+z^{2}+z=0\right\},\] \[\mathcal{X}_{3}=\left\{\begin{bmatrix}B_{x}^{+}&A_{z}\\ A_{z}&FB_{x}^{+}\end{bmatrix}:x^{2}+z^{2}-x=0\right\},\qquad\mathcal{X}_{4}=\left\{\begin{bmatrix}B_{x}^{-}&A_{z}\\ A_{z}&FB_{x}^{-}\end{bmatrix}:x^{2}+z^{2}+x=0\right\}. \tag{4}\] Hence, \[\mathcal{X}_{1}=\{M_{x,z}^{+}:x^{2}+z^{2}-z=0\},\ \mathcal{X}_{2}=\{M_{x,z}^{-}:x^{2}+z^{2}+z=0\},\] \[\mathcal{X}_{3}=\{N_{x,z}^{+}:x^{2}+z^{2}-x=0\},\ \mathcal{X}_{4}=\{N_{x,z}^{-}:x^{2}+z^{2}+x=0\},\] where \(M_{x,z}^{\pm},N_{x,z}^{\pm},A_{t},B_{t},t\in\{x,z\}\) are defined in the statement of the theorem. Similarly, the set of polynomial equations given by \(\mathsf{Y}^{T}\mathsf{Y}=I_{4}\) is \[\begin{cases}x+y+z+w=\pm 1\\ xz+yw=0\\ x^{2}+y^{2}+z^{2}+w^{2}=1\end{cases}\Rightarrow\begin{cases}x+z=0\\ x^{2}+y^{2}\mp y=0\\ y+w\mp 1=0\end{cases}\quad\text{or}\begin{cases}y+w=0\\ x^{2}+y^{2}\mp x=0\\ x+z\mp 1=0.\end{cases} \tag{5}\] Thus each of the four systems of equations gives rise to the following sets of matrices obeying the pattern \(\mathsf{Y}\): \[\mathcal{Y}_{1} = \left\{P_{(23)}\begin{bmatrix}A_{x}&B_{y}^{+}\\ B_{y}^{+}&-A_{x}\end{bmatrix}P_{(23)}:x^{2}+y^{2}-y=0\right\},\] \[\mathcal{Y}_{2} = \left\{P_{(23)}\begin{bmatrix}A_{x}&B_{y}^{-}\\ B_{y}^{-}&-A_{x}\end{bmatrix}P_{(23)}:x^{2}+y^{2}+y=0\right\},\] \[\mathcal{Y}_{3} = \left\{P_{(23)}\begin{bmatrix}B_{x}^{+}&A_{y}\\ A_{y}&FB_{x}^{+}\end{bmatrix}P_{(23)}:x^{2}+y^{2}-x=0\right\},\] \[\mathcal{Y}_{4} = \left\{P_{(23)}\begin{bmatrix}B_{x}^{-}&A_{y}\\ A_{y}&FB_{x}^{-}\end{bmatrix}P_{(23)}:x^{2}+y^{2}+x=0\right\}. \tag{6}\] Note that \(A=P_{(23)}M_{x,y}^{+}P_{(23)}\) if \(A\in\mathcal{Y}_{1}\); \(A=P_{(23)}M_{x,y}^{-}P_{(23)}\) if \(A\in\mathcal{Y}_{2}\); \(A=P_{(23)}N_{x,y}^{+}P_{(23)}\) if \(A\in\mathcal{Y}_{3}\); and \(A=P_{(23)}N_{x,y}^{-}P_{(23)}\) if \(A\in\mathcal{Y}_{4}\), where \((x,y)\) satisfies the respective constraint as given in equation (6).
Finally, the orthogonality condition \(\mathsf{Z}^{T}\mathsf{Z}=I_{4}\) provides the system of polynomial equations \[\begin{cases}x+y+z+w=\pm 1\\ xw+yz=0\\ x^{2}+y^{2}+z^{2}+w^{2}=1\end{cases}\Rightarrow\begin{cases}x+w=0\\ x^{2}+y^{2}\mp y=0\\ y+z\mp 1=0\end{cases}\quad\text{or}\begin{cases}y+z=0\\ x^{2}+y^{2}\mp x=0\\ x+w\mp 1=0.\end{cases} \tag{7}\] Thus each of the four systems of equations gives rise to one of the following sets of matrices obeying the pattern \(\mathsf{Z}\): \[\mathcal{Z}_{1}=\left\{P_{(24)}\begin{bmatrix}A_{x}&B_{y}^{+}\\ B_{y}^{+}&-A_{x}\end{bmatrix}P_{(24)}:x^{2}+y^{2}-y=0\right\}, \quad\mathcal{Z}_{2}=\left\{P_{(24)}\begin{bmatrix}A_{x}&B_{y}^{-}\\ B_{y}^{-}&-A_{x}\end{bmatrix}P_{(24)}:x^{2}+y^{2}+y=0\right\},\] \[\mathcal{Z}_{3}=\left\{P_{(24)}\begin{bmatrix}B_{x}^{+}&A_{y}\\ A_{y}&FB_{x}^{+}\end{bmatrix}P_{(24)}:x^{2}+y^{2}-x=0\right\}, \quad\mathcal{Z}_{4}=\left\{P_{(24)}\begin{bmatrix}B_{x}^{-}&A_{y}\\ A_{y}&FB_{x}^{-}\end{bmatrix}P_{(24)}:x^{2}+y^{2}+x=0\right\}. \tag{8}\] Besides, \(A=P_{(24)}M_{x,y}^{+}P_{(24)}\) if \(A\in\mathcal{Z}_{1}\); \(A=P_{(24)}M_{x,y}^{-}P_{(24)}\) if \(A\in\mathcal{Z}_{2}\); \(A=P_{(24)}N_{x,y}^{+}P_{(24)}\) if \(A\in\mathcal{Z}_{3}\); and \(A=P_{(24)}N_{x,y}^{-}P_{(24)}\) if \(A\in\mathcal{Z}_{4}\). Next, consider the symbolic OPMs in which one entry is repeated in at least one column, i.e. the case when \(P,Q,R\) are not chosen from any of the collections \[\{P_{(12)(34)},P_{(1324)},P_{(1423)}\},\{P_{(13)(24)},P_{(1234)},P_{(1432)}\}, \{P_{(14)(23)},P_{(1243)},P_{(1342)}\}.\] Then it follows that any such matrix \(A\) belongs to one of the sets \(\mathcal{X}=\cup_{k=1}^{4}\mathcal{X}_{k},\mathcal{Y}=\cup_{k=1}^{4}\mathcal{Y}_{k},\mathcal{Z}=\cup_{k=1}^{4}\mathcal{Z}_{k}\) for certain values of \(x,y,z,w\). In particular, a straightforward calculation shows that any symbolic OPM whose entries do not follow the pattern of entries of \(\mathsf{X},\mathsf{Y},\mathsf{Z}\) is of the form \(\pm P_{\tau}\) or \(\pm(\frac{1}{2}J_{4}-P_{\tau})\) for some permutation \(\tau\), where \(J_{4}=\mathbf{1}_{4}\mathbf{1}_{4}^{T}.\) Then the desired result follows from the fact that permuting the rows preserves both the orthogonality and the permutative property of a matrix. \(\square\) The following corollary provides the determinant of all OPMs of order \(4\). **Corollary 2.2**.: _Let \(A\in\mathcal{OP}_{4}.\) Then \(\det(A)=1\) if \(A\in\mathcal{X}_{j}\cup\mathcal{Y}_{j}\cup\mathcal{Z}_{j},j=1,2\), and \(\det(A)=-1\) if \(A\in\mathcal{X}_{j}\cup\mathcal{Y}_{j}\cup\mathcal{Z}_{j},j=3,4,\) where \(\mathcal{X}_{j},\mathcal{Y}_{j},\mathcal{Z}_{j},j=1,2,3,4\) are given by equations (4), (6), (8) respectively._ **Proof:** From (4) we obtain, \[\det(A)=\begin{cases}4(x^{2}+z^{2}-z)+1\,\text{if}\,A\in\mathcal{X}_{1}\\ 4(x^{2}+z^{2}+z)+1\,\text{if}\,A\in\mathcal{X}_{2}\\ -4(x^{2}+z^{2}-x)-1\,\text{if}\,A\in\mathcal{X}_{3}\\ -4(x^{2}+z^{2}+x)-1\,\text{if}\,A\in\mathcal{X}_{4}.\end{cases}\] Then employing the conditions on the variables \(x,z\) which define the sets of matrices \(\mathcal{X}_{j},j=1,2,3,4,\) the desired result follows. Similarly, the desired results for \(\mathcal{Y}_{j},\mathcal{Z}_{j}\)\(j=1,2,3,4\) follow from equations (6) and (8). \(\square\)
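The computations behind Theorem 2.1 and Corollary 2.2 are easy to reproduce symbolically. The following sketch is our own illustration (assuming SymPy is available; it is not part of the original argument): it checks that \(M^{+}_{x,z}\) and \(N^{+}_{x,z}\) fail to be orthogonal exactly by a multiple of \(x^{2}+z^{2}-z\) and \(x^{2}+z^{2}-x\) respectively, and recovers the determinant formulas of Corollary 2.2.

```python
import sympy as sp

x, z = sp.symbols('x z')

A_blk = lambda t: sp.Matrix([[t, -t], [-t, t]])           # block A_t
B_blk = lambda t: sp.Matrix([[t, 1 - t], [1 - t, t]])     # block B_t^+
F = sp.Matrix([[0, 1], [1, 0]])

# M^+_{x,z} and N^+_{x,z} as in Theorem 2.1
M = sp.Matrix(sp.BlockMatrix([[A_blk(x), B_blk(z)],
                              [B_blk(z), -A_blk(x)]]))
N = sp.Matrix(sp.BlockMatrix([[B_blk(x), A_blk(z)],
                              [A_blk(z), F * B_blk(x)]]))

# Every entry of M^T M - I is a multiple of x^2 + z^2 - z:
print({sp.factor(e) for e in sp.expand(M.T * M - sp.eye(4))})
# det M = 4*(x^2 + z^2 - z) + 1, hence det M = 1 on the curve:
print(sp.expand(M.det()))

# Every entry of N^T N - I is a multiple of x^2 + z^2 - x:
print({sp.factor(e) for e in sp.expand(N.T * N - sp.eye(4))})
# det N = -4*(x^2 + z^2 - x) - 1, hence det N = -1 on the curve:
print(sp.expand(N.det()))
```

Now in the following remark we provide a characterization of OPMs of order \(4\) in terms of linear combinations of permutation matrices.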
**Remark 2.3**.: _Any OPM \(A\) of order \(4\) can be written as a linear combination of permutation matrices as follows:_ \[\overline{P}^{T}A=\begin{cases}xP_{(34)}+yP_{(12)}+zP_{(13)(24)}+wP_{(14)(23) },\,\,(x,y,z,w)\,\text{satisfies equation}\,(3)\,\text{when}\,\,A\in\mathcal{X}\\ xP_{(24)}+yP_{(12)(34)}+zP_{(13)}+wP_{(14)(23)},\,\,(x,y,z,w)\,\text{satisfies equation}\,(5)\,\text{when}\,\,A\in\mathcal{Y}\\ xP_{(23)}+yP_{(12)(34)}+zP_{(13)(24)}+wP_{(14)},\,\,(x,y,z,w)\,\text{satisfies equation}\,(7)\,\text{when}\,\,A\in\mathcal{Z}\end{cases} \tag{9}\] _and \(\overline{P}\in 1\oplus\mathcal{P}_{3}\)._ Let us emphasize that one of the motivations for the characterization of OPMs is to generalize the Grover matrix, which is used to define Grover quantum walks [4, 12, 13]. The Grover matrix of order \(4\) is given by \[G=\begin{bmatrix}-\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&-\frac{1}{2}&\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}&-\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&-\frac{1}{2}\end{bmatrix}. \tag{10}\] Then it can be seen that \[G=\begin{cases}P_{(34)}\left(xP_{(34)}+yP_{(12)}+zP_{(13)(24)}+wP_{(14)(23)} \right)\in\mathcal{X}\\ P_{(24)}\left(xP_{(24)}+yP_{(12)(34)}+zP_{(13)}+wP_{(14)(23)}\right)\in\mathcal{ Y}\\ P_{(23)}\left(xP_{(23)}+yP_{(12)(34)}+zP_{(13)(24)}+wP_{(14)}\right)\in\mathcal{Z} \end{cases}\] where \(x=-\frac{1}{2}\) and \(y=z=w=\frac{1}{2}.\) In particular, it follows that \[G\in P_{(34)}\mathcal{X}_{1}\cap P_{(24)}\mathcal{Y}_{1}\cap P_{(23)} \mathcal{Z}_{1} \tag{11}\] where \[\mathcal{X}_{1} = \left\{\begin{bmatrix}x&-x&z&1-z\\ -x&x&1-z&z\\ z&1-z&-x&x\\ 1-z&z&x&-x\end{bmatrix}:x^{2}+z^{2}-z=0\right\},\] \[\mathcal{Y}_{1} = \left\{\begin{bmatrix}x&y&-x&1-y\\ y&-x&1-y&x\\ -x&1-y&x&y\\ 1-y&x&y&-x\end{bmatrix}:x^{2}+y^{2}-y=0\right\},\] \[\mathcal{Z}_{1} = \left\{\begin{bmatrix}x&y&1-y&-x\\ y&-x&x&1-y\\ 1-y&x&-x&y\\ -x&1-y&y&x\end{bmatrix}:x^{2}+y^{2}-y=0\right\}.\] Thus the real matrices in \(P_{(34)}\mathcal{X}_{1},P_{(24)}\mathcal{Y}_{1},P_{(23)}\mathcal{Z}_{1}\) can be considered as continuous deformations of the Grover matrix, and hence Grover walks can be generalized by considering the coin operators as matrices from these sets. The localization property of such quantum walks on two-dimensional lattices is analyzed in [8]. 
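These membership claims are easy to verify numerically. The sketch below is our own illustration (assuming NumPy; the permutation matrices are hard-coded from their cycle notation): it checks that \(G\) is orthogonal, permutative, and equal to \(P_{(34)}\left(xP_{(34)}+yP_{(12)}+zP_{(13)(24)}+wP_{(14)(23)}\right)\) with \(x=-\frac{1}{2}\), \(y=z=w=\frac{1}{2}\).

```python
import numpy as np

def perm_matrix(p):
    """4x4 permutation matrix with a 1 in position (i, p[i]) (0-indexed)."""
    m = np.zeros((4, 4))
    for i, j in enumerate(p):
        m[i, j] = 1
    return m

# One-line images of the cycles used above (0-indexed); all are involutions.
P34 = perm_matrix([0, 1, 3, 2])        # (34)
P12 = perm_matrix([1, 0, 2, 3])        # (12)
P13_24 = perm_matrix([2, 3, 0, 1])     # (13)(24)
P14_23 = perm_matrix([3, 2, 1, 0])     # (14)(23)

G = 0.5 * np.ones((4, 4)) - np.eye(4)  # the Grover matrix (10)

x, y, z, w = -0.5, 0.5, 0.5, 0.5
S = x * P34 + y * P12 + z * P13_24 + w * P14_23
assert np.allclose(P34 @ S, G)                 # G = P_(34) * S
assert np.allclose(G.T @ G, np.eye(4))         # orthogonality
# permutative: every row is a permutation of the first row
assert all(sorted(row) == sorted(G[0]) for row in G)
```

Now we provide a list of chains of groups of OPMs of order \(4\) in the following theorem.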
**Theorem 2.4**.: _The following are chains of groups of complex orthogonal matrices._

_1._ \(\begin{array}{l}\{I\}\leq P_{(34)}\mathcal{X}_{3}\leq P_{(34)}\mathcal{X}_{3} \cup P_{(34)}\mathcal{X}_{j}\leq P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{ X}_{j}\cup\mathcal{X}_{3}\cup\mathcal{X}_{j}\leq\mathcal{O}_{4},\\ \{I\}\leq P_{(34)}\mathcal{X}_{3}\leq P_{(34)}\mathcal{X}_{3}\cup\mathcal{X}_ {j}\leq P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\cup\mathcal{X}_{3} \cup\mathcal{X}_{j}\leq\mathcal{O}_{4}\end{array}\)

_2._ \(\begin{array}{l}\{I\}\leq P_{(24)}\mathcal{Y}_{3}\leq P_{(24)}\mathcal{Y}_{3} \cup P_{(24)}\mathcal{Y}_{j}\leq P_{(24)}\mathcal{Y}_{3}\cup P_{(24)}\mathcal{ Y}_{j}\cup\mathcal{Y}_{3}\cup\mathcal{Y}_{j}\leq\mathcal{O}_{4},\\ \{I\}\leq P_{(24)}\mathcal{Y}_{3}\leq P_{(24)}\mathcal{Y}_{3}\cup\mathcal{Y}_{j }\leq P_{(24)}\mathcal{Y}_{3}\cup P_{(24)}\mathcal{Y}_{j}\cup\mathcal{Y}_{3} \cup\mathcal{Y}_{j}\leq\mathcal{O}_{4}\end{array}\)

_3._ \(\begin{array}{l}\{I\}\leq P_{(23)}\mathcal{Z}_{3}\leq P_{(23)}\mathcal{Z}_{3} \cup P_{(23)}\mathcal{Z}_{j}\leq P_{(23)}\mathcal{Z}_{3}\cup P_{(23)}\mathcal{ Z}_{j}\cup\mathcal{Z}_{3}\cup\mathcal{Z}_{j}\leq\mathcal{O}_{4},\\ \{I\}\leq P_{(23)}\mathcal{Z}_{3}\leq P_{(23)}\mathcal{Z}_{3}\cup\mathcal{Z}_{j }\leq P_{(23)}\mathcal{Z}_{3}\cup P_{(23)}\mathcal{Z}_{j}\cup\mathcal{Z}_{3} \cup\mathcal{Z}_{j}\leq\mathcal{O}_{4}\end{array}\)

_where \(j=1,2,3,4\) and \(\mathcal{O}_{4}\) denotes the group of complex orthogonal matrices of order \(4.\)_ **Proof:** First we prove that \(P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\cup\mathcal{X}_{3}\cup \mathcal{X}_{j}\) are complex orthogonal matrix groups for \(j=1,2,3,4\). Clearly \(I\in P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\cup\mathcal{X}_{3} \cup\mathcal{X}_{j}\). If \(A\in P_{(34)}\mathcal{X}_{3}\) then \(A^{T}\in P_{(34)}\mathcal{X}_{3}\) follows by exchanging the roles of \(z\) and \(-z\). Similarly, \(A^{T}\in P_{(34)}\mathcal{X}_{j}\) if \(A\in P_{(34)}\mathcal{X}_{j}\) for \(j=1,\ldots,4\). Since \(\mathcal{X}_{j}\) and \(\mathcal{X}_{3}\) consist of complex symmetric matrices, obviously \(A^{T}\in\mathcal{X}_{j}\) if \(A\in\mathcal{X}_{j}\), and \(A^{T}\in\mathcal{X}_{3}\) if \(A\in\mathcal{X}_{3}\). Hence, \(P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\cup\mathcal{X}_{3}\cup \mathcal{X}_{j},j=1,\ldots,4\) is closed under inverses. Let \(A=x_{1}I+y_{1}P_{(12)(34)}+z_{1}P_{(1324)}+w_{1}P_{(1423)},B=x_{2}I+y_{2}P_{(12 )(34)}+z_{2}P_{(1324)}+w_{2}P_{(1423)}\in P_{(34)}\mathcal{X}_{3}\cup P_{(34)} \mathcal{X}_{j},j=1,\ldots,4\), where \((x_{i},y_{i},z_{i},w_{i}),i=1,2\) satisfy the equation given by (3) accordingly. Then \(AB=x_{3}I+y_{3}P_{(12)(34)}+z_{3}P_{(1324)}+w_{3}P_{(1423)}\), where \[x_{3} =x_{1}x_{2}+y_{1}y_{2}+z_{1}w_{2}+w_{1}z_{2},y_{3}=x_{1}y_{2}+y_{1 }x_{2}+z_{1}z_{2}+w_{1}w_{2},\] \[z_{3} =x_{1}z_{2}+y_{1}w_{2}+z_{1}x_{2}+w_{1}y_{2},w_{3}=x_{1}w_{2}+y_{1 }z_{2}+z_{1}y_{2}+w_{1}x_{2}.\] Now note that \(x_{j}+y_{j}+z_{j}+w_{j}=1\), if \(A\) or \(B\in P_{(34)}\mathcal{X}_{1}\cup P_{(34)}\mathcal{X}_{3}\), and \(x_{j}+y_{j}+z_{j}+w_{j}=-1\) if \(A\) or \(B\in P_{(34)}\mathcal{X}_{2}\cup P_{(34)}\mathcal{X}_{4},j=1,2\). Thus \(x_{3}+y_{3}+z_{3}+w_{3}=(x_{1}+y_{1}+z_{1}+w_{1})(x_{2}+y_{2}+z_{2}+w_{2})\) yields \(x_{3}+y_{3}+z_{3}+w_{3}=1\) if \(A,B\in P_{(34)}\mathcal{X}_{1}\cup P_{(34)}\mathcal{X}_{3}\) or \(P_{(34)}\mathcal{X}_{2}\cup P_{(34)}\mathcal{X}_{4}\), and \(x_{3}+y_{3}+z_{3}+w_{3}=-1\) otherwise. 
Now \[x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=\begin{bmatrix}x_{1}\\ y_{1}\\ z_{1}\\ w_{1}\end{bmatrix}^{T}\begin{bmatrix}a&b&c&c\\ b&a&c&c\\ c&c&a&b\\ c&c&b&a\end{bmatrix}\begin{bmatrix}x_{1}\\ y_{1}\\ z_{1}\\ w_{1}\end{bmatrix},\] where \(a=x_{2}^{2}+y_{2}^{2}+z_{2}^{2}+w_{2}^{2}=1,b=2x_{2}y_{2}+2z_{2}w_{2}=0\) and \(c=(x_{2}+y_{2})(z_{2}+w_{2})=0\) hold, since \(B\) is orthogonal, i.e. \((x_{2},y_{2},z_{2},w_{2})\) satisfies \(x_{2}^{2}+y_{2}^{2}+z_{2}^{2}+w_{2}^{2}=1,x_{2}y_{2}+z_{2}w_{2}=0,x_{2}z_{2}+x_{2}w_{2}+y_{2}z_{2}+y_{2}w_{2}=0\). Consequently, \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=x_{1}^{2}+y_{1}^{2}+z_{1}^{2}+w_{1}^{2}=1\). Also, \[\begin{split}x_{3}+y_{3} =(x_{1}+y_{1})(x_{2}+y_{2})+(z_{1}+w_{1})(z_{2}+w_{2}) \\ z_{3}+w_{3} =(x_{1}+y_{1})(z_{2}+w_{2})+(z_{1}+w_{1})(x_{2}+y_{2}).\end{split} \tag{12}\] First, if \(A,B\in P_{(34)}\mathcal{X}_{1}\), then \(x_{i}+y_{i}=0,z_{i}+w_{i}=1\) for \(i=1,2,\) so from (12) we have \(x_{3}+y_{3}=1,z_{3}+w_{3}=0\), where \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1\), and thus by (4) we get \(AB\in P_{(34)}\mathcal{X}_{3}\). If \(A\in P_{(34)}\mathcal{X}_{1}\) and \(B\in P_{(34)}\mathcal{X}_{3}\), then clearly \(x_{1}+y_{1}=z_{2}+w_{2}=0,x_{2}+y_{2}=z_{1}+w_{1}=1\). Hence we get \(x_{3}+y_{3}=0,z_{3}+w_{3}=1,\) and thus by (4) we have \(AB\in P_{(34)}\mathcal{X}_{1}.\) The other cases can be done similarly. Next, let \(A=x_{1}P_{(34)}+y_{1}P_{(12)}+z_{1}P_{(13)(24)}+w_{1}P_{(14)(23)},B=x_{2}P_{(34)}+ y_{2}P_{(12)}+z_{2}P_{(13)(24)}+w_{2}P_{(14)(23)}\in\mathcal{X}_{3}\cup\mathcal{X}_{j}\) where \((x_{i},y_{i},z_{i},w_{i}),i=1,2\) satisfies the equations in (3) and \(j=1,\ldots,4\). Then \(AB=x_{3}I+y_{3}P_{(12)(34)}+z_{3}P_{(1324)}+w_{3}P_{(1423)},\) where \[x_{3} =x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}+w_{1}w_{2},y_{3}=x_{1}y_{2}+y_{1 }x_{2}+z_{1}w_{2}+w_{1}z_{2},\] \[z_{3} =x_{1}z_{2}+y_{1}w_{2}+z_{1}y_{2}+w_{1}x_{2},w_{3}=x_{1}w_{2}+y_{1 }z_{2}+z_{1}x_{2}+w_{1}y_{2}.\] Note that \(x_{j}+y_{j}+z_{j}+w_{j}=1\) if \(A\) or \(B\in\mathcal{X}_{1}\cup\mathcal{X}_{3}\), and \(x_{j}+y_{j}+z_{j}+w_{j}=-1\) if \(A\) or \(B\in\mathcal{X}_{2}\cup\mathcal{X}_{4},j=1,2.\) Hence \(x_{3}+y_{3}+z_{3}+w_{3}=1\) if \(A,B\in\mathcal{X}_{1}\cup\mathcal{X}_{3}\) or \(\mathcal{X}_{2}\cup\mathcal{X}_{4},\) and \(x_{3}+y_{3}+z_{3}+w_{3}=-1\) otherwise, since \(x_{3}+y_{3}+z_{3}+w_{3}=(x_{1}+y_{1}+z_{1}+w_{1})(x_{2}+y_{2}+z_{2}+w_{2}).\) Then as above, \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1.\) Further, \(x_{3}+y_{3}\) and \(z_{3}+w_{3}\) have the same expressions as those given in (12). 
If \(A,B\in\mathcal{X}_{2},\) then \(x_{1}+y_{1}=x_{2}+y_{2}=0,w_{1}+z_{1}=w_{2}+z_{2}=-1.\) So that \(x_{3}+y_{3}=1\) and \(z_{3}+w_{3}=0.\) Finally, \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1\) implies \(x_{3}^{2}+z_{3}^{2}-x_{3}=0.\) Thus \(AB\in P_{(34)}\mathcal{X}_{3}.\) Now if \(A\in\mathcal{X}_{2}\) and \(B\in\mathcal{X}_{3},\) then \(x_{1}+y_{1}=z_{2}+w_{2}=0,\) \(z_{1}+w_{1}=-1\) and \(x_{2}+y_{2}=1,\) which yield \(x_{3}+y_{3}=0,z_{3}+w_{3}=-1.\) Now \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1\) implies \(x_{3}^{2}+z_{3}^{2}+z_{3}=0.\) Thus by (4) we have \(AB\in P_{(34)}\mathcal{X}_{2}.\) The other cases can be done similarly, and we obtain \(AB\in P_{(34)}\mathcal{X}_{3}\) if \(A,B\in\mathcal{X}_{j},\) and \(AB\in P_{(34)}\mathcal{X}_{j}\) if exactly one of \(A,B\) is in \(\mathcal{X}_{j}\) and the other is in \(\mathcal{X}_{3}\), \(j=1,\ldots,4.\) Thus finally \(AB\in P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\) for \(j=1,\ldots,4.\) Now suppose \(A=x_{1}I+y_{1}P_{(12)(34)}+z_{1}P_{(1324)}+w_{1}P_{(1423)}\in P_{(34)}\mathcal{ X}_{j}\) and \(B=x_{2}P_{(34)}+y_{2}P_{(12)}+z_{2}P_{(13)(24)}+w_{2}P_{(14)(23)}\in\mathcal{X}_{k}\), \(j,k\in\{1,\ldots,4\}\) and \((x_{i},y_{i},z_{i},w_{i}),i=1,2\) are given by equation (3) accordingly. Then \(AB=x_{3}P_{(34)}+y_{3}P_{(12)}+z_{3}P_{(13)(24)}+w_{3}P_{(14)(23)},\) where \[x_{3} =x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}+w_{1}w_{2},y_{3}=x_{1}y_{2}+y_{1 }x_{2}+z_{1}w_{2}+w_{1}z_{2},\] \[z_{3} =x_{1}z_{2}+y_{1}w_{2}+z_{1}y_{2}+w_{1}x_{2},w_{3}=x_{1}w_{2}+y_{1 }z_{2}+z_{1}x_{2}+w_{1}y_{2}.\] Thus by arguments similar to the above two cases, we obtain: \(x_{3}+y_{3}+z_{3}+w_{3}=1\) if \(A\in P_{(34)}\mathcal{X}_{1}\cup P_{(34)}\mathcal{X}_{3},B\in\mathcal{X}_{1} \cup\mathcal{X}_{3}\) or \(A\in P_{(34)}\mathcal{X}_{2}\cup P_{(34)}\mathcal{X}_{4},B\in\mathcal{X}_{2} \cup\mathcal{X}_{4},\) and \(x_{3}+y_{3}+z_{3}+w_{3}=-1\) otherwise. Then it can be checked that \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1\) and the expressions for \(x_{3}+y_{3}\) and \(z_{3}+w_{3}\) are given by (12). Further, if \(A\in P_{(34)}\mathcal{X}_{1}\) and \(B\in\mathcal{X}_{2}\), then \(x_{1}+y_{1}=x_{2}+y_{2}=0,w_{1}+z_{1}=-(w_{2}+z_{2})=1.\) So that \(x_{3}+y_{3}=-1\) and \(z_{3}+w_{3}=0.\) Finally, \(x_{3}^{2}+y_{3}^{2}+z_{3}^{2}+w_{3}^{2}=1\) yields \(x_{3}^{2}+z_{3}^{2}+x_{3}=0.\) Hence by (4) we have \(AB\in\mathcal{X}_{4}.\) Similarly, the other cases follow and we get \(AB\in\mathcal{X}_{3}\) if \(A\in P_{(34)}\mathcal{X}_{j}\) and \(B\in\mathcal{X}_{j};AB\in \mathcal{X}_{4}\) if \(A\in P_{(34)}\mathcal{X}_{j}\) and \(B\in\mathcal{X}_{k},j\neq k,(j,k)\in\{(1,2),(2,1),(3,4),(4,3)\};AB\in \mathcal{X}_{1}\) if \(A\in P_{(34)}\mathcal{X}_{j}\) and \(B\in\mathcal{X}_{k},j\neq k,(j,k)\in\{(1,3),(3,1),(2,4),(4,2)\};\) and \(AB\in\mathcal{X}_{2}\) if \(A\in P_{(34)}\mathcal{X}_{j}\) and \(B\in\mathcal{X}_{k},j\neq k,(j,k)\in\{(1,4),(4,1),(2,3),(3,2)\}.\) Thus considering all the above cases we conclude that \(P_{(34)}\mathcal{X}_{3},P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j},P_{(3 4)}\mathcal{X}_{3}\cup\mathcal{X}_{j}\) and \(P_{(34)}\mathcal{X}_{3}\cup P_{(34)}\mathcal{X}_{j}\cup\mathcal{X}_{3}\cup \mathcal{X}_{j},j=1,\ldots,4\) are groups with respect to matrix multiplication. 
Now let \(G\) represent any matrix group from the chain of groups corresponding to \(\mathcal{X}_{k}.\) By Theorem 2.1 we observe that if \(D\in\mathcal{X}_{k}\) then there exist \(B\in\mathcal{Y}_{k}\) and \(C\in\mathcal{Z}_{k}\) such that \(B=P_{(23)}DP_{(23)}\) and \(C=P_{(24)}DP_{(24)},\) \(k=1,\ldots,4.\) Thus the chains corresponding to \(\mathcal{Y}_{3}\) and \(\mathcal{Z}_{3}\) follow from the observation that \(f:G\to G^{\prime}\) is a group isomorphism, defined by \(f(M)=P_{(23)}MP_{(23)}\) or \(f(M)=P_{(24)}MP_{(24)}\) when \(G^{\prime}\) is the corresponding group in the chains for \(\mathcal{Y}_{k}\) or \(\mathcal{Z}_{k}\) respectively. \(\square\)

The following example shows that the product of OPMs taken from different families need not be an OPM.

**Example 2.5**.: _Consider_ \[A=\begin{bmatrix}\frac{2}{5}&-\frac{2}{5}&\frac{4}{5}&\frac{1}{5}\\ -\frac{2}{5}&\frac{2}{5}&\frac{1}{5}&\frac{4}{5}\\ \frac{4}{5}&\frac{1}{5}&-\frac{2}{5}&\frac{2}{5}\\ \frac{1}{5}&\frac{4}{5}&\frac{2}{5}&-\frac{2}{5}\end{bmatrix}\in\mathcal{X}_{1} \text{ and }B=\begin{bmatrix}\frac{\sqrt{2}}{3}&\frac{2}{3}&-\frac{\sqrt{2}}{3}&\frac{1}{ 3}\\ \frac{2}{3}&-\frac{\sqrt{2}}{3}&\frac{1}{3}&\frac{\sqrt{2}}{3}\\ -\frac{\sqrt{2}}{3}&\frac{1}{3}&\frac{\sqrt{2}}{3}&\frac{2}{3}\\ \frac{1}{3}&\frac{\sqrt{2}}{3}&\frac{2}{3}&-\frac{\sqrt{2}}{3}\end{bmatrix} \in\mathcal{Y}_{1}.\] _Then a direct computation gives_ \[AB=\begin{bmatrix}-\frac{1}{5}-\frac{2\sqrt{2}}{15}&\frac{8}{15}+\frac{\sqrt{2}}{5}&\frac{2\sqrt{2}}{15}&\frac{2}{3}-\frac{\sqrt{2}}{5}\\ \frac{8}{15}-\frac{\sqrt{2}}{5}&-\frac{1}{5}+\frac{2\sqrt{2}}{15}&\frac{2}{3}+\frac{\sqrt{2}}{5}&-\frac{2\sqrt{2}}{15}\\ \frac{4}{15}+\frac{2\sqrt{2}}{5}&\frac{2}{5}+\frac{\sqrt{2}}{15}&\frac{1}{3}-\frac{2\sqrt{2}}{5}&-\frac{\sqrt{2}}{15}\\ \frac{2}{5}-\frac{\sqrt{2}}{15}&\frac{4}{15}-\frac{2\sqrt{2}}{5}&\frac{\sqrt{2}}{15}&\frac{1}{3}+\frac{2\sqrt{2}}{5}\end{bmatrix}\not\in\mathcal{OP}_{4}.\]

Then the following corollary describes all real OPMs.

**Corollary 2.6**.: _(Characterization of real OPMs) Under the assumptions and notations of Theorem 2.1, a matrix \(A\in\mathcal{X}\cup\mathcal{Y}\cup\mathcal{Z}\) where_ \[\mathcal{X} = \left\{\overline{P}M^{s}_{x,z},\overline{P}N^{s}_{z,x}:s=\pm\right\}\] \[\mathcal{Y} = \left\{\overline{P}P_{(23)}M^{s}_{x,z}P_{(23)},\overline{P}P_{(2 3)}N^{s}_{z,x}P_{(23)}:s=\pm\right\}\] \[\mathcal{Z} = \left\{\overline{P}P_{(24)}M^{s}_{x,z}P_{(24)},\overline{P}P_{(2 4)}N^{s}_{z,x}P_{(24)}:s=\pm\right\}\] _is a real OPM if and only if \(x=\pm\sqrt{z(1-z)},0\leq z\leq 1\) for \(s=+,\) and \(x=\pm\sqrt{-z(1+z)},-1\leq z\leq 0\) for \(s=-\)._ Observe that the parametric curves which define the real OPMs as given by Corollary 2.6 are \(x^{2}+z^{2}+rz=0\), \(r\in\{1,-1\}.\) Then, \[(x,z)=\left(\frac{1}{2}\sin\theta,-\frac{r}{2}(1-r\cos\theta)\right),\ -\pi\leq \theta\leq\pi\] provides a one-parameter trigonometric parametrization of these curves. 
In particular, from equation (11), trigonometric parametrizations of the continuous deformations of the Grover matrix of order 4 can be obtained from the trigonometric parametrizations of the sets of OPMs \(\mathcal{X}_{1},\mathcal{Y}_{1},\mathcal{Z}_{1}\) given by \[(\mathcal{X}_{1})_{\theta}=\left\{\begin{bmatrix}\frac{1}{2}\sin\theta&-\frac{ 1}{2}\sin\theta&\frac{1}{2}(1+\cos\theta)&\frac{1}{2}(1-\cos\theta)\\ -\frac{1}{2}\sin\theta&\frac{1}{2}\sin\theta&\frac{1}{2}(1-\cos\theta)&\frac{1 }{2}(1+\cos\theta)\\ \frac{1}{2}(1+\cos\theta)&\frac{1}{2}(1-\cos\theta)&-\frac{1}{2}\sin\theta& \frac{1}{2}\sin\theta\\ \frac{1}{2}(1-\cos\theta)&\frac{1}{2}(1+\cos\theta)&\frac{1}{2}\sin\theta&- \frac{1}{2}\sin\theta\end{bmatrix}:\theta\in[-\pi,\pi]\right\}, \tag{13}\] \[(\mathcal{Y}_{1})_{\theta}=\left\{\begin{bmatrix}\frac{1}{2}\sin\theta&\frac{ 1}{2}(1+\cos\theta)&-\frac{1}{2}\sin\theta&\frac{1}{2}(1-\cos\theta)\\ \frac{1}{2}(1+\cos\theta)&-\frac{1}{2}\sin\theta&\frac{1}{2}(1-\cos\theta)& \frac{1}{2}\sin\theta\\ -\frac{1}{2}\sin\theta&\frac{1}{2}(1-\cos\theta)&\frac{1}{2}\sin\theta&\frac{1 }{2}(1+\cos\theta)\\ \frac{1}{2}(1-\cos\theta)&\frac{1}{2}\sin\theta&\frac{1}{2}(1+\cos\theta)&- \frac{1}{2}\sin\theta\end{bmatrix}:\theta\in[-\pi,\pi]\right\}, \tag{14}\] \[(\mathcal{Z}_{1})_{\theta}=\left\{\begin{bmatrix}\frac{1}{2}\sin\theta&\frac{ 1}{2}(1+\cos\theta)&\frac{1}{2}(1-\cos\theta)&-\frac{1}{2}\sin\theta\\ \frac{1}{2}(1+\cos\theta)&-\frac{1}{2}\sin\theta&\frac{1}{2}\sin\theta&\frac{1 }{2}(1-\cos\theta)\\ \frac{1}{2}(1-\cos\theta)&\frac{1}{2}\sin\theta&-\frac{1}{2}\sin\theta&\frac{1 }{2}(1+\cos\theta)\\ -\frac{1}{2}\sin\theta&\frac{1}{2}(1-\cos\theta)&\frac{1}{2}(1+\cos\theta)& \frac{1}{2}\sin\theta\end{bmatrix}:\theta\in[-\pi,\pi]\right\} \tag{15}\] respectively. Next, in what follows, we characterize all rational OPMs. Treating \(x^{2}+z^{2}-z=0\) as a polynomial in the indeterminate \(z\), we obtain \[z=\frac{1\pm\sqrt{1-4x^{2}}}{2}.\] Then \(z\in\mathbb{Q}\) if and only if \(1-4x^{2}\) is zero or the square of a nonzero rational number \(p/q\) with \(p,q\in\mathbb{Z},q\neq 0.\) It is zero if \(x\in\{-\frac{1}{2},\frac{1}{2}\}.\) If \(1-4x^{2}=\frac{p^{2}}{q^{2}}\) for \(p/q\neq 0\) then after rewriting it takes the form \(X^{2}-4Y^{2}=1,\) where \(X=q/p\) and \(Y=xq/p.\) Now \(X+2Y\) and \(X-2Y\) are nonzero rationals for \(x\in\mathbb{Q}\), with \((X+2Y)(X-2Y)=1.\) Thus letting \(X+2Y=r\) and \(X-2Y=\frac{1}{r}\) for some nonzero \(r\in\mathbb{Q},\) we obtain \[X=\frac{1}{2}\left(r+\frac{1}{r}\right)\text{ and }Y=\frac{1}{4}\left(r- \frac{1}{r}\right),\] which ultimately gives the values of \(x\) and \(z\) in terms of the parameter \(r.\) A similar procedure can be followed for \(x^{2}+z^{2}+z=0.\) Thus we have the following corollary. **Corollary 2.7**.: _(Characterization of rational OPMs) Under the assumptions and notations of Corollary 2.6, a matrix \(A\in\mathcal{X}\cup\mathcal{Y}\cup\mathcal{Z}\) is a rational OPM if and only if \(x=\frac{r^{2}-1}{2(r^{2}+1)},z=\frac{1}{2}\pm\frac{r}{r^{2}+1}\) for \(s=+\) and \(x=\frac{r^{2}-1}{2(r^{2}+1)},z=-\frac{1}{2}\pm\frac{r}{r^{2}+1},\) for \(s=-\), where \(r\in\mathbb{Q}.\)_ Note that the degenerate case \(1-4x^{2}=0\), i.e. \(x=\pm\frac{1}{2}\) with \(z=\frac{1}{2}\) for \(s=+\) and \(z=-\frac{1}{2}\) for \(s=-\), also yields rational OPMs; it is not reached by the parametrization above. Finally, we mention that the chains of matrix groups described in Theorem 2.4 remain valid when the subgroups are restricted to the real or rational matrices as given in Corollary 2.6 and Corollary 2.7. 
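Both parametrizations can be tested directly. The sketch below is our own illustration (assuming NumPy): it samples the family \((\mathcal{X}_{1})_{\theta}\) of (13) and the rational points of Corollary 2.7 for \(s=+\), checking orthogonality in floating point and the defining curve in exact rational arithmetic.

```python
import numpy as np
from fractions import Fraction

def X1(x, z):
    """The pattern-X matrix M^+_{x,z}; an OPM iff x^2 + z^2 - z = 0."""
    return np.array([[x, -x, z, 1 - z],
                     [-x, x, 1 - z, z],
                     [z, 1 - z, -x, x],
                     [1 - z, z, x, -x]])

# trigonometric parametrization (13): x = sin(t)/2, z = (1 + cos(t))/2
for t in np.linspace(-np.pi, np.pi, 7):
    A = X1(np.sin(t) / 2, (1 + np.cos(t)) / 2)
    assert np.allclose(A.T @ A, np.eye(4))

# rational parametrization (Corollary 2.7, s = +), in exact arithmetic
for r in [Fraction(1, 2), Fraction(2, 3), Fraction(5)]:
    x = (r**2 - 1) / (2 * (r**2 + 1))
    z = Fraction(1, 2) + r / (r**2 + 1)
    assert x**2 + z**2 - z == 0          # the defining curve
    A = X1(float(x), float(z))
    assert np.allclose(A.T @ A, np.eye(4))
```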
## 3 Search for orthogonal matrices that are linear combinations of permutation matrices but not permutative

In [10], it is shown that an orthogonal matrix of order \(3\) is a linear combination of permutation matrices if and only if it is a permutative matrix. However, this is no longer true for matrices of order \(4\), as the following example shows. Consider the block diagonal matrix \[A=\left[\begin{array}{cc}1&\mathbf{0}^{T}\\ \mathbf{0}&\frac{2}{3}J_{3}-I_{3}\end{array}\right]=-\frac{1}{3}I+\frac{2 }{3}P_{(234)}+\frac{2}{3}P_{(243)},\] where \(J_{3}=\mathbf{1}_{3}\mathbf{1}_{3}^{T}.\) Then \(A\) is an orthogonal matrix which is a linear combination of permutation matrices but not a permutative matrix. Indeed, it may be noted that this matrix \(A\) is a direct sum of two permutative matrices. Then the following question arises: Does there exist an orthogonal matrix of order \(4\) which is a linear combination of permutation matrices but neither a permutative matrix nor a direct sum of permutative matrices, up to permutations of its rows and columns? In this section, we investigate this problem. First we derive certain sufficient conditions under which an orthogonal matrix that is a linear combination of permutation matrices is necessarily permutative, that is, can be written in the form given by equation (9). Also, recall that a necessary condition for a linear combination of permutation matrices to be real orthogonal is that the sum of the entries along each row and column is \(\pm 1\)[5]. We provide an alternative easy proof of this result for orthogonal matrices of order \(4\) in the following proposition. **Proposition 3.1**.: _A necessary condition for a linear combination of permutation matrices \(A\) of order \(4\) to be orthogonal is that the sum of the entries of \(A\) along each row and column is \(\pm 1.\)_ **Proof:** Suppose \(A=\sum_{\sigma\in S_{4}}x_{\sigma}P_{\sigma},\) where \(S_{4}\) denotes the symmetric group of order \(4.\) Then the \(i\)th row sum of \(A\) is \[A_{(i,:)}=\sum_{j=1}^{4}\sum_{\sigma}x_{\sigma}P_{\sigma}(i,j)=\sum_{\sigma}x _{\sigma}\sum_{j=1}^{4}P_{\sigma}(i,j)=\sum_{\sigma}x_{\sigma}.\] Similarly, the \(i\)th column sum \(A_{(:,i)}\) of \(A\) is \(\sum_{\sigma}x_{\sigma}.\) Then consider the Hadamard matrix of order \(4\) as follows: \[H=\frac{1}{2}\begin{bmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{bmatrix}. \tag{16}\] Setting \(B=HAH,\) we get \(Be_{1}=(HAH)e_{1}=(HA)(He_{1})=\frac{1}{2}H(A\mathbf{1})=\frac{1}{2}\sum_{ \sigma}x_{\sigma}(H\mathbf{1})=(\sum_{\sigma}x_{\sigma})e_{1},\) where \(e_{1}=\begin{bmatrix}1&0&0&0\end{bmatrix}^{T}\) and \(\mathbf{1}=\begin{bmatrix}1&1&1&1\end{bmatrix}^{T}.\) Similarly, \(B^{T}e_{1}=(\sum_{\sigma}x_{\sigma})e_{1}.\) Then \[B=HAH=\begin{bmatrix}\sum_{\sigma}x_{\sigma}&\mathbf{0}^{T}\\ \mathbf{0}&\bar{A}\end{bmatrix},\] where \(\mathbf{0}=\begin{bmatrix}0&0&0\end{bmatrix}^{T}\) and \(\bar{A}\) is a \(3\times 3\) orthogonal matrix. Consequently, \(\sum_{\sigma}x_{\sigma}\in\{\pm 1\}\) since \(B\) is orthogonal. This completes the proof. \(\square\)
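The block structure appearing in this proof is easy to observe numerically. In the sketch below (our own illustration, assuming NumPy), conjugating by \(H\) isolates the common row/column sum in the \((1,1)\) entry; we use the Grover matrix \(G\) of (10), whose row sums equal \(1\).

```python
import numpy as np

H = 0.5 * np.array([[1, 1, 1, 1],
                    [1, -1, 1, -1],
                    [1, 1, -1, -1],
                    [1, -1, -1, 1]])   # the Hadamard matrix (16); H is its own inverse

G = 0.5 * np.ones((4, 4)) - np.eye(4)  # Grover matrix; every row sums to 1
B = H @ G @ H
assert np.allclose(B[0, 1:], 0) and np.allclose(B[1:, 0], 0)
print(B[0, 0])                          # 1.0, the common row/column sum of G
# the complementary 3x3 block is again orthogonal
A_bar = B[1:, 1:]
assert np.allclose(A_bar.T @ A_bar, np.eye(3))
```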
We say that a set of real (nonzero) matrices of order \(k\), \(S=\{A_{1},A_{2},\ldots,A_{n}\}\), is _pairwise \(H\)-orthogonal_ if the Hadamard product of any pair of matrices \(A_{i},A_{j},i\neq j,\) denoted by \(A_{i}\circ A_{j},\) is the zero matrix, \(1\leq i,j\leq n.\) We write \(\langle S\rangle=\left\{\sum_{j=1}^{n}\alpha_{j}A_{j}:\alpha_{j}\in\mathbb{R}\right\}\) for the vector space generated by the elements of \(S.\) Observe that if \(S\) is a pairwise \(H\)-orthogonal set of permutation matrices then any \(A\in\langle S\rangle\) is permutative. Then we have the following proposition. **Proposition 3.2**.: _Any linear combination of permutation matrices of order \(4\) can be written as a sum of at most \(6\) permutative matrices, each of which is a linear combination of \(4\) permutation matrices that are pairwise \(H\)-orthogonal._ **Proof:** The proof follows from the partition of the symmetric group \(S_{4}=\cup_{k=1}^{6}\tilde{S}_{k}\) where \[\begin{split}\tilde{S}_{1}&=\{id,(12)(34),(13)(24),( 14)(23)\},\;\tilde{S}_{2}=\{(23),(124),(1342),(143)\},\\ \tilde{S}_{3}&=\{(24),(123),(134),(1432)\},\; \tilde{S}_{4}=\{(34),(12),(1324),(1423)\},\\ \tilde{S}_{5}&=\{(14),(1243),(132),(234)\},\; \tilde{S}_{6}=\{(13),(1234),(142),(243)\},\end{split}\] such that \(M_{\tilde{S}_{k}}=\{P_{\sigma}:\sigma\in\tilde{S}_{k}\}\) is a pairwise \(H\)-orthogonal set and any \(A\in\langle M_{\tilde{S}_{k}}\rangle\) is a permutative matrix. \(\square\) Then we have the following theorem. **Theorem 3.3**.: _Let \(A\in\langle S\rangle\) where \(S\) is a pairwise \(H\)-orthogonal set of permutation matrices of order \(4.\) Then any orthogonal matrix of the form \(A+cP\), with \(c\in\mathbb{R}\) and \(P\in\mathcal{P}_{4},\) is an OPM._ **Proof:** Write \(B=A+cP.\) Obviously \(B=A\in\langle S\rangle\) is permutative if \(c=0.\) Let \(c\neq 0,\) and let \(A\) be a symbolic permutative matrix with first row \(\mathbf{x}=(x,y,z,w).\) Consider the entries \(A_{ij}\) for which \(P_{ij}=1.\) Then \(B_{ij}=A_{ij}+c\) if and only if \(P_{ij}=1;\) and \(B_{ij}=A_{ij}\) otherwise. For any pair of indices \((i,j)\) and \((k,l)\) with \(P_{ij}=1=P_{kl},\) the unit norm condition of the rows of \(B\) implies \(A_{ij}=A_{kl}\) since \(c\neq 0.\) Thus the permutative structure of \(A\) implies that \(B\) is permutative. \(\square\) Below we show that no orthogonal matrix of order \(4\) can be a non-trivial linear combination of two distinct permutation matrices. By a non-trivial linear combination we mean one in which all coefficients are non-zero. **Theorem 3.4**.: _There is no orthogonal matrix which is a non-trivial linear combination of two distinct permutation matrices._ **Proof:** Let \(A=\alpha P+\beta Q\) be an orthogonal matrix and \(P\neq Q.\) If \(P\circ Q=0\) then \(A\) is an OPM. From the classification of all OPMs of order \(4\) described in Remark 2.3, any OPM is a linear combination of four \(H\)-orthogonal permutation matrices. Indeed, it follows from equations (3), (5), (7) that if one or more coefficients in the linear combination of \(H\)-orthogonal permutation matrices are zero, then the corresponding OPM becomes \(\pm 1\) times a permutation matrix. Hence the desired result follows. 
Next, assume that \(P\circ Q\neq 0\) and \(A=\alpha P+\beta Q\) is an orthogonal matrix, \(P,Q\in\mathcal{P}_{4}.\) This means there can exist at most two pairs of indices \((i,j)\) such that \(P_{ij}=Q_{ij}=1,\) since \(P\neq Q.\) Then \(A_{ij}=\alpha+\beta\) for those \((i,j),\) and two permutation matrices \(X,Y\) can be found for which \(XAY=(\alpha+\beta)I_{1}\oplus A_{1}\) or \((\alpha+\beta)I_{2}\oplus A_{2}\) where \(A_{1}\) and \(A_{2}\) are OPMs of orders \(3\) and \(2\) respectively. Indeed, each row of \(A_{1}\) is a permutation of \(\{0,\alpha,\beta\},\) whereas each row of \(A_{2}\) is a permutation of \(\{\alpha,\beta\}.\) Then from the classification of OPMs of order \(3\) (see Theorem 3.1, [10]) it can be seen that either \((\alpha,\beta)=(\pm 1,0)\) or \((\alpha,\beta)=(0,\pm 1).\) The same holds for \(A_{2},\) and hence the desired result follows. \(\square\) The following theorem provides a characterization of orthogonal matrices that are linear combinations of three permutation matrices. **Theorem 3.5**.: _If an orthogonal matrix \(A\) is a (real) linear combination of three distinct permutation matrices then either \(\pm A\) is a permutation matrix or \(XAY\) is a direct sum of OPMs of order \(3\) and \(1,\) for some permutation matrices \(X,Y.\)_ **Proof:** Let \(A=\alpha P+\beta Q+\gamma R\) be an orthogonal matrix, where \(P,Q,R\) are distinct permutation matrices of order \(4\). Then two cases arise: either the set \(S=\{P,Q,R\}\) is pairwise \(H\)-orthogonal, or there exists at least one pair of matrices in \(S\) whose Hadamard product is a non-zero matrix. Note that the sum of the entries of each row and column of \(A\) is \(\alpha+\beta+\gamma.\) First suppose that \(S\) is a pairwise \(H\)-orthogonal set. Then following a similar argument as in the proof of Theorem 3.4 it can be concluded that \(A=\pm M\) for some \(M\in\mathcal{P}_{4}.\) Next assume that \(S\) is not pairwise \(H\)-orthogonal. Suppose first that exactly one pair of elements of \(S\) fails to be \(H\)-orthogonal; without loss of generality, let \(P\circ Q\neq 0.\) Then \(A_{ij}=\alpha+\beta\) if and only if \(P_{ij}=Q_{ij}=1,\) for at most two indices \((i,j).\) Hence the unit norm condition of the rows and columns of \(A\) yields the polynomial system: \[\alpha^{2}+\beta^{2}+\gamma^{2}=1,\ \ (\alpha+\beta)^{2}+\gamma^{2}=1.\] A direct computation then shows that either \(\alpha=0\) or \(\beta=0.\) Then from Theorem 3.4 it follows that \(A=\pm M\) for some \(M\in\mathcal{P}_{4}.\) Now assume that there are two distinct pairs of matrices in \(S\), each of which is not \(H\)-orthogonal. 
Without loss of generality, let \(P\circ Q\neq 0,P\circ R\neq 0.\) Then \(A_{ij}=\alpha+\beta\) and \(A_{kl}=\alpha+\gamma\) if and only if \(P_{ij}=Q_{ij}=1,\)\(P_{kl}=R_{kl}=1\) for some indices \((i,j)\) and \((k,l).\) Thus \((\alpha,\beta,\gamma)\) satisfies the following polynomial system, due to the unit norm condition of the rows of \(A\): \[(\alpha+\beta)^{2}+\gamma^{2}=1,\ \ (\alpha+\gamma)^{2}+\beta^{2}=1.\] Solving these equations we have either \(\alpha=0\) or \(\beta=\gamma.\) If \(\alpha=0,\) then \(A\) is a linear combination of \(Q\) and \(R\) and hence the desired result follows from Theorem 3.4, while for \(\beta=\gamma\) the rows of \(A\) are permutations of \(\mathbf{x}_{1}=(\alpha+\beta,\beta,0,0)\) or \(\mathbf{x}_{2}=(\alpha,\beta,\beta,0).\) If the rows of \(A\) are permutations of \(\mathbf{x}_{1}\) only, then \(\beta(\alpha+\beta)=0.\) Otherwise, if permutations of both \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) occur as rows of \(A\), we obtain \(\alpha\beta=0.\) Thus the result follows from Theorem 3.4. Finally, suppose that no pair of matrices from \(S\) is \(H\)-orthogonal. If \(P\circ Q\circ R\neq 0\) then there is exactly one index \((i,j)\) such that \(A_{ij}=\alpha+\beta+\gamma\) and \(P_{ij}=Q_{ij}=R_{ij}=1,\) since \(P,Q,R\) are distinct, which further implies that \(\alpha+\beta+\gamma\in\{\pm 1\}.\) Obviously, two permutation matrices \(X,Y\) can be found such that \(XAY=(\alpha+\beta+\gamma)\oplus A_{1}\) where \(A_{1}\) is an orthogonal matrix of order \(3\) which is a linear combination of \(3\) permutation matrices. Then using the characterization of orthogonal matrices that are linear combinations of permutations (see Theorem 3.2, [10]) we conclude that \(A_{1}\) is an OPM and the desired result follows. Otherwise, if \(P\circ Q\neq 0,Q\circ R\neq 0,R\circ P\neq 0\) with \(P\circ Q\circ R=0,\) then each row of \(A\) is a permutation of one of \((\alpha+\beta,\gamma,0,0),\)\((\alpha+\gamma,\beta,0,0)\) and \((\alpha,\beta+\gamma,0,0).\) However, all of these row vectors cannot appear as rows of \(A\) simultaneously since the sum of each column of \(A\) is \(\alpha+\beta+\gamma.\) This completes the proof. \(\square\) Now we focus on finding orthogonal matrices that are real linear combinations of permutations but neither permutative nor a direct sum of permutative matrices, up to permutations of rows or columns. Thus we investigate the existence of such matrices belonging to subspaces, and direct sums of subspaces, of matrices of order \(4\) that are generated by sets of linearly independent permutation matrices. Recall that the real linear combinations of permutation matrices of order \(n\) form a vector space of dimension \((n-1)^{2}+1\)[2]. 
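This dimension count is easily confirmed by a rank computation: stacking the \(24\) permutation matrices of \(S_{4}\) as vectors in \(\mathbb{R}^{16}\), the span has dimension \((4-1)^{2}+1=10\). A minimal sketch (ours, assuming NumPy):

```python
import numpy as np
from itertools import permutations

vecs = []
for p in permutations(range(4)):
    m = np.zeros((4, 4))
    m[range(4), p] = 1          # permutation matrix for the one-line image p
    vecs.append(m.ravel())

print(np.linalg.matrix_rank(np.array(vecs)))   # 10 = (4 - 1)**2 + 1
```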
We denote this space by \(\mathsf{L}\) for \(n=4.\) We choose a basis \(\mathcal{B}\) of \(\mathsf{L}\) consisting of \(10\) permutation matrices given by \[\mathcal{B}=\{P_{(12)},P_{(23)},P_{(24)},P_{(34)},P_{(123)},P_{(124)},P_{(234) },P_{(12)(34)},P_{(13)(24)},P_{(14)(23)}\}.\] Let \(A\in\mathsf{L}.\) Then \(A\in\oplus_{k=1}^{5}\mathsf{L}_{k},\) where \(\mathsf{L}_{k}=\langle\mathcal{B}_{k}\rangle\) is the subspace generated by \(\mathcal{B}_{k},k=1,\ldots,5,\) as follows: \[\mathcal{B}_{1} =\{P_{(12)},P_{(34)},P_{(13)(24)},P_{(14)(23)}\},\mathcal{B}_{2} =\{P_{(24)},P_{(12)(34)}\},\] \[\mathcal{B}_{3} =\{P_{(124)},P_{(234)}\},\mathcal{B}_{4}=\{P_{(123)}\},\mathcal{ B}_{5}=\{P_{(23)}\}.\] Note that \(\mathcal{B}_{k}\) is a pairwise \(H\)-orthogonal set for each \(k.\) In particular, if \(A\in\mathsf{L}_{k},k=1,\ldots,5,\) is orthogonal then \(A\) can be characterized by Theorem 2.1 and hence \(A\in\mathcal{OP}_{4}.\) Now we briefly review the concept of _combinatorially orthogonal_ matrices introduced by Brualdi et al. [1], which will be used in the sequel. A matrix having entries from \(\{0,1\}\) is called a \((0,1)\)_matrix_. The nonzero _pattern_ of a matrix \(A\) is defined as the \((0,1)\) matrix \(M_{A}\) whose \(ij\)th entry is \(1\) if and only if \(a_{ij}\neq 0\). A nonzero _pattern_\(M\) is orthogonal if there exists a (real) orthogonal matrix with the same pattern. Let \(A\) be a \((0,1)\) matrix of order \(n.\) Then \(A\) is _combinatorially orthogonal_ or _quadrangular_ if the inner product of any two distinct rows, and of any two distinct columns, is never equal to \(1.\) Let \(S\) be a subset of the rows of \(A\) such that for each element of \(S\) there is another element of \(S\) with nonzero inner product. Then \(A\) is said to be row strongly quadrangular if, for every such \(S\), the matrix whose rows are the elements of \(S\) has at least \(|S|\) columns each containing at least two \(1\)s. Similarly, the matrix \(A\) is said to be column strongly quadrangular if the analogous condition holds for every such set \(S\) of columns of \(A\). If a \((0,1)\) matrix is both row and column strongly quadrangular then it is called _strongly quadrangular_. Note that if a \((0,1)\) matrix supports a unitary matrix then it is strongly quadrangular, but the converse need not be true. Now we recall the following proposition from [11]. **Proposition 3.6**.: _A \((0,1)\) matrix of order \(n\leq 4\) supports a unitary if and only if it is strongly quadrangular._ Then we have the following theorem. **Theorem 3.7**.: _Let \(A\in\mathsf{L}_{i}\oplus\mathsf{L}_{j},\)\(i,j\in\{1,\ldots,5\},\) be an orthogonal matrix. Then \(A\in\mathcal{OP}_{4}.\)_ **Proof:** It is clear from Theorem 3.3 and Theorem 2.1 that \(A\in\mathcal{OP}_{4}\) whenever \(A\in\mathsf{L}_{i}\oplus\mathsf{L}_{j}\) for \(i\neq j,i\in\{1,\ldots,5\},\)\(j\in\{4,5\}.\) First let \(A=a_{1}P_{(12)}+a_{2}P_{(34)}+a_{3}P_{(13)(24)}+a_{4}P_{(14)(23)}+b_{1}P_{(24)}+b _{2}P_{(12)(34)}\in\mathsf{L}_{1}\oplus\mathsf{L}_{2}.\) Then the unit norm conditions of the 2nd and 4th rows, 1st and 2nd rows, and 3rd and 4th rows of \(A\) yield \(b_{2}=0\) or \(a_{1}=a_{2},\)\(b_{1}=0\) or \(a_{2}=a_{3},\) and \(b_{1}=0\) or \(a_{1}=a_{3},\) respectively. 
If \(b_{1}=b_{2}=0,\) then \(A\in\mathsf{L}_{1}.\) If \(b_{1}\neq 0,b_{2}=0\) and \(a_{1}=a_{2}=a_{3}\) then \(A=A\left(\mathbf{x};P_{(1432)},P_{(13)(24)},P_{(1234)}\right),\) where \(\mathbf{x}=(a_{1}+b_{1},a_{1},a_{1},a_{4}).\) If \(b_{1}=0,b_{2}\neq 0,a_{2}=a_{1},\) then \(A=A\left(\mathbf{x};P_{(12)(34)},P_{(13)(24)},P_{(14)(23)}\right),\) where \(\mathbf{x}=(a_{1},a_{1}+b_{2},a_{3},a_{4}).\) At last, for \(b_{1},b_{2}\neq 0,a_{1}=a_{2}=a_{3},\)\(A=A\left(\mathbf{x};P_{(1432)},P_{(13)(24)},P_{(1234)}\right),\) where \(\mathbf{x}=(a_{1}+b_{1},a_{1}+b_{2},a_{1},a_{4}).\) Thus by Theorem 2.1, in all the above cases \(A\in\mathcal{OP}_{4}.\) If \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}\) then \(A\) is a linear combination of at most \(6\) permutations and, using similar arguments as above, it is easy to verify that \(A\in\mathcal{OP}_{4}.\) Next, suppose \(A=b_{1}P_{(24)}+b_{2}P_{(12)(34)}+c_{1}P_{(124)}+c_{2}P_{(234)}\in\mathsf{L}_{2 }\oplus\mathsf{L}_{3}.\) Then the \((0,1)\) pattern \(M_{A}\) of \(A\) is given by \[M_{A}=\begin{bmatrix}1&1&0&0\\ 1&0&1&1\\ 0&0&1&1\\ 1&1&1&0\end{bmatrix},\] which is not quadrangular. Hence \(A\) is not an orthogonal matrix with the \((0,1)\) pattern \(M_{A}.\) Let \(M_{i}\) denote the \(i\)th column of \(M_{A}.\) For \(A\) to be orthogonal, which requires the nonzero pattern of \(A\) to be quadrangular, the coefficients must satisfy the following conditions: \(b_{2}=0\) or \(b_{1}+c_{1}=0,\) and \(b_{2}=0\) or \(b_{1}+c_{2}=0,\) obtained by setting \(M_{1}^{T}M_{4}=0\) and \(M_{2}^{T}M_{3}=0\) respectively. If \(b_{2}=0,\) then by Proposition 3.6 the nonzero pattern of \(A\) can support an orthogonal matrix. Further, the unit norm condition of the rows of \(A\) implies either \(b_{1}=0\) or \(c_{1}=c_{2}.\) If \(b_{2}=0\) and \(b_{1}=0,\) then \(A\in\mathsf{L}_{3}.\) Otherwise, for \(c_{1}=c_{2}\) together with \(b_{2}=0,\)\(A\) is a permutative matrix with two nonzero entries in each row. Hence from Theorem 3.4, \(A=\pm R,R\in\mathcal{P}_{4}.\) If \(b_{2}\neq 0,\) i.e. \(b_{1}+c_{1}=0\) and \(b_{1}+c_{2}=0,\) a further analysis yields \(c_{1}=c_{2}=0.\) Hence \(A=b_{2}P_{(12)(34)},\) where \(b_{2}=\pm 1.\) This completes the proof. \(\square\) **Theorem 3.8**.: _Let \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{i}\oplus\mathsf{L}_{j}\) be orthogonal where \(i,j\in\{2,3,4,5\}\) and \((i,j)\notin\{(2,5),(3,4)\}.\) Then \(A\in\mathcal{OP}_{4}.\)_ **Proof:** Since \(\mathsf{L}_{1}\) is generated by four \(H\)-orthogonal permutation matrices and each of \(\mathsf{L}_{i}\) and \(\mathsf{L}_{j}\) is generated by one or two \(H\)-orthogonal permutation matrices, the nonzero pattern of \(A\) is the all-one matrix in general. Let \(A=a_{1}P_{(12)}+a_{2}P_{(34)}+a_{3}P_{(13)(24)}+a_{4}P_{(14)(23)}+b_{1}P_{(24) }+b_{2}P_{(12)(34)}+c_{1}P_{(124)}+c_{2}P_{(234)}\in\mathsf{L}_{1}\oplus\mathsf{ L}_{2}\oplus\mathsf{L}_{3}.\) Then the unit norm conditions of the 1st row and 1st column, 3rd row and 3rd column, 3rd row and 4th column, and 1st row and 2nd column of \(A\) yield \(c_{1}=0\) or \(a_{4}=a_{1}+b_{2},\)\(c_{2}=0\) or \(a_{4}=a_{2}+b_{2},\)\(b_{1}+c_{1}=0\) or \(a_{3}=a_{1},\) and \(b_{1}+c_{2}=0\) or \(a_{3}=a_{2},\) respectively. Then it can be verified that:

1. If \(c_{1}=c_{2}=0,\) then \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{2}.\)
2. Consider \(c_{1}=0,c_{2}\neq 0,a_{4}=a_{2}+b_{2},b_{1}+c_{1}=0,a_{3}=a_{2}.\) Further, \(b_{2}=0\) or \(a_{1}=a_{2},\) and \(c_{2}=0\) or \(a_{2}=a_{4};\) hence when \(b_{2}=0,\)\(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}.\) Otherwise we have \(a_{1}=a_{2}.\) Hence \(a_{1}=a_{2}=a_{4},\) which again implies \(b_{2}=0.\) Thus \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}.\)
3. If \(c_{1}=0,c_{2}\neq 0,a_{4}=a_{2}+b_{2},b_{1}+c_{2}=0,a_{3}=a_{1},\) then we obtain \(b_{2}=0\) or \(a_{1}=a_{2}.\) Further, it can be verified that in both cases \(A\) cannot be an orthogonal matrix under all the given conditions.
4. When \(c_{1}=0,c_{2}\neq 0,a_{4}=a_{2}+b_{2},a_{3}=a_{2}=a_{1},\) the orthonormality of \(A\) further implies \(a_{4}=a_{1}+b_{1}\) and hence \(b_{1}=b_{2}.\) So \(A\) becomes \(A\left(\mathbf{x};P_{(1324)},P_{(1423)},P_{(12)(34)}\right)\) with \(\mathbf{x}=(a_{4}+c_{2},a_{4},a_{1},a_{4}).\) A similar analysis can be done for all the cases where \(c_{2}=0\) and \(c_{1}\neq 0.\)
5. If \(c_{1}\neq 0,c_{2}\neq 0,a_{4}=a_{1}+b_{2},a_{1}=a_{2},b_{1}+c_{1}=0,b_{1}+c_{2}=0,\) then \(A\) becomes \(A\left(\mathbf{x};P_{(1234)},P_{(13)(24)},P_{(1423)}\right)\) with \(\mathbf{x}=(a_{1},a_{4}+c_{1},a_{3},a_{4}).\)
6. Consider \(c_{1}\neq 0,c_{2}\neq 0,a_{4}=a_{1}+b_{2},a_{1}=a_{2}=a_{3},b_{1}+c_{1}=0.\) Then the orthogonality condition of \(A\) yields \(a_{1}=0\) or \(2a_{4}+c_{2}=0.\) For \(a_{1}=0\) we obtain \(a_{4}=0,\) hence \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3},\) or \(b_{1}+c_{2}=0,\) which satisfies the previous case. Otherwise, for \(2a_{4}+c_{2}=0\) we obtain \(b_{1}=2a_{4},\) which is equivalent to saying \(b_{1}+c_{2}=0,\) or \(b_{1}=0;\) hence it follows from Proposition 3.1 that \(A=\pm(\frac{1}{2}J_{4}-P_{(234)}).\) The case \(c_{1}\neq 0,c_{2}\neq 0,a_{4}=a_{1}+b_{2},a_{1}=a_{2}=a_{3},b_{1}+c_{2}=0\) is treated similarly.
7. If \(c_{1}\neq 0,c_{2}\neq 0,a_{4}=a_{1}+b_{2},a_{1}=a_{2}=a_{3},\) then the orthonormality of \(A\) yields \(c_{1}=c_{2}\) or \(a_{1}+b_{1}=a_{4}.\) Now, \(c_{1}=c_{2}\) implies \(A=A\left(\mathbf{x};P_{(14)(23)},P_{(13)(24)},P_{(12)(34)}\right)\) with \(\mathbf{x}=(a_{1}+b_{1}+c_{1},a_{4}+c_{1},a_{1},a_{4}).\) In particular \(A=\pm(\frac{1}{2}J_{4}-P),P\in\mathcal{P}_{4}.\) On the other hand, for \(a_{1}+b_{1}=a_{4}\) we have \(b_{1}=b_{2}\) and \(A\) takes the form \(A\left(\mathbf{x};P_{(1324)},P_{(1423)},P_{(12)(34)}\right)\) with \(\mathbf{x}=(a_{4}+c_{2},a_{4}+c_{1},a_{1},a_{4}).\)

Using similar step-by-step elimination arguments on the polynomial systems defined by the orthonormality conditions of the rows and columns of the matrices concerned, it can be verified that any orthogonal matrix belonging to \(\mathsf{L}_{1}\oplus\mathsf{L}_{2}\oplus\mathsf{L}_{4},\mathsf{L}_{1}\oplus \mathsf{L}_{3}\oplus\mathsf{L}_{5},\) or \(\mathsf{L}_{1}\oplus\mathsf{L}_{4}\oplus\mathsf{L}_{5}\) is in \(\mathcal{OP}_{4}.\) \(\square\) **Theorem 3.9**.: _Let \(A\in\mathsf{L}_{i}\oplus\mathsf{L}_{j}\oplus\mathsf{L}_{k}\) be orthogonal where \(i,j,k\in\{2,3,4,5\};\) then \(A\in\mathcal{OP}_{4}.\)_ **Proof:** Clearly, for all the given choices of \(i,j,k,\)\(A\) can be a linear combination of at most 5 permutation matrices, and the nonzero pattern of \(A\) is never the all-one matrix. Suppose \(A=b_{1}P_{(24)}+b_{2}P_{(12)(34)}+c_{1}P_{(124)}+c_{2}P_{(234)}+eP_{(23)}\in \mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{5}.\) Clearly the \((0,1)\) pattern of \(A\) is not quadrangular in general, so either \(b_{2}=0\) or \(b_{1}+c_{2}+e=0,\) and either \(e=0\) or \(b_{2}+c_{1}=0.\) 
Hence if \(e=0\) then \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}.\) Otherwise, if \(e\neq 0\) and \(b_{2}=0\) then \(c_{1}=0,\) so that the nonzero pattern of \(A,\) \[M_{A}=\begin{bmatrix}1&0&0&0\\ 0&0&1&1\\ 0&1&1&1\\ 0&1&0&1\end{bmatrix},\] is non-quadrangular. Then, since the 3rd and 4th rows of \(A\) must be orthogonal to each other, it follows that \(b_{1}=0.\) Thus \(A\in\mathsf{L}_{3}\oplus\mathsf{L}_{5}.\) Finally, \(b_{1}+c_{2}+e=0\) and \(b_{2}+c_{1}=0\) cannot hold simultaneously, since \(b_{1}+b_{2}+c_{1}+c_{2}+e\in\{\pm 1\}\) is a necessary condition for \(A\) to be orthogonal. Hence the result follows from Theorem 3.7. Similarly, by looking into the nonzero patterns and eliminating some entries as required for the orthogonality of \(A,\) the desired result can be proved when \(A\) belongs to each of the spaces \(\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4},\mathsf{L}_{2}\oplus \mathsf{L}_{4}\oplus\mathsf{L}_{5},\mathsf{L}_{3}\oplus\mathsf{L}_{4}\oplus \mathsf{L}_{5}.\) \(\square\) **Theorem 3.10**.: _Let \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}\oplus\mathsf{L}_{5}\) be orthogonal. Then \(A\in\mathcal{OP}_{4}.\)_ **Proof:** Let \(A=b_{1}P_{(24)}+b_{2}P_{(12)(34)}+c_{1}P_{(124)}+c_{2}P_{(234)}+dP_{(123)}+eP_{(23)}\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}\oplus \mathsf{L}_{5}.\) Let \(M_{A}^{k}\) for \(k=1,\ldots,4\) denote the \((0,1)\) patterns of \(A\) arising at the different stages below. Now, \[M_{A}^{1}=\begin{bmatrix}1&1&0&0\\ 1&0&1&1\\ 1&1&1&1\\ 1&1&1&1\end{bmatrix},\] which is not quadrangular. Since \(R_{1}R_{2}^{T}=1,\) where \(R_{i}\) denotes the \(i\)th row of \(M_{A}^{1},\) either \(b_{2}=0\) or \(b_{1}+c_{2}+e=0\) holds. Hence if \(b_{2}=0,\) then \[M_{A}^{2}=\begin{bmatrix}1&1&0&0\\ 0&0&1&1\\ 1&1&1&1\\ 1&1&0&1\end{bmatrix},\] which is again not quadrangular since \(C_{1}^{T}C_{3}=1=C_{2}^{T}C_{3},\) where \(C_{i}\) denotes the \(i\)th column of \(M_{A}^{2}.\) Thus either \(d=0\) or \(b_{1}+c_{1}=0,\) and either \(e=0\) or \(b_{1}+c_{1}=0.\) If \(d=e=0,\) then \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}.\) If \(d=0,e\neq 0\) then \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{5}.\) Otherwise, if \(d\neq 0,e=0\) then \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}.\) Finally, if \(d\neq 0,e\neq 0,b_{1}+c_{1}=0,\) then \[M_{A}^{3}=\begin{bmatrix}1&1&0&0\\ 0&0&1&0\\ 1&1&0&1\\ 1&1&0&1\end{bmatrix}\] and the orthogonality condition of the corresponding \(A\) implies \(d=0\) or \(e=0,\) which is a contradiction. Hence at least one of \(d=0\) or \(e=0\) holds whenever \(b_{2}=0.\) If \(b_{2}\neq 0\) then \(b_{1}+c_{2}+e=0\) holds. Thus we obtain \[M_{A}^{4}=\begin{bmatrix}0&1&0&0\\ 1&0&1&1\\ 1&1&1&1\\ 1&1&1&1\end{bmatrix},\] which is not quadrangular. Hence either \(e=0\) or \(b_{2}+c_{1}+d=0,\) and either \(b_{1}+c_{2}=0\) or \(b_{2}+c_{1}+d=0.\) So \(A\in\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}\) if \(e=0.\) While for \(e\neq 0\) we get \(b_{2}+c_{1}+d=0\) and thus \(b_{1}+b_{2}+c_{1}+c_{2}+d+e=0,\) which leads to a contradiction by Proposition 3.1. Thus the proof follows from Theorem 3.7 and Theorem 3.9. \(\square\) In all the cases above we determined spaces generated by specific permutation matrices such that any orthogonal matrix belonging to these spaces is either permutative or becomes a direct sum of permutative matrices after pre- and post-multiplication by permutation matrices. 
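The quadrangularity tests used in the last two proofs are mechanical to automate. The following helper is our own illustration (assuming NumPy; it implements only the basic quadrangularity condition, not the strong version): it confirms, for instance, that the pattern \(M_{A}^{1}\) above is not quadrangular.

```python
import numpy as np

def is_quadrangular(M):
    """No two distinct rows/columns of the (0,1) matrix M may have inner product 1."""
    for X in (M, M.T):
        n = X.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if int(X[i] @ X[j]) == 1:
                    return False
    return True

M_A1 = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 1],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])
print(is_quadrangular(M_A1))             # False: rows 1 and 2 meet in exactly one place
print(is_quadrangular(np.ones((4, 4))))  # True
```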
In the following we determine classes of orthogonal matrices that are linear combinations of permutation matrices but are not permutative matrices. **Theorem 3.11**.: _Let \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}\) be orthogonal. Then either \(A\in\mathcal{OP}_{4}\) or there exist \(P,Q\in\mathcal{P}_{4}\) such that \(PAQ\) or \(H(PAQ)H\) is of the form \(\begin{bmatrix}\pm 1&\mathbf{0}^{T}\\ \mathbf{0}&X\end{bmatrix}\) for some OPM \(X\) of order \(3,\) where \(H\) is the Hadamard matrix of order \(4\) given in equation (16)._ **Proof:** Suppose \(A=a_{1}P_{(12)}+a_{2}P_{(34)}+a_{3}P_{(13)(24)}+a_{4}P_{(14)(23)}+c_{1}P_{(124) }+c_{2}P_{(234)}+dP_{(123)}\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}\oplus\mathsf{ L}_{4}.\) Then the unit norm conditions of the 1st row and 2nd column, 2nd row and 3rd column, 3rd row and 1st column of \(A\) yield \(c_{2}=0\) or \(a_{2}=a_{3},\)\(c_{1}=0\) or \(a_{1}=a_{3},\) and \(c_{1}=0\) or \(a_{1}=a_{4},\) respectively. Thus if \(c_{1}\neq 0\) and \(c_{2}\neq 0\) then we get \(a_{1}=a_{2}=a_{3}=a_{4},\) and a further computation for the 2nd and 4th rows gives \(d=0,\) which implies \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{3}.\) If \(c_{1}=c_{2}=0,\) then \(A\in\mathsf{L}_{1}\oplus\mathsf{L}_{4}.\) If \(c_{1}=0,a_{2}=a_{3}\) while \(c_{2}\neq 0,\) then a further unit norm condition of the 1st and 2nd columns leads to either \(a_{1}=a_{2}\) or \(d=0.\) If \(d=0\) then we are done. Suppose \(a_{1}=a_{2},\) with \(c_{1}=0,a_{2}=a_{3},\) so that \(a_{1}=a_{2}=a_{3}.\) Then orthogonality of the 1st and 2nd rows of \(A\) implies \(a_{1}=0\) or \(a_{1}+a_{4}+c_{2}+d=0.\)

1. While \(a_{1}=0,\) we obtain \[P_{(12)}AP_{(13)}=\begin{bmatrix}a_{4}+c_{2}+d&0&0&0\\ 0&d&c_{2}&a_{4}\\ 0&a_{4}&d&c_{2}\\ 0&c_{2}&a_{4}&d\end{bmatrix},\] and hence \(a_{4}+c_{2}+d=\pm 1.\)
2. For \(a_{1}+a_{4}+c_{2}+d=0,\) using Proposition 3.1 we obtain \(3a_{1}+a_{4}+c_{2}+d=\pm 1,\) so that \(2a_{1}=\pm 1\) and \(a_{4}+c_{2}+d=-a_{1}.\) Hence when the row sum of \(A\) is \(1\) and \(-1,\) we obtain \(a_{1}=\frac{1}{2}\) and \(a_{1}=-\frac{1}{2},\) respectively. Thus \(P_{(12)}AP_{(13)}\) belongs to one of the following sets: \[\mathcal{C}_{1} = \left\{\begin{bmatrix}-\frac{1}{2}&\frac{1}{2}&\frac{1}{2}& \frac{1}{2}\\ \frac{1}{2}&-a_{4}-c_{2}&\frac{1}{2}+c_{2}&a_{4}\\ \frac{1}{2}&a_{4}&-a_{4}-c_{2}&\frac{1}{2}+c_{2}\\ \frac{1}{2}&\frac{1}{2}+c_{2}&a_{4}&-a_{4}-c_{2}\end{bmatrix}:a_{4}=-\frac{1} {2}c_{2}\pm\frac{1}{2}\sqrt{(1-3c_{2})(1+c_{2})},\right.\] \[\left.-1\leq c_{2}\leq\frac{1}{3}\right\},\] \[\mathcal{C}_{2} = \left\{\begin{bmatrix}\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}& -\frac{1}{2}\\ -\frac{1}{2}&-a_{4}-c_{2}&-\frac{1}{2}+c_{2}&a_{4}\\ -\frac{1}{2}&a_{4}&-a_{4}-c_{2}&-\frac{1}{2}+c_{2}\\ -\frac{1}{2}&-\frac{1}{2}+c_{2}&a_{4}&-a_{4}-c_{2}\end{bmatrix}:a_{4}=-\frac{1 }{2}c_{2}\pm\frac{1}{2}\sqrt{(1+3c_{2})(1-c_{2})},\right.\] \[\left.-\frac{1}{3}\leq c_{2}\leq 1\right\},\] where the relations between \(a_{4}\) and \(c_{2}\) are obtained by considering the unit norm condition of the rows and columns. 
Then observe that \(HMH=\begin{bmatrix}1&0\\ 0&M_{1}\end{bmatrix}\) if \(M\in\mathcal{C}_{1}\) for some matrix \(M_{1}\in\overline{\mathcal{C}}_{1}\), and \(HNH=\begin{bmatrix}-1&0\\ 0&N_{2}\end{bmatrix}\) if \(N\in\mathcal{C}_{2}\) for some matrix \(N_{2}\in\overline{\mathcal{C}}_{2}\), where \[\overline{\mathcal{C}}_{1} = \left\{\begin{bmatrix}-\frac{1}{2}-a_{4}-c_{2}&-\frac{1}{2}+a_{4}&c _{2}\\ c_{2}&-\frac{1}{2}-a_{4}-c_{2}&-\frac{1}{2}+a_{4}\\ -\frac{1}{2}+a_{4}&c_{2}&-\frac{1}{2}-a_{4}-c_{2}\end{bmatrix}:\right.\] \[\left.a_{4}=-\frac{1}{2}c_{2}\pm\frac{1}{2}\sqrt{(1-3c_{2})(1+c_{ 2})},-1\leq c_{2}\leq\frac{1}{3}\right\},\] \[\overline{\mathcal{C}}_{2} = \left\{\begin{bmatrix}\frac{1}{2}-a_{4}-c_{2}&\frac{1}{2}+a_{4}&c _{2}\\ c_{2}&\frac{1}{2}-a_{4}-c_{2}&\frac{1}{2}+a_{4}\\ \frac{1}{2}+a_{4}&c_{2}&\frac{1}{2}-a_{4}-c_{2}\end{bmatrix}:\right.\] \[\left.a_{4}=-\frac{1}{2}c_{2}\pm\frac{1}{2}\sqrt{(1+3c_{2})(1-c_{ 2})},-\frac{1}{3}\leq c_{2}\leq 1\right\}.\] It is to be noted that any matrix in \(\overline{\mathcal{C}}_{1}\) has row and column sums \(-1,\) while for \(\overline{\mathcal{C}}_{2}\) they are \(1\). Also, matrices in \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are linear combinations of at most \(6\) permutation matrices. Hence the proof follows. \(\square\) Thus from the above theorems we obtain the following: any orthogonal matrix \(A\) belonging to the spaces \(\mathsf{L}_{i},i\in\{1,\ldots,5\};\mathsf{L}_{i}\oplus\mathsf{L}_{j},i,j\in\{1,\ldots,5\};\mathsf{L}_{i}\oplus\mathsf{L}_{j}\oplus\mathsf{L}_{k},\,i,j,k\in \{1,\ldots,5\},\,(i,j,k)\neq(1,2,5);\) or \(\mathsf{L}_{2}\oplus\mathsf{L}_{3}\oplus\mathsf{L}_{4}\oplus\mathsf{L}_{5}\) is either permutative or there exist \(P,Q\in\mathcal{P}_{4}\) such that \(PAQ\) or \(H(PAQ)H\) is a direct sum of OPMs. Finally, we conclude this section with the following important remark about orthogonal matrices of order \(4\) that are direct sums of OPMs. **Remark 3.12**.: _(Orthogonal matrices in \(\mathsf{L}\) that are direct sum of OPMs) Let \(A\in\mathsf{L}\) be a \(4\times 4\) orthogonal matrix such that \(PAQ\) is the direct sum of OPMs for some \(P,Q\in\mathcal{P}_{4}\). Then \(PAQ=\begin{bmatrix}1&0\\ 0&B\end{bmatrix}\) for \(B\in\overline{\mathcal{X}}_{1}\cup\overline{\mathcal{Z}}_{1},\) or \(PAQ=\begin{bmatrix}-1&0\\ 0&C\end{bmatrix}\) for \(C\in\overline{\mathcal{Y}}_{-1}\cup\overline{\mathcal{W}}_{-1},\) where_ \[\overline{\mathcal{X}}_{1} = \left\{\begin{bmatrix}x&y&1-x-y\\ 1-x-y&x&y\\ y&1-x-y&x\end{bmatrix}:x^{2}+y^{2}-x-y+xy=0\right\},\] \[\overline{\mathcal{Y}}_{-1} = \left\{\begin{bmatrix}x&y&-1-x-y\\ -1-x-y&x&y\\ y&-1-x-y&x\end{bmatrix}:x^{2}+y^{2}+x+y+xy=0\right\},\] \(\overline{\mathcal{Z}}_{1}=\{\overline{P}A:A\in\overline{\mathcal{X}}_{1}\}, \overline{\mathcal{W}}_{-1}=\{\overline{P}B:B\in\overline{\mathcal{Y}}_{-1}\}\) and \(\overline{P}\) is a suitable \(3\times 3\) permutation matrix (see [10]). Note that the union of \(\overline{\mathcal{X}}_{1},\overline{\mathcal{Y}}_{-1},\overline{\mathcal{Z}}_ {1}\) and \(\overline{\mathcal{W}}_{-1}\) provides the set of all OPMs of order \(3\)[10]. Obviously the above matrices \(PAQ\) are not permutative. 
Clearly, if \(PAQ\) is the direct sum of two \(2\times 2\) permutative matrices, then \(\pm PAQ\in\mathcal{P}_{4},\) or equivalently \(\pm A\in\mathcal{P}_{4}.\)_ **Conclusion.** In this paper, we have provided a parametric representation of all orthogonal permutative matrices (OPMs) of order \(4\) over the fields of complex, real and rational numbers. Consequently, we have shown that OPMs can be written as linear combinations of permutation matrices. We have determined several matrix spaces such that any orthogonal matrix \(A\) in these spaces is either permutative, or \(PAQ\) or \(H(PAQ)H\) is a direct sum of OPMs for some permutation matrices \(P,Q\) and the Hadamard matrix \(H\) given in equation (16). These matrix spaces are defined by direct sums of the linear spaces \(\mathsf{L}_{i},i=1,\ldots,5,\) which are generated by linearly independent, pairwise \(H\)-orthogonal permutation matrices. However, exploring all such matrix spaces and characterizing the combinatorial structure of all orthogonal matrices of order 4 is beyond the scope of this paper. For example, consider the orthogonal matrix \[M=\frac{1}{11}\begin{bmatrix}10&-2&-1&4\\ -2&7&-2&8\\ -1&-2&10&4\\ 4&8&4&-5\end{bmatrix} = \frac{1}{11}P_{(12)}+\frac{7}{11}P_{(34)}-\frac{1}{11}P_{(13)(24)}+ \frac{4}{11}P_{(14)(23)}\] \[+\frac{9}{11}P_{(24)}-\frac{3}{11}P_{(12)(34)}-\frac{6}{11}P_{(23 )}\in\mathsf{L}_{1}\oplus\mathsf{L}_{2}\oplus\mathsf{L}_{5}.\] It can be verified that \(M\) does not have any of the combinatorial structures mentioned above. We plan to explore this problem in the future. **Acknowledgement.** Amrita Mandal thanks the Council of Scientific and Industrial Research (CSIR), India, for financial support in the form of a junior/senior research fellowship.
2301.08233
Square compactness and Lindelöf trees
We prove that every weakly square compact cardinal is a strong limit cardinal. We also study Aronszajn trees with no uncountable finitely branching subtrees, characterizing them in terms of being Lindel\"of with respect to a particular topology. We prove that the class of such trees lies between the classes of Suslin and Aronszajn trees, and that the inclusions can consistently be strict.
Pedro E. Marun
2023-01-19T18:41:47Z
http://arxiv.org/abs/2301.08233v3
# Square compactness and Lindelof trees

###### Abstract.

We prove that every weakly square compact cardinal is a strong limit cardinal. We also study Aronszajn trees with no uncountable finitely branching subtrees, characterizing them in terms of being Lindelof with respect to a particular topology. We prove that the class of such trees is non-empty and lies strictly between the classes of Suslin and Aronszajn trees. 2020 Mathematics Subject Classification: Primary 03E05; Secondary 54B10, 03E04 The results of this paper will form a part of the author's PhD thesis written under the supervision of James Cummings, to whom the author would like to express his gratitude. This shows that the existence of \(\kappa\) which is \(2^{\kappa}\)-square compact is already a large cardinal notion. In [2], Buhagiar and Dzamonja undertake a closer study of weak square compactness, and give a variety of equivalent formulations. In particular, they proved the following: **Theorem** (Buhagiar-Dzamonja, [2, Theorem 5.1]).: _Let \(\kappa\) be an uncountable cardinal. Suppose that \(\kappa^{<\kappa}=\kappa\). Then \(\kappa\) is weakly compact if and only if it is weakly square compact._ The first result we establish in this paper is that the assumption \(\kappa^{<\kappa}=\kappa\) in the previous theorem can be removed. This is done by generalizing the Sorgenfrey line construction. As far as we know, strong compactness continues to be the best upper bound for the consistency strength of square compactness. This is part of the folklore of the subject and attributed to Hajnal and Juhasz in [2]. However, finding a proof in the published literature is non-trivial, so we provide one (see Theorem 2.13). Having studied topologies on linear orders, we turn to looking at topologies on trees, with a view towards introducing new examples of Lindelof spaces. In the survey [9], Nyikos considers a total of ten different topologies on trees. Of these, only two are always Hausdorff, and we adhere to the doctrine of only considering Hausdorff spaces. By [9, Theorem 3.6], the _coarse wedge topology_ appears uninteresting for our purposes, since it is \(\omega_{1}\)-compact\({}^{1}\) if and only if the underlying tree has countably many minimal elements. This leaves us with only the _fine wedge topology_ to focus on. We give a tree-theoretic characterization of being Lindelof with respect to this topology. First, some terminology: we say that a tree is _finitely branching_ if and only if every point in the tree has finitely many immediate successors. A _subtree_ of a tree \(T\) is a set \(S\subseteq T\) such that for all \(x\in S\) and \(y\in T\), if \(y<x\), then \(y\in S\). Footnote 1: We caution the reader that, in Nyikos’ survey, \(X\) being \(\omega_{1}\)-compact means that every closed and discrete subset of \(X\) is countable. **Theorem**.: _Let \(T\) be an \(\aleph_{1}\)-tree. Then \(T\) is Lindelof with respect to the fine-wedge topology if and only if every finitely branching subtree of \(T\) is countable._ We shall show that \[\{\mathrm{Suslin}\}\subseteq\{\mathrm{Lindelof}\}\subsetneq\{\mathrm{Aronszajn}\}.\] Here, by Lindelof we mean Lindelof with respect to the fine-wedge topology. Given a partially ordered set \(X\), we let \(X^{*}\) denote the _dual order_ on \(X\), that is \(x<^{*}y\) if and only if \(y<x\). A _tree_ is a pair \((T,<_{T})\) such that \(<_{T}\) is a strict partial order on \(T\) and \(\{y\in T:y<_{T}x\}\) is well-ordered for every \(x\in T\). 
We will usually suppress the subscript in \(<_{T}\) and identify the tree with its underlying set when there is no danger of confusion. Elements of a tree are referred to as _nodes_ or _points_. We say \(x,y\in T\) are _comparable_, denoted \(x\parallel y\), if and only if \(x\leq_{T}y\) or \(y\leq_{T}x\). Otherwise, we say that \(x\) and \(y\) are _incomparable_, denoted \(x\perp y\). The _height_ of a node \(x\in T\) is the order-type of the set \(\{y\in T:y<_{T}x\}\), denoted \(\mathrm{ht}_{T}(x)\) (or simply \(\mathrm{ht}(x)\)). Given an ordinal \(\alpha\), _level_\(\alpha\) of the tree is the set \(T_{\alpha}=\{x\in T:\mathrm{ht}_{T}(x)=\alpha\}\). The _height_ \(\mathrm{ht}(T)\) of \(T\) is defined by \(\mathrm{ht}(T)=\min\{\alpha:T_{\alpha}=\emptyset\}\). Given an ordinal \(\alpha<\mathrm{ht}(T)\), we let \(T\!\upharpoonright\!\alpha=\{x\in T:\mathrm{ht}(x)<\alpha\}\), which is of course a subtree of \(T\) of height \(\alpha\). Given \(x\in T\), we let \(I_{T}(x)\) denote the set of _immediate successors_ of \(x\), and write \(I(x)\) when there is no possibility of confusion. ## 2. Square compactness We shall say a space \(X\) is _hereditarily \(\kappa\)-compact_ if and only if every subspace of \(X\) is \(\kappa\)-compact. For example, any space of weight less than \(\kappa\) is hereditarily \(\kappa\)-compact. A useful criterion for hereditary \(\kappa\)-compactness is **Lemma 2.1**.: _Let \((X,\tau)\) be a topological space. Then \(X\) is hereditarily \(\kappa\)-compact if and only if for every \(\mathcal{U}\subseteq\tau\) there is some \(\mathcal{U}_{0}\in[\mathcal{U}]^{<\kappa}\) with \(\bigcup\mathcal{U}=\bigcup\mathcal{U}_{0}\)._ Proof.: \(\Rightarrow\)) Given \(\mathcal{U}\), consider the subspace \(\bigcup\mathcal{U}\). \(\Leftarrow\)) Suppose \(Y\subseteq X\) is not \(\kappa\)-compact. Fix some \(\mathcal{U}\subseteq\tau\) such that \(\bigcup\mathcal{U}\supseteq Y\) but there is no \(\mathcal{U}_{0}\in[\mathcal{U}]^{<\kappa}\) with \(\bigcup\mathcal{U}_{0}\supseteq Y\). Then \(\bigcup\mathcal{U}_{0}\neq\bigcup\mathcal{U}\) for every \(\mathcal{U}_{0}\in[\mathcal{U}]^{<\kappa}\). The following is obvious: **Lemma 2.2**.: _Suppose that \((X,\tau)\) is \(\kappa\)-compact and \(Y\subseteq X\) is closed. Then \(Y\) is \(\kappa\)-compact with the subspace topology._ As mentioned in the introduction, Hajnal and Juhasz already proved that weak square compactness entails regularity. To deal with the (strong) inaccessibility of \(\kappa\), we will generalize the classical construction of the Sorgenfrey line to larger linear orders. **Definition 2.3**.: Let \((X,<)\) be a _dlo_ (dense linear order without end-points). The _density_ of \(X\), denoted \(d(X)\), is the cardinal \[d(X)=\min\{|D|:D\text{ is dense in }X\}\] This of course coincides with the density of \(X\) as a topological space under the order topology. For example, \(d(\mathbb{R})=\aleph_{0}\). It is straightforward to show that \(w(X)\), the weight of \(X\) with respect to the order topology, is exactly \(d(X)\). **Definition 2.4**.: Given a dlo \((X,<)\), the family \(\{[x,y):x,y\in X\wedge x<y\}\) forms a basis for a topology on \(X\), which we shall call the _Sorgenfrey_ topology. **Lemma 2.5**.: _Let \((X,<)\) be a dlo with \(d(X)<\kappa\). Then the Sorgenfrey topology on \(X\) is hereditarily \(\kappa\)-compact._ Proof.: Let \(\mathcal{U}\subseteq\{[x,y):x,y\in X\}\) and let \(W=\bigcup\{(x,y):[x,y)\in\mathcal{U}\}\).
Obviously, \(W\) is open with respect to the order topology on \(X\), which has weight less than \(\kappa\). By Lemma 2.1, \(W=\bigcup\{(x,y):[x,y)\in\mathcal{U}_{0}\}\) for some \(\mathcal{U}_{0}\in[\mathcal{U}]^{<\kappa}\). Let \(A:=(\bigcup\mathcal{U})\setminus W\). **Claim**.: \(|A|<\kappa\). Proof of claim.: Fix \(D\in[X]^{<\kappa}\) dense in the order topology. For each \(x\in A\), find \([a_{x},b_{x})\in\mathcal{U}\) such that \(x\in[a_{x},b_{x})\). Since \(x\not\in W\), we infer that \(x=a_{x}<b_{x}\), so we can pick some \(d_{x}\in D\) with \(x<d_{x}<b_{x}\). Now suppose \(x,y\in A\) with \(x<y\). Since \(y\not\in W\), \(b_{x}\leq y\). Then \(d_{x}<b_{x}\leq y<d_{y}\), so \(d_{x}<d_{y}\). Therefore, \(x\mapsto d_{x}\) is an injective map from \(A\) into \(D\). For each \(x\in A\), pick \(U_{x}\in\mathcal{U}\) with \(x\in U_{x}\). Let \(\mathcal{U}_{1}=\{U_{x}:x\in A\}\). Clearly, \(|\mathcal{U}_{1}|<\kappa\). We now have that \(\mathcal{U}_{2}=\mathcal{U}_{0}\cup\mathcal{U}_{1}\in[\mathcal{U}]^{<\kappa}\) and \(\bigcup\mathcal{U}_{2}=\bigcup\mathcal{U}\). **Lemma 2.6**.: _Let \(\kappa>\omega\) be a cardinal. Suppose that there is a dlo \((X,<)\) with \(d(X)<\kappa=|X|\). Then \(\kappa\) is not \(\kappa\)-square compact._ Proof.: Replacing \(X\) by \(X\oplus X^{*}\) if necessary, we may assume that \((X,<)\) admits an order reversing involution, which we shall suggestively denote by \(x\mapsto-x\). Let \(\tau\) be the Sorgenfrey topology on \(X\). Note that \(w(X,\tau)\leq\kappa\), because \(|X|\leq\kappa\), and that \((X,\tau)\) is \(\kappa\)-compact by Lemma 2.5. It therefore suffices to show that \(X^{2}\) is not \(\kappa\)-compact with respect to the product topology. Let \[Y=\{(x,-x):x\in X\}.\] Since \(x\mapsto-x\) is order-reversing, it is continuous with respect to the order topology \(\tau_{<}\), hence \(Y\) is closed in \((X^{2},\tau_{<}\otimes\tau_{<})\). But \(\tau_{<}\subseteq\tau\), so \(Y\) is closed in \((X^{2},\tau\otimes\tau)\). For each \(x\in X\), pick \(u_{x},v_{x}\in X\) with \(u_{x}<x<v_{x}\). Now observe that \[([x,v_{x})\times[-x,-u_{x}))\cap Y=\{(x,-x)\}.\] We have shown that \(Y\) is discrete in \((X^{2},\tau\otimes\tau)\). Since \(|Y|=\kappa\), \(Y\) is not \(\kappa\)-compact, and so neither is \((X^{2},\tau\otimes\tau)\) because \(Y\) is closed. The goal now is to build large dlo's with small density. This will be possible under certain cardinal arithmetic constraints. Our original construction was rather convoluted, and we thank Will Brian for suggesting the following simpler approach. **Lemma 2.7**.: _Let \(\kappa\geq\omega_{1}\). Suppose there exist infinite cardinals \(\mu\) and \(\theta\) such that \(\mu^{<\theta}=\mu<\kappa\leq\mu^{\theta}\). Then there is a dlo \(X\) with \(d(X)<\kappa=|X|\)._ Proof.: Let \(Y:={}^{\theta}\mu\), ordered lexicographically. Note that \(|Y|=\mu^{\theta}\geq\kappa\). Let \(D\) be the set of sequences in \(Y\) which are eventually \(0\). Then \(D\) is dense in \(Y\) and \(|D|=\mu^{<\theta}<\kappa\). By the Downward Lowenheim-Skolem theorem, find \(X\prec Y\) with \(D\subseteq X\) and \(|X|=\kappa\). Since \(D\) is dense in \(X\), \(d(X)<\kappa\). **Theorem 2.8**.: _Let \(\kappa\geq\omega_{1}\). If there are cardinals \(\mu\) and \(\theta\) such that \(\mu^{<\theta}=\mu<\kappa\leq\mu^{\theta}\), then \(\kappa\) is not \(\kappa\)-square compact._ Proof.: Immediate from Lemmas 2.6 and 2.7. **Corollary 2.9**.: _Suppose \(\lambda\geq\omega\).
Then \(\lambda^{+}\) is not \(\lambda^{+}\)-square compact._ Proof.: Let \(\theta:=\min\{\nu:\lambda^{\nu}>\lambda\}\). By Konig's lemma, \(\theta\leq\operatorname{cf}(\lambda)\), so \(\lambda^{<\theta}=\lambda\) by the minimality of \(\theta\). Now apply Theorem 2.8 with \(\kappa=\lambda^{+}\) and \(\mu=\lambda\). In particular, if \(\kappa\) is \(\kappa\)-square compact, then \(\kappa\) is a limit cardinal, hence weakly inaccessible. In fact, this can be improved: **Corollary 2.10**.: _Suppose \(\kappa\) is \(\kappa\)-square compact. Then \(\kappa\) is strongly inaccessible._ Proof.: Suppose \(\kappa\) is not strong limit. Let \[\theta=\min\{\nu:\exists\lambda\,(\nu\leq\lambda<\kappa\leq\lambda^{\nu})\}.\] To see that this is well defined, fix \(\delta<\kappa\) so that \(2^{\delta}\geq\kappa\), and take \(\lambda=\nu=\delta\). Having fixed \(\theta\), let \(\lambda<\kappa\) be the least witness to the definition of \(\theta\), that is \(\theta\leq\lambda<\kappa\leq\lambda^{\theta}\) and \(\lambda\) is least with these properties. Note that, if \(\alpha<\theta\), then \(\lambda^{\alpha}<\kappa\), since otherwise \(\alpha\) contradicts the minimal choice of \(\theta\). **Claim 1**.: \(\theta\) _is regular._ Proof of claim 1.: Suppose not, say \(\theta^{*}=\operatorname{cf}(\theta)<\theta\). Fix \(\langle\theta_{\xi}:\xi<\theta^{*}\rangle\) cofinal in \(\theta\). By the minimality of \(\theta\), \(\lambda^{\theta_{\xi}}<\kappa\) for every \(\xi<\theta^{*}\). Let \(\lambda^{*}:=\sup\{\lambda^{\theta_{\xi}}:\xi<\theta^{*}\}\). Since \(\kappa\) is regular and \(\theta^{*}<\theta<\kappa\), it follows that \(\lambda^{*}<\kappa\). We therefore have \[\kappa\leq\lambda^{\theta}=\prod_{\xi<\theta^{*}}\lambda^{\theta_{\xi}}\leq(\lambda^{*})^{\theta^{*}}\] This contradicts the minimality of \(\theta\). Put \(\mu=\lambda^{<\theta}\). Again, \(\mu<\kappa\), because \(\lambda^{\alpha}<\kappa\) for \(\alpha<\theta\) and \(\theta<\kappa=\operatorname{cf}(\kappa)\). Also, \(\mu^{\theta}\geq\lambda^{\theta}\geq\kappa\). **Claim 2**.: \(\mu^{<\theta}=\mu\)_._ Proof of claim 2.: We consider two separate cases. Case 1: \(\alpha\mapsto\lambda^{\alpha}\) is eventually constant for \(\alpha<\theta\). Note that this includes the case when \(\theta\) is a successor cardinal. By definition of \(\mu\), the eventual constant value must be \(\mu\), so \(\lambda^{\alpha}=\mu\) for all large enough \(\alpha<\theta\). But then \(\mu^{\alpha}=\mu\) whenever \(\alpha<\theta\) is sufficiently large, hence \(\mu^{<\theta}=\mu\). Case 2: \(\lambda^{\alpha}\) is not eventually constant for \(\alpha<\theta\). As \(\theta\) is regular, \(\operatorname{cf}(\mu)=\theta\). So, if \(\alpha<\theta\), we have \[\mu^{\alpha}=\sum_{\beta<\theta}(\lambda^{\beta})^{\alpha}=\mu.\] Therefore, \(\mu^{<\theta}=\mu\). Therefore, \(\mu\), \(\theta\) and \(\kappa\) satisfy the conditions of Theorem 2.8, so \(\kappa\) is not \(\kappa\)-square compact. **Theorem 2.11**.: _Let \(\kappa\) be an uncountable cardinal. Then \(\kappa\) is weakly compact if and only if it is \(\kappa\)-square compact._ Proof.: The forwards direction can be found in [4], Theorem 2. The backwards direction is in [2], Theorem 5.1, under the additional hypothesis that \(\kappa^{<\kappa}=\kappa\). But this is redundant when \(\kappa\) is \(\kappa\)-square compact, because \(\kappa\) is strongly inaccessible by Corollary 2.10. To make the paper self-contained, we include a proof that strong compactness implies square compactness.
Recall that a _subbasis_ for a topology \(\tau\) (on a set \(X\)) is a family \(\mathcal{S}\) such that \(\tau\) is the smallest topology on \(X\) including \(\mathcal{S}\). Equivalently, the set of finite intersections of members of \(\mathcal{S}\) is a basis for \(\tau\). **Lemma 2.12**.: _Let \(\kappa\) be a strongly compact cardinal and \(X\) a topological space. Suppose that there exists a subbasis \(\mathcal{S}\) such that for every cover of \(X\) using members of \(\mathcal{S}\) there exists a subcover of size \(<\kappa\). Then \(X\) is \(\kappa\)-compact._ Proof.: Let \(\mathcal{B}\) be the collection of finite intersections of sets in \(\mathcal{S}\), so \(\mathcal{B}\) is a basis for \(X\). It suffices to argue that every open cover of \(X\) consisting of members of \(\mathcal{B}\) has a subcover of size \(<\kappa\). Suppose towards a contradiction that this is not the case. Let \(\mathcal{U}\subseteq\mathcal{B}\) be a cover of \(X\) such that no subset of \(\mathcal{U}\) of size \(<\kappa\) covers \(X\). Let \(\mathcal{I}\) be the \(\kappa\)-complete ideal generated by \(\mathcal{U}\): \[\mathcal{I}=\left\{A\subseteq X:\exists\,\mathcal{U}_{0}\in\left[\mathcal{U}\right]^{<\kappa}(A\subseteq\bigcup\mathcal{U}_{0})\right\}\] By our assumption on \(\mathcal{U}\), \(X\not\in\mathcal{I}\), so \(\mathcal{I}\) is proper. Since \(\kappa\) is strongly compact, there is a prime \(\kappa\)-complete ideal \(\mathcal{J}\) on \(X\) such that \(\mathcal{I}\subseteq\mathcal{J}\). **Claim**.: _If \(x\in X\) then there is some \(W_{x}\in\mathcal{J}\cap\mathcal{S}\) such that \(x\in W_{x}\)._ Proof of claim.: Fix \(x\in X\). Since \(\mathcal{U}\subseteq\mathcal{B}\) covers \(X\), there is some \(U\in\mathcal{U}\) with \(x\in U\), and by definition of \(\mathcal{B}\) there exists a finite sequence \(\langle W_{i}^{x}:i<n_{x}\rangle\in\mathcal{S}^{n_{x}}\), where \(n_{x}\in\omega\), such that \(U=\bigcap_{i<n_{x}}W_{i}^{x}\); in particular, \(x\in\bigcap_{i<n_{x}}W_{i}^{x}\). By definition of \(\mathcal{I}\), \(\bigcap_{i<n_{x}}W_{i}^{x}\in\mathcal{I}\subseteq\mathcal{J}\). Since \(\mathcal{J}\) is prime, there exists \(i_{x}<n_{x}\) such that \(W_{i_{x}}^{x}\in\mathcal{J}\). Put \(W_{x}=W_{i_{x}}^{x}\). This works. Using the claim we can choose, for each \(x\in X\), a set \(W_{x}\in\mathcal{J}\cap\mathcal{S}\) such that \(x\in W_{x}\). Obviously, \(\{W_{x}:x\in X\}\) covers \(X\). Since \(\{W_{x}:x\in X\}\subseteq\mathcal{S}\), our hypothesis on \(\mathcal{S}\) implies the existence of \(Y\in[X]^{<\kappa}\) such that \(\{W_{x}:x\in Y\}\) covers \(X\). In symbols, \(X=\bigcup_{x\in Y}W_{x}\). But \(W_{x}\in\mathcal{J}\), \(\mathcal{J}\) is \(\kappa\)-complete and \(|Y|<\kappa\), so \(X\in\mathcal{J}\). This contradicts that \(\mathcal{J}\) is a proper ideal. **Theorem 2.13** (Hajnal-Juhasz).: _Every strongly compact cardinal is square compact._ Proof.: Let \(\kappa\) be strongly compact. Suppose \((X,\tau)\) is a \(\kappa\)-compact space and let \[\mathcal{S}:=\{X\times U:U\in\tau\}\cup\{U\times X:U\in\tau\}.\] It is clear that \(\mathcal{S}\) is a subbasis for the product topology on \(X^{2}\). Let \(\mathcal{U}\subseteq\mathcal{S}\) cover \(X^{2}\); we argue that \(\mathcal{U}\) has a subcover of size \(<\kappa\). By Lemma 2.12, this is enough to complete the proof. Put \(\mathcal{U}_{0}:=\{U\in\mathcal{U}:U=V\times X\text{ for some }V\in\tau\}\) and \(\mathcal{U}_{1}:=\{U\in\mathcal{U}:U=X\times V\text{ for some }V\in\tau\}\), so that \(\mathcal{U}=\mathcal{U}_{0}\cup\mathcal{U}_{1}\). Let \(\mathcal{V}_{0}=\{V:V\times X\in\mathcal{U}_{0}\}\) and \(\mathcal{V}_{1}=\{V:X\times V\in\mathcal{U}_{1}\}\).
**Claim**.: _At least one of \(\mathcal{V}_{0}\) or \(\mathcal{V}_{1}\) covers \(X\)._ Proof of claim.: Suppose that \(\bigcup\mathcal{V}_{0}\neq X\) and \(\bigcup\mathcal{V}_{1}\neq X\). Pick \(x_{0}\in X\setminus\bigcup\mathcal{V}_{0}\) and \(x_{1}\in X\setminus\bigcup\mathcal{V}_{1}\). By assumption, \(\mathcal{U}\) covers \(X^{2}\), so \((x_{0},x_{1})\in U\) for some \(U\in\mathcal{U}\). There are now two possibilities: either \(U=V\times X\) for some \(V\in\tau\), in which case \(x_{0}\in\bigcup\mathcal{V}_{0}\), or \(U=X\times V\) for some \(V\in\tau\), in which case \(x_{1}\in\bigcup\mathcal{V}_{1}\). In either case, we get a contradiction. Suppose that \(\mathcal{V}_{0}\) covers \(X\); the other case is analogous. Let \(\mathcal{V}\) be a subcover of \(\mathcal{V}_{0}\) of size \(<\kappa\). Then \(\{V\times X:V\in\mathcal{V}\}\) is a subcover of \(\mathcal{U}\) of size \(<\kappa\), which completes the proof. ## 3. Preliminaries on trees A _chain_ in a tree \(T\) is a subset of \(T\) which is linearly ordered by \(<_{T}\). A _branch_ is a maximal chain. A _cofinal branch_ is a branch which meets every level of \(T\). Given a regular cardinal \(\kappa\), we say that \(T\) is a \(\kappa\)_-tree_ if and only if \(\operatorname{ht}(T)=\kappa\) and \(|T_{\alpha}|<\kappa\) for every \(\alpha<\kappa\). We say \(T\) is a \(\kappa\)_-Aronszajn tree_ if and only if it is a \(\kappa\)-tree with no cofinal branches. An Aronszajn tree is just an \(\aleph_{1}\)-Aronszajn tree. Classically: **Theorem** (Konig, [5]).: _There are no \(\aleph_{0}\)-Aronszajn trees._ **Theorem** (Aronszajn, see [11]).: _There is an Aronszajn tree._ **Theorem** (Specker, [11]).: _If \(\mathsf{CH}\) holds, then there is an \(\aleph_{2}\)-Aronszajn tree._ **Theorem** (Mitchell-Silver, [8]).: _The theories_ * \(\mathsf{ZFC}\) _+ "There is a weakly compact cardinal",_ * \(\mathsf{ZFC}\) _+ "There are no_ \(\aleph_{2}\)_-Aronszajn trees"_ _are equiconsistent._ An _antichain_ in a tree is a set of pairwise incomparable elements of \(T\). A \(\kappa\)_-Suslin tree_ is a \(\kappa\)-tree which has no chains or antichains of size \(\kappa\). A _Suslin tree_ is an \(\aleph_{1}\)-Suslin tree. A tree \(T\) is _normal_ if and only if it satisfies the following conditions: * It has a unique minimal element (called a _root_), * for all \(\alpha<\beta<\operatorname{ht}(T)\) and all \(x\in T_{\alpha}\) there is some \(y\in T_{\beta}\) such that \(x<y\), * for all \(\alpha<\operatorname{ht}(T)\) and \(x\in T_{\alpha}\) there exist \(y,z\in T\) such that \(x<y\), \(x<z\), and \(y\perp z\). **Lemma 3.1** (folklore).: _Let \(T\) be a normal \(\kappa\)-tree. If \(T\) has no antichains of size \(\kappa\), then \(T\) is \(\kappa\)-Suslin._ Proof.: If \(b\) is a branch through \(T\) of length \(\kappa\), use the normality of \(T\) to pick, for each \(x\in b\), some \(y_{x}\in I(x)\) such that \(y_{x}\not\in b\). Then \(A=\{y_{x}:x\in b\}\) is an antichain and \(|A|=\kappa\). In view of Lemma 3.1, to check whether a given normal \(\kappa\)-tree is Suslin, one "only" has to argue that all of its antichains have size less than \(\kappa\). We shall make use of this fact without any further mention. We also recall that an \(\aleph_{1}\)-tree is _special_ if and only if it can be written as a countable union of antichains. Equivalently, \(T\) is special if and only if there is an order preserving map \(T\to\mathbb{Q}\), see [6, Lemma III.5.17]. Let \((T,<)\) be a tree. If \(X\subseteq T\), we let \(\uparrow X:=\{y\in T:\exists x\in X(x\leq y)\}\).
If \(X=\{x\}\), we write \(\uparrow x\) instead of \(\uparrow\{x\}\). The symbols \(\downarrow X\) and \(\downarrow x\) are defined analogously. ## 4. The fine wedge topology If \(T\) is a tree, the _fine wedge topology_ on \(T\) is generated by the sets \(\uparrow t\) and their complements, where \(t\in T\). Note that, if \(x<y\), then \((\uparrow x)\setminus\uparrow y\) and \(\uparrow y\) are disjoint open neighbourhoods of \(x\) and \(y\), respectively. If \(x\perp y\), then \(\uparrow x\) and \(\uparrow y\) are disjoint open neighbourhoods of \(x\) and \(y\). We have thus shown that the topology is Hausdorff. All topological notions below refer to the fine wedge topology. If \(T\) is _finitely branching at \(x\)_ (that is, \(|I(x)|<\aleph_{0}\)), then the identity \[\{x\}=(\uparrow x)\cap\bigcap_{y\in I(x)}(\uparrow y)^{c}\] shows that \(x\) is isolated. Therefore, if \(T\) is finitely branching, the fine wedge topology is just the discrete topology on \(T\). The interplay between finitely and infinitely branching trees will play a key role in our work, see Theorem 4.9. Recall that, if \(X\) is a topological space and \(x\in X\), we say that a collection of open sets \(\mathcal{B}\) is a _local basis at \(x\)_ if and only if for every open set \(U\) with \(x\in U\) there is some \(B\in\mathcal{B}\) with \(x\in B\subseteq U\). We define the _character of \(x\)_ to be the cardinal \(\chi(x,X):=\min\{|\mathcal{B}|:\mathcal{B}\text{ is a local basis at }x\}\). **Lemma 4.1**.: _Let \(T\) be a tree. Given \(x\in T\), the sets_ \[(\uparrow x)\setminus\uparrow F,\] _where \(F\in[I(x)]^{<\omega}\), form a local basis at \(x\), and so \(\chi(x,T)=|I(x)|\) when \(I(x)\) is infinite. In particular, if every node has \(\aleph_{0}\) many immediate successors, then the fine-wedge topology is first countable._ Proof.: Let \(U\) be a basic open neighbourhood of \(x\), say \[x\in U=\bigcap_{i<n}\uparrow\!\!x_{i}\setminus\bigcup_{j<m}\uparrow\!\!y_{j}\] for some \(x_{i},y_{j}\in T\), \(n,m\in\omega\). Let \(J=\{j<m:x<y_{j}\}\). For each \(j\in J\), let \(z_{j}\in I(x)\) be the unique point with \(z_{j}\leq y_{j}\). Then \[x\in(\uparrow\!\!x)\setminus\bigcup_{j\in J}\uparrow\!\!z_{j}\subseteq U,\] which completes the proof. **Lemma 4.2**.: _Suppose \(\kappa\) is a regular cardinal. Let \((T,<)\) be a \(\kappa\)-tree which is \(\kappa\)-compact. Then \(T\) is \(\kappa\)-Aronszajn._ Proof.: Suppose that \(b\) is a cofinal branch through \(T\). Then \(\{(\uparrow\!\!x)^{c}:x\in b\}\) is an open cover of \(T\), since no \(t\in T\) can lie above every element of \(b\) (as \(|\!\downarrow\!t|<\kappa\)), and by the regularity of \(\kappa\) it has no subcover of size \(<\kappa\). **Lemma 4.3**.: _Let \((T,<)\) be a \(\kappa\)-Aronszajn tree. Then every cover of \(T\) by subbasic open sets has a subcover of size \(<\kappa\)._ Proof.: Let \(\mathcal{U}\) be a cover of \(T\) by subbasic open sets. Observe that, if all nodes at some level of the tree belong to a cone from \(\mathcal{U}\), then these cones give a subcover of size \(<\kappa\) of the tree above that level. But there are fewer than \(\kappa\) many nodes below that level, so we're done. The idea is essentially to show that such a "good" level must exist. Consider the sets \[A=\{t\in T:\uparrow\!\!t\in\mathcal{U}\}\] and \[B=\{t\in T:(\uparrow\!\!t)^{c}\in\mathcal{U}\},\] where \({}^{c}\) denotes complementation with respect to \(T\). Suppose there are \(s,t\in B\) such that \(s\perp t\). Then \((\uparrow\!\!s)\cap(\uparrow\!\!t)=\emptyset\), so \((\uparrow\!\!s)^{c}\cup(\uparrow\!\!t)^{c}=T\) and we've found a finite subcover. Therefore, we may assume that \(B\) is linearly ordered, hence a chain.
Put \[X=\bigcap_{x\in B}\uparrow\!\!x\] and observe that \[T=X\cup\bigcup_{x\in B}(\uparrow\!\!x)^{c}.\] Since the tree is \(\kappa\)-Aronszajn, \(|B|<\kappa\), hence we only need to show that \(X\) is covered by some subset of \(\mathcal{U}\) of size \(<\kappa\). Let \(\alpha\) be the least height of a member of \(X\) (if \(X\) is empty, there's nothing to do), and pick \(y\in X\cap T_{\alpha}\), where \(T_{\alpha}\) denotes the \(\alpha^{\text{th}}\) level of \(T\). Since \(\mathcal{U}\) covers \(T\), there is some \(U\in\mathcal{U}\) with \(y\in U\). If \(U=(\uparrow\!\!t)^{c}\) for some \(t\), then \(t\in B\), so \(t\leq y\) because \(y\in X\), contradicting that \(y\in(\uparrow\!\!t)^{c}\). It follows that \(y\in\uparrow\!\!t\) for some \(t\in A\). This shows that \(T_{\alpha}\cap X\) is one of the good levels described in the first paragraph of the proof, and we're done. We isolate the following elementary result from point-set topology: **Lemma 4.4**.: _Let \(X\) be a topological space and \(\kappa\) an infinite cardinal. Suppose we have a sequence \(\left\langle\mathcal{B}_{x}:x\in X\right\rangle\) such that \(\mathcal{B}_{x}\) is a local basis at \(x\) for every \(x\in X\). Then \(X\) is \(\kappa\)-compact if and only if for each \(\Gamma\in\prod_{x\in X}\mathcal{B}_{x}\) there is some \(Y\in[X]^{<\kappa}\) such that \(X=\bigcup_{y\in Y}\Gamma(y)\)._ Proof.: The forwards direction is easy. We prove the backwards implication. Let \(\mathcal{U}\) be an open cover of \(X\). Choose, for each \(x\in X\), some \(U_{x}\in\mathcal{U}\) such that \(x\in U_{x}\). Given \(x\in X\), we know that \(\mathcal{B}_{x}\) is an open basis at \(x\), hence we can find \(\Gamma(x)\in\mathcal{B}_{x}\) such that \(x\in\Gamma(x)\subseteq U_{x}\). This defines \(\Gamma\in\prod_{x\in X}\mathcal{B}_{x}\). By assumption, there is some \(Y\in[X]^{<\kappa}\) such that \(X=\bigcup_{y\in Y}\Gamma(y)\). But then \(X=\bigcup_{y\in Y}U_{y}\) and \(|\{U_{y}:y\in Y\}|<\kappa\). In the tree context, we'll be looking at the system of local bases formed by the sets \(\uparrow x\setminus\uparrow f(x)\), where \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). We shall say that such an \(f\)_codes_ the cover \(\mathcal{U}_{f}:=\{\uparrow x\setminus\uparrow f(x):x\in T\}\). Going forward, we shall blur the distinction between the function \(f\) and the open cover \(\mathcal{U}_{f}\), and speak simply of the _cover_\(f\). We shall also consider \(\mathcal{U}_{f}\) for \(f\in\prod_{x\in X}[I(x)]^{<\omega}\), where \(X\subseteq T\) (of course, \(\mathcal{U}_{f}\) might not cover \(T\)). Note that covers of this kind have the following important property: **Lemma 4.5**.: _Let \(T\) be a tree and \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). The following are equivalent:_ 1. \(f\) _has a countable subcover._ 2. _There is a limit ordinal_ \(\alpha<\omega_{1}\) _such that_ \(f\!\restriction\!(T\!\restriction\!\alpha)\) _covers_ \(T\)_._ 3. _There is an ordinal_ \(\alpha<\omega_{1}\) _such that for every_ \(x\in T_{\alpha}\) _there is some_ \(y\in T\!\restriction\!\alpha\) _such that_ \(x\in\uparrow y\setminus\uparrow f(y)\)_._ Proof.: Trivial. **Definition 4.6**.: Let \(T\) be an \(\aleph_{1}\)-tree and \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). We say that a point \(x\in T\) is _safe_ (for \(f\)) if and only if for all \(y<x\), \(x\in\uparrow f(y)\). An immediate consequence of the definition is: **Lemma 4.7**.: _Let \(T\) be an \(\aleph_{1}\)-tree and \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). If \(x\in T\) is safe, then so is every \(y<x\).
Also, if \(z\in I(x)\) (with \(x\) safe), then \(z\) is safe if and only if \(z\in f(x)\)._ Proof.: Trivial. The key property of safe points is the following: **Lemma 4.8**.: _Let \(T\) be an \(\aleph_{1}\)-tree and \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). The following are equivalent:_ 1. \(f\) _has no countable subcover._ 2. _For every_ \(\alpha<\omega_{1}\) _there is a safe point of height_ \(\alpha\)_._ 3. _The set_ \(\{\operatorname{ht}(x):x\text{ is safe}\}\) _is unbounded in_ \(\omega_{1}\)_._ Proof.: (1) \(\Rightarrow\) (2): Fix \(\alpha<\omega_{1}\). By 4.5, there is some \(x\in T_{\alpha}\) such that for all \(y\in T\!\restriction\!\alpha\), \(x\not\in\uparrow y\setminus\uparrow f(y)\). In particular, if \(y<x\), \(x\in\uparrow f(y)\), so \(x\) is safe. (2) \(\Rightarrow\) (3): Trivial. (3) \(\Rightarrow\) (1): Suppose towards a contradiction that \(f\) has a countable subcover. By 4.5, there is some limit \(\gamma<\omega_{1}\) such that \(f\!\restriction\!(T\!\restriction\!\gamma)\) covers \(T\). Choose a safe point \(x\) with \(\gamma<\operatorname{ht}(x)\). Let \(y\in T\!\restriction\!\gamma\). If \(y\not<x\), then obviously \(x\not\in\uparrow y\setminus\uparrow f(y)\). If \(y<x\), the safety of \(x\) implies that \(x\in\uparrow f(y)\), so \(x\not\in\uparrow y\setminus\uparrow f(y)\). In either case, \(x\not\in\bigcup\mathcal{U}_{f\!\restriction\!(T\!\restriction\!\gamma)}=T\), contradiction. Recall that a subtree of a tree is a downwards closed subset. Note that, if \(S\) is a subtree of \(T\) and \(\alpha<\operatorname{ht}(T)\), then \(S_{\alpha}=S\cap T_{\alpha}\). Also, if \(x\in S\), then \(I_{S}(x)=I_{T}(x)\cap S\). **Theorem 4.9**.: _Let \(T\) be an \(\aleph_{1}\)-tree. Then \(T\) is Lindelof if and only if every finitely branching subtree of \(T\) is countable._ Proof.: \(\Rightarrow)\) Suppose \(S\subseteq T\) is a finitely branching subtree of \(T\) with \(\operatorname{ht}(S)=\aleph_{1}\). Define \(f(x)=I_{T}(x)\cap S\), where \(x\in T\). We claim that the cover coded by \(f\) has no countable subcover. To see this, fix a limit ordinal \(\alpha<\omega_{1}\). Choose \(x\in S\cap T_{\alpha}\) and \(y<x\) (in \(T\)). Since \(S\) is a subtree, \(y\in S\). Let \(z\) be the unique element of \(I_{T}(y)\cap\downarrow x\). As \(z\leq x\in S\), we see that \(z\in S\), hence \(z\in f(y)\) and so \(x\in\uparrow f(y)\). This shows that \(x\) is safe. There is therefore a safe point at every limit level, so we're done by Lemma 4.8. \(\Leftarrow)\) Let \(f\) code a cover with no countable subcover. Let \(S\) be the set of points of \(T\) which are safe for \(f\). It is clear that \(S\) is a subtree of \(T\). Given \(x\in S\), we see that \(I_{S}(x)=f(x)\), so \(S\) is finitely branching. Since \(f\) has no countable subcover, \(\operatorname{ht}(S)=\aleph_{1}\). We end this section by showing that a Suslin tree is automatically Lindelof. To do this, we need to recall some elementary facts on forcing with Suslin trees. Given a tree \(T\), let \(\mathbb{P}_{T}\) be \(T\) upside down. More precisely, the underlying set of \(\mathbb{P}_{T}\) is (the underlying set of) \(T\) and the order on \(\mathbb{P}_{T}\) is the dual order on \(T\). **Lemma 4.10**.: _Let \(T\) be a Suslin tree. Then \(\mathbb{P}_{T}\) is ccc and, for every dense open set \(D\subseteq\mathbb{P}_{T}\) there is some \(\alpha<\omega_{1}\) such that \(\{x\in T:\operatorname{ht}(x)\geq\alpha\}\subseteq D\).
In particular, \(\mathbb{P}_{T}\) is countably distributive (every countable intersection of dense open sets is dense)._ Proof.: The poset \(\mathbb{P}_{T}\) being ccc is just a restatement of \(T\) having no uncountable antichains. To prove the remaining statements, let \(D\subseteq\mathbb{P}_{T}\) be dense and open. Let \(A\subseteq D\) be a maximal antichain, which must therefore be countable. Fix \(\alpha<\omega_{1}\) such that \(A\subseteq T\mathord{\restriction}\alpha\). If \(x\in T\mathord{\restriction}[\alpha,\omega_{1})\), there is some \(a\in A\) with \(a\parallel x\), hence \(a<x\). Since \(D\) is open, \(x\in D\). **Lemma 4.11**.: _Let \(T\) be a Suslin tree in the universe \(V\). Let \(W\) be an outer model of \(V\). Suppose that \(b\in W\) is a cofinal branch through \(T\). Then \(b\) is \(\mathbb{P}_{T}\)-generic over \(V\)._ Proof.: Let \(D\in V\) be dense and open in \(\mathbb{P}_{T}\). Choose \(\alpha<\omega_{1}^{V}\) with \(T\mathord{\restriction}[\alpha,\omega_{1})\subseteq D\). Then any point in \(b\) of height at least \(\alpha\) must belong to \(D\). **Theorem 4.12**.: _Let \(T\) be an infinitely branching Suslin tree. Then every finitely branching subtree of \(T\) is countable._ Proof.: Suppose that \(S\) is a finitely branching subtree of \(T\) with height \(\aleph_{1}\). Since \(T\) is Suslin, so is \(S\). Let \(G\) be \(\mathbb{P}_{S}\)-generic over \(V\). Then \(G\) is a cofinal branch through \(S\), hence a cofinal branch through \(T\), so by Lemma 4.11, \(G\) is \(\mathbb{P}_{T}\)-generic over \(V\). But an easy density argument shows that every \(\mathbb{P}_{T}\)-generic branch is disjoint from \(S\) above some node of \(T\), because \(S\) is finitely branching and \(T\) is not. This is a contradiction. **Corollary 4.13**.: _Every Suslin tree is Lindelof._ Proof.: Immediate from Theorems 4.9 and 4.12. ## 5. Examples of Lindelof and non-Lindelof trees If \(s\) and \(t\) are two functions with domain \(\alpha\), we let \(\Delta(s,t):=\{\xi<\alpha:s(\xi)\neq t(\xi)\}\). We write \(s=^{*}t\) if and only if \(\Delta(s,t)\) is finite. We shall say that \(\langle e_{\alpha}:\alpha<\omega_{1}\rangle\) is a _coherent sequence of injections_ if \(e_{\alpha}:\alpha\to\omega\) is injective and \(e_{\alpha}=^{*}e_{\beta}\!\upharpoonright\!\alpha\) for all \(\alpha\leq\beta<\omega_{1}\). Note that each \(e_{\alpha}\) must have coinfinite range. Going forward, we shall speak simply of coherent sequences, which in the literature usually refers to finite-to-one functions. Let \[T^{\vec{e}}=\bigcup_{\alpha<\omega_{1}}\{s\in{}^{\alpha}\omega:s\text{ is injective }\wedge s=^{*}e_{\alpha}\}.\] It is clear that \(T^{\vec{e}}\) is an infinitely branching Aronszajn tree. Our next goal is to construct a non-Lindelof Aronszajn tree. The idea is to build a finitely branching Aronszajn tree, and then make \(\aleph_{0}\) many new nodes "sprout" at each node, producing a larger, infinitely branching tree. The constructions of Aronszajn trees that we're familiar with all produce infinitely branching trees, so our first task is to build a finitely branching one. Recall that a tree is _splitting_ if every node has at least two distinct immediate successors. **Lemma 5.1**.: _There is a splitting subtree of \(2^{<\omega_{1}}\) which is Aronszajn._ Proof.: Fix a coherent sequence \(\vec{e}\) and a bijection \(f:\omega_{1}\times\omega\to\omega_{1}\) such that for every limit ordinal \(\gamma<\omega_{1}\), \(f[\gamma\times\omega]=\gamma\). Put \(\Gamma=\lim(\omega_{1})\cup\{0\}\).
Define \(x_{\alpha}\in 2^{\alpha}\) for \(\alpha\in\Gamma\) by \(x_{\alpha}=\chi_{f[e_{\alpha}]}\), where \(\chi_{A}\) denotes the characteristic function of \(A\). This makes sense because \(f[e_{\alpha}]\subseteq\alpha\). **Claim**.: _If \(\alpha,\beta\in\Gamma\) and \(\alpha<\beta\) then \(x_{\alpha}=^{*}x_{\beta}\!\upharpoonright\!\alpha\)._ Proof of claim.: For readability, we extend our \(\Delta(s,t)\) notation to allow functions with different domains. More precisely, if \(\operatorname{dom}(s)\leq\operatorname{dom}(t)\), we write \(\Delta(s,t)\) for \(\Delta(s,t\!\upharpoonright\!\operatorname{dom}(s))\). If \(\alpha=0\) everything is trivial so we assume that \(\alpha\geq\omega\). Fix \(\eta\in\Delta(x_{\alpha},x_{\beta})\), so \(x_{\alpha}(\eta)\neq x_{\beta}(\eta)\). Since \(f[\alpha\times\omega]=\alpha\), there exist unique \(\xi<\alpha\) and \(n\in\omega\) with \(f(\xi,n)=\eta\). Since \(x_{\alpha}(\eta)\neq x_{\beta}(\eta)\), there are two possibilities: \(\eta\in f[e_{\alpha}]\setminus f[e_{\beta}]\) or \(\eta\in f[e_{\beta}]\setminus f[e_{\alpha}]\). We consider the two cases separately: 1. Suppose that \(\eta\in f[e_{\alpha}]\setminus f[e_{\beta}]\). Then \(e_{\alpha}(\xi)=n\) but \(e_{\beta}(\xi)\neq n\), so \(\xi\in\Delta(e_{\alpha},e_{\beta})\) and \(\eta=f(\xi,e_{\alpha}(\xi))\). 2. Suppose that \(\eta\in f[e_{\beta}]\setminus f[e_{\alpha}]\). Then \(e_{\beta}(\xi)=n\) but \(e_{\alpha}(\xi)\neq n\), so \(\xi\in\Delta(e_{\alpha},e_{\beta})\) and \(\eta=f(\xi,e_{\beta}(\xi))\). We have therefore established that \[\Delta(x_{\alpha},x_{\beta})\subseteq\{f(\xi,e_{\alpha}(\xi)):\xi\in\Delta(e_{\alpha},e_{\beta})\}\cup\{f(\xi,e_{\beta}(\xi)):\xi\in\Delta(e_{\alpha},e_{\beta})\}.\] Since \(\vec{e}\) is coherent, the union on the right is finite, hence \(\Delta(x_{\alpha},x_{\beta})\) is finite too. Given \(\alpha<\omega_{1}\), let \(\gamma_{\alpha}\) be the unique ordinal in \(\Gamma\) such that \(\gamma_{\alpha}\leq\alpha<\gamma_{\alpha}+\omega\). Note that \(\alpha\in\Gamma\) if and only if \(\alpha=\gamma_{\alpha}\). Define \[T=\bigcup_{\alpha<\omega_{1}}\{x\in 2^{\alpha}:x\!\upharpoonright\!\gamma_{\alpha}=^{*}x_{\gamma_{\alpha}}\}\] By the claim, \(T\) is a subtree of \(2^{<\omega_{1}}\). Indeed, if \(x\in 2^{\alpha}\) and \(y\in T\cap 2^{\beta}\) satisfy \(x\subseteq y\), then \(\gamma_{\alpha}\leq\gamma_{\beta}\), so \(x\!\upharpoonright\!\gamma_{\alpha}=^{*}x_{\gamma_{\beta}}\!\upharpoonright\!\gamma_{\alpha}=^{*}x_{\gamma_{\alpha}}\), where the last \(=^{*}\) follows from the claim. The fact that \(T\) has countable levels is immediate from the claim. As \(x_{\alpha}\in T_{\alpha}\) for every \(\alpha\in\Gamma\), we see that \(T\) has height \(\omega_{1}\), and so \(T\) is an \(\aleph_{1}\)-tree. We point out that, if \(x\in T\), then \(x^{\frown}\langle 0\rangle,x^{\frown}\langle 1\rangle\in T\), because \(\gamma_{\alpha}=\gamma_{\alpha+1}\) for every \(\alpha<\omega_{1}\). To see that \(T\) is Aronszajn, assume towards a contradiction that \(\langle y_{\alpha}:\alpha<\omega_{1}\rangle\) is a cofinal branch through \(T\), so in particular \(y_{\alpha}=^{*}x_{\alpha}\) for every \(\alpha\in\lim(\omega_{1})\). Find \(\eta_{\alpha}<\alpha\) for each \(\alpha\in\lim(\omega_{1})\) so that \(\Delta(y_{\alpha},x_{\alpha})\subseteq\eta_{\alpha}\). Since \(\alpha\mapsto\eta_{\alpha}\) is regressive, by Fodor's Lemma there is some stationary set \(E\subseteq\lim(\omega_{1})\) and some \(\eta<\omega_{1}\) such that \(\eta_{\alpha}=\eta\) for every \(\alpha\in E\).
Since \(|T_{\eta}|\leq\aleph_{0}\), we may find \(E^{\prime}\subseteq E\) stationary and \(x,y\in 2^{\eta}\) such that \(y_{\alpha}\!\upharpoonright\eta=y\) and \(x_{\alpha}\!\upharpoonright\eta=x\) for every \(\alpha\in E^{\prime}\). If \(\alpha,\beta\in E^{\prime}\), \(\alpha<\beta\), \[x_{\alpha}\!\upharpoonright[\eta,\alpha)=y_{\alpha}\!\upharpoonright[\eta,\alpha)=y_{\beta}\!\upharpoonright[\eta,\alpha)=x_{\beta}\!\upharpoonright[\eta,\alpha)\] by the choice of \(\eta\). Since \(x_{\alpha}\!\upharpoonright\eta=x=x_{\beta}\!\upharpoonright\eta\), it follows that \(x_{\alpha}=x_{\beta}\!\upharpoonright\alpha\). Finally, if \(\xi<\alpha\) and \(n:=e_{\alpha}(\xi)\), then \(x_{\alpha}(f(\xi,n))=1\), so \(f(\xi,n)\in f[e_{\beta}]\), so \(e_{\beta}(\xi)=n\). We have thus shown that \(\langle e_{\alpha}:\alpha\in E^{\prime}\rangle\) is a chain, which is absurd because \(E^{\prime}\) is uncountable: the union of this chain would be an injection from \(\omega_{1}\) into \(\omega\). **Lemma 5.2**.: _There is an infinitely branching Aronszajn tree with a finitely branching subtree of height \(\aleph_{1}\)._ Proof.: Let \(\vec{e}\) be a coherent sequence and let \(T\subseteq 2^{<\omega_{1}}\) be the tree constructed from \(\vec{e}\) in Lemma 5.1. We recursively define a tree \(U\subseteq\omega^{<\omega_{1}}\) level by level, starting with \(U_{0}=\{\emptyset\}\). For successor stages, we let \(U_{\alpha+1}=\{u^{\frown}\langle n\rangle:u\in U_{\alpha}\wedge n\in\omega\}\). If \(\alpha<\omega_{1}\) is a limit ordinal, we let \(U_{\alpha}=\{u\cup t\!\upharpoonright[\operatorname{dom}(u),\alpha):u\in U\!\upharpoonright\alpha\wedge t\in T_{\alpha}\}\). An easy induction shows that \(U_{\alpha}\) is countable and that \(T_{\alpha}\subseteq U_{\alpha}\) for every \(\alpha\). **Claim 1**.: _If \(\beta<\alpha\), \(t\in T_{\alpha}\) and \(u\in U_{\beta}\), then \(u\cup t\!\upharpoonright[\beta,\alpha)\in U_{\alpha}\)._ Proof of claim 1.: By induction on \(\alpha\): * If \(\alpha=0\), then it's obvious. * Suppose that \(\alpha\) is a successor ordinal, say \(\alpha=\gamma+1\). Then \(u\cup(t\!\upharpoonright[\beta,\alpha))=(u\cup t\!\upharpoonright[\beta,\gamma))^{\frown}\langle t(\gamma)\rangle\) which belongs to \(U_{\alpha}\) by the inductive hypothesis and the definition of \(U_{\alpha}\). * If \(\alpha\) is a limit ordinal, this is immediate from the definition of \(U_{\alpha}\). **Claim 2**.: _If \(\alpha<\omega_{1}\), \(\beta<\alpha\) and \(v\in U_{\alpha}\), then \(v\!\upharpoonright\beta\in U\)._ Proof of claim 2.: By induction on \(\alpha\): * If \(\alpha=0\) then it's vacuously true. * Suppose that \(\alpha\) is a successor ordinal, say \(\alpha=\gamma+1\). Fix \(v\in U_{\alpha}\), so \(v=u^{\frown}\langle n\rangle\) for some \(u\in U_{\gamma}\) and \(n\in\omega\). Let \(\beta<\alpha\). If \(\beta=\gamma\), then \(v\!\upharpoonright\!\beta=u\in U\) as desired. If \(\beta<\gamma\), then \(v\!\upharpoonright\!\beta=u\!\upharpoonright\!\beta\in U\) by the inductive hypothesis applied to \(\gamma\). * Suppose that \(\alpha\) is a limit ordinal and let \(v\in U_{\alpha}\), say \(v=u\cup(t\!\upharpoonright[\operatorname{dom}(u),\alpha))\), where \(u\in U_{\gamma}\), \(\gamma<\alpha\), and \(t\in T_{\alpha}\). Let \(\beta<\alpha\). If \(\beta=\gamma\), then \(v\!\upharpoonright\!\beta=u\in U\). If \(\beta<\gamma\), then \(v\!\upharpoonright\!\beta=u\!\upharpoonright\!\beta\in U\) by the inductive hypothesis applied to \(\gamma\). If \(\gamma<\beta\), then \(v\!\upharpoonright\!\beta=u\cup(t\!\upharpoonright[\operatorname{dom}(u),\beta))\). We now induct on \(\beta\).
If \(\beta\) is a limit ordinal, then \(v\!\upharpoonright\beta\in U_{\beta}\) by definition of \(U_{\beta}\). If \(\beta\) is a successor ordinal, say \(\beta=\delta+1\), then \(v\!\upharpoonright\beta=(u\cup t\!\upharpoonright[\operatorname{dom}(u),\delta))^{\frown}\langle t(\delta)\rangle\). By Claim 1, \(u\cup t\!\upharpoonright[\operatorname{dom}(u),\delta)\in U_{\delta}\), and so \(v\!\upharpoonright\beta\in U_{\beta}\) by definition of \(U_{\beta}\). To see that \(U\) is Aronszajn, suppose towards a contradiction that \(b\) is a cofinal branch through \(U\). Let \(x_{\alpha}\) be the \(\alpha^{\text{th}}\) point of \(b\). For each limit ordinal \(\alpha<\omega_{1}\), there exist \(u_{\alpha}<x_{\alpha}\) and \(t_{\alpha}\in T_{\alpha}\) such that \(x_{\alpha}=u_{\alpha}\cup t_{\alpha}\!\upharpoonright[\operatorname{dom}(u_{\alpha}),\alpha)\). Define \(f:\lim(\omega_{1})\to\omega_{1}\) by \(f(\alpha)=\operatorname{dom}(u_{\alpha})\). Since \(f\) is regressive, we may find a stationary set \(\Gamma\subseteq\lim(\omega_{1})\) and some \(\eta<\omega_{1}\) such that \(f\,``\Gamma=\{\eta\}\). As \(|T_{\eta}|\leq\aleph_{0}\), we may assume, by shrinking \(\Gamma\) if necessary, that there is some \(t\in T_{\eta}\) such that \(t=t_{\alpha}\!\upharpoonright\!\eta\) for all \(\alpha\in\Gamma\). If \(\alpha,\beta\in\Gamma\), \(\alpha<\beta\), then \(t_{\alpha}\!\upharpoonright[\eta,\alpha)=x_{\alpha}\!\upharpoonright[\eta,\alpha)=x_{\beta}\!\upharpoonright[\eta,\alpha)=t_{\beta}\!\upharpoonright[\eta,\alpha)\) by our choice of \(\eta\). But then \(t_{\alpha}=t_{\beta}\!\upharpoonright\!\alpha\) because they both agree with \(t\) below \(\eta\). This means that \(\langle t_{\alpha}:\alpha\in\Gamma\rangle\) is an uncountable chain in \(T\), contradicting that \(T\) is Aronszajn. **Corollary 5.3**.: _There is a non-Lindelof Aronszajn tree._ Proof.: Apply Lemmas 5.1 and 5.2 to obtain an infinitely branching Aronszajn tree \(T\) with a finitely branching subtree of uncountable height. Then \(T\) is not Lindelof by Theorem 4.9. Recall that Jensen's _diamond principle_, denoted \(\diamondsuit\), asserts the existence of a sequence \(\langle A_{\alpha}:\alpha<\omega_{1}\rangle\) such that \(A_{\alpha}\subseteq\alpha\) and for every \(A\subseteq\omega_{1}\) there exist stationarily many \(\alpha<\omega_{1}\) with \(A\cap\alpha=A_{\alpha}\). We shall need to modify the \(\diamondsuit\)-sequence so that it "guesses" functions \(\omega_{1}\to[\omega_{1}]^{<\omega}\). **Lemma 5.4**.: _The principle \(\diamondsuit\) holds if and only if there is a sequence \(\langle f_{\alpha}:\alpha<\omega_{1}\rangle\) such that \(f_{\alpha}:\alpha\to[\alpha]^{<\omega}\) and for every \(f:\omega_{1}\to[\omega_{1}]^{<\omega}\) the set \(\{\alpha:f\!\upharpoonright\!\alpha=f_{\alpha}\}\) is stationary._ The proof is a standard coding argument, and we omit it. A sequence \(\langle f_{\alpha}:\alpha<\omega_{1}\rangle\) of the kind appearing in the statement of Lemma 5.4 will also be referred to as a \(\diamondsuit\)-sequence. **Theorem 5.5**.: _If \(\diamondsuit\) holds, then there is a special Lindelof tree._ Proof.: We construct a _normal_, infinitely branching tree \(T\) level by level, together with a specializing function \(\varphi:T\to\mathbb{Q}\). To make sure that \(\varphi\) can be extended at limit stages, we require that \[(*)\qquad\forall x\in T\!\upharpoonright\!\alpha\ \forall q\in\mathbb{Q}\,(q>\varphi(x)\to\exists y\in T_{\alpha}(x<y\wedge\varphi(y)=q))\] holds for every \(\alpha<\omega_{1}\).
The construction will also depend on a fixed \(\diamondsuit\)-sequence \(\langle f_{\alpha}:\alpha<\omega_{1}\rangle\) such that \(f_{\alpha}:\alpha\to[\alpha]^{<\omega}\) and, for every \(f:\omega_{1}\to[\omega_{1}]^{<\omega}\), the set \(\{\alpha<\omega_{1}:f\!\upharpoonright\!\alpha=f_{\alpha}\}\) is stationary. Such a sequence of functions exists by Lemma 5.4. The underlying set of the tree will be \(\omega_{1}\), with \(T\!\upharpoonright\!\alpha=\omega\alpha\) and \(T_{\alpha}=[\omega\alpha,\omega(\alpha+1))\) for \(\alpha>\omega\). Let \(0\) be the root of \(T\). For the successor step, given a node \(x\) at level \(\alpha\), we put \(\aleph_{0}\) many nodes immediately above \(x\) and let \(\varphi\!\upharpoonright\!I(x)\) be a bijection between \(I(x)\) and \(\mathbb{Q}\cap(\varphi(x),\infty)\). Note that normality and \((*)\) continue to hold. Suppose now that \(\alpha<\omega_{1}\) is a limit ordinal and we've constructed \(T\!\upharpoonright\!\alpha\) and \(\varphi\!\upharpoonright\!(T\!\upharpoonright\!\alpha)\). List the set \(\{(x,q)\in(T\!\upharpoonright\!\alpha)\times\mathbb{Q}:\varphi(x)<q\}\) as \(\{(x_{k},q_{k}):k\in\omega\}\). The construction splits into two cases. Case 1: \(\omega\alpha>\alpha\). Fix \(k\in\omega\). By normality and \((*)\), we can choose a branch \(b_{k}\) with least element \(x_{k}\) such that the heights of members of \(b_{k}\) converge to \(\alpha\) and \(\sup(\varphi\,``b_{k})=q_{k}\). Now put a node above \(b_{k}\) and let \(\varphi\) take the value \(q_{k}\) at this node. Case 2: \(\omega\alpha=\alpha\). Fix \(k\in\omega\). Choose a cofinal branch \(b_{k}\subseteq T\!\upharpoonright\!\alpha\) satisfying the following properties: 1. \(x_{k}\in b_{k}\), 2. \(\sup(\varphi\,``b_{k})=q_{k}\), 3. the unique point on \(b_{k}\) immediately above \(x_{k}\) does not belong to \(f_{\alpha}(x_{k})\). To achieve this, use that, by our construction at successor stages, \(\varphi\) maps \(I(x_{k})\) onto \(\mathbb{Q}\cap(\varphi(x_{k}),\infty)\) to find a point \(z_{k}\in I(x_{k})\setminus f_{\alpha}(x_{k})\) with \(\varphi(z_{k})<q_{k}\). Then construct \(b_{k}\) by repeatedly applying \((*)\) and taking a downwards closure. Finally, put a node above \(b_{k}\) on level \(\alpha\) and let \(\varphi\) take the value \(q_{k}\) at this node. This completes the construction of \(T\). It is obvious that \(\varphi\) is a specializing function for \(T\). To see that \(T\) is Lindelof, consider a basic cover \(f\in\prod_{x\in T}[I(x)]^{<\omega}\). Pick \(\alpha<\omega_{1}\) limit such that \(\omega\cdot\alpha=\alpha\) and \(f\!\upharpoonright\!\alpha=f_{\alpha}\), so that \(f\!\upharpoonright\!(T\!\upharpoonright\!\alpha)=f_{\alpha}\). The key observation is that every node at level \(\alpha\) is in \(\uparrow y\setminus\bigcup_{z\in f_{\alpha}(y)}\uparrow\!z\) for some \(y\in T\!\upharpoonright\!\alpha\). By Lemma 4.5, \(f\) therefore has a countable subcover, and we're done. **Corollary 5.6**.: _If \(\diamondsuit\) holds, then there is a Lindelof tree that is not Suslin._ Proof.: Assume \(\diamondsuit\) holds. By Theorem 5.5, there is a special Lindelof tree. But special trees can never be Suslin. So, under \(\diamondsuit\), we have strict inclusion of our classes of trees: \[\{\text{Suslin}\}\subsetneq\{\text{Lindelof}\}\subsetneq\{\text{Aronszajn}\}.\] Here and below, \(\mathbb{D}_{T}\) denotes the poset of finite functions \(p\) such that \(\operatorname{dom}(p)\in[T]^{<\omega}\), \(p(x)\) is a finite non-empty subset of \(I_{T}(x)\) for every \(x\in\operatorname{dom}(p)\), and \(v\!\restriction\!(\operatorname{ht}(u)+1)\in p(u)\) whenever \(u,v\in\operatorname{dom}(p)\) and \(u<v\); the order on \(\mathbb{D}_{T}\) is reverse inclusion. **Lemma 6.4**.: _Let \(T\) be an infinitely branching Aronszajn tree and \(\dot{S}\) a \(\mathbb{D}_{T}\) name for the set \(\bigcup_{p\in\dot{G}}\operatorname{dom}(p)\), where \(\dot{G}\) is a \(\mathbb{D}_{T}\)-name for the generic filter._ 1.
_For every_ \(p\in\mathbb{D}_{T}\)_, every_ \(y\in\operatorname{dom}(p)\) _and every_ \(x\in T\)_, if_ \(x<y\)_, then_ \(p\Vdash x\in\dot{S}\)_._ 2. \(\Vdash\dot{S}\) _is downwards closed._ 3. _For all_ \(p\in\mathbb{D}_{T}\) _and all_ \(x\in\operatorname{dom}(p)\)_,_ \(p\Vdash I_{\dot{S}}(x)\subseteq p(x)\)_._ 4. _For all_ \(p\in\mathbb{D}_{T}\) _and all_ \(x\in\operatorname{dom}(p)\)_,_ \(p\Vdash p(x)\subseteq\dot{S}\)_._ 5. _For all_ \(\alpha<\omega_{1}\)_, the set_ \(\{p\in\mathbb{D}_{T}:\exists x\in\operatorname{dom}(p)(\operatorname{ht}(x)\geq\alpha)\}\) _is dense in_ \(\mathbb{D}_{T}\)_._ Proof.: Write \(\mathbb{D}=\mathbb{D}_{T}\) to simplify the notation. 1. It suffices to show that the set \(\{r\in\mathbb{D}:x\in\operatorname{dom}(r)\}\) is dense below \(p\). Fix \(q\leq p\) with \(x\not\in\operatorname{dom}(q)\). Let \(E=\{t\in\operatorname{dom}(q):x<t\}\). Since \(y\in E\), \(E\neq\emptyset\). Define a function \(r\) with domain \(\operatorname{dom}(q)\cup\{x\}\) by \(r\!\restriction\!\operatorname{dom}(q)=q\) and \(r(x)=\{t\!\restriction\!(\operatorname{ht}(x)+1):t\in E\}\). Since \(E\neq\emptyset\), \(r(x)\neq\emptyset\). Obviously, \(q\subseteq r\). We check that \(r\) is a condition. Fix \(u,v\in\operatorname{dom}(r)\) with \(u<v\). * If \(u,v\in\operatorname{dom}(q)\), then \(v\!\restriction\!(\operatorname{ht}(u)+1)\in q(u)=r(u)\). * If \(u=x\), then \(v\in E\), so \(v\!\restriction\!(\operatorname{ht}(u)+1)\in r(x)\) by definition of \(r(x)\). * If \(v=x\), then \(u<y\) by \(x<y\), therefore \[v\!\restriction\!(\operatorname{ht}(u)+1)=x\!\restriction\!(\operatorname{ht}(u)+1)=y\!\restriction\!(\operatorname{ht}(u)+1)\in q(u)=r(u).\] So, in every case, \(v\!\restriction\!(\operatorname{ht}(u)+1)\in r(u)\), and therefore \(r\) is a condition. 2. Let \(G\) be \(\mathbb{D}\)-generic over \(V\) and \(y\in S=\dot{S}_{G}\). Suppose \(x<y\). Fix \(p\in G\) with \(y\in\operatorname{dom}(p)\). By (i), we can find \(q\in G\) with \(x\in\operatorname{dom}(q)\). Then \(x\in S\). 3. Let \(G\ni p\) be \(\mathbb{D}\)-generic over \(V\) and suppose \(y\in I_{S}(x)\), where \(S=\dot{S}_{G}\). Since \(y\in S\), there exists \(q\in G\) with \(y\in\operatorname{dom}(q)\). Choose \(r\in G\) with \(r\leq p,q\). Then \(x,y\in\operatorname{dom}(r)\) and \(x<y\), so \[y=y\!\restriction\!(\operatorname{ht}(x)+1)\in r(x)=p(x)\] because \(r\) is a condition. 4. Fix \(y\in p(x)\). As in (i), it suffices to argue that \(\{r\in\mathbb{D}:y\in\operatorname{dom}(r)\}\) is dense below \(p\). Let \(q\leq p\) with \(y\not\in\operatorname{dom}(q)\) and let \(E=\{t\in\operatorname{dom}(q):y<t\}\). Case 1: \(E\neq\emptyset\). This is as in (i): define \(r\) by \(r\!\restriction\!\operatorname{dom}(q)=q\) and \(r(y)=\{t\!\restriction\!(\operatorname{ht}(y)+1):t\in E\}\). The verification that \(r\) is a condition is exactly the same as in the proof of (i). Case 2: \(E=\emptyset\). Let \(a\) be any finite non-empty subset of \(I_{T}(y)\) and let \(r=q\cup\{(y,a)\}\). It is enough to show that \(r\) is a condition. Fix \(u,v\in\operatorname{dom}(r)\) with \(u<v\). * If \(u,v\in\operatorname{dom}(q)\), then \(v\!\restriction\!(\operatorname{ht}(u)+1)\in q(u)=r(u)\). * If \(u=y\), then \(v\in\operatorname{dom}(q)\), so \(v\in E\), which contradicts \(E=\emptyset\). * If \(v=y\), then \(u\leq x\) because \(y\in I_{T}(x)\). We now distinguish two cases.
If \(u=x\), then \[v\!\restriction\!(\operatorname{ht}(u)+1)=y\!\restriction\!(\operatorname{ht}(u)+1)=y\in p(x)=q(x)=r(x)=r(u).\] If \(u<x\), then \(u\in\operatorname{dom}(q)\) and \(x\in\operatorname{dom}(p)\subseteq\operatorname{dom}(q)\), so \[v\!\restriction\!(\operatorname{ht}(u)+1)=y\!\restriction\!(\operatorname{ht}(u)+1)=x\!\restriction\!(\operatorname{ht}(u)+1)\in q(u)=r(u).\] In every case, \(v\!\restriction\!(\operatorname{ht}(u)+1)\in r(u)\), and so \(r\) is a condition. * Fix \(p\in\mathbb{D}\). We may assume that \(\operatorname{dom}(p)\subseteq T\mathord{\upharpoonright}\alpha\) (otherwise we are done). Since \(\emptyset\not\in\operatorname{ran}(p)\), we can define \(\gamma=\max\{\operatorname{ht}(y):y\in\bigcup\operatorname{ran}(p)\}\). Note that \(\gamma=\beta+1\) for some \(\beta\). Fix \(y\in\bigcup\operatorname{ran}(p)\) with \(\operatorname{ht}(y)=\beta+1\). By maximality, \(y\not\in\operatorname{dom}(p)\). Case 1: \(\beta+1=\alpha\). By the proof of (iv), there exists \(q\leq p\) with \(y\in\operatorname{dom}(q)\), and we are done. Case 2: \(\beta+1<\alpha\). Since \(T\) is normal, there exists \(z\in T_{\alpha}\) with \(y<z\). Let \(a\) be any finite non-empty subset of \(I(z)\) and let \(q=p\cup\{(z,a)\}\). We check that \(q\) is a condition. * If \(u,v\in\operatorname{dom}(p)\), then it is easy. * If \(u=z\), then \(v\in\operatorname{dom}(p)\). But \(v>u\), so \(\operatorname{ht}(v)>\alpha\), contradicting \(\operatorname{dom}(p)\subseteq T\mathord{\upharpoonright}\alpha\). * If \(v=z\), then \(\operatorname{ht}(u)\leq\operatorname{ht}(x)\), where \(x\) is the immediate predecessor of \(y\). Since \(T\) is a tree, \(u\leq x\). If \(u=x\), then by \(z>y\) we infer that \[v\mathord{\upharpoonright}(\operatorname{ht}(x)+1)=y\in p(x)=q(x).\] If \(u<x\), then \[v\mathord{\upharpoonright}(\operatorname{ht}(u)+1)=x\mathord{\upharpoonright}(\operatorname{ht}(u)+1)\in p(u)=q(u).\] In every case, \(v\mathord{\upharpoonright}(\operatorname{ht}(u)+1)\in q(u)\), and so \(q\) is a condition. **Theorem 6.5**.: _Let \(T\) be an infinitely branching Aronszajn tree and \(\dot{S}\) a \(\mathbb{D}_{T}\) name for the set \(\bigcup_{p\in\dot{G}}\operatorname{dom}(p)\), where \(\dot{G}\) is a \(\mathbb{D}_{T}\)-name for the generic filter. Then \(\Vdash_{\mathbb{D}_{T}}\) "\(\dot{S}\) is an uncountable finitely branching subtree of \(T\)"._ Proof.: By Lemma 6.3, \(\mathbb{D}_{T}\) preserves cardinals; the conclusion then follows from parts (ii), (iii) and (v) of Lemma 6.4. Since the forcing \(\mathbb{D}_{T}\) is only ccc and not obviously Knaster, it is unclear whether \(T\) remains Aronszajn in the \(\mathbb{D}_{T}\)-extension. To deal with this issue, we first do some preliminary forcing. **Definition 6.6** (Baumgartner).: If \(T\) is a tree, then \(\mathbb{S}_{T}\) is the poset of finite order preserving partial functions from \(T\) into \(\mathbb{Q}\) (the set of rational numbers), ordered by inclusion. **Theorem 6.7** (Baumgartner).: _Let \(T\) be an Aronszajn tree. Then_ 1. \(\mathbb{S}_{T}\) _has the ccc._ 2. \(\Vdash_{\mathbb{S}_{T}}\) _"\(T\) is special"._ For a proof, see [6, Lemma III.5.19]. **Theorem 6.8**.: _Let \(T\) be an Aronszajn tree. Then \(\mathbb{S}_{T}\times\mathbb{D}_{T}\) is a ccc poset which forces that \(T\) is a special non-Lindelof tree._ Proof.: Since \(T\) is Aronszajn, \(\mathbb{S}_{T}\) is ccc by Theorem 6.7(1).
Since \(\mathbb{D}_{T}\) is defined in an absolute manner from \(T\), the product \(\mathbb{S}_{T}\times\mathbb{D}_{T}\) densely embeds into the two-step iteration \(\mathbb{S}_{T}\ast\dot{\mathbb{D}}_{T}\), hence they are forcing equivalent. The proof that \(\mathbb{D}_{T}\) is ccc can be carried out in the \(\mathbb{S}_{T}\)-extension, and so \(\Vdash_{\mathbb{S}_{T}}\) "\(\mathbb{D}_{T}\) is ccc", which implies that \(\mathbb{S}_{T}\times\mathbb{D}_{T}\) is ccc. In the \(\mathbb{S}_{T}\)-extension, \(T\) is special, so its Aronszajn-ness cannot be destroyed without collapsing \(\omega_{1}\). In particular, \(T\) remains Aronszajn in the \(\mathbb{S}_{T}\times\mathbb{D}_{T}\)-extension. Finally, in the \(\mathbb{S}_{T}\times\mathbb{D}_{T}\)-extension, \(T\) has an uncountable finitely branching subtree by Theorem 6.5, and so \(T\) is non-Lindelof in that universe. **Corollary 6.9**.: _Suppose that \(\operatorname{\mathsf{MA}}_{\aleph_{1}}\) holds. Then there are no Lindelof trees._ Proof.: Given an Aronszajn tree \(T\), we only need to meet \(\aleph_{1}\)-many dense sets of \(\mathbb{D}_{T}\) to obtain an uncountable finitely branching subtree of \(T\), namely those in Lemma 6.4(v). ## Open questions 1. Suppose \(\lambda<\mu\) are infinite cardinals. Is it consistent, modulo large cardinals, that there exists a cardinal \(\kappa\leq\lambda\) which is \(\lambda\)-square compact but not \(\mu\)-square compact? 2. For which pairs of infinite cardinals \(\kappa,\lambda\) with \(\kappa<\lambda\) can one find a Hausdorff \(\kappa\)-compact space of weight \(\lambda\)? 3. Does \(\mathsf{ZFC}\) prove the existence of a non-Lindelof special tree? 4. Is the topological square of a Lindelof tree Lindelof?
2303.09754
On the local dimensions of solutions of Brent equations
Let $\langle m,n,p \rangle$ be the matrix multiplication tensor. The solution set of Brent equations corresponds to the tensor decompositions of $\langle m,n,p \rangle$. We study the local dimensions of solutions of the Brent equations over the field of complex numbers. The rank of Jacobian matrix of Brent equations provides an upper bound of the local dimension, which is well-known. We calculate the ranks for some typical known solutions, which are provided in the databases \cite{Faw22+} and \cite{Heule19}. We show that the automorphism group of the natural algorithm computing $\langle m,n,p \rangle$ is $(\mathcal{P}_m\times \mathcal{P}_n\times \mathcal{P}_p)\rtimes Q(m,n,p)$, where $\mathcal{P}_m$, $\mathcal{P}_n$ and $\mathcal{P}_p$ are groups of generalised permutation matrices, $Q(m,n,p)$ is a subgroup of $S_3$ depending on $m$, $n$ and $p$. For other algorithms computing $\langle m,n,p \rangle$, some conditions are given, which imply the corresponding automorphism groups are isomorphic to subgroups of $(\mathcal{P}_m\times \mathcal{P}_n\times \mathcal{P}_p)\rtimes Q(m,n,p)$. So under these conditions, $m^2+n^2+p^2-m-n-p-3$ is a lower bound for the local dimensions of solutions of Brent equations. Moreover, the gap between the lower and upper bounds is discussed.
Xin Li, Yixin Bao, Liping Zhang
2023-03-17T03:35:20Z
http://arxiv.org/abs/2303.09754v2
# On the local dimensions of solutions of Brent equations ###### Abstract. Let \(\langle m,n,p\rangle\) be the matrix multiplication tensor. The solution set of Brent equations corresponds to the tensor decompositions of \(\langle m,n,p\rangle\). We study the local dimensions of solutions of the Brent equations over the field of complex numbers. The rank of the Jacobian matrix of the Brent equations provides an upper bound of the local dimension, which is well-known. We calculate the ranks for some typical known solutions, which are provided in the databases [16] and [17]. We show that the automorphism group of the natural algorithm computing \(\langle m,n,p\rangle\) is \((\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\), where \(\mathcal{P}_{m}\), \(\mathcal{P}_{n}\) and \(\mathcal{P}_{p}\) are groups of generalised permutation matrices, \(Q(m,n,p)\) is a subgroup of \(S_{3}\) depending on \(m\), \(n\) and \(p\). For other algorithms computing \(\langle m,n,p\rangle\), a condition is given, which implies the corresponding automorphism groups are isomorphic to subgroups of \((\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\). So under this condition, \(m^{2}+n^{2}+p^{2}-m-n-p-3\) is a lower bound for the local dimensions of solutions of Brent equations. Key words and phrases:Brent equations, Local dimension, Jacobian matrix, Automorphism group, Matrix multiplication tensor 2010 Mathematics Subject Classification: Primary 14L30; Secondary 68Q17 ## 1. Introduction The solution set \(V\) of a system of polynomial equations may contain points, curves, surfaces, etc., each with its own dimension. A fundamental problem when working with such systems is to describe these (local) dimensions [1]. Let \(V\) be an affine variety and \(q\in V\). The _local dimension_ of \(q\) is defined as the maximum dimension of an irreducible component of \(V\) containing \(q\)[10]. In this paper, we give estimates for the local dimensions of the solution set of _the Brent equations_[4, 18, 22, 27]. Since the Brent equations contain a huge number of equations and variables, it is hard to calculate the local dimensions explicitly by current methods, such as [1, 23]. So in this paper, we study the upper and lower bounds of the local dimensions. Firstly, by the Jacobian Criterion of [14, Theorem 4.1.12], we calculate the upper bound of local dimensions for some typical points, which are provided in the databases [16] and [17]. Secondly, by the theory of linear algebraic groups, we get the lower bound of local dimensions [5, Proposition 1.11]. Other methods for the estimation of local dimensions can be found in [3]. Let \(\langle m,n,p\rangle\) denote the well-known matrix multiplication tensor (see e.g. [8, 13, 18, 22]). The tensor decomposition of \(\langle m,n,p\rangle\) of length \(r\) gives rise to the Brent equations [18, 27] (see also Section 3.1). We denote this system by \(B(m,n,p;r)\). The solution set of \(B(m,n,p;r)\) over the field of complex numbers is denoted by \(V(m,n,p;r)\). For a point \(q\in V(m,n,p;r)\), let \(\dim_{q}V(m,n,p;r)\) denote its local dimension. Let \(J_{q}(B(m,n,p;r))\) denote the Jacobian matrix of \(B(m,n,p;r)\) at \(q\) and \(rank\ J_{q}(B(m,n,p;r))\) its rank. The Jacobian Criterion implies \[\dim_{q}V(m,n,p;r)\leq k-rank\;J_{q}(B(m,n,p;r))\] where \(k=(mn+np+pm)r\) is the number of variables of \(B(m,n,p;r)\). ## 2. Preliminaries Let \(\mathbb{C}\) denote the field of complex numbers and \(\mathbb{C}^{n}\) the \(n\)-dimensional complex vector space.
Let \(A=\mathbb{C}[x_{1},...,x_{n}]\) be the polynomial ring in \(n\) variables over \(\mathbb{C}\). If \(T\) is any subset of \(A\), set \[V(T)=\{p\in\mathbb{C}^{n}\mid f(p)=0\ \text{for all}\ f\in T\}\] to be the set of common zeros of all the elements of \(T\). A subset \(Y\) of \(\mathbb{C}^{n}\) is an affine variety if there exists a subset \(T\subseteq A\) such that \(Y=V(T)\). In the literature, \(Y\) is also called an algebraic set (see e.g. [14]). Here we follow the definition of [10]. Thus, the solution set \(V\) of a system of polynomial equations is an affine variety.

Let \(p\) be a point on the affine variety \(V=\{x\in\mathbb{C}^{n}\mid F(x)=0\}\) for a set of polynomials \(F(x)=\{f_{1}(x),...,f_{m}(x)\}\subseteq\mathbb{C}[x_{1},...,x_{n}]\). The Jacobian matrix of \(F\) at a point \(p\in\mathbb{C}^{n}\) is defined by \[J_{p}(F)=J_{p}(f_{1},f_{2},...,f_{m})=\left(\begin{array}{ccc}\frac{\partial f_{1}}{\partial x_{1}}(p)&\cdots&\frac{\partial f_{1}}{\partial x_{n}}(p)\\ \vdots&&\vdots\\ \frac{\partial f_{m}}{\partial x_{1}}(p)&\cdots&\frac{\partial f_{m}}{\partial x_{n}}(p)\end{array}\right)_{m\times n}.\]

The _vanishing ideal_ of \(V\) is given by the set \[I(V)=\{f\in\mathbb{C}[x_{1},...,x_{n}]\mid f(a)=0\ \text{for all}\ a\in V\},\] which is a radical ideal. Let \(\langle f_{1},f_{2},...,f_{m}\rangle\) denote the ideal generated by \(f_{1},f_{2},...,f_{m}\) in \(\mathbb{C}[x_{1},...,x_{n}]\). Then \(I(V)\) is the radical of \(\langle f_{1},f_{2},...,f_{m}\rangle\). So \(\langle f_{1},f_{2},...,f_{m}\rangle\) may be strictly contained in \(I(V)\).

Let \(T_{p}(V)\) denote the tangent space of \(V\) at \(p\) (see Definition 4.1.6 of [14]). If \(I(V)\) is generated by \(\{g_{1},g_{2},...,g_{r}\}\), then by Remark 4.1.7 of [14] we know that \[\dim T_{p}(V)=n-rank\ J_{p}(g_{1},g_{2},...,g_{r}).\] For an affine variety \(V\) which is defined by \(F=\{f_{1},f_{2},...,f_{m}\}\), we have the following upper bound on the local dimension of a point \(p\in V\), which is called the _Jacobian Criterion_ in [14].

**Theorem 2.3**.: _[_14_, Theorem 4.1.12, Jacobian Criterion]_ _Let \(p\) be a point on an affine variety \(V\subseteq\mathbb{C}^{n}\). Then_ \[\dim_{p}V\leq\dim T_{p}(V)\leq n-rank\ J_{p}(F).\]

In Section 3, when \(F\) is the Brent equations, we will use a computer to calculate \(J_{p}(F)\) for some typical known solutions; this yields upper bounds on the local dimensions of these solutions.
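To see how the Jacobian Criterion works in a toy case before applying it to the Brent equations, consider the cuspidal cubic \(V(y^{2}-x^{3})\subseteq\mathbb{C}^{2}\), which has dimension 1 everywhere. The following minimal sketch (our own illustration in Python with sympy, not part of the paper's computations) shows that the bound of Theorem 2.3 is tight at a smooth point but strict at the singular origin, so in general it only gives an upper bound:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([y**2 - x**3])      # the cuspidal cubic V(y^2 - x^3), dim 1
J = F.jacobian([x, y])            # Jacobian row [-3*x**2, 2*y]

for pt in [(1, 1), (0, 0)]:
    rank = J.subs({x: pt[0], y: pt[1]}).rank()
    print(pt, 2 - rank)           # Theorem 2.3: dim_p V <= n - rank J_p(F)
# (1, 1) -> 1  (equals the local dimension at a smooth point)
# (0, 0) -> 2  (strictly larger than dim_p V = 1 at the singular origin)
```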
### Isotropy group orbit and the lower bound

In this section, we recall some results of [5] and [13]. A \(G\)-variety is an affine variety \(X\) equipped with an action of the algebraic group \(G\). Given a \(G\)-variety \(X\) and a point \(x\in X\), the orbit \(G\cdot x\subseteq X\) is the set of all \(g\cdot x\), where \(g\in G\). The automorphism group (it was called the isotropy group in [5]) \(G_{x}\subseteq G\) is the set of those \(g\in G\) such that \(g\cdot x=x\); it is a closed subgroup of \(G\).

**Proposition 2.4**.: _[_5_, Proposition 1.11]_ _The group orbit \(G\cdot x\) is a locally closed, smooth subvariety of \(X\), and every component of \(G\cdot x\) has dimension \(\dim(G)-\dim(G_{x})\)._

A group \(G\) is said to be a _semidirect product_ of \(A\) by \(B\), which is denoted by \(G=A\rtimes B\), if \(G=AB\), \(A\trianglelefteq G\) (\(A\) is a normal subgroup of \(G\)) and \(A\cap B=1\). If \(A\) is a subgroup (or isomorphic to a subgroup) of \(G\), then it will be denoted by \(A\leq G\).

Let \(PGL_{n}(\mathbb{C})\) and \(\mathcal{D}_{n}\subseteq GL_{n}(\mathbb{C})\) denote the projective general linear group and the group of nonsingular diagonal matrices over \(\mathbb{C}\), respectively. Let \(S_{n}\) denote the symmetric group of degree \(n\). For the dimensions of algebraic groups, we have the following proposition; these facts can be found in Sections 1.8 and 5.5 of [28] and in [24].

**Proposition 2.5**.: _Suppose that \(G\), \(H\) are algebraic groups. Then_

1. _\(\dim G=0\) if \(G\) is a finite group;_
2. _\(\dim(G\times H)=\dim(G\rtimes H)=\dim G+\dim H\);_
3. _In particular, \(\dim PGL_{n}(\mathbb{C})=n^{2}-1\);_
4. _\(\dim((PGL_{m}(\mathbb{C})\times PGL_{n}(\mathbb{C})\times PGL_{p}(\mathbb{C}))\rtimes S_{3})=m^{2}+n^{2}+p^{2}-3\);_
5. _\(\dim(\mathcal{D}_{n}\rtimes S_{n})=n\)._

Let \(U\), \(V\), \(W\) be finite dimensional vector spaces over \(\mathbb{C}\) and \(\Phi:\ U\times V\to W\) a bilinear mapping. Let \(t\in U^{*}\otimes V^{*}\otimes W\) be the structure tensor of \(\Phi\). With respect to some choice of bases in \(U\), \(V\), \(W\), let \(t_{\lambda\mu\nu}\) be the coordinates of \(t\), where \(\lambda=1,...,\dim U\); \(\mu=1,...,\dim V\); \(\nu=1,...,\dim W\). Suppose that \(t\) has a tensor (or decomposable) decomposition of length \(R\). That is, \(t=\sum_{i=1}^{R}u_{i}\otimes v_{i}\otimes w_{i}\) where \(u_{i}=(u_{i\lambda})\in U\), \(v_{i}=(v_{i\mu})\in V\), \(w_{i}=(w_{i\nu})\in W\). This means that the following polynomial system has solutions: \[t_{\lambda\mu\nu}-\sum_{i=1}^{R}u_{i\lambda}v_{i\mu}w_{i\nu}=0. \tag{2.1}\]

Let \(a_{R}(\Phi)\) denote the set of all solutions, which is an affine variety. It was called the _algorithm variety_ of \(\Phi\) [13]. In particular, if \(\Phi\) is the matrix multiplication, then the polynomial system (2.1) is called _the Brent equations_ (see an explicit description in equation (3.3)).

If some linear transformation \(\varphi\in End(U^{*}\otimes V^{*}\otimes W)\) leaves the tensor \(t\) of \(\Phi\) fixed, i.e. \(\varphi(t)=t\), then it induces a transformation (denoted by \(\hat{\varphi}\)): \(\hat{\varphi}:a_{R}(\Phi)\to a_{R}(\Phi)\). In particular, let \[H=\{\varphi\mid\varphi(t)=t\ \text{and}\ \varphi\in GL(U^{*}\otimes V^{*}\otimes W)\}.\] Then \(H\leq GL(U^{*}\otimes V^{*}\otimes W)\) is called _the isotropy group_ of \(t\). The transformations \(\hat{\varphi}\) induced by \(\varphi\in H\) define a group action on the affine variety \(a_{R}(\Phi)\). For a point \(x\in a_{R}(\Phi)\), if we know the dimensions of the automorphism group \(H_{x}\) and of \(H\), then with these notations, by Proposition 2.4 we have the following proposition.

**Proposition 2.6**.: _For \(x\in a_{R}(\Phi)\), the local dimension of \(x\) is at least \(\dim H-\dim H_{x}\)._

In particular, if \(\Phi\) is the matrix multiplication and \(H\) the isotropy group studied in [11, 12] and [8], then in Section 3.2 we will study the structure of \(H_{x}\). A lower bound for the local dimensions of points in \(a_{R}(\Phi)\) (the solution set of the Brent equations) will be given.

### Rank of the Jacobian matrix as an invariant

Suppose that \(V\) is an affine variety and \(G\) is an algebraic group which acts on \(V\). If we know the generators of the radical ideal \(I(V)\), then by the following proposition we can see that the rank of the Jacobian matrix of the generators is an invariant that can distinguish group orbits.
**Proposition 2.7**.: _[_14_, Cor. 4.1.18]_ _If \(\varphi:A\to B\) is an isomorphism of affine varieties and \(p\in A\) is a point, then_ \[d_{p}\varphi:T_{p}A\to T_{\varphi(p)}B\] _is an isomorphism of \(\mathbb{C}\)-vector spaces._

**Corollary 2.8**.: _Suppose that \(V\subseteq\mathbb{C}^{n}\) is an affine variety and \(G\subseteq GL_{n}(\mathbb{C})\) is an algebraic group which acts on \(V\). Suppose that \(I(V)\) is generated by \(\{f_{1},f_{2},...,f_{m}\}\subseteq\mathbb{C}[x_{1},x_{2},...,x_{n}]\). Let \(p,q\in V\) be two points. If rank \(J_{p}(f_{1},f_{2},...,f_{m})\neq\) rank \(J_{q}(f_{1},f_{2},...,f_{m})\), then \(p\) and \(q\) cannot lie in the same group orbit of \(G\)._

Proof.: In Proposition 2.7, if we let \(A=B=V\), then each element of \(G\) induces an automorphism of \(V\). So if \(p\) and \(q\) lie in the same group orbit of \(G\), we have \(\dim T_{p}V=\dim T_{q}V\). The proof is completed by \(\dim T_{p}V=n-rank\ J_{p}(f_{1},f_{2},...,f_{m})\) for all \(p\in V\).

## 3. The local dimensions of solutions of Brent equations

In this section, we use the definitions and symbols of [8]. Let \(V_{1}\), \(V_{2}\), \(V_{3}\) be vector spaces over \(\mathbb{C}\). Denote their tensor product by \(\tilde{V}=V_{1}\otimes V_{2}\otimes V_{3}\). A tensor \(t\in\tilde{V}\) is _decomposable_ if \(t=v_{1}\otimes v_{2}\otimes v_{3}\) where \(v_{i}\in V_{i}\). If a tensor \(t\in\tilde{V}\) can be written as \[t=t_{1}+t_{2}+\cdots+t_{r}\] where the \(t_{i}\) are decomposable tensors, we say that \(t\) has a _tensor decomposition of length \(r\)_. The set \(\{t_{1},...,t_{r}\}\) is called _an algorithm of length \(r\) computing \(t\)_. The minimal length of an algorithm computing \(t\) is called the _tensor rank_ of \(t\).

### Brent equations and the upper bound

In particular, let \(V_{1}=\mathbb{C}^{m\times n}\), \(V_{2}=\mathbb{C}^{n\times p}\) and \(V_{3}=\mathbb{C}^{p\times m}\) denote the sets of complex \(m\times n\), \(n\times p\) and \(p\times m\) matrices, respectively. The matrix multiplication tensor is defined by (see e.g. [8, 22]) \[\langle m,n,p\rangle=\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p}e_{ij}\otimes e_{jk}\otimes e_{ki}\in\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}, \tag{3.1}\] where \(e_{ij}\) denotes the matrix with its coefficient at the intersection of row \(i\) and column \(j\) equal to \(1\) and all its other coefficients equal to \(0\). So equation (3.1) provides an algorithm of length \(mnp\) computing \(\langle m,n,p\rangle\).

Starting from the famous work of Strassen [29], finding possible tensor decompositions of \(\langle m,n,p\rangle\) with length less than \(mnp\) has been an important topic in algebraic complexity theory [6, 13, 22]. It is equivalent to finding solutions of a system of polynomial equations called the _Brent equations_, which was first observed by Brent [4]. The Brent equations are given as follows (see also e.g. [8, 18, 27]). Finding a tensor decomposition of \(\langle m,n,p\rangle\) of length \(r\) means that \[\langle m,n,p\rangle=t_{1}+t_{2}+\cdots+t_{r}, \tag{3.2}\] where \(t_{i}=u_{i}\otimes v_{i}\otimes w_{i}\) and \[u_{i}=\left(\alpha_{i_{1},i_{2}}^{(i)}\right)\in\mathbb{C}^{m\times n},\quad v_{i}=\left(\beta_{j_{1},j_{2}}^{(i)}\right)\in\mathbb{C}^{n\times p},\quad w_{i}=\left(\gamma_{k_{1},k_{2}}^{(i)}\right)\in\mathbb{C}^{p\times m}.\] Here \(\alpha_{i_{1},i_{2}}^{(i)}\), \(\beta_{j_{1},j_{2}}^{(i)}\) and \(\gamma_{k_{1},k_{2}}^{(i)}\) are unknown variables.
Then by (3.1) and (3.2) we obtain the Brent equations which correspond to the tensor decompositions of \(\langle m,n,p\rangle\) of length \(r\): \[\sum_{i=1}^{r}\alpha_{i_{1},i_{2}}^{(i)}\beta_{j_{1},j_{2}}^{(i)}\gamma_{k_{1},k_{2}}^{(i)}=\delta_{i_{2},j_{1}}\delta_{j_{2},k_{1}}\delta_{k_{2},i_{1}}, \tag{3.3}\] where \(k_{2},i_{1}\in\{1,2,...,m\}\), \(i_{2},j_{1}\in\{1,2,...,n\}\), \(j_{2},k_{1}\in\{1,2,...,p\}\) and \(\delta_{ij}\) is the Kronecker symbol. In the following, we denote the Brent equations in (3.3) by \(B(m,n,p;r)\). From (3.3), we can see that the polynomial system \(B(m,n,p;r)\) has \((mnp)^{2}\) equations and \((mn+np+pm)r\) variables. So it is a large polynomial system even when \(m,n,p\) are small. Usually, \(B(m,n,p;r)\) is an overdetermined polynomial system. For example, \(B(3,3,3;23)\) (resp. \(B(4,4,4;49)\)) has 729 (resp. 4096) equations and 621 (resp. 2352) variables. The solution set of \(B(m,n,p;r)\) over \(\mathbb{C}\) is denoted by \(V(m,n,p;r)\).

#### 3.1.1. The numerical statistics of ranks of \(J_{p}(B(3,3,3;23))\) and \(J_{p}(B(4,4,4;49))\)

In this subsection, we calculate the ranks of the Jacobian matrices at points of \(V(3,3,3;23)\) and \(V(4,4,4;49)\) which are provided in [17] and [16]. Then by Theorem 2.3, we get upper bounds on the local dimensions at these points.

We mainly focused on points of \(V(3,3,3;23)\) provided in [17]. In [17], Heule et al. provided 17376 points in \(V(3,3,3;23)\) (they are called "schemes" in [17]). Using a computer, we calculated the ranks of the Jacobian matrices of \(B(3,3,3;23)\) at these points. We summarize the results in Tables 1 and 2. For points \(p\in V(3,3,3;23)\) provided in [17], the first rows of Tables 1 and 2 are the ranks of \(J_{p}(B(3,3,3;23))\). The second rows are the upper bounds on the local dimensions provided by Theorem 2.3, that is, \(621-rank\ J_{p}(B(3,3,3;23))\). The third rows give the total number of points having that rank among all 17376 points. Take the first column of Table 1 as an example. The first number 526 is the rank of the Jacobian matrix at some points in \(V(3,3,3;23)\) provided in [17]. The second number is 621-526=95, which is an upper bound on the local dimensions of these points. The third number 5 is the number of points that have rank 526 among all 17376 points.

We also calculated the ranks of the Jacobian matrices for the points in \(V(4,4,4;49)\) that are provided in [16]. Fawzi et al. provided 14236 points. By computation, we find that the ranks of \(J_{p}(B(4,4,4;49))\) range from 2144 to 2201. So the upper bounds on the local dimensions of these points range from 151 to 208. Moreover, we find that the rank of \(J_{p}(B(4,4,4;49))\) equals 2155 for 13530 points, that is, for about 95% of them.
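To make these rank computations concrete, the following self-contained sketch (our own Python/numpy illustration, not the authors' code) treats the smallest interesting case: it encodes Strassen's length-7 decomposition of \(\langle 2,2,2\rangle\) [29], verifies that it solves \(B(2,2,2;7)\), and computes the rank of the resulting \(64\times 84\) Jacobian. It should reproduce the rank 61 reported in Section 3.3 below.

```python
import itertools
import numpy as np

# Strassen's length-7 decomposition of <2,2,2>: U[l], V[l] are the A- and
# B-side coefficient matrices of the l-th product; W[l] is the transpose of
# the C-side coefficients, matching <m,n,p> = sum e_ij (x) e_jk (x) e_ki.
U = np.array([[[1, 0], [0, 1]], [[0, 0], [1, 1]], [[1, 0], [0, 0]],
              [[0, 0], [0, 1]], [[1, 1], [0, 0]], [[-1, 0], [1, 0]],
              [[0, 1], [0, -1]]], dtype=float)
V = np.array([[[1, 0], [0, 1]], [[1, 0], [0, 0]], [[0, 1], [0, -1]],
              [[-1, 0], [1, 0]], [[0, 0], [0, 1]], [[1, 1], [0, 0]],
              [[0, 0], [1, 1]]], dtype=float)
W = np.array([[[1, 0], [0, 1]], [[0, 1], [0, -1]], [[0, 0], [1, 1]],
              [[1, 1], [0, 0]], [[-1, 0], [1, 0]], [[0, 0], [0, 1]],
              [[1, 0], [0, 0]]], dtype=float)

m = n = p = 2
r = len(U)
idx = list(itertools.product(range(m), range(n), range(n),
                             range(p), range(p), range(m)))

# Check that this point really solves the Brent equations (3.3).
for i1, i2, j1, j2, k1, k2 in idx:
    lhs = (U[:, i1, i2] * V[:, j1, j2] * W[:, k1, k2]).sum()
    rhs = float(i2 == j1 and j2 == k1 and k2 == i1)
    assert abs(lhs - rhs) < 1e-12

# One Jacobian row per equation, one column per variable alpha/beta/gamma.
rows = []
for i1, i2, j1, j2, k1, k2 in idx:
    du = np.zeros((r, m, n)); dv = np.zeros((r, n, p)); dw = np.zeros((r, p, m))
    du[:, i1, i2] = V[:, j1, j2] * W[:, k1, k2]
    dv[:, j1, j2] = U[:, i1, i2] * W[:, k1, k2]
    dw[:, k1, k2] = U[:, i1, i2] * V[:, j1, j2]
    rows.append(np.concatenate([du.ravel(), dv.ravel(), dw.ravel()]))

J = np.array(rows)                          # shape (64, 84)
print(J.shape, np.linalg.matrix_rank(J))    # expected: (64, 84) 61
```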
### Automorphism groups of \(\langle m,n,p\rangle\) and the lower bound

Let \(S(V_{1},V_{2},V_{3})\) denote the group of all _decomposable automorphisms_ of \(\tilde{V}=V_{1}\otimes V_{2}\otimes V_{3}\). Let \(S^{0}(V_{1},V_{2},V_{3})\) denote the subgroup of \(S(V_{1},V_{2},V_{3})\) consisting of all decomposable automorphisms that preserve each factor \(V_{i}\). The set of all decomposable automorphisms of \(\tilde{V}\) that preserve \(t\) is called _the (full) isotropy group_ of \(t\), which is denoted by \(\Gamma(t)\): \[\Gamma(t)=\{g\in S(V_{1},V_{2},V_{3})\mid g(t)=t\}.\] The _small_ isotropy group of \(t\) is defined by \[\Gamma^{0}(t)=\Gamma(t)\cap S^{0}(V_{1},V_{2},V_{3}).\] From Corollary V.5 of [13], we have \(\Gamma^{0}(t)\trianglelefteq\Gamma(t)\) and \(\Gamma(t)/\Gamma^{0}(t)\) is a subgroup of \(S_{3}\).

Let \(\mathcal{A}=\{t_{1},...,t_{r}\}\) be an algorithm computing \(t\). Then \[Aut(\mathcal{A})=\{g\in\Gamma(t)\mid g(\mathcal{A})=\mathcal{A}\}\] is called _the automorphism group_ of \(\mathcal{A}\), which is a subgroup of \(\Gamma(t)\). The _small_ automorphism group of \(\mathcal{A}\) is defined by \[Aut(\mathcal{A})_{0}=Aut(\mathcal{A})\cap\Gamma^{0}(t).\] Since \(\Gamma^{0}(t)\trianglelefteq\Gamma(t)\) and \(Aut(\mathcal{A})\leq\Gamma(t)\), by the second isomorphism theorem of groups (see e.g. Theorem 2.26 of [25]) we have \(Aut(\mathcal{A})_{0}\trianglelefteq Aut(\mathcal{A})\) and \[Aut(\mathcal{A})/Aut(\mathcal{A})_{0}=Aut(\mathcal{A})/(Aut(\mathcal{A})\cap\Gamma^{0}(t))\cong(Aut(\mathcal{A})\Gamma^{0}(t))/\Gamma^{0}(t)\leq\Gamma(t)/\Gamma^{0}(t)\leq S_{3}.\]

In the following, let \(t=\langle m,n,p\rangle\). We will give an upper bound for \(Aut(\mathcal{A})\) (see Corollary 3.10). Recall that \[\langle m,n,p\rangle=\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p}e_{ij}\otimes e_{jk}\otimes e_{ki}\in\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}.\]

**Definition 3.1**.: We call \(\mathcal{A}(m,n,p)=\{e_{ij}\otimes e_{jk}\otimes e_{ki}\mid 1\leq i\leq m,1\leq j\leq n,1\leq k\leq p\}\) the _natural algorithm_ computing \(\langle m,n,p\rangle\).

Suppose that \(\langle m,n,p\rangle=t_{1}+t_{2}+\cdots+t_{s}\), where \(t_{i}=u_{i}\otimes v_{i}\otimes w_{i}\) and \(u_{i}\in\mathbb{C}^{m\times n}\), \(v_{i}\in\mathbb{C}^{n\times p}\), \(w_{i}\in\mathbb{C}^{p\times m}\). Let \(V=\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}\). For elements \(a\in GL_{m}(\mathbb{C})\), \(b\in GL_{n}(\mathbb{C})\), \(c\in GL_{p}(\mathbb{C})\) define the transformation \(T(a,b,c):V\to V\) by the formula \[T(a,b,c)(x\otimes y\otimes z)=axb^{-1}\otimes byc^{-1}\otimes cza^{-1}.\] It was shown in Proposition 4.8 of [8] that \(\Gamma^{0}(\langle m,n,p\rangle)\) consists of the transformations \(T(a,b,c)\).

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline Rank & 526 & 527 & 528 & 529 & 530 & 531 & 532 & 533 & 534 & 535 \\ \hline Upper bound & 95 & 94 & 93 & 92 & 91 & 90 & 89 & 88 & 87 & 86 \\ \hline Total number & 5 & 25 & 79 & 256 & 624 & 1421 & 2250 & 3069 & 3486 & 2870 \\ \hline \end{tabular} \end{table} Table 1. Part I

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline Rank & 536 & 537 & 538 & 539 & 540 & 541 & 542 & 543 & 544 & 545 \\ \hline Upper bound & 85 & 84 & 83 & 82 & 81 & 80 & 79 & 78 & 77 & 76 \\ \hline Total number & 1709 & 858 & 387 & 159 & 73 & 68 & 25 & 7 & 2 & 3 \\ \hline \end{tabular} \end{table} Table 2. Part II
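As a quick numerical sanity check of this description of \(\Gamma^{0}(\langle m,n,p\rangle)\) (again a sketch of ours in Python/numpy, not code from [8]), one can verify that \(T(a,b,c)\) fixes \(\langle m,n,p\rangle\) for randomly chosen invertible \(a,b,c\):

```python
import numpy as np

def mm_tensor(m, n, p):
    # <m,n,p> as a 6-index array t[i1,i2,j1,j2,k1,k2], cf. equation (3.1)
    t = np.zeros((m, n, n, p, p, m))
    for i in range(m):
        for j in range(n):
            for k in range(p):
                t[i, j, j, k, k, i] = 1.0   # e_ij (x) e_jk (x) e_ki
    return t

def T(a, b, c, t):
    # x (x) y (x) z  ->  a x b^{-1} (x) b y c^{-1} (x) c z a^{-1}
    ai, bi, ci = map(np.linalg.inv, (a, b, c))
    return np.einsum('iA,Bj,kC,Dl,mE,Fn,ABCDEF->ijklmn', a, bi, b, ci, c, ai, t)

rng = np.random.default_rng(1)
m, n, p = 2, 3, 2
a = rng.standard_normal((m, m))
b = rng.standard_normal((n, n))
c = rng.standard_normal((p, p))
t = mm_tensor(m, n, p)
print(np.allclose(T(a, b, c, t), t))        # True: T(a,b,c) fixes <m,n,p>
```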
Recall that a _generalised permutation matrix_ is a matrix of the form \(G=PD\), in which \(P,D\in\mathbb{C}^{n\times n}\), \(P\) is a permutation matrix, and \(D\) is a nonsingular diagonal matrix [19]. So \(P=(\delta_{i,\pi(j)})\) for some permutation \(\pi\in S_{n}\), where \(\delta\) is the Kronecker symbol. Denote the set of generalised permutation matrices of order \(n\) by \(\mathcal{P}_{n}\). Recall that \(\mathcal{D}_{n}\) is the set of nonsingular diagonal matrices over \(\mathbb{C}\), which is a subgroup of \(GL_{n}(\mathbb{C})\). By the definition of \(\mathcal{P}_{n}\) we have the following proposition.

**Proposition 3.2**.: \(\mathcal{P}_{n}\) _is isomorphic to the group \(\mathcal{D}_{n}\rtimes S_{n}\)._

In the following, if \(V\) is a linear space and \(\{v_{1},v_{2},...,v_{k}\}\subseteq V\) is a subset (possibly a multiset) of vectors, the linear space spanned by \(\{v_{1},v_{2},...,v_{k}\}\) is denoted by \(\langle v_{1},v_{2},...,v_{k}\rangle\).

**Lemma 3.3**.: _Let \(L=\{\langle v_{1}\rangle,\langle v_{2}\rangle,...,\langle v_{m}\rangle\}\subseteq\mathbb{C}^{n}\) be a set of one dimensional subspaces (lines) of \(\mathbb{C}^{n}\). Let \(\{e_{i}\mid i=1,2,...,n\}\) be the standard basis of \(\mathbb{C}^{n}\). Suppose that \(\langle v_{1},v_{2},...,v_{m}\rangle=\mathbb{C}^{n}\). Suppose that \(A\in GL_{n}(\mathbb{C})\) preserves \(L\). Then there exist a \(B\in GL_{n}(\mathbb{C})\) and a \(P_{A}\in\mathcal{P}_{n}\) such that \(A=BP_{A}B^{-1}\), where \(B\) is independent of the choice of \(A\). If \(\{e_{i}\mid i=1,2,...,n\}\subseteq\{v_{1},v_{2},...,v_{m}\}\), then we can choose \(B\) to be the identity of \(GL_{n}(\mathbb{C})\), so \(A=P_{A}\) is a generalised permutation matrix._

Proof.: By assumption we have \(n\leq m\). Since the linear span of \(\{v_{1},v_{2},...,v_{m}\}\) is \(\mathbb{C}^{n}\), it has a maximal linearly independent subset \(M\) which consists of \(n\) vectors. Without loss of generality, suppose that \(M=\{v_{1},v_{2},...,v_{n}\}\). Suppose that \(A\in GL_{n}(\mathbb{C})\) preserves \(L\). In particular, \(A\) preserves the lines \(\{\langle v_{1}\rangle,\langle v_{2}\rangle,...,\langle v_{n}\rangle\}\). Let \(\tilde{M}=(v_{1},v_{2},...,v_{n})\in\mathbb{C}^{n\times n}\) be the matrix whose columns are the \(v_{i}\) (\(i=1,2,...,n\)). It implies that \[A\tilde{M}=(\lambda_{1}v_{1},\lambda_{2}v_{2},...,\lambda_{n}v_{n})P=\tilde{M}DP=\tilde{M}PD^{\prime},\] where \(P\) is a permutation matrix, \(D=diag(\lambda_{1},\lambda_{2},...,\lambda_{n})\) and \(D^{\prime}=diag(\lambda_{i_{1}},\lambda_{i_{2}},...,\lambda_{i_{n}})\) (the \(\lambda_{i_{j}}\) are obtained by some permutation of the \(\lambda_{i}\)). Since \(v_{1},v_{2},...,v_{n}\) are linearly independent, \(\tilde{M}\) is invertible. Then we have \[A=\tilde{M}PD^{\prime}\tilde{M}^{-1}. \tag{3.4}\] In (3.4), setting \(\tilde{M}=B\) and \(PD^{\prime}=P_{A}\), we have \(A=BP_{A}B^{-1}\). Moreover, we can see that \(B\) is independent of the choice of \(A\). In particular, if \(\{e_{i}\mid i=1,2,...,n\}\subseteq\{v_{1},v_{2},...,v_{m}\}\), we can set \(B=(e_{1},e_{2},...,e_{n})\), which is the identity of \(GL_{n}(\mathbb{C})\).

_Remark 3.4_.: Let \(L=\{\langle v_{1}\rangle,\langle v_{2}\rangle,...,\langle v_{m}\rangle\}\subseteq\mathbb{C}^{n}\) be a set of one dimensional subspaces (lines) of \(\mathbb{C}^{n}\). Let \[P_{L}=\{A\in GL_{n}(\mathbb{C})\mid A\text{ preserves }L\}.\] So by Lemma 3.3, \(P_{L}\) is conjugate to a subgroup of \(\mathcal{P}_{n}\). Since a conjugate of a subgroup is isomorphic to it, in the following we do not distinguish between them. So we also write \(P_{L}\leq\mathcal{P}_{n}\).

Recall that on \(\mathbb{C}^{m\times n}\) (similarly for \(\mathbb{C}^{n\times p}\) and \(\mathbb{C}^{p\times m}\)) there exists a natural inner product which is defined by \[(a,b)=tr(ab^{*}),\] where \(tr\) and '\(*\)' denote the trace and the conjugate transpose, respectively.
With this inner product, \(\mathbb{C}^{m\times n}\) becomes a Hilbert space. Moreover, there is an induced inner product on \(\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}\), which makes it into a Hilbert space too. Now we have the following lemma.

**Lemma 3.5**.: _Suppose that \(\langle m,n,p\rangle=t_{1}+t_{2}+\cdots+t_{r}\). Let \(t_{i}=u_{i}\otimes v_{i}\otimes w_{i}\) (\(i=1,2,...,r\)). Then \(\langle u_{1},u_{2},...,u_{r}\rangle=\mathbb{C}^{m\times n}\), \(\langle v_{1},v_{2},...,v_{r}\rangle=\mathbb{C}^{n\times p}\), \(\langle w_{1},w_{2},...,w_{r}\rangle=\mathbb{C}^{p\times m}\)._

Proof.: It suffices to show \(\langle u_{1},u_{2},...,u_{r}\rangle=\mathbb{C}^{m\times n}\); the other two statements are similar. If \(\langle u_{1},u_{2},...,u_{r}\rangle\subset\mathbb{C}^{m\times n}\) is a proper linear subspace, there exists a matrix \(0\neq a\in\mathbb{C}^{m\times n}\) such that \[(a,u_{i})=tr(au_{i}^{*})=0,\] for \(i=1,2,...,r\). Let \(a=(a_{ij})\) and suppose that \(a_{i_{0},j_{0}}\neq 0\) as an entry of \(a\). Consider \(a\otimes e_{j_{0},k_{0}}\otimes e_{k_{0},i_{0}}\in\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}\), where \(k_{0}\in\{1,2,...,p\}\) is a fixed number. Then by assumption we have the following inner product equation \[(a\otimes e_{j_{0},k_{0}}\otimes e_{k_{0},i_{0}},\langle m,n,p\rangle)=\left(a\otimes e_{j_{0},k_{0}}\otimes e_{k_{0},i_{0}},\sum_{i=1}^{r}t_{i}\right). \tag{3.5}\] Since \(\langle m,n,p\rangle=\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p}e_{ij}\otimes e_{jk}\otimes e_{ki}\), with the induced inner product on \(\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}\), the left hand side of (3.5) is equal to \(a_{i_{0},j_{0}}\neq 0\). However, it is not hard to see that the right hand side of (3.5) is \(0\), which is a contradiction.

For \(a=(a_{ij})\in\mathbb{C}^{m\times m}\) and \(b\in\mathbb{C}^{n\times n}\), we define the tensor (Kronecker) product \(a\otimes b\in\mathbb{C}^{mn\times mn}\) by \[a\otimes b=(a_{ij}b).\] Then by the definition of generalised permutation matrices, it is not hard to obtain the following lemma.

**Lemma 3.6**.: _If \(a\in GL_{m}(\mathbb{C})\), \(b\in GL_{n}(\mathbb{C})\) are such that \(a\otimes b\in\mathcal{P}_{mn}\), then \(a\in\mathcal{P}_{m}\) and \(b\in\mathcal{P}_{n}\). That is, both \(a\) and \(b\) are generalised permutation matrices._

For \(x\in\mathbb{C}^{m\times n}\), define the _rowwise vectorization map_ \(\mathcal{R}:\mathbb{C}^{m\times n}\rightarrow\mathbb{C}^{mn\times 1}\) by concatenating the rows of the matrix in \(\mathbb{C}^{m\times n}\) into a column vector of \(\mathbb{C}^{mn\times 1}\). So \(\mathcal{R}\) is a linear isomorphism. For example, let \(a=\left(\begin{array}{cc}a_{11}&a_{12}\\ a_{21}&a_{22}\end{array}\right)\in\mathbb{C}^{2\times 2}\); then \(\mathcal{R}(a)=(a_{11},a_{12},a_{21},a_{22})^{t}\in\mathbb{C}^{4\times 1}\), where \(t\) denotes the transpose. For any linear map \(\phi\) on \(\mathbb{C}^{m\times n}\), \(\mathcal{R}\) induces a linear map \(\tilde{\phi}\) on \(\mathbb{C}^{mn\times 1}\) such that \(\mathcal{R}\circ\phi=\tilde{\phi}\circ\mathcal{R}\). In particular, if \(\phi(x)=uxv\) where \(x\in\mathbb{C}^{m\times n}\), \(u\in\mathbb{C}^{m\times m}\) and \(v\in\mathbb{C}^{n\times n}\), then \(\tilde{\phi}:\mathbb{C}^{mn\times 1}\rightarrow\mathbb{C}^{mn\times 1}\) is defined by \[\tilde{\phi}:y\mapsto(u\otimes v^{t})\left(y\right), \tag{3.6}\] for each \(y=\mathcal{R}(x)\in\mathbb{C}^{mn\times 1}\).
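Equation (3.6) is easy to check numerically; the following small sketch (ours, in Python/numpy) confirms that under rowwise vectorization the map \(x\mapsto uxv\) becomes multiplication by the Kronecker product \(u\otimes v^{t}\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
x = rng.standard_normal((m, n))
u = rng.standard_normal((m, m))
v = rng.standard_normal((n, n))

R = lambda a: a.reshape(-1)        # rowwise vectorization (concatenate rows)

lhs = R(u @ x @ v)
rhs = np.kron(u, v.T) @ R(x)       # the induced map (3.6): y -> (u (x) v^t) y
print(np.allclose(lhs, rhs))       # True
```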
Suppose that \(\langle m,n,p\rangle=t_{1}+t_{2}+\cdots+t_{r}\) where \(t_{i}=u_{i}\otimes v_{i}\otimes w_{i}\). Then from Lemma 3.5 we have \(\langle u_{1},u_{2},...,u_{r}\rangle=\mathbb{C}^{m\times n}\). So we can choose a maximal linearly independent subset \(\{u_{i_{1}},u_{i_{2}},...,u_{i_{mn}}\}\subseteq\{u_{1},u_{2},...,u_{r}\}\) such that the following matrix \[U_{mn}:=\left(\mathcal{R}(u_{i_{1}}),\mathcal{R}(u_{i_{2}}),\cdots,\mathcal{R}(u_{i_{mn}})\right)\in\mathbb{C}^{mn\times mn}\] is invertible, that is, \(U_{mn}\in GL_{mn}(\mathbb{C})\). Similarly, we can define \(V_{np}\in GL_{np}(\mathbb{C})\) (resp. \(W_{pm}\in GL_{pm}(\mathbb{C})\)) for some maximal linearly independent subset of \(\{v_{1},v_{2},...,v_{r}\}\) (resp. \(\{w_{1},w_{2},...,w_{r}\}\)).

The set \(\{u_{1},u_{2},...,u_{r}\}\) is said to have the _D-property_ (decomposable property) if for some maximal linearly independent subset \(\{u_{i_{1}},u_{i_{2}},...,u_{i_{mn}}\}\subseteq\{u_{1},u_{2},...,u_{r}\}\) we have \(U_{mn}=U_{m}\otimes U_{n}\) for some \(U_{m}\in GL_{m}(\mathbb{C})\) and \(U_{n}\in GL_{n}(\mathbb{C})\). Similarly, the set \(\{v_{1},v_{2},...,v_{r}\}\) (resp. \(\{w_{1},w_{2},...,w_{r}\}\)) has the _D-property_ if \(V_{np}=V_{n}\otimes V_{p}\) (resp. \(W_{pm}=W_{p}\otimes W_{m}\)) where \(V_{n}\in GL_{n}(\mathbb{C})\) and \(V_{p}\in GL_{p}(\mathbb{C})\) (resp. \(W_{p}\in GL_{p}(\mathbb{C})\) and \(W_{m}\in GL_{m}(\mathbb{C})\)).

**Definition 3.7**.: Given a tensor decomposition \(\langle m,n,p\rangle=u_{1}\otimes v_{1}\otimes w_{1}+u_{2}\otimes v_{2}\otimes w_{2}+\cdots+u_{r}\otimes v_{r}\otimes w_{r}\), let \(\mathcal{A}=\{u_{i}\otimes v_{i}\otimes w_{i}\mid i=1,2,...,r\}\) be the corresponding algorithm computing \(\langle m,n,p\rangle\). We say that \(\mathcal{A}\) has the _D-property_ if \(\{u_{1},u_{2},...,u_{r}\}\), \(\{v_{1},v_{2},...,v_{r}\}\) and \(\{w_{1},w_{2},...,w_{r}\}\) all have the _D-property_.

The _D-property_ defined above may seem rather special; however, there are natural examples.

**Example 3.8**.: It is not hard to see that the natural algorithm has the \(D\)-property: after a suitable reordering of the maximal independent subsets, we can make \(U_{mn}=I_{mn}=I_{m}\otimes I_{n}\), \(V_{np}=I_{np}=I_{n}\otimes I_{p}\) and \(W_{pm}=I_{pm}=I_{p}\otimes I_{m}\), where \(I_{mn}\) is the identity of the group \(GL_{mn}(\mathbb{C})\), and similarly for the others. The Laderman algorithm studied in Section 5 of [8] also has this property.

**Theorem 3.9**.: _Let \(\mathcal{A}=\{u_{i}\otimes v_{i}\otimes w_{i}\mid i=1,2,...,r\}\) be an algorithm computing \(\langle m,n,p\rangle\). If \(\mathcal{A}\) has the \(D\)-property, then (with the convention in Remark 3.4) \(Aut(\mathcal{A})_{0}\leq\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p}\)._

Proof.: By definition, \(Aut(\mathcal{A})_{0}=\{T(a,b,c)\in\Gamma^{0}(\langle m,n,p\rangle)\mid T(a,b,c)(\mathcal{A})=\mathcal{A}\}\). So if \(T(a,b,c)\in Aut(\mathcal{A})_{0}\), then \(\mathcal{A}=\{t_{i}\mid i=1,2,...,r\}=\{u_{i}\otimes v_{i}\otimes w_{i}\mid i=1,2,...,r\}=\{au_{i}b^{-1}\otimes bv_{i}c^{-1}\otimes cw_{i}a^{-1}\mid i=1,2,...,r\}\). By Lemma 5.9 of [8], \(\varphi(x)=axb^{-1}\) (where \(x\in\mathbb{C}^{m\times n}\)) is a linear map that preserves the set of lines \[U=\{\langle u_{i}\rangle\mid i=1,2,...,r\},\] where the \(\langle u_{i}\rangle\) are the one dimensional subspaces (lines) spanned by the \(u_{i}\) in \(\mathbb{C}^{m\times n}\). Let \(\mathcal{R}(x)\in\mathbb{C}^{mn\times 1}\) be the rowwise vectorization of \(x\in\mathbb{C}^{m\times n}\).
Let \(\bar{\varphi}(y)=[a\otimes(b^{-1})^{t}](y)\) for \(y\in\mathbb{C}^{mn\times 1}\) be the induced map defined in (3.6), so that \(\mathcal{R}(\varphi(x))=[a\otimes(b^{-1})^{t}](\mathcal{R}(x))\). Let \[\bar{U}=\{\langle\mathcal{R}(u_{i})\rangle\subseteq\mathbb{C}^{mn\times 1}\mid i=1,2,...,r\}.\] From Lemma 3.5, we have \(\langle u_{1},u_{2},...,u_{r}\rangle=\mathbb{C}^{m\times n}\). So \(\langle\mathcal{R}(u_{1}),\mathcal{R}(u_{2}),...,\mathcal{R}(u_{r})\rangle=\mathbb{C}^{mn\times 1}\). Then \(\varphi\) preserves \(U\) if and only if \(\bar{\varphi}\) preserves \(\bar{U}\). So by Lemma 3.3, we have \[a\otimes(b^{-1})^{t}=U_{mn}P_{ab}U_{mn}^{-1},\] for some \(U_{mn}\in GL_{mn}(\mathbb{C})\) and \(P_{ab}\in\mathcal{P}_{mn}\). Since \(\{u_{1},u_{2},...,u_{r}\}\) has the \(D\)-property, we can set \(U_{mn}=U_{m}\otimes U_{n}\) for some \(U_{m}\in GL_{m}(\mathbb{C})\) and \(U_{n}\in GL_{n}(\mathbb{C})\). So we have \[P_{ab}=\left(U_{m}^{-1}aU_{m}\right)\otimes\left(U_{n}^{-1}(b^{-1})^{t}U_{n}\right).\] Thus, by Lemma 3.6, we have \[U_{m}^{-1}aU_{m}=P_{a}\quad\text{and}\quad U_{n}^{-1}(b^{-1})^{t}U_{n}=P_{b},\] for some \(P_{a}\in\mathcal{P}_{m}\) and \(P_{b}\in\mathcal{P}_{n}\). So \(a=U_{m}P_{a}U_{m}^{-1}\in U_{m}\mathcal{P}_{m}U_{m}^{-1}\) and \(b=(U_{n}^{t})^{-1}(P_{b}^{t})^{-1}U_{n}^{t}\in(U_{n}^{t})^{-1}\mathcal{P}_{n}U_{n}^{t}\). A similar result holds for \(c\). So we have \(Aut(\mathcal{A})_{0}\leq\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p}\).

Let \(Q(m,n,p)\) be a subgroup of \(\Gamma(\langle m,n,p\rangle)\), isomorphic to \(S_{3},\mathbb{Z}_{2}\), or \(1\), when \(|\{m,n,p\}|=1,2,\) or \(3\), respectively. Just as in the proofs of Theorem 4.12 and Lemma 5.8 of [8], \(Aut(\mathcal{A})\) is the semidirect product of \(Aut(\mathcal{A})_{0}\) with a subgroup of \(Q(m,n,p)\). By Theorem 3.9, we have the following corollary.

**Corollary 3.10**.: _Suppose that \(\mathcal{A}\) is an algorithm computing \(\langle m,n,p\rangle\). If \(\mathcal{A}\) has the \(D\)-property, then \(Aut(\mathcal{A})\leq(\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\)._

For the natural algorithm, we have the following corollary.

**Corollary 3.11**.: \(Aut(\mathcal{A}(m,n,p))=(\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\)_._

Proof.: From Example 3.8, we know that \(\mathcal{A}(m,n,p)\) has the \(D\)-property. So from Corollary 3.10, in the following we only need to check that \((\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\) is contained in \(Aut(\mathcal{A}(m,n,p))\). Let \(a=P_{1}D_{1}\in\mathcal{P}_{m}\) where \(P_{1}=(\delta_{i,\pi(j)})\) is a permutation matrix corresponding to the permutation \(\pi\in S_{m}\) and \(D_{1}=diag(a_{1},a_{2},...,a_{m})\) is a nonsingular diagonal matrix with \(a_{i}\in\mathbb{C}\setminus\{0\}\). Similarly, let \(b=P_{2}D_{2}\in\mathcal{P}_{n}\) where \(P_{2}=(\delta_{i,\sigma(j)})\), \(D_{2}=diag(b_{1},b_{2},...,b_{n})\) and \(\sigma\in S_{n}\). Let \(c=P_{3}D_{3}\in\mathcal{P}_{p}\) where \(P_{3}=(\delta_{i,\tau(j)})\), \(D_{3}=diag(c_{1},c_{2},...,c_{p})\) and \(\tau\in S_{p}\).
Then we can see that \[T(a,b,c)(e_{ij}\otimes e_{jk}\otimes e_{ki}) =a_{i}\left(e_{\pi^{-1}(i),\sigma^{-1}(j)}\right)b_{j}^{-1}\otimes b_{j}\left(e_{\sigma^{-1}(j),\tau^{-1}(k)}\right)c_{k}^{-1}\otimes c_{k}\left(e_{\tau^{-1}(k),\pi^{-1}(i)}\right)a_{i}^{-1}\] \[=e_{\pi^{-1}(i),\sigma^{-1}(j)}\otimes e_{\sigma^{-1}(j),\tau^{-1}(k)}\otimes e_{\tau^{-1}(k),\pi^{-1}(i)}.\]

Hence, \[T(a,b,c)\mathcal{A}(m,n,p) =\{e_{\pi^{-1}(i),\sigma^{-1}(j)}\otimes e_{\sigma^{-1}(j),\tau^{-1}(k)}\otimes e_{\tau^{-1}(k),\pi^{-1}(i)}\mid 1\leq i\leq m,1\leq j\leq n,1\leq k\leq p\}\] \[=\{e_{ij}\otimes e_{jk}\otimes e_{ki}\mid 1\leq i\leq m,1\leq j\leq n,1\leq k\leq p\}\] \[=\mathcal{A}(m,n,p).\]

Thus, by Theorem 3.9 we have \(Aut(\mathcal{A}(m,n,p))_{0}=\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p}\). Since \(\mathcal{A}(m,n,p)\) is also preserved under the action of \(Q(m,n,p)\) (see e.g. Theorem 4.12 of [8]), we have \((\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\subseteq Aut(\mathcal{A}(m,n,p))\).

_Remark 3.12_.: Conversely, it would be interesting to show that if \(Aut(\mathcal{A})=(\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\), then \(\mathcal{A}\) must be the natural algorithm \(\mathcal{A}(m,n,p)\).

By the discussion in Section 4.3 of [8], we have the following proposition.

**Proposition 3.13**.: _[_8_, Sect. 4.3]_ \(\Gamma^{0}(\langle m,n,p\rangle)=PGL_{m}(\mathbb{C})\times PGL_{n}(\mathbb{C})\times PGL_{p}(\mathbb{C})\) _and_ \(\Gamma(\langle m,n,p\rangle)=\Gamma^{0}(\langle m,n,p\rangle)\rtimes Q(m,n,p)\)_._

For a point \(q\in V(m,n,p;r)\), by (3.2) and (3.3) there exists a tensor decomposition of \(\langle m,n,p\rangle\) of length \(r\): \[\langle m,n,p\rangle=t_{1}^{q}+t_{2}^{q}+\cdots+t_{r}^{q}, \tag{3.7}\] where \(t_{i}^{q}=u_{i}^{q}\otimes v_{i}^{q}\otimes w_{i}^{q}\in\mathbb{C}^{m\times n}\otimes\mathbb{C}^{n\times p}\otimes\mathbb{C}^{p\times m}\).

**Corollary 3.14**.: _For a point \(q\in V(m,n,p;r)\), with the notations in (3.7) let \(\mathcal{A}^{q}=\{t_{1}^{q},t_{2}^{q},...,t_{r}^{q}\}\) be the corresponding algorithm of length \(r\) computing \(\langle m,n,p\rangle\). If \(\mathcal{A}^{q}\) has the \(D\)-property, then_ \[\dim_{q}V(m,n,p;r)\geq m^{2}+n^{2}+p^{2}-m-n-p-3.\]

Proof.: Since \(\mathcal{A}^{q}\) has the D-property, \(Aut(\mathcal{A}^{q})\leq(\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p)\) by Corollary 3.10. So by Propositions 2.5 and 3.2, we have \[\dim Aut(\mathcal{A}^{q})\leq m+n+p.\] By the analysis in Subsection 2.2, the isotropy group \(\Gamma(\langle m,n,p\rangle)\) induces a group action on \(V(m,n,p;r)\). Let \(\Gamma(\langle m,n,p\rangle)\cdot q\) denote the group orbit of \(q\), which is induced by the action of \(\Gamma(\langle m,n,p\rangle)\) on \(V(m,n,p;r)\). Then by Propositions 2.5, 2.6 and 3.13, we have \[\dim_{q}V(m,n,p;r) \geq\dim\left(\Gamma(\langle m,n,p\rangle)\cdot q\right)\] \[=\dim\left(\Gamma(\langle m,n,p\rangle)\right)-\dim Aut(\mathcal{A}^{q})\] \[\geq m^{2}+n^{2}+p^{2}-3-(m+n+p).\]
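Putting the two bounds together, for a point whose algorithm has the \(D\)-property one gets a concrete sandwich on the local dimension. A minimal sketch (ours, in Python) evaluating both bounds on the cases discussed above:

```python
def lower_bound(m, n, p):
    # Corollary 3.14 (assuming the D-property of the algorithm at q)
    return m*m + n*n + p*p - m - n - p - 3

def upper_bound(m, n, p, r, jac_rank):
    # Theorem 2.3: (number of variables) - rank J_q(B(m,n,p;r))
    return (m*n + n*p + p*m) * r - jac_rank

print(lower_bound(3, 3, 3))             # 15
print(upper_bound(3, 3, 3, 23, 526))    # 95, cf. Table 1
print(lower_bound(4, 4, 4))             # 33
print(upper_bound(4, 4, 4, 49, 2144))   # 208
```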
The \(D\)-property is special. By current results on automorphism groups (see e.g. [8], [9] and [26]), we believe the \(D\)-property in Corollary 3.10 is not necessary. So we have the following conjecture for algorithms of any length \(r\) computing \(\langle m,n,p\rangle\); that is, \(r\) may be smaller or larger than \(mnp\).

**Conjecture 1**.: Suppose that \(\mathcal{A}\) is an algorithm of any length \(r\) computing \(\langle m,n,p\rangle\). Then \[Aut(\mathcal{A})\leq(\mathcal{P}_{m}\times\mathcal{P}_{n}\times\mathcal{P}_{p})\rtimes Q(m,n,p).\]

### Radical ideal and rank of \(J_{q}(B(m,n,p;r))\) as an invariant

Let \(I(2,2,2;7)\) denote the set \[\{f\in\mathbb{C}[x_{1},x_{2},...,x_{84}]\mid f(p)=0\ \text{for all}\ p\in V(2,2,2;7)\}.\] It is the vanishing ideal of \(V(2,2,2;7)\), which is radical. On the other hand, let \(\langle B(2,2,2;7)\rangle\) denote the ideal generated by the Brent equations \(B(2,2,2;7)\). We claim that \(\langle B(2,2,2;7)\rangle\) is strictly contained in \(I(2,2,2;7)\). The reason is as follows. Suppose that \(I(2,2,2;7)=\langle f_{1},f_{2},...,f_{k}\rangle\) where \(f_{i}\in\mathbb{C}[x_{1},x_{2},...,x_{84}]\). It was shown in [12] that the isotropy group \(\Gamma(\langle 2,2,2\rangle)\) acts transitively on \(V(2,2,2;7)\) and \(\dim V(2,2,2;7)=9\). So by Proposition 2.4, \(V(2,2,2;7)\) is a smooth affine variety, which has no singular points. So we have \(\dim V(2,2,2;7)=84-rank\ J_{p}(f_{1},f_{2},...,f_{k})\) for \(p\in V(2,2,2;7)\). Since the Brent equations \(B(2,2,2;7)\) consist of 64 equations in 84 variables, the rank of \(J_{p}(B(2,2,2;7))\) cannot exceed 64. So \(84-rank\ J_{p}(B(2,2,2;7))\geq 20>9\). So we have \(\langle B(2,2,2;7)\rangle\subsetneq I(2,2,2;7)\). In fact, we calculated that \(rank\ J_{p}(B(2,2,2;7))=61\) when \(p\) is Strassen's algorithm [29]. So the ideal generated by the Brent equations may not be radical.

Generally, let \(k=(mn+np+pm)r\) and let \(I(m,n,p;r)\) denote the set \[\{f\in\mathbb{C}[x_{1},x_{2},...,x_{k}]\mid f(p)=0\ \text{for all}\ p\in V(m,n,p;r)\},\] that is, the vanishing ideal of \(V(m,n,p;r)\). Let \(\langle B(m,n,p;r)\rangle\) denote the ideal generated by the Brent equations \(B(m,n,p;r)\). By the discussion above, \(\langle B(m,n,p;r)\rangle\) may not be equal to \(I(m,n,p;r)\). So Corollary 2.8 does not apply directly, and \(rank\ J_{q}(B(m,n,p;r))\) might differ from \(rank\ J_{q^{\prime}}(B(m,n,p;r))\) even when \(q\) and \(q^{\prime}\) lie in the same group orbit under the action of \(\Gamma(\langle m,n,p\rangle)\) on \(V(m,n,p;r)\). However, by random computer experiments we have not found counterexamples. So we have the following question.

**Question 3.15**.: Does \(rank\ J_{q}(B(m,n,p;r))=rank\ J_{q^{\prime}}(B(m,n,p;r))\) hold whenever \(q\) and \(q^{\prime}\) lie in the same group orbit under the action of \(\Gamma(\langle m,n,p\rangle)\) on \(V(m,n,p;r)\)?

If the answer to Question 3.15 is affirmative, we get another criterion which can be used to distinguish group orbits.

_Remark 3.16_.: Since the rank of \(J_{q}(B(m,n,p;r))\) takes only finitely many values while the number of group orbits may be infinite [20], distinguishing group orbits by the rank of \(J_{q}(B(m,n,p;r))\) alone has its limitations. To date, several efficient algorithms and criteria have been proposed to distinguish group orbits, such as [2, 18, 21] and the supplementary information of [15]. To distinguish group orbits more efficiently, all these methods should be combined.

## Acknowledgments

We are very grateful to the editors and referees for their valuable comments and suggestions. X. Li is supported by the National Natural Science Foundation of China (Grant No. 11801506) and the National Science Basic Research Plan in Shaanxi Province of China (2021JQ-661). Y. Bao is supported by the NSFC (Grant No. 11801117) and the Natural Science Foundation of Guangdong Province, China (Grant No. 2018A030313268). L. Zhang is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No.
LY21A010015) and by Grant No. 11601484 of the National Natural Science Foundation of China.
2303.05146
A UNIONS view of the brightest central galaxies of candidate fossil groups
The formation process of fossil groups (FGs) is still under debate, and large samples of such objects are still missing. The aim of this paper is to increase the sample of known FGs, and to analyse the properties of their brightest group galaxies (BGG) and compare them with a control sample of non-FG BGGs. Based on the Tinker spectroscopic catalogue of haloes and galaxies, we extract 87 FG and 100 non-FG candidates. For all the objects with data available in UNIONS in the u and r bands, and/or in an extra r-band processed to preserve all low surface brightness features (rLSB), we made a 2D photometric fit of the BGG with GALFIT with one or two Sersic components and analysed how the subtraction of intracluster light contribution modifies the BGG properties. From the SDSS spectra available for the BGGs of 65 FGs and 82 non-FGs, we extracted the properties of their stellar populations with Firefly. We also investigated the origin of the emission lines in a nearby FG, NGC 4104, which hosts an AGN. A single Sersic profile can fit most objects in the u band, while two Sersics are needed in the r and rLSB bands, both for FGs and non-FGs. Non-FG BGGs cover a larger range of Sersic index. FG BGGs follow the Kormendy relation derived for almost one thousand brightest cluster galaxies (BCGs) by Chu et al. (2022) while non-FG BGGs are mostly located below this relation, suggesting that FG BGGs have evolved similarly to BCGs, while non-FG BGGs have evolved differently. The above properties can be strongly modified by the subtraction of intracluster light contribution. The stellar populations of FG and non-FG BGGs do not differ significantly. Our results suggest FG and non-FG BGGs have had different formation histories, but it is not possible to trace differences in their stellar populations or large scale distributions.
Aline Chu, F. Durret, A. Ellien, F. Sarron, C. Adami, I. Marquez, N. Martinet, T. de Boer, K. C. Chambers, J. -C. Cuillandre, S. Gwyn, E. A. Magnier, A. W. McConnachie
2023-03-09T09:55:50Z
http://arxiv.org/abs/2303.05146v1
# A UNIONS view of the brightest central galaxies of candidate fossil groups

###### Abstract

Context: The formation process of fossil groups (FGs) is still under debate, and, due to the relative rarity of FGs, large samples of such objects are still missing.

Aims: The aim of the present paper is to increase the sample of known FGs, and to analyse the properties of their brightest group galaxies (BGG) and compare them with a control sample of non-FG BGGs.

Methods: Based on the large spectroscopic catalogue of haloes and galaxies publicly made available by Tinker, we extract a sample of 87 FG and 100 non-FG candidates. For all the objects with data available in UNIONS (initially the Canada France Imaging Survey, CFIS), in the u and r bands, and/or in an extra r-band processed to preserve all low surface brightness features (rLSB hereafter), we made a 2D photometric fit of the BGG with GALFIT with one or two Sersic components. We also analysed how the subtraction of the intracluster light contribution modifies the BGG properties. From the SDSS spectra available for the BGGs of 65 FGs and 82 non-FGs, we extracted the properties of their stellar populations with Firefly. To complement our study, we investigated the origin of the emission lines in a nearby FG, dominated by the NGC 4104 galaxy, to illustrate in detail the possible origin of emission lines in FG BGGs, involving the presence or absence of an AGN.

Results: Morphologically, a single Sersic profile can fit most objects in the u band, while two Sersics are needed in the r and rLSB bands, both for FGs and non-FGs. Non-FG BGGs cover a larger range of Sersic index \(n\). FG BGGs follow the Kormendy relation (mean surface brightness versus effective radius) previously derived for almost one thousand brightest cluster galaxies (BCGs) by Chu et al. (2022), while non-FG BGGs are in majority located below this relation, with fainter mean surface brightnesses. This suggests that FG BGGs have evolved similarly to BCGs, and non-FG BGGs have evolved differently from both FG BGGs and BCGs. All the above properties can be strongly modified by the subtraction of the intracluster light contribution. Based on spectral fitting, the stellar populations of FG and non-FG BGGs do not differ significantly.

Conclusions: The morphological properties and the Kormendy relation of FG and non-FG BGGs differ, suggesting they have had different formation histories. However, it is not possible to trace differences in their stellar populations or in their large scale distributions.

## 1 Introduction

Fossil groups (FGs) were discovered by Ponman et al. (1994). They are particular groups of galaxies with high X-ray luminosities but with fewer bright galaxies than groups or clusters of galaxies. Jones et al. (2003) later gave the commonly accepted definition of FGs as satisfying three conditions: they are extended X-ray sources with an X-ray luminosity of at least L\({}_{\rm X}\) = 10\({}^{42}\) h\({}_{50}^{-2}\) erg s\({}^{-1}\), with a Brightest Group Galaxy (BGG) at least two magnitudes brighter than the other group members, the distance between the two brightest galaxies being smaller than half the group virial radius.
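The three conditions above translate directly into a simple membership test. The following sketch (ours, in Python, with hypothetical inputs, not code from the paper; \(L_X\) in \(h_{50}^{-2}\) erg s\(^{-1}\)) is one way to encode them:

```python
def is_fossil_group(L_X, mags, seps, r_vir):
    """Illustrative test of the Jones et al. (2003) fossil-group criteria.

    L_X   : group X-ray luminosity [h50^-2 erg/s], from an extended source
    mags  : member magnitudes, with mags[0] the BGG (brightest)
    seps  : projected separations of the members from the BGG
    r_vir : group virial radius (same units as seps)
    """
    if L_X < 1e42:                        # X-ray luminosity threshold
        return False
    # members (other than the BGG) lying within half the virial radius
    rivals = [m for m, s in zip(mags[1:], seps[1:]) if s < 0.5 * r_vir]
    # the BGG must be at least 2 magnitudes brighter than all of them
    return all(m - mags[0] >= 2.0 for m in rivals)
```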
The formation of these peculiar objects and why they present such a low amount of optically emitting matter are still under debate. Jones et al. (2003) have suggested that FGs are the remnants of early mergers, and that they are cool-core structures which accreted most of the large galaxies in their environment a long time ago, a scenario supported by hydrodynamical simulations by D'Onghia et al. (2005). However, FGs could also be a short temporary stage of group evolution before they capture more galaxies in their vicinity, as reported for instance by von Benda-Beckmann et al. (2008), based on N-body simulations. FGs can be studied through their optical (Vikhlinin et al. 1999; Santos et al. 2007) or X-ray (Romer et al. 2000; Adami
2308.08710
A Framework for Designing Fair Ubiquitous Computing Systems
Over the past few decades, ubiquitous sensors and systems have been an integral part of humans' everyday life. They augment human capabilities and provide personalized experiences across diverse contexts such as healthcare, education, and transportation. However, the widespread adoption of ubiquitous computing has also brought forth concerns regarding fairness and equitable treatment. As these systems can make automated decisions that impact individuals, it is essential to ensure that they do not perpetuate biases or discriminate against specific groups. While fairness in ubiquitous computing has been an acknowledged concern since the 1990s, it remains understudied within the field. To bridge this gap, we propose a framework that incorporates fairness considerations into system design, including prioritizing stakeholder perspectives, inclusive data collection, fairness-aware algorithms, appropriate evaluation criteria, enhancing human engagement while addressing privacy concerns, and interactive improvement and regular monitoring. Our framework aims to guide the development of fair and unbiased ubiquitous computing systems, ensuring equal treatment and positive societal impact.
Han Zhang, Leijie Wang, Yilun Sheng, Xuhai Xu, Jennifer Mankoff, Anind K. Dey
2023-08-17T00:19:53Z
http://arxiv.org/abs/2308.08710v1
# A Framework for Designing Fair Ubiquitous Computing Systems

###### Abstract.

Over the past few decades, ubiquitous sensors and systems have been an integral part of humans' everyday life. They augment human capabilities and provide personalized experiences across diverse contexts such as healthcare, education, and transportation. However, the widespread adoption of ubiquitous computing has also brought forth concerns regarding fairness and equitable treatment. As these systems can make automated decisions that impact individuals, it is essential to ensure that they do not perpetuate biases or discriminate against specific groups. While fairness in ubiquitous computing has been an acknowledged concern since the 1990s, it remains understudied within the field. To bridge this gap, we propose a framework that incorporates fairness considerations into system design, including prioritizing stakeholder perspectives, inclusive data collection, fairness-aware algorithms, appropriate evaluation criteria, enhancing human engagement while addressing privacy concerns, and interactive improvement and regular monitoring. Our framework aims to guide the development of fair and unbiased ubiquitous computing systems, ensuring equal treatment and positive societal impact.

Fairness; Ubiquitous Computing Systems; Framework

## 1. Introduction

more structured and static tabular format of data (_e.g._, language corpus from social media) used by the machine learning fairness community (Krishnan et al., 2017). Finally, the limited body of research on fairness in ubiquitous computing is also compounded by the lack of guidance for designing fair ubiquitous computing systems. Without such guidance, it is difficult to ensure that technologies designed for ubiquitous computing are being used in a fair and ethical manner.

In this work, we propose a framework for designing fair systems in the context of ubiquitous computing. As illustrated in Figure 1, our framework starts by identifying relevant stakeholders in different contexts, determining who will use the algorithmic tool and who will be impacted by the algorithmic outcomes. Next, researchers gather appropriate and inclusive datasets and select or develop algorithms for evaluation. Evaluation criteria are then defined, incorporating fairness metrics and other performance measures, with a clear rationale for their selection (_e.g._, why it is more reasonable to use the disparity of a traditional performance metric, such as false negative rate, as a fairness metric in a specific context). Algorithms are evaluated using the defined criteria and fairness metrics, and the results are analyzed within the context of stakeholder priorities. Transparency is ensured by communicating the findings and soliciting feedback from stakeholders. Finally, to adapt to the dynamic nature of ubiquitous computing, the systems are iteratively improved, and regularly monitored.

The contributions of this work are as follows. Firstly, we emphasize the significance of integrating fairness considerations into ubiquitous computing. Secondly, we present a framework that guides the design of fair ubiquitous computing systems. Lastly, we provide a detailed rationale for each component of the proposed framework. Our intention for this work is to serve as a valuable resource for future endeavors aiming to incorporate fairness into the design of ubiquitous computing systems.
## 2. Background and Related Work

In this section, we first provide a broader review of prior work on fairness (Section 2.1). We then review the existing fairness literature in ubiquitous computing (Section 2.2).

### A Broader Review of Existing Fairness Literature

As ML/AI is now being used in many decision-making systems, concerns have been raised about the fairness of these systems (Krishnan et al., 2017; Krishnan et al., 2017). In response to these concerns, Mehrabi _et al._ conducted a comprehensive systematic review of prior research, examining various sources of biases that can impact AI applications (Mehrabi et al., 2017). Their study identified two key sources of unfairness in machine learning outcomes: biases originating from the data and biases arising from the algorithms themselves. To mitigate biases stemming from the data, researchers have proposed the adoption of inclusive benchmark datasets (Dwork et al., 2018). These datasets aim to enhance the representation and diversity of the training data, thereby reducing the potential for biased and discriminatory outcomes in machine learning models. Meanwhile, researchers have made substantial contributions to laying the foundation for understanding and mitigating algorithmic biases (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). This work has resulted in the proposal of influential notions and frameworks aimed at promoting fairness in algorithmic decision-making systems.

One influential notion is **fairness through awareness**, proposed by Dwork _et al._ (2018). This notion emphasizes the consideration of **individual fairness**, which suggests treating similar individuals similarly. The authors highlighted the importance of ensuring ML models make consistent decisions and avoid discrimination based on protected attributes (such as race, gender, or age). Dwork _et al._ (2018) also discussed **group fairness**, which focuses on fairness at the group level. They emphasized that the demographics of those receiving positive or negative classifications should align with the overall population. The concepts of individual fairness and group fairness contribute to the broader understanding and pursuit of fairness in machine learning and decision-making systems.

Figure 1. Overview of the framework for designing fair ubiquitous computing systems.

To enforce algorithmic fairness, researchers have proposed various mathematical formulations and frameworks. Zafar _et al._ (2017) explored the notion of **disparate mistreatment**, where ground truth is available for historical decisions used during the training phase. They provided a mathematical formulation for incorporating fairness criteria into the training process of machine learning models. This work allows practitioners to define and optimize fairness goals during model training, aiming to reduce bias in the resulting predictions. Another widely adopted framework, introduced by Hardt _et al._ (2017), is **equality of opportunity**. This framework highlights the importance of equalizing the true positive rates across different demographic groups to ensure fairness. The authors provided theoretical analysis and practical algorithms for achieving equality of opportunity, and they demonstrated the effectiveness of their approach through empirical evaluations.
### Existing Ubiquitous Computing Fairness Literature

While fairness research has made significant progress in addressing bias and discrimination in the machine learning fairness community, it is also crucial to consider these issues within the context of ubiquitous computing. Ubiquitous computing, characterized by the integration of computing power into everyday environments (Bahdan et al., 2016; Chen et al., 2017; Li et al., 2018), presents unique challenges and opportunities for ensuring fairness. For example, one specific challenge arises from the dynamic nature of datasets collected in ubiquitous computing, which often makes it difficult to obtain an accurate understanding of the environment and capture every individual's needs. Consequently, biases and discrimination can be perpetuated and even amplified, leading to unfair outcomes for individuals. For instance, consider a health and fitness app that aims to provide personalized health recommendations based on user data collected from various wearable sensors. If the app's models are trained on outdated or inconsistent data due to irregular updates from users, it might provide inaccurate recommendations, potentially disadvantaging certain individuals.

To address these concerns, researchers have recently begun to study fairness specifically in the field of ubiquitous computing. For instance, researchers conducted a systematic review of papers published in the IMWUT journal between 2018 and 2022 on algorithmic fairness (Zafar et al., 2017). They found that only 5% of the published papers included fairness reports, indicating a need for more attention to fairness in the field of ubiquitous computing. Recently, researchers have started designing fairness-aware ubiquitous computing systems. One such work proposed a method that combines personalized federated learning with hierarchical clustering techniques to enhance the accuracy, robustness, and fairness of activity recognition systems (Zafar et al., 2018). By leveraging hierarchical clustering based on both activity similarity and user similarity, personalized models are created within smaller user groups, ensuring equitable treatment of individuals within the system. In another study, researchers developed a data-driven fairness-aware charging recommendation system specifically designed for large-scale electric taxi fleets (Zafar et al., 2018). The authors employed data-driven techniques and incorporated various self-defined fairness metrics (_e.g._, reduction of traveling time and charging station occupation rates) to recommend charging stations to electric taxi drivers, aiming to ensure fairness in the allocation of resources.

## 3. Towards Fair Ubiquitous Computing

In this section, we first provide a summary of the challenges and limitations that must be addressed to achieve fair ubiquitous computing (Section 3.1). This is followed by the objective of this work (Section 3.2).

### Challenges and Limitations towards Fair Ubiquitous Computing

Despite the emerging body of research that has started to investigate the fairness of systems in ubiquitous computing, as in the example discussed in Section 2.2, there are still several challenges and inherent limitations due to the unique nature of ubiquitous computing. Below, we identify six challenges and limitations and emphasize the need for addressing them.

**Limited consideration of sensitive attributes in specific contexts**.
In contrast to the machine learning fairness community, which has explored a broad spectrum of sensitive attributes such as race, sexuality, disability, and nationality (Bahdan et al., 2016; Chen et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018), recent studies have highlighted a disparity in the focus of fairness work within the field of ubiquitous computing. Over the past five years, the research in ubiquitous computing has predominantly concentrated on gender and age attributes (Zafar et al., 2017; Li et al., 2018; Li et al., 2018), with these attributes being mentioned in almost 90% of the included papers (Zafar et al., 2017). On the other hand, many sensitive attributes, which are often associated with discrimination, have received limited attention in the ubiquitous computing literature (Zafar et al., 2017; Li et al., 2018; Li et al., 2018). Only a few of these papers discuss sensitive attributes through the lens of fairness. For instance, one group of stakeholders that has been overlooked in the literature is individuals with disabilities (Zafar et al., 2017; Li et al., 2018). Their unique needs require specific attention to ensure fair and inclusive ubiquitous computing experiences. Sexual orientation (Zafar et al., 2018) is another important attribute that should be considered in fairness work for ubiquitous computing when modeling mental health. Given the wide implementation of ubiquitous computing across diverse contexts, the limited focus on different groups of stakeholders who may experience disproportionate effects from automated decision-making systems raises concerns and emphasizes the need for a broader consideration of these marginalized groups. To advance fairness and inclusivity in ubiquitous computing, it is essential to expand the research scope and give careful consideration to identifying and prioritizing stakeholders in the specific context.

**Issue of Data Bias.** Similar to the machine learning fairness community, bias in datasets used for training models is a significant concern within the ubiquitous computing community (Zafar et al., 2017; Zafar et al., 2017). These biases can stem from various factors, such as the data collection process, sampling techniques, or the inherent societal biases present in the data. Additionally, as ubiquitous computing systems extensively use data gathered from diverse sources, including sensors, mobile devices, and online platforms (Bahdan et al., 2016; Li et al., 2018), these data sources often suffer from sparsity and uneven distribution, which can introduce biases and perpetuate existing inequities. The scarcity or imbalance of data can result in underrepresentation of certain groups, leading to biased algorithms and discriminatory outcomes. Therefore, it is crucial to address these challenges in data collection and ensure that data used in ubiquitous computing systems is representative, diverse, and less inherently biased.

**Limited Fairness-aware Algorithms in Ubiquitous Computing Systems.** Despite an emerging body of research that has started to investigate algorithmic fairness and mitigate bias in algorithms (_e.g._, [24; 62]), the development of fair algorithms for ubiquitous computing systems lags significantly behind.
Based on the comprehensive review of fairness papers published in IMWUT between 2018 and 2022, it is evident that only a limited number of studies (three papers [50; 51; 65]) have explored the incorporation of fairness considerations into machine learning algorithms during the training process [61]. This can be attributed to the distinct challenges presented by ubiquitous computing, setting it apart from other domains. For instance, ubiquitous computing systems rely heavily on contextual information, including user location, preferences, and social interactions. Contextual factors can introduce additional complexities when determining fairness, as fairness considerations might vary based on the specific context and user characteristics. Moreover, ubiquitous computing systems often make real-time or near-real-time decisions based on continuous time-series data. Ensuring algorithmic fairness in these dynamic, time-sensitive scenarios adds a layer of complexity.

**Lack of Context-aware Evaluation Criteria.** In contrast to the machine learning fairness community, which commonly relies on standard fairness metrics such as demographic parity [7], equalized odds [62], and equal opportunity [24], the ubiquitous computing community often employs performance metrics such as accuracy and error rate [61] without providing explicit justifications. Given that the datasets used in the machine learning fairness community are static, while datasets used in ubiquitous computing are context-specific and sequential, it is crucial for ubiquitous computing researchers to carefully select fairness metrics that align with their specific contexts. As an illustration, consider the case of modeling depressive behavior. In this instance, employing the disparity of false negative rates between prioritized stakeholders and others as a fairness metric proves more suitable than utilizing metrics such as demographic parity and equalized odds that are common in the machine learning fairness community. This choice is substantiated by existing research demonstrating higher levels of mental health concerns among prioritized stakeholders, such as females with depressive symptoms [19; 35; 39]. By focusing on the disparity of false negative rates, which quantifies the variations in misclassification rates for individuals with depressive symptoms, researchers can more accurately capture the specific challenges and disparities faced by the prioritized stakeholders. In contrast, adopting standard machine learning fairness metrics, which primarily strive for equal treatment across groups by assuming comparable levels of depressive symptoms across genders in this example, may fail to adequately address the nuanced needs and disparities inherent to this particular context. Additionally, another challenge shared by the ubiquitous computing and machine learning communities should be addressed: how to define a threshold that determines the point at which models are classified as unfair across different groups [61]. The absence of a well-defined threshold for determining fairness in ubiquitous computing models makes it challenging to assess whether a model is truly fair and whether observed disparities are not simply due to chance.

**Lack of Transparency and Explainability to Stakeholders.** Fairness is closely intertwined with the principles of transparency and explainability.
As AI-powered systems play an increasingly significant role in consequential decision-making, their explainability becomes essential for end-users to make informed and accountable decisions [17; 34]. Over the past few years, significant advancements have been made in addressing this aspect within the machine learning fairness and human-computer interaction communities. For instance, machine learning researchers have deliberately opted for certain machine learning models, such as decision trees and linear models, which possess transparent structures and inherently provide interpretable explanations (_e.g._, [11; 30]). Additionally, there has been substantial progress in developing techniques for model interpretability and explainability, such as LIME [46] and SHAP [36; 37]. HCI researchers have focused on presenting machine learning models' outputs in a transparent and understandable manner for end-users, to bridge the gap between complex machine learning models and human comprehension (_e.g._, [3; 66]). Researchers have also investigated ways to incorporate user feedback and control mechanisms into machine learning models to enhance transparency and trust [45]. In contrast to other research communities, achieving transparency and explainability in ubiquitous computing poses unique challenges. One primary reason is that ubiquitous computing systems often operate on vast and diverse datasets comprising heterogeneous and time-series data from multiple sources. Processing and analyzing such intricate data requires sophisticated algorithms and models, making it difficult to explain their underlying rationale in a clear and interpretable manner. Furthermore, these systems commonly handle sensitive user data, including personal health information [32], location data [58], and behavioral patterns [59]. Maintaining privacy and security is of utmost importance in such contexts, often entailing practices such as data anonymization, access restrictions, and the careful selection of interviewees. However, this pursuit of privacy introduces a trade-off between fairness and privacy [12], further exacerbating the challenges of transparency and explainability.

**Need for Regular Monitoring.** In contrast to other domains where data may be relatively static, ubiquitous computing systems require regular fairness monitoring due to their dynamic and adaptive nature [14; 49]. These systems collect data dynamically in real time, which introduces the need for ongoing monitoring to detect any biases that may emerge as the system adapts and learns from new information. Contextual factors such as location, time, and social surroundings play a significant role in ubiquitous computing systems, influencing their behavior. Regular fairness monitoring becomes essential to ensure fair treatment across different contexts and prevent biases from impacting users. Additionally, ubiquitous computing systems often make real-time decisions and interact directly with users. Monitoring helps identify any biases or discriminatory patterns in these decisions and interactions, enabling corrective actions to be taken promptly; a minimal sketch of such a check is given below. Furthermore, the adaptability of ubiquitous computing systems and their potential to amplify biases emphasize the necessity of regular monitoring, safeguarding fairness and equitable outcomes for users.
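To make such monitoring concrete, the sketch below is a hypothetical illustration (the function names, groupings, and thresholds are our assumptions, not from any existing system): it computes the disparity of false negative rates between a prioritized group and the rest, the context-aware metric discussed above, together with a permutation test so that a periodically re-run check does not flag disparities that are merely due to chance.

```python
import numpy as np

def fnr(y_true, y_pred):
    """False negative rate: the fraction of true positives that were missed."""
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0) if positives.any() else np.nan

def fnr_disparity(y_true, y_pred, group, n_perm=10000, seed=0):
    """Gap in FNR between the prioritized group (group == 1) and the rest,
    with a permutation p-value guarding against chance disparities."""
    rng = np.random.default_rng(seed)

    def gap(g):
        return (fnr(y_true[g == 1], y_pred[g == 1])
                - fnr(y_true[g == 0], y_pred[g == 0]))

    observed = gap(group)
    # Shuffle group labels to break the group/outcome link and build a null.
    null = np.array([gap(rng.permutation(group)) for _ in range(n_perm)])
    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value
```

A deployed system could re-run this check on each new batch of data and trigger a review whenever the observed gap is both large and statistically significant.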
### Objective of This Work

Our work is motivated by the challenges and limitations pertaining to fairness in ubiquitous computing. The objective of this work is to adapt and integrate the existing frameworks and concepts (reviewed in Section 2.1) into a specialized framework tailored for ubiquitous computing. Through this work, we aim to advance the development and deployment of ubiquitous computing systems that prioritize fairness and effectively cater to the diverse needs of stakeholders. By advocating for the integration of fairness considerations into system design, we seek to pave the way for future research towards fair ubiquitous computing.

## 4. Framework for Designing Fair Ubiquitous Computing Systems

In this section, we present an overview of our proposed framework (shown in Figure 1), designed specifically to address the challenges and limitations discussed in the preceding section (Section 4.1). Additionally, we delve into potential avenues for future research and development (Section 4.2).

### Overview of Proposed Framework

Our framework has six components in total, each representing one important stage in ubiquitous computing system implementation:

1. **Identify and prioritize stakeholders**. Identify relevant stakeholders, such as those who will use the systems, as well as those who will be affected and potentially biased against by the algorithmic outcomes. For example, in the context of education, students with minoritized identities may face biased outcomes, making them important stakeholders. Transparency is crucial for both students and instructors who will use the systems. In contrast to the existing prevalent ubiquitous computing fairness literature, which often concentrates on a restricted range of sensitive attributes, our framework highlights the crucial importance of meticulously considering diverse sensitive attributes within varying contexts, _e.g._, taking sexual minority status into account when modeling mental health.

2a. **Select/collect inclusive datasets**. Collect or select representative datasets that include prioritized stakeholders for evaluation, based on the specific context. In comparison to the prevailing ubiquitous computing fairness literature, which frequently relies on unrepresentative datasets, our framework underscores the importance of gathering more inclusive datasets. One example is the recently published GLOBEM dataset (Zhu et al., 2020), where researchers intentionally oversampled diverse subpopulation groups (_e.g._, gender, race, and immigration).

2b. **Choose/design fairness-aware algorithms**. Carefully select or design explainable fairness-aware algorithms that are relevant to the identified context and align with the goals of the evaluation. In contrast to the prevailing ubiquitous computing fairness literature, which often neglects the consideration of fairness during the algorithm design process, our framework emphasizes the need for developing fairness-aware algorithms. Note that steps 2a and 2b can be interchanged.

3. **Define evaluation criteria**. Determine appropriate fairness metrics and thresholds to quantify differences and biases across stakeholder groups. Provide explicit justifications for the chosen metrics and thresholds. For example, in the evaluation of a depression detection algorithm, the disparity in false negative rates across different groups serves as a critical fairness metric. This metric reflects the algorithm's failure to identify depression in certain populations, making it essential to address.
To ensure that observed disparities are not merely chance occurrences, a statistical test can be conducted. In contrast to the existing literature on fairness in ubiquitous computing, which often lacks justification for the selected fairness criteria, our framework emphasizes the necessity of designing context-aware evaluation criteria.

4. **Conduct evaluation and analysis**. Use the selected fairness metrics to evaluate algorithms on the selected datasets, and analyze the results based on the predetermined evaluation criteria within the specific context. Additionally, thoroughly discuss the potential harm to stakeholders that may arise from the algorithmic decisions. In contrast to the existing literature on fairness in ubiquitous computing, which often neglects the discourse on harm to stakeholders, our proposed framework seeks to bridge this gap.

5. **Enhance stakeholder engagement while protecting privacy**. Communicate evaluation findings, recommendations, and potential limitations to stakeholders, fostering transparency, accountability, and stakeholder involvement in algorithmic decision-making. Safeguard stakeholder privacy throughout the process. In comparison to the prevailing literature on fairness in ubiquitous computing, which often overlooks the involvement of humans in system design, our framework emphasizes the importance of including human perspectives in the design process. Additionally, our framework recognizes the significance of striking a balance between fairness and privacy considerations.

6. **Iterative improvement and regular monitoring**. Refine algorithms to address potential biases and unfairness. Iterate on algorithmic design, data collection, and preprocessing to enhance fairness. Continuously monitor real-world performance and update algorithms and evaluation processes to align with evolving fairness standards and best practices. In comparison to the prevailing literature on fairness in ubiquitous computing, which often lacks this step, our framework emphasizes the importance of iterative improvement and regular monitoring.

### Potential Directions for Future Work

In this section, we outline potential directions for future work and extensions of the proposed framework for designing fair ubiquitous computing systems. These avenues of research offer opportunities to advance the framework and enhance its applicability in real-world contexts.

#### 4.2.1. Validation and Case Studies

To further validate and demonstrate the effectiveness of the proposed framework, future work can focus on conducting validation studies and case studies in real-world scenarios. For example, researchers can apply the framework to specific contexts and assess its practicality. By selecting representative use cases, researchers can demonstrate how the framework can be implemented and tailored to address fairness challenges in different scenarios. Case studies can involve evaluating or deploying fair algorithms in healthcare settings and educational environments. The findings from these case studies will provide valuable insights into the framework's feasibility, efficacy, and adaptability across diverse ubiquitous computing application domains. Moreover, to enrich the validation process, future work should include interviewing identified stakeholders to gather their feedback on the framework and the results of fairness testing in various scenarios. This qualitative feedback can provide additional context and perspectives, further refining and validating the proposed framework.
Conducting these studies can also contribute to the identification of potential challenges and limitations of the framework.

#### 4.2.2. Balancing Privacy and Fairness

As ubiquitous computing systems rely on collecting and analyzing vast amounts of personal data to make algorithmic decisions, addressing the tradeoff between privacy and fairness stands as a significant future direction within the context of the proposed framework for designing fair ubiquitous computing systems. One possible avenue for future research lies in the exploration of strategies and methodologies to reconcile the inherent tension between privacy and fairness. Researchers can develop privacy-preserving algorithms and techniques tailored specifically to ubiquitous computing environments, delve into privacy-enhancing technologies such as secure multi-party computation (Krishnan et al., 2017; Krizhevsky et al., 2017), federated learning (Krizhevsky et al., 2017), and differential privacy (Krizhevsky et al., 2017), and incorporate them into the framework. Another essential avenue for future investigation lies in understanding the impact of various privacy-preserving mechanisms on the fairness of algorithmic decision-making. Researchers can undertake a comprehensive examination of the tradeoff between privacy and fairness, critically analyzing how privacy-enhancing measures may influence the accuracy, reliability, and equity of algorithmic outcomes.

## 5. Conclusion

In conclusion, this work introduces a novel framework for designing fair ubiquitous computing systems, addressing the existing gap in the ubiquitous computing literature on fairness challenges. By presenting this framework, we contribute to the advancement of ubiquitous computing research, with the hope of providing a valuable resource for researchers and practitioners striving to develop more equitable and inclusive systems in the future. We envision that the proposed framework can serve as an initial stepping stone towards fostering fairness and ensuring that ubiquitous computing systems align with ethical principles and societal values.

## Acknowledgement

This material is based upon work supported by the National Science Foundation under Grant No. EDA-2009977 and the University of Washington College of Engineering, Department of Electrical and Computer Engineering, and the Paul G. Allen School of Computer Science and Engineering.
2304.01387
Field-level multiprobe analysis of the CMB, integrated Sachs-Wolfe effect, and the galaxy density maps
Extracting information from cosmic surveys is often done in a two-step process, construction of maps and then summary statistics such as two-point functions. We use simulations to demonstrate the advantages of a general Bayesian framework that consistently combines different cosmological experiments on the field level, and reconstructs both the maps and cosmological parameters. We apply our method to jointly reconstruct the primordial CMB, the integrated Sachs-Wolfe effect, and six tomographic galaxy density maps on the full sky on large scales along with several cosmological parameters. While the traditional maximum a posteriori estimator has both two-point level and field-level bias, the new approach yields unbiased cosmological constraints and improves the signal-to-noise ratio of the maps.
Alan Junzhe Zhou, Scott Dodelson
2023-04-03T21:26:23Z
http://arxiv.org/abs/2304.01387v2
# A field-level multi-probe analysis of the CMB, ISW, and the galaxy density maps

###### Abstract

Extracting information from cosmic surveys is often done in a two-step process, construction of maps and then summary statistics such as two-point functions. We use simulations to demonstrate the advantages of a general Bayesian framework that consistently combines different cosmological experiments on the field level, and reconstructs both the maps and cosmological parameters. We apply our method to jointly reconstruct the primordial CMB, the Integrated Sachs-Wolfe effect, and 6 tomographic galaxy density maps on the full sky on large scales along with several cosmological parameters. While the traditional maximum a posteriori estimator has both 2-point level and field-level bias, the new approach yields unbiased cosmological constraints and improves the signal-to-noise ratio of the maps.

## I Introduction

The large-scale structure (LSS) of the universe is defined by the full 3-dimensional matter density field \(\delta({\bf x},t)\). Although it is difficult to determine \(\delta({\bf x},t)\) directly, we extract information about it indirectly in two general ways: (i) light from distant sources (including the cosmic microwave background) is impacted by over- and under-dense regions, and (ii) gravitationally bound objects such as galaxies and clusters often _trace_ the matter density. Examples of the first class of information include the late-time Integrated Sachs-Wolfe (ISW) effect caused by decaying gravitational potentials in the dark energy era and the deflection of photons due to gravitational lensing. The second class includes galaxy clustering and cluster counts. One important objective of modern cosmology is to develop statistical methods to combine this information in the most efficient and consistent manner, in order to reconstruct \(\delta({\bf x},t)\) and constrain models of its origin and evolution.

In the past decades, independent experiments have made extraordinary advances in charting these individual tracers. For example, on the cosmic microwave background (CMB) front, several generations of anisotropy and polarization measurements have led to recent results: the Planck Collaboration has mapped the temperature and polarization anisotropy of the early universe and used its lensing statistics to study the integrated gravitational potential along the line of sight [1; 2]. The Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT) have made similar achievements with smaller footprints but higher resolutions [3; 4; 5; 6]. Stage-III wide-field photometric surveys such as the Dark Energy Survey (DES), the Kilo-Degree Survey (KiDS), and the Hyper Suprime-Cam (HSC) have observed millions of galaxies on a significant fraction of the sky and used galaxy positions and shape statistics to probe the low-redshift matter distributions [7; 8; 9].

The recipe for analyzing most of this data involves first converting the data into 2-dimensional maps (e.g. for CMB surveys) and catalogs (e.g. for galaxy surveys); computing the correlation functions (or the power spectra) of these fields; and then comparing these observed correlation statistics to a cosmological model in a Bayesian likelihood analysis to yield cosmological parameter constraints. In almost all of these cases, the fiducial cosmological model, \(\Lambda\)CDM, fits the data well. In addition to these results from single probes, there has been an increased effort to maximize information by combining probes.
An example of this is the recent DES result combining its data of galaxy positions and galaxy shapes with the projected gravitational potential measured by SPT and Planck [10]. In this example, roughly the same recipe is followed: DES made maps of the galaxy density in five tomographic bins and the shear in four bins; these were combined with maps of the projected gravitational potential from SPT and Planck. Given these three sets of maps, there are six sets of 2-point functions (galaxy clustering; galaxy-galaxy lensing; cosmic shear; cosmic shear \(\times\) CMB lensing; galaxy density \(\times\) CMB lensing; and the CMB lensing auto-correlation function). This set of six 2-point functions forms the data vector, which is then used to constrain parameters. The main goal of this effort is to extract from all this low-redshift (much lower than the decoupling of the CMB) data a measurement of the amount of clustering at late times. This is often quantified with \(S_{8}\), which the DES+SPT analysis determined to be \(S_{8}=0.792\pm 0.012\), lower than the Planck measurement, \(S_{8}=0.832\pm 0.013\). The discrepancy does not meet strict statistical standards, but it has spawned much interest, and it is reminiscent of the Hubble tension that is driven by different measurements of the zeroth-order expansion rate of the universe.

Taking stock, the fiducial cosmological model fits most of the data, but there are alluring hints that it is flawed, and one of the most intriguing ways of stress-testing the model is to measure how the clustering of matter evolves over the course of time. To date, this has been done predominantly by: (i) map-making, (ii) compression to 2-point functions, and (iii) parameter constraints. Research into field-level analysis offers an opportunity to change the way that we extract data from surveys, in the process offering an alluring opportunity for a powerful suite of tests of \(\Lambda\)CDM.

The basic idea of field-level analysis is to combine all three steps above into one. Early examples of this idea [11; 12; 13; 14; 15; 16] focused on the CMB. In that example, the time-ordered data can be converted into a map at the same time that the power spectrum is determined. The parameters to fit for the data, therefore, are the values of the temperature in all the pixels in the map plus a handful of cosmological parameters that determine the power spectrum. Eriksen _et al._[14] extended the idea to allow for multiple maps to be constructed: e.g., maps of foregrounds in addition to the CMB. This basic technology has been incorporated into the most recent results from Planck [17]. Groups are now applying the technology to galaxy surveys [18; 19; 20; 21; 22; 23]. One way to understand the advantage of the field-level approach is to return to the DES+SPT analysis: first CMB lensing maps were made using the traditional quadratic estimator [24] and then they were used to construct 2-point functions. However, the data in DES itself could in principle help improve the fidelity of the CMB lensing maps: after all, the deflection of the CMB photons is due (at least in part) to the very structure that DES measures. Combining this information would clearly create a better CMB lensing map. Using that improved map with DES maps, though, would be a form of double counting, so it makes sense to do everything at once: create all the maps and estimate all the power spectra simultaneously.
In the particular example of CMB lensing, the problem is not trivial, but Millea _et al._[25; 26; 27] have made significant progress simultaneously measuring the lensing field, the primordial CMB, and several parameters that determine the relevant power spectra. Here we use simulated data sets on large scales to (i) develop the machinery that can handle real data; (ii) explore some of the basics of field-level analyses; and (iii) provide an example of how field-level analyses can be used to stress-test \(\Lambda\)CDM. Our example is related to the work in Eriksen _et al._[14], except that we attempt to separate the late-time Integrated Sachs-Wolfe (ISW) signal from the primordial CMB anisotropies. Hang _et al._[28] constrained the ISW and lensing amplitudes using the 2-point correlation between the DESI Legacy Survey and Planck temperature and lensing maps, whereas we are interested in the full posterior distribution of the parameters, the 2-point functions, and the maps.

We begin in §II by explaining some of the details; then in §III, we analyze simulated CMB data assuming that it consists only of noise and CMB anisotropies. We recover some of the known problems of the maximum posterior solution (the Wiener filter) and show that these can be mitigated by instead using samples of the full posterior. Then, in §IV, we introduce the ISW component and try to separate that from the primordial anisotropies. The degeneracies make this problematic at the map level, but the sampler produces an unbiased power spectrum for each. This is crucial, as the cosmological parameters themselves are embedded in the spectrum, so if the spectrum is unbiased, then the parameters will be as well. Specifically, we introduce two free amplitudes, one for each component, that multiply the fiducial spectra and show that the field-level analysis that simultaneously solves for the map values and the parameters produces unbiased estimates of the parameters. The ensuing constraints on the amplitude of the ISW spectrum are not very restrictive, so in §V, we explore the possibility of adding in other tracers, the galaxy density in several tomographic bins. This adds to the number of free parameters in the field-level analysis, but we show that it produces a higher fidelity ISW map and a fairly tight constraint on the amplitude of the ISW spectrum. This leads to the prospect of stress testing \(\Lambda\)CDM by introducing amplitudes in front of all spectra (CMB lensing; galaxy density; cosmic shear) in addition to the standard cosmological parameters: a measurement in which any one of these amplitudes is determined to deviate from unity will disprove \(\Lambda\)CDM by demonstrating that structure does not grow in time as predicted by the model.

In short, our goal in this paper is to explain (to some, much of this will not be new) what to expect when carrying out a field-level analysis; demonstrate how well it does on simulated data with increasing numbers of components and probes; and point the way to a simple but powerful way to stress test the fiducial cosmological model. We share our conclusions and thoughts about the next steps in §VI.

## II Theoretical Framework

### Field-level Multi-probe Analysis

The general problem of field-level multi-probe inference is summarized in Fig. 1. The data is an observation, or a set of observations, on the sky. For concreteness, we will focus on the synergy between CMB experiments and photometric galaxy surveys, but the argument generalizes to any combination of probes.
The data is assumed to consist of a set of signals and noise:

\[d=\sum_{\alpha}s^{\alpha}+n. \tag{1}\]

Our model assumes that the signals \(s^{\alpha}\) in the data are drawn from a Gaussian distribution with mean zero and a covariance matrix \(\mathbb{C}^{\alpha\beta}\) that can be computed given a set of parameters (cosmological and nuisance). The noise is also drawn from a Gaussian distribution with mean zero and known covariance matrix \(\mathbb{C}^{n}\). The likelihood for obtaining the data given the cosmological parameters and the signals is

\[-2\ln\mathcal{L}=\left[d-\sum_{\alpha}s^{\alpha}\right]\,[\mathbb{C}^{n}]^{-1}\,\left[d-\sum_{\beta}s^{\beta}\right]+\ldots \tag{2}\]

where the additional terms are irrelevant, and the products on the right involve all pixels. That is, in the case of a single survey with \(N_{\rm pix}\) pixels, \(d\) is a set of the values in all the pixels, and \(\mathbb{C}^{n}\) is an \(N_{\rm pix}\times N_{\rm pix}\) matrix. If only one signal is assumed to contribute, then \(s\) also has \(N_{\rm pix}\) values; if more signals are assumed, then the total number of parameters in all the \(s^{\alpha}\) will be \(N_{\rm pix}\times N_{\rm signal}\). When data from multiple surveys are used, \(d\) will be a concatenated version of all the individual data sets, and not every signal will contribute to each data set. Using Bayes' theorem, we can invoke the prior on all the signals. Since we are confining our analysis to large scales throughout, the prior on all signals is Gaussian with mean zero and full covariance matrix \(\mathbb{C}\), so the posterior is

\[-2\ln p=-2\ln\mathcal{L}+\sum_{\alpha\beta}s^{\alpha}\left(\mathbb{C}^{-1}\right)^{\alpha\beta}s^{\beta}+\ln\det\mathbb{C} \tag{3}\]

where again each of the terms in the sum on the right is implicitly over all pixels and irrelevant terms have been dropped. The parameters in this posterior are those that go into determining \(\mathbb{C}\), typically cosmological and nuisance, and the full set of values that constitute the map(s) of \(s^{\alpha}\). For example, in the case of a single survey, if there is one signal contributing and there are 5 cosmological parameters, then the number of parameters we use to fit the \(N_{\rm pix}\) data points is \(N_{\rm pix}+5\). There are often cases where two or more signals contribute. For example, below we model the CMB as consisting of the signal from the last scattering surface plus the contribution from the late-time ISW effect. In that case, there will be \(2N_{\rm pix}+5\) free parameters. As described in §II.3, we will draw samples from this posterior. The accumulated samples of both the maps and the cosmological and nuisance parameters are fully consistent in the Bayesian sense. More precisely, the distribution of the values of the map pixels \(s^{\alpha}\) will provide a set of posterior samples of the signals, and the distribution of the parameters will constrain the relevant models of interest. These distributions will be consistent with one another, so that, for example, in a sample with a large \(\mathbb{C}^{\alpha\alpha}\), the signal \(s^{\alpha}\) everywhere is likely to have a larger dispersion.
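To make Eq. (3) concrete, a minimal JAX sketch (our own illustration, not the authors' code) for a single signal with an \(l\)-diagonal covariance and one free amplitude \(A\) could read as follows; the arrays hold one real-valued entry per harmonic mode, with the fiducial signal and noise spectra pre-expanded over \(m\).

```python
import jax.numpy as jnp
from jax import grad

# Log-posterior of Eq. (3) for one signal with covariance diagonal in l
# and a free amplitude A modulating the fiducial spectrum cl_fid.
def log_posterior(s, A, d, cl_fid, nl):
    cl = A**2 * cl_fid
    log_like = -0.5 * jnp.sum((d - s) ** 2 / nl)          # Eq. (2)
    log_prior = -0.5 * jnp.sum(s**2 / cl + jnp.log(cl))   # Gaussian prior + log-det
    return log_like + log_prior

# Automatic differentiation supplies the gradients that HMC will need.
dlogp_ds = grad(log_posterior, argnums=0)
dlogp_dA = grad(log_posterior, argnums=1)
```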
### Pixels

Above we glossed over the details of the map. Here, we review the basics of pixels in terms of the coefficients of spherical harmonics and explain why we choose to work with this basis. Consider a map on the curved sky \(s(\mathbf{n})\), where \(\mathbf{n}\) is a 3-dimensional unit vector. Analogous to Fourier transformations in Euclidean spaces, we can study this field in frequency (or harmonic) space via the forward and inverse spherical harmonics transforms (SHT),

\[s(\mathbf{n})=\sum_{lm}s_{lm}Y_{lm}(\mathbf{n}) \tag{4}\]

\[s_{lm}=\int\frac{d\Omega}{4\pi}s(\mathbf{n})Y_{lm}^{*}(\mathbf{n}) \tag{5}\]

where the \(Y_{lm}(\mathbf{n})\)'s are the set of orthonormal spherical harmonics.

Figure 1: The flow chart of a general field-level multi-probe analysis that accumulates samples of both the cosmological parameters and the tracer maps (values of each signal in each pixel), as discussed in §II.1 and §II.3. We start from the prior distribution of the cosmological and latent map parameters. We then use the realized cosmological parameters to construct the covariance of the tracers, which in turn transforms the latent map parameters into physical tracer maps. The covariance, tracer maps, and observed data are then combined into the likelihood and – after multiplying by the priors – the posterior functions. If the above calculations are all programmatically differentiable, we can calculate the derivatives of the posterior function easily, and use the HMC No-U-Turn Sampler (HMC-NUTS) to efficiently sample from the very high dimensional posterior space. In this diagram, diamonds denote sampled parameters, squares denote model-relevant functions, and the hexagon is the (fixed) observed data vector. The pink diamonds represent samples of the posterior space.

We adopt the HEALPix pixelization strategy (where the angular resolution is specified by a single parameter NSIDE), and use the discretized SHT as implemented by the healpy library. As usual in cosmological analyses, we drop the monopole and dipole modes (\(l=0,1\)). In general, if the field \(s\) is statistically homogeneous and isotropic, it is more advantageous to study \(s\)'s correlation structure in harmonic space. In real space, the correlation function between two line-of-sight directions is given by

\[w_{\mathbf{n},\mathbf{n}^{\prime}}=w(|\mathbf{n}-\mathbf{n}^{\prime}|)=\langle s(\mathbf{n})s(\mathbf{n}^{\prime})\rangle \tag{6}\]

where we see that the correlation function has dense off-diagonals. For a discretized map with NSIDE resolution, the size of \(w\) scales as \(\texttt{NSIDE}^{4}\), which quickly becomes impossible to handle (for example, an NSIDE = 256 map has an angular resolution of \(27^{\prime\prime}\) and \(8\times 10^{5}\) pixels; the full pixel-pixel covariance matrix totals 5 terabytes). However, \(s\)'s power spectrum (\(s\)'s correlation function in harmonic space), \(\mathbb{C}\), defined by

\[\langle s_{lm}^{\alpha}\,s_{l^{\prime}m^{\prime}}^{\beta\,*}\rangle=\delta_{ll^{\prime}}\delta_{mm^{\prime}}\mathbb{C}_{l}^{\alpha\beta} \tag{7}\]

is diagonal in this basis and depends only on the multipole moment \(l\) and the different sets of signals assumed. Therefore, the amount of memory needed to manipulate \(\mathbb{C}\) is linear in NSIDE. One important caveat to this simplicity is that the field must be homogeneous and isotropic, and these assumptions fail in the presence of instrumental noise patterns, partial sky coverage, and masking.
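The round trip of Eqs. (4)-(5) and the diagonal spectrum of Eq. (7) take only a few lines of healpy; the toy spectrum below is made up for illustration.

```python
import numpy as np
import healpy as hp

nside = 32
lmax = 3 * nside - 1

cl_true = np.zeros(lmax + 1)
cl_true[2:] = 1.0 / np.arange(2, lmax + 1) ** 2   # toy spectrum; l = 0, 1 dropped

s_map = hp.synfast(cl_true, nside)        # Gaussian realization with C_l
s_lm = hp.map2alm(s_map, lmax=lmax)       # forward SHT, Eq. (5)
s_back = hp.alm2map(s_lm, nside)          # inverse SHT, Eq. (4)
cl_hat = hp.anafast(s_map, lmax=lmax)     # empirical estimate of the diagonal C_l
```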
### Methodology

Here we present the details of our implementation of Fig. 1. The fundamental idea behind all MC sampling techniques is: start from the current sample; find the next point in the parameter space and generate a probabilistic proposal to make it a sample (both operations may involve repeated evaluations of the posterior density). How to find the next point and what proposal to make are algorithm-specific; however, they in general satisfy the principle of detailed balance such that, in the limit of large sample size, the samples approximate the posterior distribution. The efficiency of MC sampling rests on the suitability of the MC algorithm for the specific inference context and the effective computation of the posterior distribution.

For the first point, since we are inferring both the map pixels and the cosmological parameters, the dimensionality of the posterior space will be quite large. For example, for our final analysis in §V, which includes 8 tracer maps at NSIDE = 32, the total dimensionality of the posterior space is 73704. This is too large for traditional Monte Carlo techniques (such as Metropolis-Hastings) to operate efficiently. Intuitively, this is because as dimensionality increases, the ratio between the neighboring volume pointing towards and away from a particular point in the parameter space (e.g. the mode of the distribution) decays exponentially. Thus, the Random Walk Metropolis algorithm becomes overwhelmingly likely to propose samples outside the typical set, where the target density and hence the acceptance probability vanishes [29]. Hamiltonian Monte Carlo (HMC) solves this efficiency problem in high dimensional spaces [29; 30; 31]. In the HMC framework, we augment the parameter space with a conjugate momentum space and use the gradient of the log posterior surface to guide us to sample only near the bulk of the probabilistic mass [29; 30]. In order to avoid traditional HMC's sensitivity to hyper-parameters such as the integration steps, we further employ the No-U-Turn Sampler (NUTS) variation of HMC, first proposed by Hoffman and Gelman [31].

This leads us to the second point on computational efficiency. HMC samplers require repeated evaluations of the posterior function and its gradient. Since we want to develop a multi-probe field-level framework that easily extends to different observables and cosmological models, we do not want to hard-code the derivatives in advance. Instead, we choose to make the framework programmatically differentiable through the JAX auto-differentiation library in python, which interfaces smoothly with the numpy implementation of NUTS [32; 33].

Turning to the specific problem of computing the posterior function, we follow Fig. 1 and break the calculation into several parts:

* We start with a proposal for the cosmological parameters and the latent map parameters (top row of Fig. 1), and record their prior probability. The former have uniform priors while the latter are simply random variables, \(r\), where each follows a normal prior distribution with mean zero and unit variance.
* Calculate \(\mathbb{C}\) given the cosmological parameters.
* Transform the latent map parameters using the Cholesky decomposition (\(\mathbb{C}=LL^{t}\)) of the covariance matrix: \(s=Lr\). (The prior of the maps becomes \(s^{2}/(2\mathbb{C})=r^{2}/(2\mathbb{I})\).)
* Combine the maps \(s\) with the data to calculate the likelihood (forward modeling).
* Use the likelihood and the prior to calculate the posterior and its derivative with respect to the parameters.
* If the NUTS criterion is satisfied, accept this as a valid sample.
* Use the leapfrog method to generate another sample.

In practice, through the JAX auto-differentiation library, this framework provides information on both the posterior and its gradient; a schematic version of the loop is sketched below.
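The following numpyro sketch (our own toy setup, not the authors' pipeline) mirrors these steps for the single-signal, free-amplitude case: the latent \(r\) plays the role of the unit-normal map parameters, and the rescaling by \(\sqrt{A^{2}C_{l}}\) is the diagonal analogue of the Cholesky step \(s=Lr\).

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(d, cl_fid, nl):
    # Free amplitude with the flat prior used later in the text.
    A = numpyro.sample("A", dist.Uniform(0.2, 10.0))
    # Latent map parameters r ~ N(0, 1), one per harmonic mode.
    r = numpyro.sample("r", dist.Normal(0.0, 1.0).expand(d.shape).to_event(1))
    s = jnp.sqrt(A**2 * cl_fid) * r          # diagonal analogue of s = L r
    # Gaussian likelihood of Eq. (2) with diagonal noise covariance.
    numpyro.sample("obs", dist.Normal(s, jnp.sqrt(nl)).to_event(1), obs=d)

# Toy data: 500 modes with unit signal spectrum and comparable noise.
cl_fid = jnp.ones(500)
nl = 0.5 * jnp.ones(500)
d = random.normal(random.PRNGKey(0), (500,)) * jnp.sqrt(cl_fid + nl)

mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(1), d=d, cl_fid=cl_fid, nl=nl)
```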
The treatment of the covariance function deserves more discussion. In this paper, we will keep the shape of the spectra fixed and allow for free amplitudes \((A^{\alpha})^{2}\). We assume the amplitudes have fiducial values equal to one and have a uniform prior distribution. We then construct the full covariance matrix \(\mathbb{C}^{\alpha\beta}\), which consists of the auto- and cross-spectra of each signal. In principle, since \(\mathbb{C}\) encodes the covariance between all the pixels for all the tracer maps, its dimensionality is very high. For a single HEALPix map at the resolution of NSIDE (or a limiting resolution of \(l_{\text{max}}=3\texttt{NSIDE}-1\)), there are

\[\sum_{l=2}^{l_{\text{max}}}(2l+1)=l_{\text{max}}^{2}+2l_{\text{max}}-3 \tag{8}\]

degrees of freedom, ignoring the monopole and dipole modes. Again, for our final analysis in §V, which includes 8 tracers, the size of \(\mathbb{C}\) is on the order of \(8^{2}\times l_{\text{max}}^{4}\). The efficient computation of this covariance matrix is one of the limiting factors in the feasibility of field-level analysis. However, in the limit of full sky and when all the fields are homogeneous and isotropic, the sub-block of \(\mathbb{C}\) for each tracer is diagonal in the \(a_{lm}\) basis. Thus, we can bring \(\mathbb{C}\) into block diagonal form, with \(l_{\text{max}}-2\) unique \(\mathbb{C}_{l}\) sub-blocks on the diagonal. Each \(\mathbb{C}_{l}\) sub-block has size \(8\times 8\), describing the correlation between the 8 tracers at mode \(l\). Looking ahead, we will be considering the primordial CMB signal (modulated by \(A^{\mathcal{P}}\)); the late-time ISW effect (modulated by \(A^{\mathcal{I}}\)); and the galaxy density in 6 tomographic bins (modulated by the \(b_{i}\)'s). The first of these is uncorrelated with the rest, so the ensuing \(8\times 8\) sub-block matrix will be

\[\mathbb{C}_{l}(b_{1},\ldots,b_{6},A^{\mathcal{I}},A^{\mathcal{P}})=\begin{pmatrix}b_{1}^{2}\mathbb{C}_{l}^{\mathcal{G}_{1},\mathcal{G}_{1}}&b_{1}b_{2}\mathbb{C}_{l}^{\mathcal{G}_{1},\mathcal{G}_{2}}&\ldots&b_{1}A^{\mathcal{I}}\mathbb{C}_{l}^{\mathcal{G}_{1},\mathcal{I}}&0\\ b_{1}b_{2}\mathbb{C}_{l}^{\mathcal{G}_{2},\mathcal{G}_{1}}&b_{2}^{2}\mathbb{C}_{l}^{\mathcal{G}_{2},\mathcal{G}_{2}}&\ldots&b_{2}A^{\mathcal{I}}\mathbb{C}_{l}^{\mathcal{G}_{2},\mathcal{I}}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ b_{1}b_{6}\mathbb{C}_{l}^{\mathcal{G}_{6},\mathcal{G}_{1}}&b_{2}b_{6}\mathbb{C}_{l}^{\mathcal{G}_{6},\mathcal{G}_{2}}&\ldots&b_{6}A^{\mathcal{I}}\mathbb{C}_{l}^{\mathcal{G}_{6},\mathcal{I}}&0\\ b_{1}A^{\mathcal{I}}\mathbb{C}_{l}^{\mathcal{I},\mathcal{G}_{1}}&b_{2}A^{\mathcal{I}}\mathbb{C}_{l}^{\mathcal{I},\mathcal{G}_{2}}&\ldots&(A^{\mathcal{I}})^{2}\mathbb{C}_{l}^{\mathcal{I},\mathcal{I}}&0\\ 0&0&\ldots&0&(A^{\mathcal{P}})^{2}\mathbb{C}_{l}^{\mathcal{P},\mathcal{P}}\end{pmatrix} \tag{9}\]

With this computationally efficient representation of the covariance matrix, we can transform the latent map variables into the proper tracer maps through either the sub-blocks' Cholesky representations or their eigen-decomposition. We experimented with both, and found the former to be an order of magnitude faster (see also Loureiro _et al._[19]). In summary, the algorithm is very fast. For the largest model we considered in §V, the analysis was done on an Apple M1 chip running overnight.
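In code, the block-diagonal transform can be a single batched Cholesky call; the sketch below (our illustration, with assumed array shapes) maps latent unit normals into correlated tracer modes one \(l\) at a time.

```python
import jax.numpy as jnp

# C_blocks: (n_l, n_tracer, n_tracer) stack of the C_l sub-blocks of Eq. (9);
# r:        (n_l, n_tracer, n_m) latent unit normals.
def latents_to_tracers(C_blocks, r):
    L = jnp.linalg.cholesky(C_blocks)           # C_l = L_l L_l^T, batched over l
    return jnp.einsum("lij,ljm->lim", L, r)     # s = L r, block by block
```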
## III Reconstruction of the Primordial CMB Map

We start with the simplest possible example, a single simulated CMB all-sky map. Although simple, this model demonstrates the key behaviors of two ways of using the posterior: identifying the free parameters by finding the point at which the posterior is maximum (hereafter, maximum a posteriori or MAP) and generating samples of the posterior (hereafter sampling). We compare the potential biases of both methods and discuss the implications for the field, 2-point, and cosmological parameter constraints. In some ways, the idea of asking whether an estimator is biased is introducing frequentist ideas into a Bayesian discussion. Nonetheless, we think that understanding these biases is an important step towards the ultimate goal of extracting the correct cosmological parameters from the data. The intuition we find here will serve us well in the subsequent more complex cases.

The primordial CMB temperature fluctuations \(s^{\mathcal{P}}\) originate from the time of recombination (\(z_{*}\approx 1100\)), when the photons decoupled from the photon-electron-proton fluid as the universe cooled below a few percent of the ionization energy of hydrogen. We assume (and all current data is consistent with this assumption, with the tightest constraints coming from the Planck Collaboration _et al._[34]) that the resulting temperature variation is a homogeneous and isotropic random Gaussian field which is fully characterized by the power spectrum \(\mathbb{C}_{l}^{\mathcal{P}}\). Throughout this paper, we assume a fiducial cosmology of \(H_{0}=100h\,\text{km/s/Mpc}=67.5\,\text{km/s/Mpc}\), \(\Omega_{b}h^{2}=0.0219\), \(\Omega_{c}h^{2}=0.1139\), \(A_{s}=2\times 10^{-9}\), and \(n_{s}=0.965\).

### Biases of the Optimal Estimator

#### iii.1.1 Fixed cosmological parameters

The observed temperature data \(d^{\mathcal{T}}(\mathbf{n})\) is the superposition of the primordial field \(s^{\mathcal{P}}(\mathbf{n})\) and noise \(n^{\mathcal{T}}(\mathbf{n})\),

\[d^{\mathcal{T}}(\mathbf{n})=s^{\mathcal{P}}(\mathbf{n})+n^{\mathcal{T}}(\mathbf{n}) \tag{10}\]

We call this the CMB model. We ask: Given \(d^{\mathcal{T}}\) and perfect knowledge of \(s^{\mathcal{P}}\)'s and \(n^{\mathcal{T}}\)'s theoretical power spectra (\(\mathbb{C}^{\mathcal{P}}\) and \(\mathbb{C}^{n,\mathcal{T}}\) respectively), how well can we reconstruct the primordial field? Additionally, how accurate is the power spectrum of the reconstructed field? In the Bayesian framework, the posterior probability in Eq. 3 reduces to

\[-2\ln p(s^{\mathcal{P}}|d^{\mathcal{T}})\propto\sum_{lm}\left(\frac{|d_{lm}^{\mathcal{T}}-s_{lm}^{\mathcal{P}}|^{2}}{\mathbb{C}_{l}^{n,\mathcal{T}}}+\frac{|s_{lm}^{\mathcal{P}}|^{2}}{\mathbb{C}_{l}^{\mathcal{P}}}\right) \tag{11}\]

where we drop the determinant terms since the CMB and noise spectra are assumed known and fixed. The MAP solution for \(s^{\mathcal{P}}\) is then given by the Wiener filter

\[\hat{s}^{\mathcal{P},\mathbf{MAP}}_{lm}=\frac{\mathbb{C}_{l}^{\mathcal{P}}}{\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}}}d_{lm}^{\mathcal{T}}. \tag{12}\]

The mean power spectrum of the MAP estimator

\[\langle\hat{\mathbb{C}}_{l}^{\mathcal{P},\mathbf{MAP}}\rangle\equiv\langle\frac{1}{2l+1}\,\sum_{m}|\hat{s}_{lm}^{\mathcal{P},\mathbf{MAP}}|^{2}\rangle=\frac{\mathbb{C}_{l}^{\mathcal{P}}}{\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}}}\mathbb{C}_{l}^{\mathcal{P}} \tag{13}\]

is known to be biased [35; 36], an effect more prominent in the low signal-to-noise ratio (SNR) regime.
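The suppression in Eq. (13) can be reproduced in a few lines of healpy; the toy sketch below (made-up spectrum and noise level, not the analysis code of this paper) filters a noisy simulated map with Eq. (12) and returns its biased spectrum.

```python
import numpy as np
import healpy as hp

nside = 64
lmax = 3 * nside - 1
ell = np.arange(lmax + 1)

cl = np.where(ell >= 2, 1e3 / (ell + 1.0) ** 2, 0.0)  # toy "CMB" spectrum
sigma2 = 1000.0                                       # white pixel-noise variance
nl = sigma2 * hp.nside2pixarea(nside) * np.ones(lmax + 1)

noise = np.sqrt(sigma2) * np.random.randn(hp.nside2npix(nside))
d_lm = hp.map2alm(hp.synfast(cl, nside) + noise, lmax=lmax)

s_lm_map = hp.almxfl(d_lm, cl / (cl + nl))  # Wiener filter, Eq. (12)
cl_map = hp.alm2cl(s_lm_map)                # suppressed by C_l/(C_l + N_l), Eq. (13)
```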
To implement this numerically, we simulate \(s^{\mathcal{P}}\) from the fiducial cosmology power spectrum on a HEALPix grid of NSIDE = 64 and then inject isotropic white noise \(n^{\mathcal{T}}\) with a relatively high variance of \(\text{Var}(n^{\mathcal{T}})=1000\,\mu K^{2}\). Even though this exceeds the noise in Planck by several orders of magnitude, we use this value to demonstrate the difficulties of extracting the signal in the presence of appreciable noise. The power spectra of the truth map (black) and the recovered MAP map (green) are shown in Fig. 2. As the amplitude of the noise spectrum (purple) rises on small scales, the MAP spectrum is increasingly suppressed. On the field level, this means that \(\hat{s}_{lm}^{\mathcal{P},\mathbf{MAP}}\) is damped for over- and under-densities on scales that have small SNR. The estimator does not have an additive bias but does have a multiplicative bias, i.e. \(\langle\hat{s}_{lm}^{\mathcal{P},\mathbf{MAP}}/s_{lm}^{\mathcal{P}}\rangle=\mathbb{C}_{l}^{\mathcal{P}}/(\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}})\) for both the real and the imaginary components. This is shown in the top panel of Fig. 3.

#### iii.1.2 Varying cosmological parameters

In real cosmological analyses, we are also interested in cosmological parameters (such as the primordial amplitude, spectral index, etc.) that modify the shape and amplitude of the power spectrum. We want to know how the field, 2-point, and parameter MAP estimators behave when the spectrum is allowed to change. For example, consider modulating the fiducial power spectrum \(\mathbb{C}^{\mathcal{P}}\) with a scale-invariant amplitude \(A^{2}\), where \(A\) has a flat prior. The new MAP solutions are given by

\[\hat{s}^{\mathcal{P},\mathbf{MAP}}_{lm}=\frac{(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}}{(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}}}d_{lm}^{\mathcal{T}} \tag{14}\]

\[\hat{\mathbb{C}}_{l}^{\mathcal{P},\mathbf{MAP}}=\frac{(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}}{(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}}}(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}} \tag{15}\]

and \(\hat{A}^{\mathbf{MAP}}\) satisfies

\[\sum_{l}(2l+1)\left\{1-\frac{(\hat{A}^{\mathbf{MAP}})^{2}\mathbb{D}_{l}\mathbb{C}_{l}^{\mathcal{P}}}{((\hat{A}^{\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}})^{2}}\right\}=0 \tag{16}\]

where \(\mathbb{D}_{l}\) is the data power spectrum. When noise is present, \(\hat{A}^{\mathbf{MAP}}<1\), in this case equal to \(0.81\) (see Fig. 4). Thus, by comparing Eqns. 13 and 15, we see that the new MAP is biased even lower than the truth. We can apply this model to the same reconstruction experiment as before. The result for the new power spectrum estimator is shown in red in Fig. 2. The multiplicative bias in the field-level estimator is shown in the bottom panel of Fig. 3. For power spectra with complicated parameter dependence, we often lack analytical optimal solutions. However, qualitatively speaking, if an increase in the parameter increases the amplitude of the spectrum, as in this case, then the parameter will be underestimated by optimal inference, and vice versa.
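In such cases, the stationarity condition can still be solved numerically; e.g., for Eq. (16) a one-dimensional root find suffices. The sketch below is our illustration, and it assumes the condition changes sign inside the bracket (chosen here to match the flat prior used later).

```python
import numpy as np
from scipy.optimize import brentq

# Solve Eq. (16) for the MAP amplitude. Dl, cl, nl are arrays over l >= 2
# (data, fiducial signal, and noise spectra); ell holds the multipoles.
def map_amplitude(ell, Dl, cl, nl):
    def stationarity(A):
        return np.sum((2 * ell + 1)
                      * (1.0 - A**2 * Dl * cl / (A**2 * cl + nl) ** 2))
    return brentq(stationarity, 0.2, 10.0)
```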
### Sampling the CMB field

#### iii.2.1 Fixed cosmological parameters

Now we seek an unbiased estimator for \(\mathbb{C}^{\mathcal{P}}\) that also has a convenient notion of uncertainty. Let us again first fix \(A=1\) and draw a sample of maps \(\{s_{lm,i}^{\mathcal{P}}\}_{i=1,\ldots,N}\) directly from the posterior distribution (Eqn. 11). Using this set of maps, we can construct an associated set of power spectrum samples

\[\{\mathbb{C}_{l,i}^{\mathcal{P}}\}=\{\sum_{m}\frac{1}{2l+1}|s_{lm,i}^{\mathcal{P}}|^{2}\} \tag{17}\]

Let us denote the ensemble averages of \(\{s_{lm,i}^{\mathcal{P}}\}\) and \(\{\mathbb{C}_{l,i}^{\mathcal{P}}\}\) by \(\overline{s^{\mathcal{P}}}\) and \(\overline{\mathbb{C}^{\mathcal{P}}}\), respectively. It is crucially important that the power spectrum of \(\overline{s^{\mathcal{P}}}\) is _different_ from \(\overline{\mathbb{C}^{\mathcal{P}}}\). We claim that, in the limit of sufficient sample size \(N\),

1. \(\overline{s^{\mathcal{P}}}\) and its spectrum are exactly the field and 2-point MAP estimators (Eqns. 12 and 13), and they have the Wiener filter multiplicative bias.
2. On the map (2-point) level, the samples \(\{s_{lm,i}^{\mathcal{P}}\}\) (\(\{\mathbb{C}_{l,i}^{\mathcal{P}}\}\)) give a proper Bayesian credible interval centered around the truth.
3. Further, \(\overline{\mathbb{C}^{\mathcal{P}}}\) (and more generally, the mean of any n-point power spectrum samples) is an _unbiased_ estimator in the frequentist sense (when we have multiple data realizations).

We prove these claims in Appendix B. However, intuitively, how can the power spectrum of the mean map be biased while the sampled power spectrum is unbiased? One way to understand this is to think of each sampled map as \(s^{\mathcal{P}}=\hat{s}^{\mathcal{P},\mathbf{MAP}}+s^{\prime}\). When we compute the power spectrum \(\langle|s^{\mathcal{P}}|^{2}\rangle\), the \(\langle|s^{\prime}|^{2}\rangle\) term exactly compensates for the deficiency of the MAP spectrum. Alternatively, the \(s^{\mathcal{P}}\)'s are normally distributed, and for any Gaussian distribution the mean is equal to the maximum, so \(\overline{s^{\mathcal{P}}}\) is the MAP solution. However, the power spectrum is not normally distributed; its expected value is an unbiased estimator of the truth, and not the biased MAP solution.

We continue with the numerical experiment above. This time, we construct an HMC NUTS sampler following the prescription of §II.3, using Eqn. 11 as our posterior distribution. After the chain equilibrates, we draw 3000 \(s^{\mathcal{P}}\)'s from the posterior space. We confirm that the power spectrum of the mean map exactly follows the MAP solution for the case of fixed parameters (the green curve in Fig. 2). We further show the distribution of the sampled power spectra \(\{\mathbb{C}_{l,i}^{\mathcal{P}}\}\) in Fig. 2 in shaded orange. Indeed, the distribution of the spectra covers the truth power spectrum within uncertainty, and \(\overline{\mathbb{C}^{\mathcal{P}}}\) is unbiased. We note that similar phenomena have been observed in previous studies. For example, in Fig. 8 of [26], the authors find that the distribution of the sampled CMB spectra scatter around the truth while the spectrum of the mean map is biased lower at small scales.

The above observations have the following implications. One must de-bias the sampled maps before using them for cosmological analysis, similar to how we currently correct MAP maps (e.g. with analytical or Monte Carlo-based corrections). However, if we are only performing analysis on the 2-point level, the samples are _unbiased_ and their distribution constitutes a convenient measure of uncertainty.
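The compensation argument can be verified in a single-mode toy version of the problem (our sketch; the values of \(C\) and \(N\) are arbitrary): the spectrum of the posterior mean comes out as \(C^{2}/(C+N)\), while the mean of the sampled spectra recovers \(C\).

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, nmode, nsamp = 1.0, 2.0, 20000, 100   # signal power, noise power

s_true = rng.normal(0.0, np.sqrt(C), nmode)
d = s_true + rng.normal(0.0, np.sqrt(N), nmode)

var_post = 1.0 / (1.0 / C + 1.0 / N)        # posterior variance of s | d
mean_post = C / (C + N) * d                 # Wiener filter, Eq. (12)
samples = mean_post + rng.normal(0.0, np.sqrt(var_post), (nsamp, nmode))

print(np.mean(mean_post**2))   # ~ C^2/(C + N) = 1/3: spectrum of the mean, biased
print(np.mean(samples**2))     # ~ C = 1: mean of the sampled spectra, unbiased
```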
In short, by considering the sampled power spectra, we recapture an unbiased estimator of \(\mathbb{C}^{\mathcal{P}}\).

#### iii.2.2 Varying cosmological parameters

The exact 2-point statistics recovery motivates us to ask whether the sampled cosmological parameters that modify the power spectrum are also unbiased. To answer this question, we use the HMC NUTS sampler from the previous section with an additional \(A^{2}\) modulating the fiducial spectrum. We assume \(A\) has a flat and wide prior on \([0.2,10]\) and collect 3000 samples after appropriate burn-in. We find that \(A\) is unbiased, with its marginal distribution shown in Fig. 4. We can also estimate the variance of its distribution, which is predicted by the inverse of the Fisher information

\[\mathcal{F}=\sum_{l}\frac{2(2l+1)}{\left(1+\mathbb{C}_{l}^{n,\mathcal{T}}/\mathbb{C}_{l}^{\mathcal{P}}\right)^{2}}, \tag{18}\]

shown as shaded orange in the same figure. Since the expectation of \(A\) is unbiased, it follows that the posterior samples \(\{\mathbb{C}_{l,i}^{\mathcal{P}}\}\) again scatter around the truth unbiased. In fact, their distribution overlaps that of the fixed amplitude model almost exactly, as shown by the shaded orange region in Fig. 2.

Figure 2: A numerical experiment of CMB reconstruction that demonstrates the difference between the MAP estimators and the sampling-based estimators. We generated the truth map from a fiducial \(\mathbb{C}^{\mathcal{P}}\) and added isotropic noise. The truth and noise spectra are shown in black and purple. We test two reconstruction models: the first has a fixed \(\mathbb{C}^{\mathcal{P}}\) (Eqn. 11), and the second has a free amplitude \(A^{2}\) modulating the fiducial \(\mathbb{C}^{\mathcal{P}}\). Their MAP solutions are given by Eqns. 13 and 15, and are shown in green and red respectively. We confirm the analytical solutions match the results from direct numerical optimization. Both 2-point MAP estimators are biased lower than the truth, and the free amplitude model has a greater bias. We then sampled \(s^{\mathcal{P}}\) directly from the posterior space. The spectra of the mean maps for the fixed and free amplitude cases are biased (both overlap the green curve). However, in both cases, the distributions (the orange shaded region represents the 1\(\sigma\) credible interval) of the spectra scatter around the truth and are unbiased.

Further, the power spectrum of the mean map is also no longer the MAP; it is the MAP solution of the model with fixed cosmological parameters (Eqn. 13), as if \(A\) were fixed to 1. The important takeaway is the following. The MAP map and the mean sampled map are biased both on the field level and on the 2-point level. However, the distribution of the sampled power spectra (and cosmological parameters that modulate them) is unbiased.

## IV Joint reconstruction of the primordial CMB and the ISW effect

Now, we expand on the CMB model and consider extracting the primordial and the ISW contributions from a single noisy temperature measurement. As we shall see, the MAP estimators for both signals are again biased on the field and the 2-point level. The sampled fields have a multiplicative bias, but their 2-point statistics are unbiased. The new challenge in this case study is the field-level degeneracy between the primordial and the ISW maps, which motivates the multi-probe approach presented in the next section [14]. We discuss the key properties of this degeneracy, which we expect to be quite general when one separates a low SNR map from a measurement based on a likelihood approach.
Figure 4: The posterior distribution of the amplitude \(A\) (blue curve) compared to the truth (black dashed line). The shaded orange area represents the Fisher forecast of the 1\(\sigma\) uncertainty centered on the truth, and the shaded blue area represents the 1\(\sigma\) uncertainty reported by the sampler. The red line shows the MAP value for \(A\), which is quite far from the truth.

Figure 3: The scatter plot of the ratio between the real components of the MAP \(a_{lm}\) and those of the truth \(a_{lm}\) as a function of \(l\). The cases for fixed and varying amplitude parameters are shown in the upper and lower panels respectively. In each case, the mean of the ratios (black curve) matches the Wiener filter expectations (with the MAP amplitude in the case of varying amplitude). The color map represents point cloud density.

### The ISW Effect

In the late universe, the primordial CMB fluctuations are modified by the ISW effect on very large scales [37; 38; 39; 40; 41]. A photon is blue-shifted when descending into a gravitational potential well and red-shifted when it escapes. When the universe began its accelerated expansion (in the dark energy-dominated era), the large-scale potential wells decayed. As a result, a photon will leave a decaying well (barrier) with more (less) energy than it entered. The observed ISW temperature modification is thus the integrated effect of the decaying potential along the line of sight, and its 2-dimensional field is given by [42; 43]

\[s^{\mathcal{I}}(\mathbf{n})=\int_{0}^{\infty}\frac{\partial\Phi(\mathbf{x}[\chi,\mathbf{n}],t(\chi))}{\partial t}\frac{2e^{-\tau(\chi)}}{1+z(\chi)}d\chi \tag{19}\]

Here, \(\tau(\chi)\) is the optical depth out to distance \(\chi\) and \(\Phi\) is the 3-dimensional gravitational potential field, which ultimately depends on the matter over-densities \(\delta_{m}(\mathbf{x}[\chi,\mathbf{n}],t(\chi))\). By rewriting \(\Phi\) in terms of \(\delta_{m}\), and moving to Fourier space, we can rewrite Eqn. 19 as

\[a_{lm}^{\mathcal{I}}=4\pi i^{l}\int\frac{d^{3}k}{(2\pi)^{3}}I_{l}^{\mathcal{I}}(k)Y_{lm}^{*}(\mathbf{k})\delta_{m}(\mathbf{k},t_{0}) \tag{20}\]

where

\[I_{l}^{\mathcal{I}}(k)=\int d\chi D(\chi)W^{\mathcal{I}}(k,\chi)j_{l}(k\chi) \tag{21}\]

and the window function is given by

\[W^{\mathcal{I}}(k,\chi)=-\Theta(\chi_{*}-\chi)\frac{3\Omega_{m}H_{0}^{2}}{k^{2}}\frac{\partial\ln((1+z)D(z))}{\partial t} \tag{22}\]

where \(D(z)\) is the growth function normalized to unity at \(z=0\). The forms of Eqns. 20-22 are not peculiar to the ISW effect: by modifying the window function \(W(k,\chi)\) appropriately, the 2-dimensional observable of most tracers can be computed as a line-of-sight integral of \(\delta_{m}\). For general tracers \(A\) and \(B\) of this form, the covariance is

\[\mathbb{C}_{l}^{A,B}=\frac{2}{\pi}\int_{0}^{\infty}k^{2}dk\,P(k)I_{l}^{A}(k)I_{l}^{B}(k). \tag{23}\]

One way of quantifying the correlation between different probes is to compute the scale-dependent correlation coefficients, defined as

\[\rho^{AB}=\frac{\mathbb{C}^{AB}}{\sqrt{\mathbb{C}^{AA}\mathbb{C}^{BB}}}. \tag{24}\]

We again assume both fields are statistically homogeneous and isotropic, so the covariance is diagonal in the \(a_{lm}\) basis. For example, the primordial CMB and the ISW effect are spatially independent (\(\mathbb{C}^{\mathcal{I},\mathcal{P}}=0\)), and their auto-power spectra are shown in Fig. 5.
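Once the grids are tabulated, Eqs. (21) and (23) reduce to two quadratures; the following minimal sketch (our assumed array shapes, not the paper's implementation) makes the structure explicit.

```python
import numpy as np
from scipy.special import spherical_jn

def I_l(ell, k, chi, D, W):
    # k: (nk,) wavenumbers; chi: (nchi,) comoving distances;
    # D: (nchi,) growth factor; W: (nk, nchi) window of Eq. (22).
    jl = spherical_jn(ell, np.outer(k, chi))      # j_l(k chi) on the grid
    return np.trapz(D * W * jl, chi, axis=-1)     # Eq. (21)

def cl_AB(k, Pk, IA, IB):
    # Eq. (23) as a quadrature over k.
    return 2.0 / np.pi * np.trapz(k**2 * Pk * IA * IB, k)
```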
The ISW signal is primarily confined to the very large scales (\(l<10\)) that enter the horizon during the dark energy-dominated era. The ISW signal is also subdominant to the primordial signal on all scales, making it particularly challenging to reconstruct. We explore the case of non-diagonal covariance in § V in the context of multi-probe joint reconstruction. ### Separating the Primordial and ISW Signals The observed temperature data \(d^{\mathcal{T}}(\mathbf{n})\) is the sum of the primordial field \(s^{\mathcal{P}}(\mathbf{n})\), the ISW field \(s^{\mathcal{I}}(\mathbf{n})\), and noise \(n^{\mathcal{T}}(\mathbf{n})\), so the negative log-posterior is, up to a constant, \[-\ln p(s^{\mathcal{P}},s^{\mathcal{I}},A^{\mathcal{P}},A^{\mathcal{I}}|d^{\mathcal{T}})\propto\sum_{lm}\frac{1}{2}\left(\log\left[(A^{\mathcal{P}})^{2}\mathbb{C}_{l}^{\mathcal{P}}\right]+\log\left[(A^{\mathcal{I}})^{2}\mathbb{C}_{l}^{\mathcal{I}}\right]\right)+\sum_{lm}\left(\frac{|d_{lm}^{\mathcal{T}}-s_{lm}^{\mathcal{P}}-s_{lm}^{\mathcal{I}}|^{2}}{2\mathbb{C}_{l}^{n,\mathcal{T}}}+\frac{|s_{lm}^{\mathcal{P}}|^{2}}{2(A^{\mathcal{P}})^{2}\mathbb{C}_{l}^{\mathcal{P}}}+\frac{|s_{lm}^{\mathcal{I}}|^{2}}{2(A^{\mathcal{I}})^{2}\mathbb{C}_{l}^{\mathcal{I}}}\right) \tag{25}\] We will refer to this model as the CMB-ISW model. One crucial difference between the CMB and the CMB-ISW model is that, although both are constrained by the same amount of data \(d^{\mathcal{T}}\), the dimensionality of the latter's posterior space (\(s^{\mathcal{P}}\) and \(s^{\mathcal{I}}\)) is (ignoring the cosmological parameters) twice that of the former (only \(s^{\mathcal{P}}\)). In other words, if we have \(N\) independent modes on the full sky, we are trying to constrain \(2N\) parameters with \(N\) data points in the CMB-ISW model. Hence, we expect significant degeneracy in the inferred \(s^{\mathcal{P}}\) and \(s^{\mathcal{I}}\) maps, and we want to explore how this affects the MAP and the sampling-based field-level reconstructions. We emphasize that this problem will be quite common in any field-level analysis where we wish to separate the different physical components of a single observed field. We will again tackle this problem in two ways, first by constructing the MAP estimators and then by sampling directly from the posterior distribution. Figure 5: The power spectra of the primordial CMB and the ISW effect on large scales (low-\(\ell\) modes). #### iv.2.1 Fixed cosmological parameters Let us first fix \(A^{\mathcal{P}}=A^{\mathcal{I}}=1\) and seek the field-level MAP solutions. When we are trying to find the MAP of \(s^{\mathcal{P}}\), the effective noise is the sum of the instrumental noise and the ISW temperature fluctuation (and analogously for \(s^{\mathcal{I}}\)). Thus, invoking the Wiener filter (Eqn. 12), \[\hat{s}^{\mathcal{P},\mathbf{MAP}}_{lm}=\frac{\mathbb{C}_{l}^{\mathcal{P}}}{\mathbb{C}_{l}^{\mathcal{P}}+\mathbb{C}_{l}^{n,\mathcal{T}}+\mathbb{C}_{l}^{\mathcal{I}}}d_{lm}^{\mathcal{T}} \tag{26}\] \[\hat{s}^{\mathcal{I},\mathbf{MAP}}_{lm}=\frac{\mathbb{C}_{l}^{\mathcal{I}}}{\mathbb{C}_{l}^{\mathcal{I}}+\mathbb{C}_{l}^{n,\mathcal{T}}+\mathbb{C}_{l}^{\mathcal{P}}}d_{lm}^{\mathcal{T}} \tag{27}\] and similarly for their power spectra. As in § III, we simulate this reconstruction method numerically by generating \(s^{\mathcal{P}}\) and \(s^{\mathcal{I}}\) with the fiducial cosmology on a HEALPix grid of \(\texttt{NSIDE}=32\), with \(\text{Var}(n^{\mathcal{T}})=200\mu K^{2}\). In the top panels of Fig. 6, the truth power spectra are shown in black, the MAP power spectra in green, and the effective noise in purple.
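The character of these MAP solutions can be seen in a toy sketch of our own (toy spectra, one real mode per multipole; not the paper's simulation): both estimators of Eqns. (26)-(27) are rescalings of the same data vector, so the reconstructed ISW modes inherit the much larger primordial fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
ells = np.arange(2, 64)
C_P = 1e3 / (ells * (ells + 1.0))        # toy primordial spectrum
C_I = 0.05 * C_P * np.exp(-ells / 8.0)   # toy ISW spectrum, large scales only
C_n = 5e-3 * np.ones_like(C_P)           # toy noise spectrum

# one real mode per multipole is enough to illustrate the filters
s_P = rng.normal(0.0, np.sqrt(C_P))
s_I = rng.normal(0.0, np.sqrt(C_I))
d = s_P + s_I + rng.normal(0.0, np.sqrt(C_n))

s_P_map = C_P / (C_P + C_n + C_I) * d    # Eqn (26)
s_I_map = C_I / (C_I + C_n + C_P) * d    # Eqn (27)

# the ISW reconstruction correlates strongly with the *primordial* truth:
# this is the field-level degeneracy discussed in the text
print(np.corrcoef(s_I_map, s_P)[0, 1])
```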
The primordial MAP spectrum is biased low both on large scales (ISW contamination) and small scales (noise contamination). The ISW MAP spectrum is significantly biased low on all scales due to the same Wiener filter suppression. On the field level, we again expect (and indeed observe) no additive bias, but a multiplicative bias on \(\langle\hat{s}^{\mathbf{MAP}}_{lm}/s_{lm}\rangle\) proportional to the Wiener filter factor (for both the real and the imaginary components). The case of the ISW field is particularly egregious, as shown in the top panel of Fig. 8 (note that the \(y\)-axis does not even contain the unbiased case). Now we turn to the reconstructed real-space maps (rows 1-3 of Fig. 7), which shed more light on the degeneracy between the reconstructed \(s^{\mathcal{P}}\) and \(s^{\mathcal{I}}\). Qualitatively, the primordial MAP map captures most features of the true signal, although the small-scale structures are suppressed due to Wiener filtering. However, the MAP estimator completely fails in the ISW reconstruction. In fact, the \(\hat{s}^{\mathcal{I},\mathbf{MAP}}\) map looks like a low-pass-filtered \(s^{\mathcal{P}}\) map. The physical explanation is that, when we observe a large-scale hot spot in the sky, it is impossible to confidently associate it with either the primordial CMB or the ISW effect, since both have high amplitudes at low \(l\)'s. However, when we optimize the posterior function with respect to \(s^{\mathcal{I}}\), the algorithm ignores \(s^{\mathcal{P}}\) (Eqn. 27). Thus, the algorithm is inclined to increase the amplitude of the \(s^{\mathcal{I}}\) map wherever we observe a large-scale hot spot in \(d^{\mathcal{T}}\), even though the hot spot is most likely due to \(s^{\mathcal{P}}\), which has a greater power spectrum. As a result, the large-scale hot spots of the reconstructed \(s^{\mathcal{I}}\) are heavily correlated with \(s^{\mathcal{P}}\), even though the two fields are spatially independent in theory. An analogous bias affects the reconstructed primordial map, i.e., the reconstructed primordial map is biased high where there is an ISW hot spot (although this is slightly more difficult to discern in the figure). In short, as we attempt to reconstruct two maps from a single observation using the MAP estimator, the degeneracy introduces a significant bias on the field level that correlates the two reconstructed maps. Now, we apply the sampler, as defined in § II.3, with \(s^{\mathcal{P}},s^{\mathcal{I}}\) as the free parameters. The 2-point result is shown in the top panels of Fig. 6 in orange, where we observe that the spectra samples scatter around the truth and are unbiased, and the spectrum of the mean sampled map is equivalent to the MAP spectrum. The sampling result on the field level is shown in rows 1-3 of Fig. 7. Here we confirm that the mean sampled map is indeed the field-level MAP solution, and thus suffers from the same bias and degeneracy. Therefore, although the sampling approach solves the multiplicative bias on the 2-point level, it is placing the right amount of power in the wrong place at the map level. #### iv.2.2 Varying cosmological parameters We now vary the 2 amplitude parameters (\(A^{\mathcal{P}}\) and \(A^{\mathcal{I}}\)) in Eqn. 25 and attempt to reconstruct them together with the map pixels. The MAP solutions for \(\hat{s}^{\mathcal{P},\mathbf{MAP}}\) and \(\hat{s}^{\mathcal{I},\mathbf{MAP}}\) are analogous to Eqns. 26 and 27.
They satisfy \[\sum_{l}(2l+1)\left\{1-\frac{(\hat{A}^{\mathcal{P},\mathbf{MAP}})^{2}\mathbb{D}_{l}\mathbb{C}_{l}^{\mathcal{P}}}{Q_{l}^{2}}\right\}=0 \tag{28}\] \[\sum_{l}(2l+1)\left\{1-\frac{(\hat{A}^{\mathcal{I},\mathbf{MAP}})^{2}\mathbb{D}_{l}\mathbb{C}_{l}^{\mathcal{I}}}{Q_{l}^{2}}\right\}=0 \tag{29}\] where \[Q_{l}=(\hat{A}^{\mathcal{P},\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{P}}+(\hat{A}^{\mathcal{I},\mathbf{MAP}})^{2}\mathbb{C}_{l}^{\mathcal{I}}+\mathbb{C}_{l}^{n,\mathcal{T}} \tag{30}\] We apply this MAP estimator to the numerical experiment for the CMB-ISW model discussed above. The biased 2-point results are shown in the bottom panels of Fig. 6 in green. In the case of the ISW reconstruction, the effective noise is so large that the slope of the posterior distribution with respect to the amplitude (Eqn. 29) never reaches zero. This results in \(A^{\mathcal{I}}\), and hence the MAP spectrum, being set to 0, which we confirm using direct numerical optimization. This effect is also shown in the bottom panel of Fig. 8, where we plot the ratio between the MAP and truth pixel values in harmonic space. We also construct and apply an HMC NUTS sampler similar to that of the previous section but with the additional amplitude dependence. The sampled spectra and the spectrum of the mean map are shown in Fig. 6 in shaded orange and in red, respectively. Similar to the CMB model, the sampled spectra are unbiased. This is also confirmed by the parameter constraint shown in Fig. 9, where we see a 2.9\(\sigma\) detection of the ISW amplitude. Meanwhile, as the amplitudes are now free, the spectrum of the mean map is still biased, but to a lesser degree than the MAP solution. This is consistent with the field-level results (rows 4-5 of Fig. 7). Here, for the ISW tracer, the MAP map is essentially constant spatially, whereas the mean map still contains the right amount of power but has placed it all in the wrong place (as in the case of fixed amplitude). Figure 6: The recovered primordial (\(s^{\mathcal{P}}\)) and ISW (\(s^{\mathcal{I}}\)) spectra using the various models in §IV.2. The simulation is drawn from the fiducial cosmology and a noise variance of \(200\mu K^{2}\). The top and bottom panels show the cases of fixed and free amplitude respectively, and the left and right panels show the primordial and the ISW results respectively. For each case, the truth spectrum is shown in black, the MAP result is shown in green, and the \(1{-}\sigma\) credible interval of the sampled spectra is shown in shaded orange. The effective noise of the primordial maps is \(\mathbb{C}^{\mathcal{I}}+\mathbb{C}^{n,\mathcal{T}}\), shown in purple. The effective noise of the ISW map (\(\mathbb{C}^{\mathcal{P}}+\mathbb{C}^{n,\mathcal{T}}\)) is above the ISW spectra in the right panels on all scales and hence not shown in the plot. Notice that in the case of free amplitude, the MAP solution is different from the \(\mathbb{C}_{l}\) of the sampled maps (red). Figure 7: Comparison between the observed data (row 1), truth maps, MAP maps, and the mean sampled maps. Rows 2-3 show the model with fixed amplitude, while rows 4-5 show the model with free amplitude. The color bar is shared across each row.
Note that in the case of fixed amplitude, the mean sampled map is equal to the MAP map (second and third columns in the second and third rows), while when the amplitude is varied, the two diverge (same columns in the fourth and last rows, most obviously in the ISW maps in the last row), since the posterior marginalized over the amplitude is no longer a simple Gaussian in the signals. Figure 8: The scatter plot of the ratio between the MAP and the truth \(a_{lm}\)'s real components. The upper panel shows the case with fixed amplitude and the bottom shows the case with free amplitude. In the case of fixed amplitude, the mean of the ratios is again the Wiener filter factor (black). In the case of free amplitude, the MAP solution has a null spectrum; thus, the ratios scatter around 0 on all scales. ## V Joint reconstruction of the CMB, ISW, and galaxy density maps We now present the main analysis, where we generalize the framework to jointly analyze data from CMB and wide-field galaxy surveys on the field level. The main goal is the following. Given an observed temperature map and 6 tomographic galaxy density maps, we want to construct estimates of the primordial (\(s^{\mathcal{P}}\)), the ISW (\(s^{\mathcal{I}}\)), and the galaxy density (\(\{s^{\mathcal{G},i}\}\)) maps, along with 2-point and cosmological parameter constraints, all in a consistent and computationally efficient Bayesian framework. From now on, we will only consider the sampling approach. The addition of galaxy maps introduces off-diagonal terms in the covariance of the posterior distribution, as in Eq. (9). As we shall see, following the algorithmic prescription in § II.3, we can break the degeneracy between the primordial and the ISW maps using additional maps of the galaxy density. ### Theoretical Covariance Eq. (23) gives the general expression for the theoretical covariance of 2-dimensional tracer fields. In the case of the CMB-ISW model, \(\langle s^{\mathcal{P}}s^{\mathcal{I}}\rangle=0\). Now, we introduce tomographic galaxy tracers, which correlate with the ISW effect (but not the primordial CMB) through their common dependence on the matter density field. This correlation shows up as off-diagonal terms in their covariance matrix, as indicated explicitly in Eq. (9). Let \(N_{i}(\chi)\) be the normalized line-of-sight galaxy density distribution for redshift bin \(i\); then the galaxy clustering window function that goes into Eqn. 21 is given by \[W^{g,i}(\chi)=b_{i}\frac{dN_{i}}{d\chi} \tag{31}\] where \(b_{i}\) is the linear galaxy bias that connects the matter power spectrum and the galaxy number density power spectrum. Throughout this study, we will treat each \(b_{i}\) as a scale-independent parameter with a fiducial value of 1. For wide-field photometric surveys, the galaxy redshift distribution \(N_{i}\) varies widely between experiments and catalogs. We consider the MagLim sample from DES Year3 [44; 45] as an example; projected distributions for LSST can be found in [46]. The MagLim catalog consists of 6 redshift bins spanning a redshift range of 0 to 1.2, calibrated using the self-organizing map methods (SOMPZ) and clustering redshifts. The catalog has been extensively tested on simulations and was used by the DES collaboration for the fiducial DES Year3 cosmology analysis [47; 45; 7].
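As a concrete, entirely schematic illustration of Eqns. (24) and (31), the sketch below builds a toy Gaussian \(dN_{i}/d\chi\) (a stand-in for the MagLim distributions, not the real ones) and evaluates the scale-dependent correlation coefficient between two tracers.

```python
import numpy as np

def correlation_coefficient(C_AB, C_AA, C_BB):
    """Eqn (24): scale-dependent correlation coefficient between tracers A and B."""
    return C_AB / np.sqrt(C_AA * C_BB)

def galaxy_window(chi, b=1.0, chi_c=1200.0, chi_w=250.0):
    """Eqn (31): W^{g,i} = b_i dN_i/dchi, with a toy Gaussian dN/dchi (chi in Mpc)."""
    dNdchi = np.exp(-0.5 * ((chi - chi_c) / chi_w) ** 2)
    dNdchi /= np.sum(dNdchi) * (chi[1] - chi[0])   # normalize on a uniform grid
    return b * dNdchi

# usage: feed galaxy_window into the I_l integral of Eqn (21), then Eqn (23),
# and finally the correlation coefficient of Eqn (24)
chi = np.linspace(10.0, 4000.0, 500)
W_g = galaxy_window(chi, b=1.0)
print("window integrates to b:", np.sum(W_g) * (chi[1] - chi[0]))      # ~ 1.0
print("toy rho:", correlation_coefficient(C_AB=0.3, C_AA=1.0, C_BB=0.25))
```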
For each redshift bin, we model \(N_{i}(z)\) using the center (\(z_{c}\)) and width (\(z_{w}\)) of the distribution, following the functional form \[\log N_{i}(z)\propto-\frac{1}{2}\left(\frac{z-z_{i}^{c}}{z_{i}^{w}}\right)^{2} \tag{32}\] We also used the number densities (\(\Sigma_{i}\)) of the MagLim catalog for our simulations but assumed full sky coverage instead of the DES footprint. We tabulate the binned \(z_{c}\), \(z_{w}\), and \(\Sigma\) in Table 1 and plot the normalized redshift distributions in Fig. 10. \begin{table} \begin{tabular}{l|c c c c c c} & bin 1 & bin 2 & bin 3 & bin 4 & bin 5 & bin 6 \\ \hline \(z^{c}\) & 0.30 & 0.47 & 0.62 & 0.78 & 0.90 & 1.00 \\ \(z^{w}\) & 0.10 & 0.07 & 0.07 & 0.07 & 0.05 & 0.05 \\ \(\Sigma\) [deg \({}^{-2}\)] & 447.29 & 319.90 & 325.48 & 435.04 & 316.74 & 298.85 \\ \end{tabular} \end{table} Table 1: The redshift bin parameters. Figure 10: The galaxy number densities for each redshift bin as a function of redshift \(z\). The \(n(z)\) is qualitatively modeled on the DES Year 3 MagLim sample. Using the window function in Eqn. 31, the covariance and the correlation coefficients between the ISW effect and the galaxy density can be computed using Eqs. (23) and (24) (recall that the primordial CMB is independent of the other tracers). The correlation coefficients are shown in the lower left corner of Fig. 11 in blue. This correlation is the information we hope to leverage to break the degeneracy between the primordial and the ISW maps. The bottom left panel demonstrates that the ISW effect is most strongly correlated with the galaxy maps on large scales at low redshift. Figure 11: The correlation structure between different tracers and the sampler results. The lower left portion shows the correlation coefficients (blue) between the ISW and the 6 DES Year3 MagLim-like galaxy tracers from different redshift bins. The primordial field is not shown here since it is independent of all late-time tracers. One crucial observation (lowest left panel) is that the ISW effect correlates most strongly with the low-redshift galaxy density maps, since only at late times did dark energy significantly drive the accelerated cosmic expansion. The upper right corner shows the distribution of power spectra sampled from the joint posterior Eqn. 37 compared to the true power spectra. The input galaxy and ISW signals are shown in black while the distribution of the posterior samples is shown in orange. The diagonal sub-plots also contain the noise (auto-)spectra. Note that the galaxy auto-spectrum is unitless since the galaxy redshift distribution is normalized. The ISW auto-spectrum has the unit of \(\mu K^{2}\). Because of the different numeric scales, the galaxy-ISW cross-spectra and the ISW auto-spectrum are broken off into their own column with independent y-axis limits. ### Cosmological Parameters Let \(\mathbb{C}^{\mathcal{P}}\), \(\mathbb{C}^{\mathcal{I}}\), and \(\mathbb{C}^{\mathcal{G},i}\) be the fiducial power spectra of the primordial, ISW, and galaxy fields. Similar to the CMB-ISW model, we introduce an amplitude parameter for each tracer: \[\mathbb{C}^{\mathcal{P}}\rightarrow(A^{\mathcal{P}})^{2}\ \mathbb{C}^{\mathcal{P}},\qquad\mathbb{C}^{\mathcal{I}}\rightarrow(A^{\mathcal{I}})^{2}\ \mathbb{C}^{\mathcal{I}} \tag{33}\] Eqn. 33 encodes a powerful stress test of \(\Lambda\)CDM. Consider the fiducial growth function \(D(z)\).
If the true cosmology deviates from \(\Lambda\)CDM, then the actual growth function will be \(A(z)D(z)\), where \(A(z)\) is some redshift-dependent factor. Thus, we can interpret \(A^{\mathcal{I}}\) as an integral of \(A(z)\) over the ISW window function. Any detection of \(A^{\mathcal{I}}\neq 1\) then implies a deviation from the \(\Lambda\)CDM model. The ISW window function is rather wide; the tomographic galaxy density window functions are much narrower. Therefore, multiplying each binned galaxy spectrum by an amplitude factor \(A^{\mathcal{G},i}\) would inform us of the consistency of \(\Lambda\)CDM at each redshift slice. This would be, and ultimately will be, a much more stringent test of the \(\Lambda\)CDM model. Unfortunately, since we consider here only galaxy maps, the \(A^{\mathcal{G},i}\)'s are entirely degenerate with the \(b_{i}\)'s in Eqn. 31. So in this study, we will set \(A^{\mathcal{G},i}=1\). In future work, one could jointly analyze galaxy and shear maps to break this degeneracy and directly constrain the amplitude of the growth function in relatively narrow redshift intervals. ### Noise Model and Simulation Since we are mostly interested in extracting the large-scale ISW signal, we again perform the simulation on a HEALPix grid of \(\texttt{NSIDE}=32\). We generate the temperature and galaxy tracers using the full covariance as described in § V.1. For the observed temperature map, we inject white noise with a variance of \(200\mu K^{2}\). For the tomographic galaxy maps, we assume a noise spectrum of \[\mathbb{C}^{n,\mathcal{G},i}(l)=\frac{\pi^{2}}{180^{2}\Sigma_{i}} \tag{34}\] where \(\Sigma_{i}\) is the number density of bin \(i\) per square degree. The realized spectra (black) and the modeled noise spectra (purple) are shown in Fig. 12 (for the primordial and the ISW signals) and in the upper right panels of Fig. 11 (for the ISW and the galaxy signals). ### Posterior Modeling and Sampling The observables are \[d^{\mathcal{T}}=s^{\mathcal{P}}+s^{\mathcal{I}}+n^{\mathcal{T}} \tag{35}\] \[d^{\mathcal{G},i}=s^{\mathcal{G},i}+n^{\mathcal{G},i}. \tag{36}\] Therefore, the inference problem is specified by the posterior distribution \[p(s^{\mathcal{P}},\{s^{\mathcal{G}}_{i}\},s^{\mathcal{I}},A^{\mathcal{P}},A^{\mathcal{I}},\{b_{i}\}|d^{\mathcal{T}},\{d^{\mathcal{G},i}\})\propto\left(\det\left[\mathbb{C}^{n,\mathcal{T}}\prod_{i=1}^{6}\mathbb{C}^{n,\mathcal{G},i}\right]\det\left[\mathbb{C}_{l}(b_{1},...,b_{6},A^{\mathcal{I}},A^{\mathcal{P}})\right]\right)^{-\frac{1}{2}}\times\exp\sum_{lm}\frac{-|d^{\mathcal{T}}_{lm}-s^{\mathcal{P}}_{lm}-s^{\mathcal{I}}_{lm}|^{2}}{2\mathbb{C}^{n,\mathcal{T}}_{l}}\quad\exp\sum_{i=1}^{6}\sum_{lm}\frac{-|d^{\mathcal{G},i}_{lm}-s^{\mathcal{G}}_{lm,i}|^{2}}{2\mathbb{C}^{n,\mathcal{G},i}_{l}}\exp\sum_{lm}\frac{-|\{s^{\mathcal{G}}_{lm,1},...,s^{\mathcal{G}}_{lm,6},s^{\mathcal{I}}_{lm},s^{\mathcal{P}}_{lm}\}|^{2}}{2\mathbb{C}_{l}(b_{1},...,b_{6},A^{\mathcal{I}},A^{\mathcal{P}})} \tag{37}\] where \(\mathbb{C}_{l}\) is given in Eq. (9). The first determinant term encodes the noise covariance, which is fixed in our example. The \(\mathbb{C}_{l}\) determinant encodes the posterior's dependence on the tracer amplitudes. The first two exponential terms come from the likelihoods of the temperature and the galaxy maps. The last exponential term is the Gaussian prior on the signals, including the off-diagonal covariance matrix from Eq. (9). Not shown here is that all the free amplitudes have flat priors on the interval \([0.2,3]\).
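The last factor in Eqn. (37) couples the eight tracer modes at each \((l,m)\) through the dense covariance \(\mathbb{C}_{l}\). Below is a minimal sketch of how one such term (quadratic form plus log-determinant) can be evaluated with a per-multipole Cholesky factorization; the 8x8 covariance here is a random placeholder, not the paper's.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_prior_lm(s_lm, Cov_l):
    """-(1/2)(s^T C_l^{-1} s + log det C_l) for one (l, m) mode, via Cholesky."""
    cf = cho_factor(Cov_l, lower=True)
    quad = s_lm @ cho_solve(cf, s_lm)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return -0.5 * (quad + logdet)

# toy usage with a random 8x8 positive-definite covariance
# (6 galaxy bins + ISW + primordial)
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 8))
Cov_l = A @ A.T + 8.0 * np.eye(8)
s_lm = rng.normal(size=8)
print(log_prior_lm(s_lm, Cov_l))
```

Because \(\mathbb{C}_{l}\) is block diagonal in \(l\), the full prior is just a sum of such small terms, which is what makes the block-Cholesky treatment computationally tractable.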
The structure of the sampler is much the same as in the CMB and the CMB-ISW models, following the prescription of § II.3. Notice that since the covariance \(\mathbb{C}_{l}(b_{1},...,b_{6},A^{\mathcal{I}},A^{\mathcal{P}})\) is now non-diagonal and extremely high dimensional, we have to employ the block-diagonal Cholesky decomposition method introduced in § II.3 to make the sampler computationally feasible. Despite the high dimensionality of the problem, we find that the chain equilibrates quickly (more details in Appendix C). In general, the amplitude parameters have a much longer correlation length than the latent map parameters during the sampling phase. Among the amplitude parameters, the ISW amplitude has the longest correlation length, of order \(10^{2}\) samples. Overall, the sampler is very fast: the entire analysis took less than 10 hours on a single Apple M1 CPU. ### Results For each iteration of the sampler after burn-in, we collect a set of maps and parameters \[\{\mathbf{s}_{i},\mathbf{A}_{i}\}=\{s^{\mathcal{P}}_{lm,i},s^{\mathcal{I}}_{lm,i},\{s^{\mathcal{G}}_{lm,k}\}_{i},A^{\mathcal{P}}_{i},A^{\mathcal{I}}_{i},\{b_{k}\}_{i}\} \tag{38}\] for \(k=1,...,6\), labeled by a common sample index \(i\). The collection \(\{\mathbf{s}_{i},\mathbf{A}_{i}\}\) forms the set of posterior samples which we will now analyze. We will present our findings in three parts: cosmological parameter constraints, power spectra reconstruction, and field-level reconstruction. The final results for this joint analysis are shown in Figs. 11-13. #### v.5.1 Cosmological parameter constraints The constraints on the 2 temperature tracer amplitudes and the 6 galaxy biases are shown in Fig. 13 and summarized in Table 2. All 8 parameters are unbiased within \(2\sigma\). Consistent with our previous findings, the best-constrained parameters are the primordial amplitude and the tomographic galaxy biases, which all have small effective noise. For these parameters, we achieve percent-level constraints given our very simple problem setup. We find that \(A^{\mathcal{I}}\) is also unbiased and constrained to around 15% (a 6.9\(\sigma\) detection), improving dramatically on the \(\sim 40\%\) constraint (2.9\(\sigma\) detection) obtained in the absence of galaxy data. #### v.5.2 Power spectra constraints Besides obtaining the correct overall power spectrum amplitudes, we show that the reconstructed power spectra are unbiased for all tracers on all scales. The results for the primordial CMB and the ISW effect are shown in Fig. 12, where the sampling result is shown in shaded orange and the truth in black. For the ISW, we further observe that the uncertainty of the power spectrum estimate also shrinks considerably around the truth compared to the CMB-ISW model. This gain in SNR is directly attributed to the new information from the galaxy tracer fields. The sampling results for the ISW and the galaxy tracers (all the tracers that are correlated with each other) are shown in the upper right panels of Fig. 11 in orange. Here we see that the method has captured all the auto- and cross-spectra of the tracer fields.
The galaxy power spectra are especially well recovered, owing to their intrinsic high-SNR observations. \begin{table} \begin{tabular}{c c} \hline Parameter & Constraint \\ \hline \(A^{P}\) & \(1.0029^{+0.0085}_{-0.0989}\) \\ \(A^{I}\) & \(1.03^{+0.15}_{-0.15}\) \\ \(b_{1}\) & \(1.0000^{+0.1070}_{-0.0084}\) \\ \(b_{2}\) & \(0.9882^{+0.0083}_{-0.0980}\) \\ \(b_{3}\) & \(1.0068^{+0.0081}_{-0.0082}\) \\ \(b_{4}\) & \(0.9900^{+0.0083}_{-0.0084}\) \\ \(b_{5}\) & \(1.0052^{+0.0079}_{-0.0079}\) \\ \(b_{6}\) & \(1.0060^{+0.0080}_{-0.0094}\) \\ \hline \end{tabular} \end{table} Table 2: Marginalized constraints on the temperature tracer amplitudes and galaxy biases. Figure 12: The recovered \(s^{\mathcal{P}}\) and \(s^{\mathcal{I}}\) spectra using the joint reconstruction technique described in § V. For each case, the truth spectrum is shown in black, the 1\(-\sigma\) credible interval of the sampled spectra is shown in shaded orange, and the spectrum of the mean map is shown in red. The effective noise of the primordial maps is \(\mathbb{C}^{\mathcal{I}}+\mathbb{C}^{n,\mathcal{T}}\), shown in purple. The effective noise of the ISW map (\(\mathbb{C}^{\mathcal{P}}+\mathbb{C}^{n,\mathcal{T}}\)) is above the ISW spectra on all scales and hence not shown in the plot. Notice that both tracers are reconstructed with much higher SNR compared to the CMB-ISW model. #### v.5.3 Field-level reconstruction The CMB-ISW-Galaxy model reconstructs tracer maps at higher SNR than the previous models. The result for the temperature tracers is shown in Fig. 14. Compared to the CMB-ISW model (Fig. 7), we see a dramatic improvement in the reconstruction accuracy. Under the multi-probe joint reconstruction framework, the field-level information in the galaxy maps funnels into the temperature map-making process and efficiently breaks the degeneracy between the primordial and the ISW fields. Figure 13: The posterior distribution of the amplitudes \(A^{\mathcal{P}}\) and \(A^{\mathcal{I}}\) and the 6 galaxy biases \(b_{i}\)'s. The 2-dimensional contours label the 0.68 and 0.95 credible intervals. The KDE-smoothed histograms above the contours are the parameters' marginal distributions, where the shaded intervals represent the 0.16 and 0.84 quantiles. The truth values are indicated by dashed black lines. Most notably, although the ISW signal is by far noise-dominated on all scales, the mean sampled ISW field now actually traces the structures of the true ISW field and is de-correlated from the true primordial field. The primordial reconstruction also receives the same benefit, as the residual error of its reconstruction is visibly less correlated with the true ISW field, compared to the CMB-ISW model. The final result also includes samples of de-noised tomographic galaxy maps, as shown in Fig. 15. In theory, the cross-correlation with the ISW component of the temperature map can boost the SNR of the reconstructed galaxy maps as well. However, since the observed galaxy maps are already high in SNR and the degrees of freedom in the galaxy maps by far overwhelm those of the temperature tracers, their improvement is negligible. Figure 14: Comparison between the observed data (row 1), truth maps, and the mean sampled maps. Rows 2-3 show the model with fixed amplitude, while rows 4-5 show the model with free amplitude. The color bar is shared across each row. Figure 15: Map-level reconstruction of the galaxy fields. Here we only show the first and the last redshift bins; the results for other bins are similar. Units are dimensionless over-density.
Note that although the two redshift bins have the same map resolution, the pixel scale of the lower redshift bin corresponds to a smaller physical scale. Thus, the lower redshift bin has higher map-level variance, as expected. ## VI Conclusions We have implemented a general hierarchical Bayesian framework that employs HMC to sample directly from the joint posterior of the field-level multi-probe model. We did this in a pristine setting: simulated all-sky maps with simple noise properties. One goal of this is to understand the advantages and limitations of field-level analyses prior to including more realistic effects. The other is more specific: to assess how accurately surveys can measure the ISW amplitude. We particularly focused on comparing two approaches: maximum a posteriori (MAP) and sampling. This enabled us to demonstrate both the well-known bias of the MAP two-point estimator [35] (e.g., the lower jagged curves in Fig. 2) and the multiplicative bias of the field-level values (Fig. 3). These MAP biases persist as we add more complexity to the data vector and the contributing signals. The sampling approach is also biased at the field level (although in general to a lesser extent) but is unbiased for power spectra and cosmological parameter constraints, as illustrated in Figs. 4, 9, and 13. This suggests that _the Bayesian posterior sampler produces unbiased cosmological parameters when multiple surveys are analyzed jointly_. Given the potential biases involved in map-making and then cross-correlating, this seems to us to provide an excellent justification for the use of field-level analyses moving forward. Our results addressing the second goal can be expressed in a single number: using CMB data alone, the amplitude of the ISW signal can be extracted with only modest significance, perhaps at the 2-3\(\sigma\) level. However, when galaxy survey data are added, we project a 6.9\(\sigma\) detection. Before exploring the limitations of this projection, it is worth emphasizing that the long-term goal is to stress-test \(\Lambda\)CDM, and this method provides a test at the 10-15% level. This is but one of a slew of amplitudes that can be measured with upcoming data, so we are optimistic about this general idea of constraining amplitudes of large-scale power spectra as a powerful way of testing the fiducial cosmological model. Back to the limitations of our analysis: assuming full-sky coverage slightly overstates the capability of the CMB, which is masked in the Galactic plane, and overstates by at least a factor of two the coverage of, e.g., LSST. Statistically, then, one might reasonably inflate our projections by \(\sqrt{2}\). However, there are a number of signals that we did _not_ include: galaxy shapes and CMB lensing [25; 26; 27]. The kernels for both of these, especially the former, overlap significantly with that of the ISW, so we expect that including them will quite likely recover this factor of \(\sqrt{2}\). By adding these observables into the analysis, we could construct tomographic maps of the matter density (convolved with different, albeit overlapping, kernels) in a consistent Bayesian framework. However, before turning to real data, we must relax the simplifications of our simulations, so we spend the rest of this conclusion alerting ourselves and our readers to those hurdles.
### General Cosmological Parameters There is no conceptual barrier to including cosmological parameters that modify the shape of the cross-spectra (in contrast to \(A\) and \(b\), which modulate only their amplitudes). The main challenge is that we must specify the derivatives of the posterior distribution with respect to each of the cosmological parameters in a computationally efficient fashion (e.g., finite-difference methods would be too inefficient in an inference algorithm of this scale). Differentiating through the Boltzmann code and the Limber approximation computation is especially difficult. We see a few ways for future projects to tackle this issue. 1. Use automatic differentiation to take gradients through the cosmological dependence [48]. 2. Use simple fitting functions (e.g., [49]) for which the analytical derivatives are easily attainable. 3. Train a neural network-based cosmological emulator, where the network is by definition differentiable [50; 51]. 4. Exclude cosmological parameters from the HMC sampling altogether. Instead, we can sample the cosmological parameters and the maps iteratively in the Gibbs sampling paradigm [52; 12; 27]. ### Masking, Anisotropic Noise, and other Systematic Effects The likelihood model we used in this study is very simple. We considered only the case of signal reconstruction on the full sky with isotropic noise and no masking. In order to adapt this algorithm for real data analysis, we must take into account the limitations of survey geometry for both experiments, as well as foreground and point source masks. Further, the noise in real data will often be anisotropic, and often a parametric function of a set of spatial systematics templates. Both the masking and the anisotropic noise models will introduce off-diagonal terms in the covariance matrix computation in Fourier space. Thus, a computationally efficient solution is to still sample the full-sky, unmasked, and noiseless maps in Fourier space, transform the maps into real space, and define the likelihood there. Once in real space, we can implement different anisotropic and parametric noise models, and even attempt to constrain nuisance noise model parameters during sampling as well. Systematic effects such as foregrounds and survey properties can be handled within the general framework of this field-level analysis. In particular, the _signal_ vector can be expanded to include them. This one-step approach, as opposed to the current treatments, may be necessary for future surveys with increased statistical precision. One simple way to understand why this may be needed: a sample of cosmological parameters that predicts large clustering is more likely to label an ambiguous object a galaxy (rather than a star) if it is near another galaxy. Other systematics, such as photometric redshift uncertainty, can be included by introducing nuisance parameters. Still others, such as intrinsic alignment in cosmic shear surveys, will require both field-level parameters and nuisance parameters that define the model. ### Smaller Scales We have included only large scales here, and there is a huge advantage to doing so, in that the prior distributions of the signals are known to be Gaussian. There is a huge disadvantage to throwing out all the information available on small scales. Including small scales in the posterior requires knowledge of the prior distribution of the signal, a distribution that is less and less Gaussian as we push to smaller scales.
There are two possible approaches to this: (i) assume a simple distribution (e.g., Gaussian or log-normal [50]) and investigate the potential biases by running the pipeline on simulations, or (ii) take the more ambitious approach of rolling the clock back and using the primordial fields as the parameters given the observed, highly processed fields (e.g., [53; 54]). _Acknowledgements:_ This work is supported by U.S. Dept. of Energy contract DE-SC0019248 and by NSF Award Number 2020295. SD is grateful to the Aspen Center for Physics, where a workshop in Summer 2022 exposed him to some of these ideas. We are also grateful to Alan Heavens, Andrew Jaffe, Xiangchong Li, Marius Millea, Chirag Modi, Fabian Schmidt, and Ben Wandelt for useful conversations.
2304.11194
Digging deeper into NGC\,6868 I: stellar population
We use Gemini integral field unit observations to map the stellar population properties in the inner region ($\sim680\times470$ pc$^2$) of the galaxy NGC 6868. In order to understand the physical and chemical properties of the stellar content of this galaxy, we performed stellar population synthesis using the starlight code with the MILES simple stellar population models. We measured the absorption line indices Fe4383, Mg$_2$, Mg$_b$, Fe5270, Fe5335 for the whole FoV, and used them to derive Fe3 and [MgFe]'. These indices were used to derive [$\alpha$/Fe]. This galaxy is dominated by old metal-rich populations (12.6 Gyr; 1.0 and 1.6 Z$_\odot$) with a negative metallicity gradient. We also found a recent ($\sim63$ Myr) metal-rich (1.6 Z$_{\odot}$) residual star formation in the centre of the galaxy. A dust lane with a peak extinction in the V band of 0.65 mag is seen. No signs of ordered stellar motion are found and the stellar kinematics is dispersion dominated. All indices show a spatial profile varying significantly along the FoV. Mg$_2$ shows a shallow gradient, compatible with the occurrence of mergers in the past. Mg$_b$ and Fe3 profiles suggest different enrichment processes for these elements. We observe three distinct regions: for $R<100$pc and $R>220$pc, Mg$_2$, Mg$_b$ anti correlate with respect to Fe3 and [MgFe]', and for $100 \text{pc}<R<220 \text{pc}$, they correlate, hinting at different enrichment histories. The [$\alpha$/Fe] profile is really complex and has a central value of $\sim 0.2$ dex. We interpret this as the result of a past merger with another galaxy with a different [$\alpha$/Fe] history, thus explaining the [$\alpha$/Fe] maps.
João P. V. Benedetti, Rogério Riffel, Tiago Ricci, Marina Trevisan, Rogemar A. Riffel, Miriani Pastoriza, Luis G. Dahmer-Hahn, Daniel Ruschel-Dutra, Alberto Rodríguez-Ardila, Jose A. Hernandez-Jimenez, João Steiner
2023-04-21T18:02:34Z
http://arxiv.org/abs/2304.11194v1
# Digging deeper into NGC 6868 I: stellar population ###### Abstract We use Gemini integral field unit observations to map the stellar population properties in the inner region (\(\sim 680\times 470\) pc\({}^{2}\)) of the galaxy NGC 6868. In order to understand the physical and chemical properties of the stellar content of this galaxy, we performed stellar population synthesis using the starlight code with the MILES simple stellar population models. We measured the absorption line indices Fe4383, Mg\({}_{2}\), Mg\({}_{\rm B}\), Fe5270, Fe5335 for the whole FoV, and used them to derive Fe3 and [MgFe]'. These indices were used to derive [\(\alpha\)/Fe]. This galaxy is dominated by old metal-rich populations (12.6 Gyr; 1.0 and 1.6 Z\({}_{\odot}\)) with a negative metallicity gradient. We also found a recent (\(\sim 63\) Myr) metal-rich (1.6 Z\({}_{\odot}\)) residual star formation in the centre of the galaxy. A dust lane with a peak extinction in the V band of 0.65 mag is seen. No signs of ordered stellar motion are found and the stellar kinematics is dispersion dominated. All indices show a spatial profile varying significantly along the FoV. Mg\({}_{2}\) shows a shallow gradient, compatible with the occurrence of mergers in the past. Mg\({}_{\rm B}\) and Fe3 profiles suggest different enrichment processes for these elements. We observe three distinct regions: for R\(<\)100 pc and R\(>\)220 pc, Mg\({}_{2}\), Mg\({}_{\rm B}\) anti correlate with respect to Fe3 and [MgFe]', and for 100 pc\(<\)R\(<\)220 pc, they correlate, hinting at different enrichment histories. The [\(\alpha\)/Fe] profile is really complex and has a central value of \(\sim 0.2\) dex. We interpret this as the result of a past merger with another galaxy with a different [\(\alpha\)/Fe] history, thus explaining the [\(\alpha\)/Fe] maps. keywords: galaxies: individual (NGC 6868), galaxies: nuclei, galaxies: elliptical and lenticular, cD, galaxies: stellar content ## 1 Introduction Galaxies can broadly be divided into _passive_ galaxies, which are not actively forming stars and host red, old stellar populations, and _star-forming_ galaxies, which are blue and host large fractions of young stellar populations. This bi-modality has been found in many studies over the years (e.g. Kauffmann et al., 2003; Baldry et al., 2004; Noeske et al., 2007; Wetzel et al., 2012; van der Wel et al., 2014), even at high redshifts (\(z>2.5\); e.g. Brammer et al., 2009; Muzzin et al., 2013). However, it is not yet clear which mechanisms are regulating star formation and transforming the blue galaxies into _red-and-dead_ ones. A major challenge in modern astrophysics is to determine the physical mechanisms that quench star formation in galaxies. Nowadays, it is established that active galactic nuclei (AGN) feedback plays an important role in regulating the star formation (SF) of its host galaxy (Di Matteo et al., 2005; Hopkins and Elvis, 2010; Harrison, 2017; Storchi-Bergmann and Schnorr-Muller, 2019; Riffel et al., 2021; Ellison et al., 2021). The inflowing gas responsible for SF also feeds the supermassive black hole (SMBH), triggering an AGN episode that can heat or expel the gas, thus shutting down the SF (Fabian, 2012; King and Pounds, 2015; Zubovas and Bourne, 2017; Trussler et al., 2020).
Cosmological simulations that do not include AGN and supernova (SN) feedback cannot reproduce the observed luminosity function of galaxies: both the largest and the smallest galaxies end up with higher masses than observed in the present-day universe (Springel et al., 2005). Also, the ages of the stars in the most massive galaxies are underestimated when compared with observations (Croton et al., 2006). These results show that some form of gas stripping or heating must be taking place in these objects. However, distinguishing the nature of such processes is still challenging, since the simulations cannot resolve the physical scales involved and use _ad hoc_ prescriptions to account for these mechanisms (Schaye et al., 2015). In order to really disentangle the effects of SN feedback and AGN, we need to look at the vicinity of SMBHs and trace the star formation history (SFH) of the stellar population there in order to understand the effect of the AGN on the stellar population (Riffel et al., 2021). Past studies have tried to establish this link, but the results have been controversial. Despite SF being common in AGNs (Riffel et al., 2009; Ruschel-Dutra et al., 2017; Mallmann et al., 2018; Riffel et al., 2021; Burtscher et al., 2021; Dahmer-Hahn et al., 2021; Riffel et al., 2022), the time scale for starting the star formation (\(\sim 100\) Myr, Hickox et al., 2014; Burtscher et al., 2021) is far greater than that for the AGN triggering (\(\sim 0.1-1\) Myr, Novak et al., 2011; Schawinski et al., 2015), preventing us from connecting the two. Some studies show a correlation between the fraction of young populations and the AGN luminosity, where the most luminous sources present the highest fraction (Riffel et al., 2009; Ruschel-Dutra et al., 2017; Zubovas & Bourne, 2017; Mallmann et al., 2018). However, the hard X-ray (14-195 keV) luminosity of the galaxies does not seem to correlate with the fraction of young populations (Burtscher et al., 2021). Instead, mass loss from intermediate-age stars seems to be important in AGN feeding (Riffel et al., 2022). Most of the previous studies have been done on relatively bright objects. However, the most common form of AGN in the local Universe is the low-luminosity AGN (LLAGN) in massive galaxies (Ho, 2008). Most of them are classified as low-ionization nuclear emission-line region (LINER) objects (Heckman, 1980). However, despite their significance, the physical nature of these objects is still poorly understood. Since their discovery, many mechanisms have been proposed to explain the LINER signature beyond the LLAGN paradigm (Ferland & Netzer, 1983; Halpern & Steiner, 1983), since many other physical processes can mimic the same spectral signatures without an LLAGN (such objects are known as LIERs; Cid Fernandes et al., 2011; Belfiore et al., 2016), such as shocks (Heckman, 1980), hot low-mass evolved stars (HOLMES) (Binette et al., 1994; Yan & Blanton, 2012; Papaderos et al., 2013; Singh et al., 2013; Belfiore et al., 2016; Oliveira et al., 2022) and starbursts with ages between 3 and 5 Myr, dominated by Wolf-Rayet stars (Barth & Shields, 2000). With current observational technology, one can disentangle the different ionization mechanisms by performing detailed spatially resolved studies analysing both the stellar population and the ionized gas components. Even in the LLAGN hypothesis, the effects of such objects on their host galaxies are still uncertain, as most studies of the AGN impact on galaxies are performed with high-luminosity AGN (e.g. Seyferts and quasars; see, e.g.,
Nayakshin & Zubovas, 2012). With the rising importance of LINERs, new studies have been analysing such impacts, although a complete picture is yet to be drawn and further research is needed. Integral Field Unit (IFU) spectroscopy has expanded our view of early-type galaxies (ETGs), their formation, and their evolution. This technique allows one to perform spatially resolved studies of stellar populations and to better constrain the kinematical structure of these objects, revealing, for example, kinematically distinct cores (KDCs) and counter-rotating stellar discs (see Cappellari, 2016, for a review). Despite previous studies being able to reproduce the stellar population parameters of ETGs with a rapid in-situ conversion of gas into stars (e.g. Chiosi & Carraro, 2002), including a fully consistent chemical evolution (e.g. Vazdekis et al., 1997), the emergence of these structures has been seen as evidence for the importance of merger processes in ETG formation. Dry minor mergers have already been established as a subsequent growth pathway for ETGs (the two phases of galaxy formation; Oser et al., 2010; Navarro-Gonzalez et al., 2013); however, they rarely affect the central regions of galaxies. Therefore, studying these structures in galactic cores may help us further elucidate the formation and evolution of ETGs (e.g. Krajnovic et al., 2015). From the above, detailed studies thoroughly analysing the nuclear regions of galaxies, probing the stellar population and gas content, are fundamental to elucidate the impact of the AGN on galaxy evolution. In this sense, an "artisanal" approach is better at analysing the details that would otherwise be missed in large surveys. With this in mind, here we present a detailed GMOS IFU study of the galaxy NGC 6868, a nearby (27.70 Mpc, Tully et al., 2013) elliptical galaxy (E2, de Vaucouleurs et al., 1991). Some basic parameters extracted from NED can be seen in table 1, and three images of NGC 6868 on different scales are shown in Fig. 1. NGC 6868 is the brightest member of the Telescopium group. Rickes et al. (2008) have shown that NGC 6868 exhibits LINER emission in its centre, which has been attributed to a combination of photoionization by an LLAGN and shocks. They also investigated this galaxy's metallicity distribution and ionized gas by means of long-slit spectroscopy and stellar population synthesis. According to the authors, Lick indices present a negative gradient, indicating an overabundance of Fe, Mg, Na and TiO in the central parts with respect to the external regions. Mg\({}_{2}\) correlates with Fe5270 and Fe5335, suggesting that these elements probably underwent the same enrichment process in NGC 6868. The lack of correlation between the computed galaxy mass and the Mg\({}_{2}\) gradient suggests that this elliptical galaxy was formed by merger events. The stellar population synthesis shows the presence of at least two populations, with ages of 13 and 5 Gyr. The fact that this galaxy apparently has multiple ionization scenarios and also shows signs of a complex star formation history makes NGC 6868 an excellent candidate to further investigate the mechanisms behind LINER emission and the processes involved in the evolution of early-type galaxies. NGC 6868 has already been observed at different wavelengths. \begin{table} \begin{tabular}{l c} \hline Parameter & NGC 6868 \\ \hline RA (J2000) & 20\({}^{\rm h}\)09\({}^{\rm m}\)54\({}^{\rm s}\).07 \\ Dec.
(J2000) & -48\({}^{\circ}\)22\({}^{\prime}\)46.4\({}^{\prime\prime}\) \\ Morphology\({}^{\rm a}\) & E2 \\ R (mag)\({}^{\rm b}\) & 7.91 \\ M\({}_{\rm R}\) (mag)\({}^{\rm b}\) & -24.7 \\ Diameter (kpc)\({}^{\rm c}\) & 73.0 \\ L\({}_{\rm X}\) (erg s\({}^{-1}\))\({}^{\rm d}\) & 8.54 \(\cdot\) 10\({}^{\rm d}\) \\ Nuclear Activity\({}^{\rm e}\) & LINER \\ Radio classification\({}^{\rm f}\) & Flat-Spectrum Radio Source \\ A\({}_{\rm V}\) (mag)\({}^{\rm g}\) & 0.152 \\ Radial Velocity (km s\({}^{-1}\))\({}^{\rm b}\) & 2854 \\ Distance (Mpc)\({}^{\rm j}\) & 27.70 \\ Redshift (z)\({}^{\rm h}\) & 0.00952 \\ Velocity dispersion (km s\({}^{-1}\))\({}^{\rm j}\) & 250 \\ \hline \end{tabular} Data available in NED1 Footnote 1: The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration \end{table} Table 1: Table showing some basic parameters of the galaxy NGC 6868. Machacek et al. (2010), using X-ray data, found strong evidence of a past encounter between NGC 6868 and NGC 6861, displaying tidal tails and shells. Moreover, they found X-ray cavities, indicative of past AGN activity. Hansen et al. (1991) studied NGC 6868 using CCD images and an International Ultraviolet Explorer (IUE) low-resolution spectrum, and they found a dust lane in the centre of the galaxy with an extended dust component with spiral features. A series of papers have reported the presence of ionized gas, finding a disturbed morphology and complex kinematics for the galaxy with a possible counter-rotating disc (Buson et al., 1993; Zeilinger et al., 1996; Macchetto et al., 1996). Caon et al. (2000) have analysed long-slit observations with distinct position angles (PAs). They reported a rotating disc of ionized gas, in agreement with the stellar velocity field. However, at a different PA, a gas disc counter-rotating with respect to the stars is found, with an inner component that itself counter-rotates. Also, a KDC was seen, with a kinematical break radius of 3". Bregman et al. (1998), using IRAS data, confirmed the presence of cold dust. NGC 6868 has been observed in the radio by Slee et al. (1994) at 2.3, 5, and 8.4 GHz and by Mauch et al. (2003) at 843 MHz; Healey et al. (2007) reported a low-power flat-spectrum radio source in its centre (\(\alpha\sim 0.07\)). The brightness temperature and spectral slope are inconsistent with HII regions, so an AGN is the most likely source of the radio emission. Rose et al. (2019), using ALMA observations, detected molecular gas in the centre of NGC 6868 drifting in non-circular motions. They also reported HI absorption. In this paper, the first of a series aimed at studying this object in detail, we will focus on the stellar content of NGC 6868. It is organized as follows: in § 2, we describe the observations and the reduction procedures; in § 3, we present the employed methodology; in § 4, the results are presented and compared with data from other studies. Discussion of the results is made in § 5, and the conclusions and summary are presented in § 6. Throughout this paper, we assume that solar metallicity corresponds to \(Z_{\odot}=0.019\) (Girardi et al., 2000). ## 2 Observation and Data Reduction NGC 6868 was observed on 2013 May 04 with the Gemini South Telescope using the Gemini Multi-Object Spectrograph (GMOS) in the IFU mode (Allington-Smith et al., 2002; Hook et al., 2004).
This object is part of the DIVING\({}^{3}\)D survey, which made observations of the central regions of all 170 galaxies in the Southern hemisphere with \(B<12.0\) and \(|b|>15^{\circ}\) (see Steiner et al., 2022, for more details). The one-slit set-up was used for the observations, resulting in an FoV of 5.0 \(\times\) 3.5 arcsec\({}^{2}\). The B600-G5323 grating was used with a central wavelength of 5620 Å and a spectral range from 4260 Å to 6795 Å. The spectral resolution is 1.8 Å, estimated with the O i \(\lambda\)5577 Å sky line. Flat-field exposures, bias and CuAr lamp spectra were acquired for calibration and correction purposes. The seeing of the observation was estimated using stars present in the acquisition image of the galaxy, taken with the GMOS imager in the r band (SDSS system). Moreover, the DA white dwarf EG 274 (Hamuy et al., 1992) was observed in order to perform the spectrophotometric calibration. Figure 1: Images of NGC 6868 on three different scales. (a) Composite DSS image showing NGC 6868 and close neighbours. It is the brightest member of the Telescopium group (AS0851). (b) Residuals from the photometric modelling of the acquisition image in the r band. A clear dust lane, also reported by Veron-Cetty & Veron (1988), is seen in the centre of NGC 6868. At \(\sim\)11\({}^{\prime\prime}\)W and 5\({}^{\prime\prime}\)N a stellar-like object is found and, according to Hansen et al. (1991), may be a cannibalised galaxy from which spiral features are emerging. (c) Continuum image of NGC 6868 extracted from our GMOS data cube at 5700 Å, exhibiting an irregular light profile with a distortion in the SE direction. The (0,0) is defined as the peak in the continuum image corrected by extinction, as can be seen in Fig. 13. These and other basic observational parameters are given in table 2. Standard IRAF procedures (Tody, 1986, 1993) were followed to reduce the data using the tasks contained in the Gemini IRAF package. Bias, flat-field, wavelength calibration, dispersion correction, and flux calibration procedures were applied to the science data. To remove cosmic rays, we used the lacos software (van Dokkum, 2001). The data cube was constructed with a spatial sampling of 0.05 arcsec. After the standard reduction procedures, further data treatments were applied to improve the quality of the data, as described in Menezes et al. (2019): removal of high spatial-frequency noise using a Butterworth filter, correction of the differential atmospheric refraction (DAR), instrumental fingerprint removal through PCA Tomography, and Richardson-Lucy deconvolution. The removal of high-frequency noise from the spatial dimension was performed by convolving each image of the data cube with a Butterworth filter (Gonzalez and Woods, 2008; Ricci et al., 2014). The filter order used was \(n=2\) and the cut-off frequency was \(\mathrm{F_{c}}=0.14\)\(\mathrm{F_{Ny}}\), where \(\mathrm{F_{Ny}}\) is the Nyquist frequency, corresponding to 0.5 \(\mathrm{spaxel}^{-1}\). This cut-off frequency was chosen to remove only spatial frequencies higher than those of the PSF of the data cube, ensuring that no valid scientific information was lost in this process. The correction of the differential atmospheric refraction (DAR) consists of spatially shifting the wavelength planes of the data cube so that the spectrum of a given point in the galaxy always occupies the same position at all wavelengths.
The correction of the differential refraction effect on the data cube of NGC 6868 was performed using the equations from Bönsch and Potulski (1998) and Filippenko (1982), which assume a plane-parallel atmosphere and calculate the displacement of the centroid of the galaxy for each wavelength as a function of the zenith distance, the refraction index and other atmospheric parameters. The PCA Tomography technique (Steiner et al., 2009, and references therein) applies Principal Component Analysis (PCA) to data cubes. This procedure searches for spectro-spatial correlations across a given data cube. The results are the eigenvectors (or eigenspectra), which show the correlations between the wavelengths caused by some physical phenomenon or an instrumental fingerprint, and the tomograms, which correspond to the projection of the data cube onto each eigenvector. This is an orthogonal transform, meaning it can be reversed. The eigenvectors are ordered by how much of the variance in the data cube they are able to explain, meaning the first eigenvector explains most of the variance and so on. Using this technique, instrumental fingerprints that would otherwise be entangled with the data may appear as one of the eigenvectors. This instrumental signature may be isolated by building a data cube containing only this component. In the end, we subtract this fingerprint from the science data cube. After the removal of the instrumental fingerprints, the reddening caused by the dust within the Milky Way was corrected using the CCM law (Cardelli et al., 1989) and \(A_{V}=0.152\) mag (Schlafly and Finkbeiner, 2011). The telluric lines were also removed and the spectra were brought to the rest frame using the redshift shown in Table 1. Lastly, the Richardson-Lucy deconvolution (Richardson, 1972; Lucy, 1974) is an iterative process that aims at reconstructing the image of the galaxy before its convolution with the PSF imposed by the atmosphere and the optical apparatus of the telescope. After 10 iterations, the final PSF was 0.71 arcsec, estimated from a spatial profile obtained along the red wing of the broad H\(\alpha\) emission. An image of the continuum of NGC 6868, extracted from the final data cube, is shown in Fig. 1. ## 3 Methodology ### Stellar population synthesis In order to derive the star formation history (SFH), we used the starlight code (Cid Fernandes et al., 2004, 2005, 2013; Cid Fernandes, 2018), which fits the continuum spectra by combining, in different proportions, the contributions from different simple stellar populations (SSPs), taking into account reddening and kinematical parameters. In other words, it tries to match the observed spectrum (\(O_{\lambda}\)) with a modelled one (\(M_{\lambda}\)), described by \[M_{\lambda}=M_{\lambda_{0}}\left[\sum_{j=1}^{N_{x}}x_{j}b_{j,\lambda}r_{\lambda}\right]\otimes G(v_{*},\sigma_{*}), \tag{1}\] where \(M_{\lambda_{0}}\) is the flux at a predetermined normalization wavelength, \(N_{x}\) is the number of elements in the SSP base, and \(x_{j}\) is the j-th component of the population vector (\(\vec{x}\)) that stores the light contribution from each SSP (with respect to the normalization wavelength2, \(\lambda_{0}\)). \(b_{j,\lambda}\) is the spectrum of the j-th component, and \(r_{\lambda}\) is the reddening factor, defined by \(r_{\lambda}=10^{-0.4(A_{\lambda}-A_{\lambda_{0}})}\) and \(A_{\lambda}=A_{\mathrm{V}}q_{\lambda}\), where \(q_{\lambda}\) is the extinction law evaluated at \(\lambda\).
Lastly, there is a convolution with a Gaussian distribution to take into account the line-of-sight velocity distribution (LOSVD) of the stellar component in the spectra, where \(v_{*}\) is the line-of-sight stellar velocity and \(\sigma_{*}\) is the stellar velocity dispersion. To determine the best fit, the code tries to minimize a \(\chi^{2}\) defined by

\[\chi^{2}=\sum_{\lambda}[(O_{\lambda}-M_{\lambda})\omega_{\lambda}]^{2} \tag{2}\]

where \(\omega_{\lambda}\) is the weight. Using this parameter, we are able to mask (\(\omega_{\lambda}=0\)) spurious features or contributions from other non-stellar components (e.g. emission lines from the ionized gas), or to give more weight to important regions of our spectra (e.g. characteristic absorptions that allow better kinematical predictions, if that is the intended objective).

Footnote 2: We normalized our spectra in the 5700 Å region, due to the lack of significant stellar absorption bands and the good S/N in the whole FoV.

Along with the emission lines present in our spectra, the Mg i absorption (magnesium being an \(\alpha\) element) was masked during the synthesis in order to minimize possible degeneracies that can be introduced by the \(\alpha\)-enhancement effects in the determination of the galaxy metallicity (the [\(\alpha\)/Fe] is derived in § 3.2). One of the fundamental ingredients in this method is the set of SSPs used in the fit. We constructed our base with the models developed by Vazdekis et al. (2016, hereafter E-MILES), using the evolutionary tracks of Girardi et al. (2000) and the Kroupa (2001) initial mass function. These models were chosen because their wavelength range overlaps with our data and they have a better spectral resolution (2.51 Å FWHM) when compared to other stellar population models (e.g. Bruzual & Charlot, 2003; Maraston & Stromback, 2011; Conroy et al., 2009)\({}^{3}\). Moreover, MILES is a modern empirical stellar library spanning a wide range of stellar parameters, therefore allowing us to better explore different stellar population properties in our object. The final SSPs span 15 ages (0.0631, 0.1, 0.16, 0.28, 0.50, 0.89, 1.26, 1.41, 2.51, 3.98, 6.31, 7.94, 10.0, 11.2, 12.6 Gyr) and six metallicities (0.005, 0.02, 0.2, 0.4, 1.0, 1.6 Z\({}_{\odot}\)).

\begin{table} \begin{tabular}{l c} \hline Parameter & NGC 6868 \\ \hline Observation date & 2013 May 04 \\ Gemini Programme & GS-2013-AQ-52 \\ Seeing (arcsec) & 0.77 \\ Airmass & 1.056 \\ T\({}_{\rm exp}\) (s) & 1800 \\ Number of exposures & 1 \\ \hline \end{tabular} \end{table} Table 2: Basic observational parameters.

Footnote 3: The high resolution is fundamental to precisely modelling the stellar absorptions, allowing a detailed study of the gas kinematics when subtracting the stellar component. This allows a self-consistent analysis that will be pursued in a future paper (Benedetti et al., in preparation).

The E-MILES models, however, lack very young and hot stars, having only SSPs with ages greater than 63 Myr. In order to assess the possibility of such a young population in NGC 6868, we performed the stellar synthesis over the whole FoV with the Bruzual & Charlot (2003) models, which include stars as young as 0.1 Myr. We found no contribution from components younger than 63 Myr. Since this object has no ongoing star formation, we decided to use the reddening law from Cardelli et al. (1989) to model the dust attenuation (A\({}_{V}\)) intrinsic to our object. To account for a possible featureless continuum (FC) emission of an AGN, we followed Riffel et al.
(2009) and a power-law spectrum with \(F_{\nu}\sim\nu^{-1.5}\) was added to the base. In other test runs, we included FCs with different exponents, ranging from -1.75 to -1.0; however, no significant contribution from any of these components was found. Finally, to better understand the age and metallicity spatial distributions, we calculated the light-weighted mean stellar age (\(\langle t\rangle_{L}\)), as

\[\langle t\rangle_{L}=\sum_{j}x_{j}\log(t_{j}), \tag{3}\]

and the light-weighted mean stellar metallicity (\(\langle Z\rangle_{L}\), Cid Fernandes et al., 2005), as

\[\langle Z\rangle_{L}=\sum_{j}x_{j}Z_{j}. \tag{4}\]

\(\log(t_{j})\) is used in the computation of \(\langle t\rangle_{L}\) because the ages of the stellar populations span several orders of magnitude, from \(10^{6}\) to \(10^{10}\) yr. To optimize the data management, we used the megacube tool (Mallmann et al., 2018; Riffel et al., 2021), which takes the data cube as input, prepares the data for the synthesis procedure, executes the synthesis, and also performs the preliminary analysis (e.g. evaluating equations 3 and 4), as well as assembling the maps of the important parameters.

### Indices measurements and Alpha-enhancement

To better constrain the assembly history of NGC 6868, especially the \(\alpha\)-enhancement, we have measured indices for the absorption lines. We measured the Fe4383, Mg\({}_{2}\), Mg\({}_{b}\), Fe5270 and Fe5335 indices using the definitions presented in Riffel et al. (2019), which are based on Worthey et al. (1994), and subsequently used them to derive Fe3\({}^{4}\) and [MgFe]'\({}^{5}\). All spaxels were corrected to the rest frame using the line-of-sight Doppler shift velocity derived by starlight. At first, we decided not to correct for the effects of the velocity dispersion.

Footnote 4: Fe3 = (Fe4383 + Fe5270 + Fe5335)/3 (Kuntschner, 2000)

Footnote 5: [MgFe]' \(=\sqrt{{\rm Mg}_{b}\,(0.72\times{\rm Fe5270}+0.28\times{\rm Fe5335})}\) (Thomas et al., 2003)

The in-house pacce code (Riffel & Borges Vale, 2011) was used to perform the equivalent width (EW) measurements of these indices. The code uses predefined continuum bands around a given line and fits a pseudo-continuum line. Once the line is fitted, the EW is calculated through

\[W_{\lambda}=\left(1-\frac{A_{2}}{C}\right)(\lambda_{u}-\lambda_{i}) \tag{5}\]

where \(W_{\lambda}\) is the measured EW of the line, \(A_{2}\) and \(C\) are the integrated areas below the absorption and below the pseudo-continuum, respectively, and \(\lambda_{i}\) and \(\lambda_{u}\) are the predetermined initial and final wavelengths of the absorption feature. The trapezium integration method is used and the calculation of W\({}_{\lambda}\) is iterated over the whole cube. A problem we faced during these measurements was the contamination from the weak [N i] \(\lambda\) 5199 Å emission line, which is located between the Mg i absorption and the pseudo-continuum definition of Mg\({}_{b}\). We were able to work around this by modelling the emission line profile and subtracting this contribution from the spaxels. In order to properly model this component, we subtracted the previously derived stellar spectra and fitted the remaining continuum with a high-order polynomial. Afterwards, we adjusted this line using the ifscube package (Ruschel-Dutra & Oliveira, 2020). We modelled the [N i] \(\lambda\) 5199 Å emission line using two Gaussian components and coupled its kinematics with that of the [N ii] \(\lambda\lambda\)6548, 6583 Å lines.
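As a concrete illustration of Eq. (5), the sketch below measures an EW with a linear pseudo-continuum anchored on two side bands and trapezium integration, in the spirit of what pacce does. This is a minimal sketch, not the pacce code itself; the band limits in the usage comment are the standard Lick/IDS Mg\({}_{b}\) definitions of Worthey et al. (1994) and should be cross-checked against Riffel et al. (2019):

```python
import numpy as np

def equivalent_width(wave, flux, blue_band, red_band, feature_band):
    """Pseudo-continuum + trapezium EW, following Eq. (5):
    W = (1 - A2/C) * (lam_u - lam_i), where A2 and C are the integrated
    areas below the absorption and below the pseudo-continuum."""
    def band_mean(band):
        m = (wave >= band[0]) & (wave <= band[1])
        return wave[m].mean(), flux[m].mean()

    # straight pseudo-continuum through the mean points of the side bands
    (xb, yb), (xr, yr) = band_mean(blue_band), band_mean(red_band)
    slope = (yr - yb) / (xr - xb)

    lam_i, lam_u = feature_band
    m = (wave >= lam_i) & (wave <= lam_u)
    cont = yb + slope * (wave[m] - xb)

    A2 = np.trapz(flux[m], wave[m])   # area below the absorption
    C = np.trapz(cont, wave[m])       # area below the pseudo-continuum
    return (1.0 - A2 / C) * (lam_u - lam_i)

# e.g. Lick Mgb (band limits in Angstrom, Worthey et al. 1994 values):
# ew = equivalent_width(wave, flux, (5142.625, 5161.375),
#                       (5191.375, 5206.375), (5160.125, 5192.625))
```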
Additional details on the emission line fitting will be presented in Benedetti et al. (_in preparation_). Once the lines were properly fitted, we were able to subtract this emission and remeasure the Mg i indices. Aiming at constraining the assembly history of NGC 6868, we derived the [\(\alpha\)/Fe] of the stellar population, following the approach described in La Barbera et al. (2013) and using the Mg\({}_{b}\) and Fe3 indices, together with the luminosity-weighted age derived from starlight, to obtain Z\({}_{\rm Mg_{b}}\) and Z\({}_{\rm Fe3}\). To measure the Mg\({}_{b}\) and Fe3 indices, we first broadened our spectra by convolving them with a Gaussian to match the spectral resolution of the MILES models (2.51 Å FWHM, Vazdekis et al., 2015), and the measured indices were corrected by the velocity dispersion following the prescriptions of de la Rosa et al. (2007). We then interpolated the Vazdekis et al. (2015) model grids by fixing the stellar population age to obtain Z\({}_{\rm Mg_{b}}\) and Z\({}_{\rm Fe3}\), as illustrated in Fig. 2. As mentioned in La Barbera et al. (2013), one may need to extrapolate the grids for \(\alpha\)-enhanced populations. This is represented by the dotted lines in the same figure. Afterwards, we calculated the proxy [Z\({}_{\rm Mg_{b}}\)/Z\({}_{\rm Fe3}\)] = Z\({}_{\rm Mg_{b}}\) - Z\({}_{\rm Fe3}\), which finally can be used to get (Vazdekis et al., 2015):

\[[\alpha/{\rm Fe}]=0.02+0.56\,[Z_{\rm Mg_{b}}/Z_{\rm Fe3}]. \tag{6}\]

## 4 Results

### Stellar population synthesis

We present an example of the fits for an individual, nuclear spaxel in Fig. 3. A good match between the observed (black) and modelled (red) spectra can be seen. The quality of the fits over the full FoV is ensured by the signal-to-noise (S/N) ratio map and can be verified in the \(\chi^{2}\) and Adev\({}^{6}\) maps (Fig. 4).

Footnote 6: Adev is the Allan deviation and serves as a quality indicator of the fit. It corresponds to the percentage mean \(|O_{\lambda}-M_{\lambda}|/O_{\lambda}\) deviation over all fitted pixels.

The stellar population derived over the full FoV by the fitting procedure consists mainly of three components: two old and metal-rich (12.6 Gyr; 1.0 and 1.6 Z\({}_{\odot}\)) and a smaller contribution from a young, also metal-rich, component (63.1 Myr; 1.6 Z\({}_{\odot}\)). The spatial distribution of each component is shown in Fig. 5. It is clear that the central region is dominated by an old stellar population (\(\sim 12\) Gyr), as is to be expected for massive early-type galaxies. However, this contribution is divided into two components with different metallicities, of 1.0 and 1.6 Z\({}_{\odot}\). They also exhibit distinct spatial distributions, with the more metal-rich stars dominating the central region of our FoV. The same distribution is seen for the mass fractions derived for each component. The 1.0 Z\({}_{\odot}\) SSP has a mean relative mass fraction of \(\sim 60\) %, ranging from \(\sim 41\) % to \(\sim 84\) %. On the other hand, the 1.6 Z\({}_{\odot}\) SSP has a mean relative mass fraction of \(\sim 40\) %, ranging from \(\sim 14\) % to \(\sim 58\) %. Rickes et al. (2008) have also conducted stellar population synthesis studies of NGC 6868 on a larger scale (\(R<17\) arcsec) and also found a ubiquitous old population. However, they also report an intermediate-age stellar population of 5 Gyr which peaks in the centre of the galaxy, contradicting our findings.
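Returning for a moment to the [\(\alpha\)/Fe] machinery of § 3.2: the grid inversion of Fig. 2 and Eq. (6) can be summarized in a minimal numerical sketch. The grid `model_ew` below is a hypothetical array standing in for the Vazdekis et al. (2015) predictions, the function names are ours, and the extrapolation beyond the grid (the dotted lines in Fig. 2) is not implemented:

```python
import numpy as np

def z_from_index(ew_obs, age_gyr, model_ages, model_zs, model_ew):
    """Invert a model grid EW(age, Z) at fixed age, as in Fig. 2:
    interpolate the grid to the light-weighted age, then solve for Z.
    model_ew has shape (n_ages, n_Z); ages in Gyr, Z in dex."""
    # EW as a function of Z at the requested age (linear in log age)
    ew_at_age = np.array([np.interp(np.log10(age_gyr),
                                    np.log10(model_ages), model_ew[:, j])
                          for j in range(len(model_zs))])
    # invert EW -> Z; assumes EW grows monotonically with Z
    order = np.argsort(ew_at_age)
    return np.interp(ew_obs, ew_at_age[order], np.asarray(model_zs)[order])

def alpha_over_fe(z_mgb, z_fe3):
    """Eq. (6): [alpha/Fe] = 0.02 + 0.56 [Z_Mgb/Z_Fe3], with the proxy
    [Z_Mgb/Z_Fe3] = Z_Mgb - Z_Fe3 (both in dex)."""
    return 0.02 + 0.56 * (z_mgb - z_fe3)
```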
However, the Rickes et al. (2008) data range from \(5100-6800\) Å, therefore lacking the bluer end of the spectrum needed to constrain the presence of intermediate-age and young stars. Also, their SSP base consists of very few elements, which most likely cannot portray all the different SFHs a galaxy can have. Therefore, the discrepancies seen are most likely the result of the better data employed here (larger wavelength coverage and S/N) and the improved synthesis method. Despite the different FCs we tried in our base, we did not find any contribution from an AGN. As this is a galaxy classified as a LINER, the supermassive black hole is likely accreting at a very low rate, making its contribution to the continuum undetectable. In order to represent the galaxy's age and metallicity in single maps, we have derived the \(\langle t\rangle_{L}\) and \(\langle Z\rangle_{L}\) maps, which are shown in Fig. 6. In these maps, one can observe that the galaxy's mean age is slightly smaller and the metallicity higher in the nucleus. Finally, in Fig. 7 we show the reddening map (A\({}_{\rm V}\)). It reaches a peak of \(\sim\)0.65 mag in the centre of the image and its morphology resembles a dust lane embedded in the centre of the galaxy. This is in agreement with literature results (e.g. Veron-Cetty & Veron, 1988; Hansen et al., 1991; Buson et al., 1993). starlight also outputs the line-of-sight velocity and velocity dispersion. These maps are shown in Fig. 8. It is evident that NGC 6868 does not display a rotation profile or any ordered motion. Instead, it appears that the stars are in random motion, as can be seen in the \(\sigma_{*}\) distribution, which shows a clear peak of \(\sim 290\) km s\({}^{-1}\) in the centre of the galaxy. In order to verify our results, we have also performed the fits using pPXF (Cappellari & Emsellem, 2004; Cappellari, 2017) with the same SSP base and obtained the same results. Using the spectral resolution of our data and 5000 Å as our reference wavelength, we come to the conclusion that we are not able to distinguish any velocities below 100 km s\({}^{-1}\). Therefore, we would need data with a better spectral resolution to properly characterize the kinematics of the central region of NGC 6868. In past studies (Caon et al., 2000), the stellar kinematics was measured using long-slit data, and they found (1) a shallow rotation profile with a peak velocity of \(\sim 45\) km s\({}^{-1}\) at a radius of 42 arcsec (\(\sim 5.8\) kpc); (2) a KDC which exhibits counter-rotation with respect to the outer regions; and (3) a drop in velocity dispersion at the centre of the galaxy, seen in only one of their PAs. However, in the inner part (\(<\)150 pc) of NGC 6868 we detect neither the reported sign of rotation nor the drop in \(\sigma_{*}\). Moreover, comparing the derived velocity dispersion profiles from other PAs in their data, there is also no evidence of this behaviour. This is also the case for the KDC, where the velocities within it reach at most 10 km s\({}^{-1}\). The most probable explanation for these mixed findings is the high variation between different PAs, as can be seen in Fig. 8. Depending on the angle observed, a rotation (or counter-rotation) may or may not be observed, due to the high velocity dispersion in that region, making precise determinations of the velocity challenging. Moreover, an important caveat is that a KDC can only be detected when comparing the core and the outer parts of a galaxy.
The FoV of our data does not allow such a comparison; therefore, in the present paper, we are unable to clearly say whether this object hosts a KDC or not.

### Absorption line indices

The maps for the absorption line indices calculated at the Lick/IDS resolution are shown in Fig. 9. All the values shown were corrected by the intrinsic velocity dispersion found with starlight, except for the Mg\({}_{2}\) index, because it is almost insensitive to Doppler broadening. In our case, this correction would be smaller than the intrinsic error of the method (\(\sim\)0.003 mag, e.g. Kuntschner 2000), so we chose not to apply any correction. This is not the case for the other indices. The correction factors for each spaxel follow the spatial distribution of the velocity dispersion (Fig. 8). The derived correction factors are in the range 1.09-1.14 for the Mg\({}_{b}\) index; 1.12-1.17 for Fe4383; 1.23-1.29 for Fe5270; and 1.40-1.54 for Fe5335. The maximum error found for each index was Mg\({}_{2}\): 0.0044 mag, Mg\({}_{b}\): 0.15 Å, Fe4383: 0.36 Å, Fe5270: 0.14 Å, Fe5335: 0.18 Å, Fe3: 0.14 Å, [MgFe]': 0.17 Å. These errors leave all the following reported gradients unaffected. In order to assess the confidence of our results, we made extractions at PAs from the literature (Carollo et al., 1993; Rickes et al., 2008) matching the spatial extent of our data.

Figure 2: Diagrams showing the grids used to derive Z\({}_{\rm Mg_{b}}\) (top) and Z\({}_{\rm Fe3}\) (bottom), using the light-weighted mean age derived from starlight and the measured EW of each absorption. Each spaxel is represented by one dot.

Figure 4: Maps displaying the statistical parameters derived from the stellar population synthesis. From left to right: the signal-to-noise ratio (measured between 5670-5730 Å), Adev and \(\chi^{2}\). Together they attest to the robustness of the modelled population.

Figure 3: **Top:** Example spectra extracted from the spaxel with the highest continuum flux. The observed spectrum (black) and the synthesised spectrum (red) are shown, as well as the residuals (blue) and the masked pixels (pink). **Bottom:** Histograms showing the stellar population composition found by starlight. Continuous lines show light-weighted parameters and dashed lines, mass-weighted parameters. From left to right: the sum of the contributions of components with the same age but different metallicities; the contribution from each SSP in the base; and the age and metallicity contributions summed in predefined age bins, with x\({}_{y}\) comprising SSPs between 0.1 Myr and 100 Myr, x\({}_{i}\) between 100 Myr and 2 Gyr, and x\({}_{o}\) between 2 Gyr and 13.7 Gyr. Further right are other parameters derived by the synthesis: the reddening in the V band, light- and mass-weighted mean age, light- and mass-weighted mean metallicity, \(\chi^{2}\), Adev and the S/N measured between 5670-5730 Å. The code clearly indicates the presence of old populations followed by a small fraction of young populations.

This comparison can be seen in Fig. 10, and the error bars displayed for the values derived in this work are the standard deviation measured within each bin, which is greater than the systematic errors. Mg\({}_{2}\) presents an offset of at least -0.5 mag between Rickes et al. (2008) and our extractions, with both showing a negative gradient. The likely explanation for this offset is the different way of measuring the continuum applied in each work: they chose custom continuum bands and we followed Riffel et al. (2019).
Carollo et al. (1993), on the other hand, match all our data within the error bars for both of their PAs. Fe5270 and Fe5335 also present an offset between Rickes et al. (2008) and our measurements, again with the gradients preserved, probably related to the continuum definitions, as previously said. Carollo et al. (1993) measured Fe5270 (transparent points in Fig. 10) and verified that it was not in the Lick system. Therefore, they derived a correction to be applied to their data in the form \(\delta\)Fe5270 = \(0.87(\pm 0.07)\cdot\mathrm{Fe5270}-2.67(\pm 0.02)\), resulting in \(\delta\)Fe5270 \(\sim 0.5\) Å. The corrected points are the full circles in Fig. 10, which, once again, match all our data within the error bars. Carollo et al. (1993) did not measure Fe5335. In contrast to the parameters derived in the stellar population synthesis (Fig. 6), the spatial variation of the measured indices is far more intricate. Therefore, to gauge the mean behaviour of each index, in Fig. 11 we plot the median of each index in circular apertures of \(\delta r=0.05\) arcsec (7 pc), using the peak in Mg\({}_{2}\) as the origin. Mg\({}_{2}\) has the clearest spatial behaviour of all the indices, monotonically decreasing across the majority of the FoV. Two peculiar regions deviate from this trend, however: one at NW (+120 pc, +175 pc), at the border of the FoV, and another one at SW (+170 pc, -140 pc). Unfortunately, we do not have any literature results for these areas; nevertheless, given that our method resulted in an excellent agreement with past studies, we are confident in this detection. The Mg\({}_{b}\) profile closely resembles that of Mg\({}_{2}\), also showing the distinct regions previously cited. Fe3 is particularly interesting because it does not follow the metallicity map shown previously (Fig. 6), already a hint of \(\alpha\)-enhancement processes at play. The most curious result is that in the same SW region previously mentioned there is a depletion of this index. The [MgFe]' map closely resembles the \(\langle Z\rangle_{L}\) map, indicating that the centre of this galaxy really is metal-rich and exhibits a negative gradient. One curious behaviour that can easily be seen in Fig. 11 is that, for \(R<100\) pc, Mg\({}_{2}\) and Mg\({}_{b}\) anti-correlate with Fe3 and [MgFe]', which reach their peaks at \(R\sim 100\) pc away from the centre. Until \(\sim 220\) pc we actually observe a correlation between the two groups of indices, which turns again into an anti-correlation beyond \(R\sim 220\) pc, albeit shallower when compared to the inner region.

Figure 5: Maps displaying the light contribution from the three most significant components of the spectral synthesis in our FoV. The synthesis finds two old metal-rich (12.6 Gyr; 1.0 and 1.6 Z\({}_{\odot}\)) components and a smaller contribution from a young, also metal-rich (63.1 Myr; 1.6 Z\({}_{\odot}\)), component. For display purposes, the scale of the younger population is different from that of the other two components.

Figure 6: Mean age (left) and mean metallicity (right) maps derived from the starlight output. Both are luminosity-weighted quantities. The mean age seems to be affected by the younger component found in the synthesis, and the mean metallicity map clearly shows the metallicity gradient in this galaxy.

Figure 7: Map of the reddening in the V band extracted using starlight. The morphology resembles a dust lane in the line of sight of the observer. This amount of dust has disturbed the continuum image, making it seem like a double core.
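A sketch of how such median radial profiles (and, later, the d Mg\({}_{2}\)/d log r gradient discussed in the next section) can be extracted from a 2D index map is given below. It is a minimal illustration of ours, not the code actually used in this work; it assumes a map on the 0.05 arcsec spaxel grid of our cube, with the Mg\({}_{2}\) peak as origin:

```python
import numpy as np

def radial_median_profile(index_map, x0, y0, scale=0.05, dr=0.05):
    """Median of a 2D index map in circular apertures of width dr
    (arcsec), centred on (x0, y0) -- here the Mg2 peak, as in Fig. 11.
    scale: spaxel size in arcsec (0.05 for this cube)."""
    yy, xx = np.indices(index_map.shape)
    r = np.hypot(xx - x0, yy - y0) * scale          # radius in arcsec
    edges = np.arange(0.0, r.max() + dr, dr)
    med = [np.nanmedian(index_map[(r >= lo) & (r < hi)])
           for lo, hi in zip(edges[:-1], edges[1:])]
    return 0.5 * (edges[:-1] + edges[1:]), np.asarray(med)

# index gradient, e.g. d Mg2 / d log r using only r > 1 arcsec:
# r, prof = radial_median_profile(mg2_map, x0, y0)
# slope = np.polyfit(np.log10(r[r > 1.0]), prof[r > 1.0], 1)[0]
```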
Rickes et al. (2008) found that Mg\({}_{2}\) and Fe5270 or Fe5335 establish a strong correlation, which led them to conclude that the elements traced by both of these indices suffered the same enrichment processes. This can be the case on larger scales, but in our FoV we observe a distinct behaviour.

### Alpha-enhancement

The grids used to derive the [\(\alpha\)/Fe] are seen in Fig. 2, as well as our measurements. As we only find old stellar populations, the most significant variation is given by the indices. Using these values, the [\(\alpha\)/Fe] map is shown in Fig. 12. It is clear that the whole FoV presents \(\alpha\)-enhanced stellar populations, with values between \(\sim+0.07\) and \(+0.24\) dex. Also, we find a highly structured profile with a diverse morphology. Using once again the peak in Mg\({}_{2}\) as our reference, we plot the median profile of the [\(\alpha\)/Fe] using circular apertures, which can also be seen in Fig. 12. What becomes apparent is that the centre of NGC 6868 shows a peak in \(\alpha\)-enhancement, followed by a region of shallower [\(\alpha\)/Fe], which again is followed by another region with values as high as the ones in the centre, producing the "U" shape found in the median profile. The only other paper in which there is a measurement of the [\(\alpha\)/Fe] of NGC 6868 is Rickes et al. (2008). According to them, the central parts of NGC 6868 present lower [\(\alpha\)/Fe], between -0.3 and 0.0 dex, and an above-solar metallicity ([Z/Z\({}_{\odot}\)] \(\sim+0.3\) dex). Moreover, the external parts present higher [\(\alpha\)/Fe] values (\(\sim+0.3\) dex) and lower metallicities ([Z/Z\({}_{\odot}\)] \(\sim-0.33\) dex). In order to obtain these values, they used Mg\({}_{2}\), Fe5270 and Fe5335. However, as we already showed in § 4.2, these indices, despite agreeing with the gradients found in other studies, present a normalization problem, probably due to the continuum bands used to compute them. If this shift in the measurements were accounted for, the values of [\(\alpha\)/Fe] would increase, reaching the \(\alpha\)-enhanced region in their diagram, thus matching our observations. This would affect neither the negative gradient in metallicity nor the positive gradient in [\(\alpha\)/Fe], which are in agreement with observations of other early-type galaxies (e.g. Kuntschner et al. 2010).

## 5 Discussion

### Stellar population synthesis

NGC 6868 is an early-type galaxy, so it is expected to have had a fast phase of intense SF that suddenly stopped, forming the bulk of its stellar mass, with subsequent growth attributed to dry minor mergers. Therefore, it is expected that the isophotes are not severely disturbed by the accreted galaxies. However, as we will present in a forthcoming paper (Benedetti et al., _in preparation_), the photometric centre of NGC 6868 has an offset with respect to the outer parts, indicating a recent encounter with the NE dwarf companion galaxy. NGC 6868 at first inspection appears to deviate from this hypothesis, exhibiting a significant distortion in its continuum image (Fig. 1). When corrected by the stellar reddening found in the synthesis procedure, the actual morphology is revealed to be an undisturbed spheroid, as shown in Fig. 13. According to our analysis, the SFH of this galaxy is characterized by a short burst in the early Universe, with no major episodes of star formation since then (Fig. 6).
This means that NGC 6868 probably did not experience any encounters with star-forming galaxies, as these would leave signatures in the SFH of this galaxy. The interpretation of the young metal-rich component (63.1 Myr; 1.6 Z\({}_{\odot}\)) is tricky. For instance, it can be due to residual star formation, which can be found in some elliptical galaxies. How common such episodes are is still a matter of debate. Salvador-Rusiñol et al. (2020) have detected young stars in massive red galaxies, probably related to recycled material within the galaxy, with mass fractions up to 0.5 per cent (see also de Lorenzo-Cáceres et al. 2020; Salvador-Rusiñol et al. 2021, 2022). On the other hand, Simonian and Martini (2017) interpreted early-type galaxies, which typically are UV-weak, as lacking younger components, with HOLMES (hot low-mass evolved stars) being responsible for this residual UV emission. Also, Bica et al. (1996), using IUE spectra, classified NGC 6868 as a UV-weak source and found no contribution from young stellar populations. In addition, Cid Fernandes and Gonzalez Delgado (2010) interpreted this young component as an artefact of the fitting process, due to the lack of an old blue population, probably related to the horizontal branch, that current stellar population models do not account for. In order to better understand the behaviour of this young component, we have used the M/L from the E-MILES models and, summing over the contributions from all SSPs with ages of 2 Gyr or less, we obtain the map shown in Fig. 14, with a median mass contribution in our FoV of \(\sim 0.2\) per cent. There is a gradient for this component, with higher values towards the nucleus of the galaxy (the same behaviour as reported by Salvador-Rusiñol et al. 2020). In addition, this is a metal-rich (1.6 Z\({}_{\odot}\)) stellar population, thus most likely formed from recycled material of former stellar generations. Taking this all together, we interpret the 63 Myr contribution we found as due to a recent generation of stars, most likely formed from recycled gas from stellar evolution. However, it is worth mentioning that the present data do not allow us to completely rule out other mechanisms (e.g. HOLMES stars) that may also be enough to account for the fraction we found when fitting the data. It is worth mentioning that, as said in § 3.1, we have tested for the presence of younger components in this object by performing the stellar population synthesis with different SSP models that account for these younger stars. We found no evidence for the presence of components younger than 63 Myr. The mean metallicity map (\(\langle Z\rangle_{\rm L}\), Fig. 6) is consistent with the scenario in which massive early-type galaxies are able to retain the material expelled by supernovae, thus locking the elements produced during stellar evolution into new stars and creating a gradient in which central regions tend to be more metal-rich when compared to outer regions. This is the case for the majority of early-type galaxies (Kuntschner et al., 2010). They also show that in fast rotators characterized by only old populations (\(>\)9 Gyr), a stellar disc is embedded in the stellar population of the galaxy, characterized by higher metallicity and lower [\(\alpha\)/Fe], including in the central regions.

Figure 8: Kinematical maps regarding the LOSVD of the stellar component. The velocity map (left) does not seem to follow any particular geometry or distribution, and the velocity dispersion (right) has a more defined profile, with a clear peak at the centre of the distribution.
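For reference, the conversion from the starlight light-fraction vector to the mass fractions behind Fig. 14, together with the light-weighted means of Eqs. (3) and (4), can be sketched as below. This is a minimal illustration with our own function names; it assumes the population vector and the E-MILES M/L ratios are available as arrays, and it normalizes the light fractions explicitly:

```python
import numpy as np

def mass_fractions(x_light, ml_ratios):
    """Convert starlight light fractions x_j (at the normalization
    wavelength) into mass fractions using the M/L of each SSP."""
    mu = np.asarray(x_light) * np.asarray(ml_ratios)
    return mu / mu.sum()

def young_mass_fraction(x_light, ml_ratios, ages_gyr, age_cut=2.0):
    """Summed mass contribution of all SSPs younger than age_cut (Gyr),
    as used to build the map in Fig. 14."""
    mu = mass_fractions(x_light, ml_ratios)
    return mu[np.asarray(ages_gyr) <= age_cut].sum()

# light-weighted means, Eqs. (3) and (4):
def mean_age_logt(x_light, ages_yr):
    x = np.asarray(x_light)
    return np.sum(x * np.log10(ages_yr)) / x.sum()

def mean_metallicity(x_light, z_over_zsun):
    x = np.asarray(x_light)
    return np.sum(x * np.asarray(z_over_zsun)) / x.sum()
```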
The dust found in our data (Fig. 7) is probably related to the cannibalism of a small gas-rich companion, as proposed by Hansen et al. (1991). The molecular (Rose et al., 2019), atomic (Rose et al., 2019) and ionized gas (e.g. Buson et al., 1993) detected are probably related to this event. For instance, as can be seen in figure 2 of Hansen et al. (1991), there are dust filaments with spiral features that can be followed in a ring-like structure around the galaxy centre, with a connection at NE that is aligned with the dust lane detected in the present study (Fig. 7). The ring is also connected to a stellar-like object at NW that they proposed to be a cannibalised galaxy. In fact, we found some recent, metal-rich star formation in this galaxy; as pointed out in § 4.1, this residual star formation is most likely fuelled by material ejected from the previous generations of stars. In fact, the triggering/enhancing of AGN activity/luminosity has been related to an extra amount of gas that is added to the regular flow and most likely originates from stellar evolution processes (Riffel et al., 2022). This gas (from the filaments plus stellar evolution mass loss) is most likely the reason behind the triggering of the AGN that is producing the LINER emission detected in the centre of the galaxy. This, however, is highly speculative, and a thorough analysis of the ionized gas component will be carried out in an upcoming paper (Benedetti et al., _in preparation_). In the synthesis, despite including an FC component and testing different exponents, the code does not find any significant contribution of an AGN to the continuum. This, however, does not mean there is no AGN in the centre of NGC 6868, as the emission can be obscured by dust, or the SMBH might not be accreting matter at a sufficiently high rate. As was already pointed out, this galaxy presents AGN evidence from past studies in the radio (Slee et al., 1994; Mauch et al., 2003). We already know this is a LINER source; therefore, the SMBH cannot be accreting at high rates.

Figure 11: Correlations between the Mg\({}_{2}\), Mg\({}_{b}\), Fe3 and [MgFe]' indices. These are the median values within \(\delta r=0.05\) arcsec apertures, colour-coded by the distance from the centre. For \(R<100\) pc and \(R>220\) pc, Mg\({}_{2}\) and Mg\({}_{b}\) anti-correlate with Fe3 and [MgFe]'. The intermediate region displays a correlation. The transparent points indicate the mean of each index in each radial bin. This hints at different possible enrichment processes in each region.

Figure 10: Comparison of our Mg\({}_{2}\), Fe5270 and Fe5335 results with past studies using long-slit spectroscopy at different PAs (Rickes et al., 2008; Carollo et al., 1993). Each PA is represented by a colour; literature results are shown as circles and our measurements as squares. The smaller scatter points correspond to the spaxels within the artificial slit observed. As can be seen, even inside the artificial slits there is significant variation.

Figure 9: Results of the index measurements. From left to right: Mg\({}_{2}\), Mg\({}_{b}\), Fe3 and [MgFe]'. The indices show complex profiles, in contrast to the results from the stellar population synthesis. In the Mg\({}_{2}\) map we over-plotted lines indicating the extractions done in our cube in order to compare with the literature results (Fig. 10). The red ”x” in the panels is the location of the Mg\({}_{2}\) peak.
However, it is already known that, when comparing AGN detections in the radio and in the optical, the detection ratio is not a 1:1 relation (Comerford et al., 2020). The kinematics revealed by starlight match what is expected for a cD galaxy, with \(\sigma_{*}\) above 250 km s\({}^{-1}\). We do not find any clear sign of rotation or strong ordered motion. Actually, the v\({}_{*}\) map displays values that are below our uncertainty (see § 4.1, Fig. 8). It is clear that our whole FoV presents line-of-sight velocity values that we are not able to confirm; however, for completeness, we decided to show them in Fig. 8. The velocity dispersion, on the other hand, does not face this problem, as all the derived values are \(>250\) km s\({}^{-1}\). Despite past studies reporting a KDC in this object, we cannot confirm this result due to our small FoV. What remains clear is the fact that the central region of NGC 6868 is really dispersion-dominated and no kinematically distinct structure is found.

### Absorption line indices and alpha-enhancement

Analysing the profiles derived for the indices (Fig. 9), it is clear that they are far more structured when compared to the ones derived in the synthesis (Fig. 6). Moreover, the variation in each index is heavily dependent on the PA one decides to look at. This hints at a more chaotic assembly history than what is derived in the synthesis, which shows only contributions from old metal-rich populations. The [MgFe]' map resembles the \(\langle Z\rangle_{L}\) profile, probably because this index allowed us to partially isolate the metallicity from the sensitivity to the \(\alpha\)-enhancement processes affecting the synthesis. This hypothesis is further endorsed by Fig. 11, where at least three distinct chemical enrichment regimes are clear, with two anti-correlations (R\(\lesssim 100\) pc and R\(\gtrsim 220\) pc) and a correlation (100 pc \(\lesssim\) R \(\lesssim 220\) pc), clearest in the Mg\({}_{b}\)-[MgFe]' plot. What this tells us, again, is that a simple monolithic collapse alone cannot explain all our findings; otherwise, we would expect the elements traced by the measured indices to have followed similar enrichment processes, thus producing matching gradients. This appears to be the case on larger scales (Rickes et al., 2008); however, it is not true for our FoV. A deviation from past studies that [MgFe]' reveals is a slight dip in the metallicity of the galaxy towards the very centre (R\(<\)100 pc, Fig. 11), which is also apparent in Fe3. As this index is insensitive to [\(\alpha\)/Fe], what we observe is that the central region is depleted in metals with respect to the outer (100 pc \(<\) R \(<\) 220 pc) region. This behaviour is unexpected, as shown by past studies (e.g. Kuntschner et al., 2010). The fact that we are able to detect such gradients with our observations is another reason why detailed studies of the stellar populations of ETGs, like the one presented here, are necessary to further understand these objects. As shown here, despite these gradients naturally appearing in our data, long-slit studies such as that presented in Rickes et al. (2008) are not able to detect any sign of this effect, as their analysis is restricted to one PA.

Figure 14: Map of the summed mass contribution found by starlight from components younger than 2 Gyr. A clear increase towards the centre is seen, reaching 0.28 % at the centre.
Figure 12: **Top panel:** Derived [\(\alpha\)/Fe] for the whole FoV, where it becomes clear that \(\alpha\)-enhancement processes are ubiquitous. This parameter shows a really complex morphology, with a clear peak towards the centre, but also \(\alpha\)-enhanced populations at larger radii. **Bottom panel:** Radial profile of [\(\alpha\)/Fe] with the median (red) and mean (red, transparent) of the distribution over-plotted. It is clear that the profile exhibits a peak in the centre, followed by a region with smaller [\(\alpha\)/Fe], followed again by an \(\alpha\)-enhanced region. The red ”x” in the top panel is the peak in Mg\({}_{2}\), following Fig. 11, and is used as the reference to trace the profile in the bottom panel.

Figure 13: Continuum image extracted from our GMOS data of NGC 6868. The actual photometric centre is uncovered after the correction by extinction.

One way of distinguishing the formation scenario, as noted by Carollo et al. (1993), is to measure the Mg\({}_{2}\) gradient (d Mg\({}_{2}\)/d log r). Using only r\(>\)1 arcsec, we find a gradient of d Mg\({}_{2}\)/d log r \(\approx\) -0.024, which is shallow and, according to the authors, incompatible with the monolithic collapse scenario. The [\(\alpha\)/Fe] map (Fig. 12) also presents a complex profile. However, looking at the median curve, we see a much clearer behaviour. The central (R\(<\)100 pc) and outer (R\(>\)260 pc) regions appear to be significantly more \(\alpha\)-enhanced than the intermediate region. These regions agree with our findings from the indices. What becomes clear is that the signatures in the stellar populations of the centre of NGC 6868 cannot be described only by a burst of star formation in the early Universe with passive evolution since then. In order to test whether dilution by the young stellar component was affecting our [\(\alpha\)/Fe] estimates, we followed the same procedure as described in § 3.2 and estimated the [\(\alpha\)/Fe] by measuring the indices only in the synthetic spectrum. We found that the effects of the young population are only able to change the [\(\alpha\)/Fe] values by at most 0.04 dex. Therefore, we conclude that the young component is not able to explain the spatial variation found in Fig. 12.

### Possible formation scenario for NGC 6868

From our findings, NGC 6868 presents compelling evidence that its assembly history did not comprise just a single burst of star formation without any significant evolution ever since. The [\(\alpha\)/Fe] map is especially suited to understanding this galaxy. A close inspection reveals that the regions where it is smaller are also the more metal-rich regions. Kuntschner et al. (2010) found stellar populations with these exact same features (high metallicity and low [\(\alpha\)/Fe]) in discs of fast-rotator early-type galaxies. Some of these discs show signs of recent star formation; however, they also find these characteristics in galaxies depleted of gas and dominated only by old stellar populations (\(>\)9 Gyr). Therefore, our hypothesis is that in the past NGC 6868 could have suffered a merger episode with another galaxy with lower [\(\alpha\)/Fe], such as the ones previously described. This would explain why we see such structured maps in [\(\alpha\)/Fe], the slight increase in metallicity away from the centre, and the absence of a detectable subsequent star formation episode or of a gradient in the mean age across our FoV.
The fact that we do not observe any clear kinematical signature is probably because we are looking at a really small region and cannot compare it with the outer regions, despite a KDC having been previously reported (Caon et al., 2000). However, we do find an imprinted chemical signature. We emphasize that, in this hypothesis, the accreted galaxy could not have had a mass comparable to that of NGC 6868 because, despite the different maps showing structured profiles, Mg\({}_{2}\) shows a (shallow) negative gradient. Therefore, this enrichment process must have disturbed the stars of NGC 6868 only to a certain extent. Simulations could be used to test our predictions; however, this analysis is beyond the scope of this paper. Finally, this might not be the only disturbance NGC 6868 has suffered. We notice some regions that stand out, mainly the one at SW in the [\(\alpha\)/Fe] map, which displays [\(\alpha\)/Fe] \(\sim\) +0.25 dex and, as can be seen in the Fe3 map (Fig. 9), is depleted in [Fe/H]. This could be a recently captured small galaxy; however, this is unlikely given its scale and the region where it is found. We would need more data to properly characterize this region.

## 6 Concluding remarks

In this paper, we analysed GMOS-IFU data of the inner region of the ETG NGC 6868, mapping for the first time the physical and chemical properties of the stellar content of this source. This, together with an absorption-line index analysis, has allowed us to constrain the assembly history of this object. Our results can be summarized as follows:

* This galaxy is dominated by an old metal-rich (12.6 Gyr; 1.0 and 1.6 Z\({}_{\odot}\)) stellar population and presents a negative gradient in metallicity. This is further endorsed by the [MgFe]'.
* We found a recent (\(\sim\) 63 Myr) metal-rich (1.6 Z\({}_{\odot}\)) stellar population in the centre of the galaxy. We suggest that this component is most likely due to stars being formed from recycled material of former stellar generations.
* The apparent distortion in the continuum image is due to a dust lane embedded in the centre of the galaxy, which reaches a peak of A\({}_{\rm V}\)\(\sim\) 0.65 mag. This structure is coincident with the one found in other studies.
* No evidence of an FC component is found, probably meaning the AGN in the centre of NGC 6868 is accreting at really low rates.
* The kinematics in the centre of NGC 6868 is characterized by high velocity dispersions, and no apparent circular motion of the stars is seen.
* The indices Mg\({}_{2}\), Mg\({}_{b}\), Fe3 and [MgFe]' all present structured profiles, with Mg\({}_{2}\) presenting the steepest negative gradient. However, it is too shallow to support a formation history due to a monolithic collapse.
* Three distinct regions can be found when cross-correlating the indices: anti-correlations for R\(\lesssim\) 100 pc and R\(\gtrsim\) 220 pc, and a correlation for 100 pc \(\lesssim\) R \(\lesssim\) 220 pc. This reveals different enrichment histories in these regions.
* The [\(\alpha\)/Fe] map also does not present a clear gradient. However, the median profile appears to show three distinct regions: the central (R\(<\)100 pc) and outer (R\(>\)260 pc) regions appear to be significantly more \(\alpha\)-enhanced than the intermediate region.

These findings suggest that NGC 6868 was not formed in a single collapse followed by passive evolution. Rather, we propose that it has suffered a past merger with another galaxy.
This can explain the findings regarding the \(\alpha\)-enhancement and the different regions in the index correlations, together with those from the stellar population synthesis, such as the metallicity gradient and the ubiquitous old ages. We do not find evidence of a distinct kinematic component, either because this merger supposedly happened too long ago or because we would need a larger FoV to assess whether this region really is a KDC, as other studies have previously reported.

## Acknowledgements

We thank the anonymous referee for the very useful comments and suggestions that helped to improve the manuscript. We also thank Alexandre Vazdekis for the insightful discussions. This work was supported by the Brazilian funding agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and by the _Programa de Pós-Graduação em Física_ (PPGFis) at UFRGS. JPVB acknowledges financial support from CNPq and CAPES (Proj. 0001). RR acknowledges support from the Fundación Jesús Serra and the Instituto de Astrofísica de Canarias under the Visiting Researcher Programme 2023-2025 agreed between both institutions. RR, also
2310.02711
Anisotropy of magnetized quark matter
Strong transient magnetic fields are generated in non-central relativistic heavy-ion collisions. These fields induce anisotropy within the strongly interacting medium that, in principle, can affect the thermodynamic properties of the medium. We use the Polyakov loop extended Nambu Jona-Lasinio model to study the quark matter subjected to an external magnetic field at vanishing baryon chemical potential ($\mu_{B}$). We have estimated the degree of anisotropy in the speed of sound and isothermal compressibility within the magnetized quark matter as a function of temperature ($T$) and magnetic field ($eB$). This study helps us to understand the extent of directionality generated in the initial stages of non-central collisions while giving us useful information about the system.
Kangkan Goswami, Dushmanta Sahu, Jayanta Dey, Raghunath Sahoo, Reinhard Stock
2023-10-04T10:28:11Z
http://arxiv.org/abs/2310.02711v2
# Anisotropy of magnetized quark matter

###### Abstract

Strong transient magnetic fields are generated in non-central relativistic heavy-ion collisions. These fields induce anisotropy within the strongly interacting medium that, in principle, can affect the thermodynamic properties of the medium. We use the Polyakov loop extended Nambu Jona-Lasinio (PNJL) model to study quark matter subjected to an external magnetic field at vanishing baryon chemical potential. We have estimated the degree of anisotropy in the speed of sound and the isothermal compressibility within the magnetized quark matter as a function of temperature and magnetic field. This study helps us to understand the extent of directionality generated in the initial stages of non-central collisions while giving us useful information about the system.

## I Introduction

One of the primary goals of relativistic heavy-ion collisions is to study the deconfined state of strongly interacting quarks and gluons in local thermal equilibrium, known as the quark-gluon plasma (QGP). In a non-central heavy-ion collision, the charged spectators move past the fireball at a relativistic speed. According to the Biot-Savart law, these moving charged particles create a large transient electromagnetic field, of the order of \(10^{18}\) G at the Large Hadron Collider (LHC) [1; 2]. Direct experimental evidence for the strength of the magnetic field (\(B\)) is yet to be obtained. However, recent measurements of the directed flow of \(D^{0}\) and \(\bar{D}^{0}\) at the Relativistic Heavy Ion Collider (RHIC) [3] and the Large Hadron Collider [4] indicate the creation of a strong magnetic field during the collision. The first-principles non-perturbative theory of the strong interaction, lattice quantum chromodynamics (lQCD), has found many interesting phenomena in the QGP in the presence of a magnetic field, such as the chiral magnetic effect [5] and magnetic and inverse magnetic catalysis [6]. However, experimental verification of these phenomena is yet to be achieved. Under an external magnetic field, the energy levels of the charged particles are quantized following the Landau quantization, which creates a momentum anisotropy affecting various thermodynamical [7; 8; 9] and dissipative quantities [10; 11; 12]. For instance, the thermodynamical pressure becomes anisotropic, with a longitudinal component along the magnetic field and a transverse component in the plane transverse to the field. This leads to anisotropy in many other thermodynamical quantities, such as the speed of sound (\(c_{s}\)) and the isothermal compressibility (\(\kappa_{\rm T}\)), as we will show here. In the strong-field limit, this leads to a dimensional reduction of the phase space (\(3\to 1\) dimensions) at the lowest Landau level (LLL). Moreover, the magnetic field influences the QCD phase diagram. Different effective QCD models are used to study the phase diagram in the \(B-T\) plane, such as the linear sigma model [13], the Nambu Jona-Lasinio (NJL) model, and its extended version, the Polyakov loop extended NJL (PNJL) model [14; 15]. Initial predictions of the PNJL model showed that the transition temperature (\(T_{c}\)) and the strength of the transition increase with \(eB\), leading to a first-order phase transition. However, lQCD calculations [6] found the opposite trend, namely that \(T_{c}\) decreases with \(eB\). The same was also observed in models such as the NJL and PNJL models [16; 17] with a magnetic-field-dependent coupling constant.
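At \(\mu_{B}=0\) the two anisotropic observables named above reduce to derivatives of the equation of state, \(c_{s}^{2}=\partial P/\partial\varepsilon\) and \(\kappa_{\rm T}=(1/n)(\partial n/\partial P)_{T}\), evaluated separately with the longitudinal and transverse pressures. A minimal numerical sketch of these definitions is given below; it assumes the model equation of state has already been tabulated (e.g. from the PNJL pressure at fixed \(eB\)) and simply takes finite-difference derivatives, so it illustrates the definitions rather than the PNJL calculation itself:

```python
import numpy as np

def speed_of_sound_sq(T, P, eps):
    """c_s^2 = dP/d(eps) along a fixed-eB, mu_B = 0 trajectory, from
    tabulated pressure P(T) and energy density eps(T); pass the
    longitudinal or transverse pressure to get the two components."""
    return np.gradient(P, T) / np.gradient(eps, T)

def isothermal_compressibility(n, P):
    """kappa_T = (1/n) dn/dP at fixed T, from a tabulated isotherm of
    number density n and pressure P."""
    return np.gradient(n, P) / n

# anisotropy measure, e.g. the ratio of the two speed-of-sound components:
# R = speed_of_sound_sq(T, P_par, eps) / speed_of_sound_sq(T, P_perp, eps)
```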
Many indirect probes and observables have been suggested to study the microscopic and bulk features of the QGP under the effect of the magnetic field. Theoretically, one can study the change in the thermodynamic observables to understand the changes in the deconfined medium in the presence of an external magnetic field. The behavior of certain thermodynamic observables, such as \(\kappa_{\rm T}\) and \(c_{\rm s}^{2}\), provides useful information about the nature of the phase transition of the system. \(\kappa_{\rm T}\) represents the rate of change of the volume of the system with respect to pressure at constant temperature. Precisely, \(\kappa_{\rm T}\) measures the extent to which the density of quarks and gluons changes in response to changes in the external pressure, which is an essential factor in determining the equation of state of the medium [18; 19; 20]. Moreover, it can tell us about the degree of deviation of a system from a perfect fluid. \(\kappa_{\rm T}\) is expected to show a sudden jump near the critical end-point (CEP), where the smooth crossover in the QCD phase diagram meets the first-order phase transition. Thus, it is an interesting observable to explore in the QCD phase diagram [21]. In the literature, \(\kappa_{\rm T}\) has been studied as a function of temperature and charged-particle multiplicity [22; 23; 24; 25; 26; 27]. In Ref. [23], high-temperature QCD matter was found to be the closest to a perfect fluid. On the other hand, the speed of sound reflects the propagation of small perturbations produced in the system in its local rest frame. Its dependence on the environment, i.e. temperature, density, and baryon chemical potential, means that it is an ideal probe to explore the
2301.02050
Open Charm Mesons and Charmonium states in Magnetized Strange Hadronic Medium at Finite Temperature
We investigate the masses of the pseudoscalar ($D$($D^0$, $D^+$), $\bar{D}$($\bar{D^0}$, $D^-$)) and vector open charm mesons ($D^*$($D^{*0}$, $D^{*+}$), ${\bar{D}}^*$(${\bar{D}}^{*0}$, $D^{*-}$)) as well as the pseudoscalar ($\eta_c(1S)$, $\eta_c(2S)$) and the vector charmonium states ($J/\psi$, $\psi(2S)$, $\psi(1D)$) in the asymmetric hot strange hadronic medium in the presence of strong magnetic fields. In the magnetized medium, the mass modification of open charm mesons due to their interactions with baryons and the scalar fields ($\sigma$, $\zeta$, and $\delta$) are investigated in a chiral effective model. Moreover, the charged pseudoscalar meson ($D^\pm$), as well as the longitudinal component of charged vector meson ($D^{*\pm \parallel}$), experience additional positive mass modifications in the magnetic field due to Landau quantization. The effect of the modification of gluon condensates simulated by the medium change of dilaton field $\chi$ on the masses of the charmonia is also calculated in the chiral effective model. At high temperatures, the magnetically induced modifications of scalar fields significantly reduce the in-medium masses of mesons. The effects of magnetically induced spin mixing between the pseudoscalar and the vector mesons are incorporated in our study. The spin mixing results in a positive mass shift for the longitudinal component of the vector mesons and a negative mass shift for the pseudoscalar mesons in the presence of the magnetic field. From the obtained in-medium mass shifts of charmonia and open charm mesons, we have also calculated the partial decay widths of $\psi(1D)$ to $D\bar{D}$, using a light quark pair creation model, namely the $^3P_0$ model. Spin mixing and strangeness fraction enhance the partial decay width at small magnetic fields.
Amal Jahan C. S., Amruta Mishra
2023-01-05T12:53:39Z
http://arxiv.org/abs/2301.02050v1
# Open Charm Mesons and Charmonium states in Magnetized Strange Hadronic Medium at Finite Temperature

###### Abstract

We investigate the masses of the pseudoscalar (\(D(D^{0},\,D^{+})\), \(\bar{D}(\bar{D^{0}},\,D^{-})\)) and vector open charm mesons (\(D^{*}(D^{*0},\,D^{*+})\), \(\bar{D}^{*}(\bar{D}^{*0},\,D^{*-})\)), as well as the pseudoscalar (\(\eta_{c}(1S)\), \(\eta_{c}(2S)\)) and the vector charmonium states (\(J/\psi\), \(\psi(2S)\), \(\psi(1D)\)), in the asymmetric hot strange hadronic medium in the presence of strong magnetic fields. In the magnetized medium, the mass modifications of the open charm mesons due to their interactions with the baryons and the scalar fields (\(\sigma\), \(\zeta\), and \(\delta\)) are investigated in a chiral effective model. Moreover, the charged pseudoscalar meson (\(D^{\pm}\)), as well as the longitudinal component of the charged vector meson (\(D^{*\pm\parallel}\)), experience additional positive mass modifications in the magnetic field due to Landau quantization. The effect of the modification of the gluon condensates, simulated by the medium change of the dilaton field \(\chi\), on the masses of the charmonia is also calculated in the chiral effective model. The contribution of the masses of the light quarks is also considered in the modification of the gluon condensates. At high temperatures, the magnetically induced modifications of the scalar fields significantly reduce the in-medium masses of the mesons. The effects of magnetically induced spin mixing between the pseudoscalar and the corresponding vector mesons are incorporated in our study. The spin-magnetic field interaction of these mesons is considered through a phenomenological effective Lagrangian interaction. The spin mixing results in a positive mass shift for the longitudinal component of the vector mesons and a negative mass shift for the pseudoscalar mesons in the presence of the magnetic field. From the obtained in-medium mass shifts of the charmonia and open charm mesons, we have also calculated the partial decay widths of \(\psi(1D)\) to \(D\bar{D}\), using a light quark pair creation model, namely the \({}^{3}P_{0}\) model. Spin mixing and strangeness fraction enhance the partial decay width at small magnetic fields.

## I Introduction

The effect of strong magnetic fields on the properties of hadrons has recently received significant interest due to the phenomenological consequences in relativistic heavy-ion collision experiments. The strength of the magnetic fields in such experiments could be as enormous as \(eB\sim 2{m_{\pi}}^{2}\sim 6\times 10^{18}\) Gauss at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and \(eB\sim 15{m_{\pi}}^{2}\sim 10^{19}\) Gauss at the Large Hadron Collider (LHC) at CERN [1; 2; 3; 4]. These studies indicate that the magnitude of the magnetic field depends on the energy of the collision as well as on the impact parameter, and that the field is produced in the early stages of the collision. Since charm quarks, due to their large mass, are formed by initial hard scatterings in the early stages of a heavy-ion collision, charm quark systems are sensitive to magnetic fields [5; 6]. Hence the properties of charmonium states and open charm mesons will undergo modifications in magnetic fields and must be investigated. Strong magnetic fields can modify the internal structure of mesons as well as the QCD condensates [5; 6].
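As a quick check of the field strengths quoted above, the natural-unit combination \(eB\) (in GeV\({}^{2}\)) can be converted to Gauss using \(1~{\rm GeV}^{2}\simeq 1.69\times 10^{20}\) G. The snippet below is a small back-of-the-envelope verification of our own, not part of the paper:

```python
# convert eB from GeV^2 to Gauss: 1 GeV^2 of eB corresponds to ~1.69e20 G
GEV2_TO_GAUSS = 1.69e20
m_pi = 0.138  # pion mass in GeV

for factor in (2, 15):          # eB ~ 2 m_pi^2 (RHIC), ~15 m_pi^2 (LHC)
    B = factor * m_pi**2 * GEV2_TO_GAUSS
    print(f"eB = {factor} m_pi^2  ->  B ~ {B:.1e} G")
# prints ~6.4e18 G and ~4.8e19 G, consistent with the order-of-magnitude
# estimates quoted in the text
```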
The chiral condensate in QCD, which is a measure of the spontaneous chiral symmetry breaking of the system, is modified by the magnetic field through the phenomena of magnetic catalysis at low temperatures and inverse magnetic catalysis at very high temperatures [7]. The gluon condensate is related to the breaking of scale invariance (the scale anomaly), by which the trace of the energy-momentum tensor becomes non-zero. The gluon condensates are also modified in strong magnetic fields, through gluon catalysis [8; 9; 10; 11]. The condensates are also modified by variations in the medium conditions, such as the baryon density, temperature, isospin asymmetry and strangeness content of the medium. When the hadrons interact with these condensates, their properties, such as masses and decay widths, are also modified in a magnetic field at finite baryon density and temperature. Open heavy-flavor mesons and various heavy quarkonium states in magnetic fields have been investigated in the QCD sum rule approach [5; 6; 12; 13; 14; 15; 16; 17; 18; 19] as well as using potential models [20; 21; 22; 23; 24]. The masses of heavy-flavor mesons have been investigated in the chiral effective model in the cold magnetized nuclear medium [25; 26; 27; 28] and in the magnetized strange hadronic medium [29; 30]. The partial decay widths of charmonium states to \(D\bar{D}\) in the cold magnetized nuclear medium are studied in Refs. [31; 32]. Besides the modification of the condensates, the magnetic field also introduces mixing of the spin eigenstates between the spin-singlet and spin-triplet states of heavy-flavor mesons [5; 6; 13; 14; 15; 16; 20; 21; 22; 23; 24; 33; 34; 35; 36; 37; 38; 39; 40]. Under strong magnetic fields, part of the spatial rotation symmetry is broken, and only the azimuthal component along the direction of the magnetic field remains [13]. Consequently, the spin state that can act as a good quantum number for the meson is the one along the direction of the magnetic field. Hence the pseudoscalar meson mixes with the longitudinal component of the corresponding vector meson. The transverse component of the vector charmonium state does not take part in this mixing. In Ref. [24], along with the spin mixing effect, the Zeeman splitting between the transverse components of the spin-triplet states of open charm mesons is also investigated. In Ref. [33], the vacuum masses of the pseudoscalar and vector charm mesons in the presence of the magnetic field are studied with the spin mixing effect incorporated through a phenomenological Lagrangian interaction. The masses of charmonia in the cold magnetized nuclear medium have been calculated, accounting for their spin mixing effect, in Ref. [34], and the same investigation for open bottom mesons and bottomonium states is carried out in Ref. [35]. In Ref. [34], the partial decay width of \(\psi(1D)\) to \(D\bar{D}\) in the cold magnetized nuclear medium has been calculated using the field-theoretic model for composite hadrons as well as using an effective hadronic model. In that study, the effect of spin mixing on the in-medium masses of the open charm mesons was not considered. Recently, the properties of heavy-flavor mesons in the cold magnetized nuclear medium have been investigated incorporating the effects of magnetic catalysis and spin mixing [36; 37; 38; 39; 40]. In Ref. [29], we have investigated the masses of the pseudoscalar open charm mesons and the vector charmonia in the magnetized strange medium without considering the effects of spin mixing and finite temperature.
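The qualitative effect of this pseudoscalar-vector (PV) mixing can be illustrated with a schematic two-level calculation: diagonalizing a \(2\times 2\) mass-squared matrix with an off-diagonal, \(eB\)-dependent coupling produces level repulsion, lowering the pseudoscalar and raising the longitudinal vector state, exactly the pattern described above. The sketch below is generic rather than the phenomenological Lagrangian of the cited works: the coupling `gamma_b` is a stand-in for the \(eB\)-dependent mixing strength, and the usage example assumes approximate vacuum \(D\) and \(D^{*}\) masses:

```python
import numpy as np

def pv_mixed_masses(m_p, m_v, gamma_b):
    """Schematic two-level mixing of the pseudoscalar (P) and the
    longitudinal vector (V||) states: eigenvalues of the mass-squared
    matrix [[m_p^2, gamma_b], [gamma_b, m_v^2]] (masses in GeV,
    gamma_b in GeV^2). Level repulsion lowers P and raises V||."""
    avg = 0.5 * (m_p**2 + m_v**2)
    delta = np.hypot(0.5 * (m_v**2 - m_p**2), gamma_b)
    return np.sqrt(avg - delta), np.sqrt(avg + delta)

# e.g. with approximate vacuum D and D* masses:
# m_p_mix, m_v_mix = pv_mixed_masses(1.867, 2.009, gamma_b=0.1)
# -> the P-like state moves below 1.867, the V||-like state above 2.009
```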
In the present work, we investigate the masses of the pseudoscalar and the vector open charm mesons, as well as the pseudoscalar and the vector charmonium states, in the hot magnetized strange hadronic medium. In the present study, the magnetically induced mixing of the pseudoscalar and vector mesons is also taken into account through the effective Lagrangian interaction [5; 6; 13; 33; 34]. In the magnetized medium, the mass modifications of the pseudoscalar open charm mesons, accounting for the modification of the light quark condensates, and the mass modifications of the charmonium states, accounting for the modification of the gluon condensates, are calculated within the chiral effective model [25; 27; 29]. Within the chiral model, the modifications of the light quark condensates are calculated from the modifications of the scalar fields (\(\sigma\), \(\zeta\), \(\delta\)), and those of the gluon condensates are calculated from the medium change of the dilaton field (\(\chi\)), introduced through a scale breaking term in the Lagrangian. The contribution of the mass term of the light quarks to the modification of the gluon condensates is also incorporated in the present study [29]. The in-medium masses of the vector \(D^{*}\), \(\bar{D}^{*}\) mesons are calculated by assuming that the magnitude of their mass shifts due to the interaction with nucleons and scalar fields is similar to that of the pseudoscalar open charm mesons. The charged open charm mesons, \(D^{\pm}(D^{*\pm\parallel})\), experience an additional positive mass shift in magnetic fields due to Landau quantization. From the mass modifications of both \(\psi(1D)\) and the open charm mesons, accounting for the spin mixing effect, we compute the decay widths of \(\psi(1D)\) to \(D\bar{D}\) pairs in the magnetized medium using a light quark pair creation model, the \({}^{3}P_{0}\) model [31; 41].

The outline of the paper is as follows. In Section II, we describe the effects of the magnetized medium on the mass modifications of the open charm mesons and charmonium states using a chiral effective model. We also describe the incorporation of the magnetically induced spin mixing on the masses of the pseudoscalar and vector mesons using a phenomenological interaction Lagrangian in the same section. Section III describes the mathematical formalism of the \({}^{3}P_{0}\) model and the expressions for the partial decay widths of \(\psi(1D)\) to the \(D\bar{D}\) pair. In Section IV, we discuss and analyze the results obtained, and we summarize our findings in Section V.

## II Masses of open charm mesons and charmonia in hot magnetized medium

In this section, we discuss the modifications of the masses of the pseudoscalar (P) and vector (V) open charm mesons and charmonium states in strange hadronic matter at finite temperatures in strong magnetic fields. In the magnetized medium, the open charm mesons acquire mass shifts due to the medium modification of the light quark condensates, whereas the charmonium states experience mass modifications due to the modification of the gluon condensates. Such modifications are taken into consideration using a chiral effective model. The hadronic Lagrangian density in the chiral effective model [25; 29; 34] is given as

\[\mathcal{L}_{\rm eff}=\mathcal{L}_{\rm kin}+\sum_{W=X,Y,A,V,u}\mathcal{L}_{\rm BW}+\mathcal{L}_{\rm vec}+\mathcal{L}_{0}+\mathcal{L}_{\rm scale\ break}+\mathcal{L}_{\rm SB}+\mathcal{L}_{\rm mag}^{\rm B\gamma}. \tag{1}\]

In this equation, \(\mathcal{L}_{\rm kin}\) refers to the kinetic energy terms of the mesons and baryons.
\(\mathcal{L}_{\rm BW}\) is the baryon-meson interaction term, where the index \(W\) covers both spin-0 and spin-1 mesons. \({\cal L}_{\rm vec}\) concerns the dynamical mass generation of the vector mesons through couplings with the scalar mesons, apart from bearing the quartic self-interaction terms of these mesons. \({\cal L}_{0}\) contains the meson-meson interaction terms introducing the spontaneous breaking of chiral symmetry, and \({\cal L}_{\rm scale\ break}\) incorporates the scale invariance breaking of QCD through a logarithmic potential given in terms of the scalar dilaton field \(\chi\). \({\cal L}_{\rm SB}\) corresponds to the explicit chiral symmetry breaking term. Finally, \({\cal L}_{\rm mag}^{\rm B\gamma}\) is the contribution of the magnetic field, which describes the interactions of the octet baryons with the magnetic field. We choose the magnetic field to be uniform and along the z-axis. This magnetic part of the Lagrangian contains vectorial as well as tensorial interactions of the baryons with the electromagnetic field [25]. The tensorial interaction is related to the anomalous magnetic moment (AMM) of the baryons [25; 29]. We then invoke the mean-field approximation, where the fermions are treated as quantum fields and the mesons as classical fields. In this approximation, only the scalar and the vector fields contribute, as the expectation values vanish for all other terms. The magnetic field introduces Landau quantization for the charged baryons. For the neutral baryons, the magnetic field acts only through their AMM. The effects of temperature are introduced through the Fermi distribution functions in the expressions for the scalar and number densities of the baryons [30]. The scalar fields depend on the scalar densities of the baryons and are modified with changes in the baryon density, magnetic field, strangeness content of the medium, and temperature [29; 30]. The equations of motion for the scalar fields (the non-strange field \(\sigma\), the strange field \(\zeta\), the isovector field \(\delta\), and the dilaton field \(\chi\)) are solved as functions of the magnetic field for isospin asymmetric hadronic matter (\(\eta=0.5\)) at the nuclear matter saturation density for different values of the strangeness fraction (\(f_{s}\)) and temperature (\(T\)). The modifications of the scalar fields in the hot magnetized strange hadronic medium in the chiral effective model have already been investigated in Ref. [30].

The in-medium masses of the open charm mesons are investigated using the chiral effective Lagrangian approach. Here the chiral \(SU(3)\) model has been generalized to chiral \(SU(4)\) to include the charm degrees of freedom [25; 29]. The mass modifications of the pseudoscalar (P) open charm mesons \(D(D^{0}\), \(D^{+})\) and \(\bar{D}(\bar{D^{0}}\), \(D^{-})\) arise due to their interactions with the baryons and the scalar fields (\(\sigma\), \(\zeta\), and \(\delta\)) in the presence of the magnetic field [25; 29]. The interaction Lagrangian of these mesons gives rise to their equations of motion, whose Fourier transforms lead to the dispersion relations for the pseudoscalar \(D\) and \(\bar{D}\) mesons. In the rest frame, the dispersion relation is given as

\[-\omega^{2}+{m_{D(\bar{D})}}^{2}-\Pi_{D(\bar{D})}(\omega,0)=0, \tag{2}\]

where \(m_{D(\bar{D})}\) is the vacuum mass. The medium effects on these mesons are incorporated in their self-energy, denoted by \(\Pi_{D(\bar{D})}(\omega,0)\). The expression for the self-energy of the \(D\) mesons in the magnetized strange hadronic matter is given as [29]
\[\begin{split}\Pi_{D}(\omega,0)=&\frac{1}{4f_{D}^{2}}\Big[3(\rho_{p}+\rho_{n})\pm(\rho_{p}-\rho_{n})+2\big((\rho_{\Sigma^{+}}+\rho_{\Sigma^{-}})\pm(\rho_{\Sigma^{+}}-\rho_{\Sigma^{-}})\big)\\ &+2(\rho_{\Lambda^{0}}+\rho_{\Sigma^{0}})+\big((\rho_{\Xi^{0}}+\rho_{\Xi^{-}})\pm(\rho_{\Xi^{0}}-\rho_{\Xi^{-}})\big)\Big]\omega\\ &+\frac{m_{D}^{2}}{2f_{D}}(\sigma^{\prime}+\sqrt{2}\zeta_{c}^{\prime}\pm\delta^{\prime})+\Big[-\frac{1}{f_{D}}(\sigma^{\prime}+\sqrt{2}\zeta_{c}^{\prime}\pm\delta^{\prime})+\frac{d_{1}}{2f_{D}^{2}}\big(\rho_{p}^{s}+\rho_{n}^{s}+\rho_{\Lambda^{0}}^{s}+\rho_{\Sigma^{+}}^{s}+\rho_{\Sigma^{0}}^{s}+\rho_{\Sigma^{-}}^{s}+\rho_{\Xi^{0}}^{s}+\rho_{\Xi^{-}}^{s}\big)\\ &+\frac{d_{2}}{4f_{D}^{2}}\Big((\rho_{p}^{s}+\rho_{n}^{s})\pm(\rho_{p}^{s}-\rho_{n}^{s})+\frac{1}{3}\rho_{\Lambda^{0}}^{s}+(\rho_{\Sigma^{+}}^{s}+\rho_{\Sigma^{-}}^{s})\pm(\rho_{\Sigma^{+}}^{s}-\rho_{\Sigma^{-}}^{s})+\rho_{\Sigma^{0}}^{s}\Big)\Big]\omega^{2},\end{split} \tag{3}\]

where the \(\pm\) signs refer to \(D^{0}\) and \(D^{+}\), respectively. For the \(\bar{D}\) mesons, the self-energy is given as

\[\begin{split}\Pi_{\bar{D}}(\omega,0)=&-\frac{1}{4f_{D}^{2}}\Big[3(\rho_{p}+\rho_{n})\pm(\rho_{p}-\rho_{n})+2\big((\rho_{\Sigma^{+}}+\rho_{\Sigma^{-}})\pm(\rho_{\Sigma^{+}}-\rho_{\Sigma^{-}})\big)\\ &+2(\rho_{\Lambda^{0}}+\rho_{\Sigma^{0}})+\big((\rho_{\Xi^{0}}+\rho_{\Xi^{-}})\pm(\rho_{\Xi^{0}}-\rho_{\Xi^{-}})\big)\Big]\omega\\ &+\frac{m_{D}^{2}}{2f_{D}}(\sigma^{\prime}+\sqrt{2}\zeta_{c}^{\prime}\pm\delta^{\prime})+\Big[-\frac{1}{f_{D}}(\sigma^{\prime}+\sqrt{2}\zeta_{c}^{\prime}\pm\delta^{\prime})+\frac{d_{1}}{2f_{D}^{2}}\big(\rho_{p}^{s}+\rho_{n}^{s}+\rho_{\Lambda^{0}}^{s}+\rho_{\Sigma^{+}}^{s}+\rho_{\Sigma^{0}}^{s}+\rho_{\Sigma^{-}}^{s}+\rho_{\Xi^{0}}^{s}+\rho_{\Xi^{-}}^{s}\big)\\ &+\frac{d_{2}}{4f_{D}^{2}}\Big((\rho_{p}^{s}+\rho_{n}^{s})\pm(\rho_{p}^{s}-\rho_{n}^{s})+\frac{1}{3}\rho_{\Lambda^{0}}^{s}+(\rho_{\Sigma^{+}}^{s}+\rho_{\Sigma^{-}}^{s})\pm(\rho_{\Sigma^{+}}^{s}-\rho_{\Sigma^{-}}^{s})+\rho_{\Sigma^{0}}^{s}\Big)\Big]\omega^{2}.\end{split} \tag{4}\]

In eq.(3) and eq.(4), \(\sigma^{\prime}=(\sigma-\sigma_{0})\), \(\zeta_{c}^{\prime}=(\zeta_{c}-\zeta_{c0})\), and \(\delta^{\prime}=(\delta-\delta_{0})\) denote the fluctuations of the scalar fields from their vacuum values. The fluctuation \(\zeta_{c}^{\prime}\) has been observed to be negligible [42], and its contribution to the in-medium masses of the open charm mesons is neglected in the present investigation. Here \(f_{D}\) refers to the decay constant of the \(D\) mesons. The parameters \(d_{1}\) and \(d_{2}\) are determined by a fit to the empirical values of the kaon-nucleon scattering lengths [43; 44; 45] for the \(I=0\) and \(I=1\) channels [46; 47]. The dispersion relations are solved at various values of the magnetic field, strangeness fraction, and temperature to obtain the masses of these mesons. Since the mass modifications of the mesons depend upon the modifications of the scalar densities, number densities, and scalar fields, the effects of baryon density, isospin asymmetry, strangeness fraction, temperature, and magnetic field are reflected in their in-medium masses. For the neutral mesons (\(D^{0}\), \(\bar{D^{0}}\)), the in-medium effective masses are the solutions of the dispersion relations, given as

\[m^{eff}_{D^{0}(\bar{D^{0}})}=m^{*}_{D^{0}(\bar{D^{0}})}. \tag{5}\]
We assume that the mass shift of the vector (V) open charm mesons \(D^{*}(\bar{D}^{*})\) from their vacuum mass at the nuclear matter saturation density, arising from the medium modification of the scalar fields and baryons, is similar in magnitude to the mass shift of the pseudoscalar open charm mesons. Such identical mass shifts in hadronic matter are obtained for the vector and pseudoscalar open charm mesons in the quark-meson coupling model [48; 49]. Hence, we have the relation \(\Delta m_{D^{*}(\bar{D}^{*})}\equiv m^{*}_{D^{*}(\bar{D}^{*})}-m_{D^{*}(\bar{D}^{*})}=m^{*}_{D(\bar{D})}-m_{D(\bar{D})}=\Delta m_{D(\bar{D})}\), where \(m_{D^{*}(\bar{D}^{*})}\) is the vacuum mass of the vector open charm mesons. Moreover, the charged pseudoscalar mesons (\(D^{\pm}\)) and the longitudinal component of the charged vector open charm mesons (\(D^{*\pm}\)) receive additional positive mass modifications in magnetic fields through Landau quantization, retaining only the lowest Landau level, given as [29; 35; 36; 38]

\[m^{eff}_{D^{\pm}}=\sqrt{m^{*2}_{D^{\pm}}+|eB|},\qquad m^{eff}_{D^{*\pm}}=\sqrt{m^{*2}_{D^{*\pm}}+|eB|}. \tag{6}\]

In the chiral effective model, the leading order mass shift formula for the charmonium states is given as [29; 50]

\[\Delta m_{P,V}=\frac{1}{18}\int dk^{2}\Big\langle\Big|\frac{\partial\psi(\vec{k})}{\partial\vec{k}}\Big|^{2}\Big\rangle\frac{k}{k^{2}/m_{c}+\epsilon}\times\bigg(\Big\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\Big\rangle-\Big\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\Big\rangle_{0}\bigg), \tag{7}\]

where

\[\Big\langle\Big|\frac{\partial\psi(\vec{k})}{\partial\vec{k}}\Big|^{2}\Big\rangle=\frac{1}{4\pi}\int\Big|\frac{\partial\psi(\vec{k})}{\partial\vec{k}}\Big|^{2}d\Omega. \tag{8}\]

In the above, \(m_{c}\) denotes the mass of the charm quark, and \(\epsilon=2m_{c}-m_{P,V}\) represents the binding energy of the charmonium state, where \(m_{P,V}\) is the vacuum mass of the pseudoscalar or vector charmonium state concerned. Here, \(\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\rangle\) and \(\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\rangle_{0}\) are the expectation values of the scalar gluon condensate in the magnetized medium and in the vacuum, respectively, and \(\psi(k)\) is the harmonic oscillator wave function [27; 29; 50; 51] of the charmonium state in momentum space, normalized as \(\int\frac{d^{3}k}{(2\pi)^{3}}|\psi(k)|^{2}=1\). For \(N_{f}=3\), the modification of the scalar gluon condensate in the chiral effective model is given by

\[\bigg(\Big\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\Big\rangle-\Big\langle\frac{\alpha_{s}}{\pi}G^{a}_{\mu\nu}G^{\mu\nu a}\Big\rangle_{0}\bigg)=\frac{8}{9}\left[(1-d)\left(\chi^{4}-\chi_{0}^{4}\right)+m_{\pi}^{2}f_{\pi}\sigma^{\prime}+\left(\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\right)\zeta^{\prime}\right]. \tag{9}\]

Here \(\chi\) and \(\chi_{0}\) are the values of the dilaton field in the magnetized medium and in the vacuum, respectively. The terms proportional to \(\sigma^{\prime}=(\sigma-\sigma_{0})\) and \(\zeta^{\prime}=(\zeta-\zeta_{0})\) originate from the finite quark mass term \(\sum_{i}m_{i}\bar{q}_{i}q_{i}\) in the expression for the energy-momentum tensor of QCD [29; 30]. Here, \(d\) is a parameter related to the QCD beta function [29], \(f_{K}\) is the kaon decay constant, \(f_{\pi}\) is the pion decay constant, and \(m_{K}\), \(m_{\pi}\) are their respective vacuum masses. The effective masses of the pseudoscalar and vector charmonium states in the medium are then given as
\[m_{P,V}^{eff}=m_{P,V}+\Delta m_{P,V}. \tag{10}\]

Moreover, in the presence of the magnetic field, the coupling of a particle's spin with the magnetic field results in M1 transitions, which convert spin-1 states into spin-0 states by the emission of a photon [13]. Such an interaction results in mixing between the pseudoscalar mesons and the longitudinal component of the vector mesons [5; 6; 13; 33; 34]. The effect of spin mixing on the open charm mesons and charmonium states is taken into account through a phenomenological effective Lagrangian interaction \(\mathcal{L}_{PV\gamma}\) [5; 6; 13; 33; 34], given as

\[\mathcal{L}_{PV\gamma}=\frac{g_{PV}}{m_{av}}e\tilde{F}_{\mu\nu}(\partial^{\mu}P)V^{\nu}. \tag{11}\]

Here \(g_{PV}\) is the dimensionless spin-mixing coupling parameter, \(e\) is the unit electric charge, \(m_{av}\) is the average mass of the pseudoscalar and vector mesons, and \(\tilde{F}_{\mu\nu}\) is the dual electromagnetic field strength tensor. The coupling parameter \(g_{PV}\) is calculated from the observed value of the vacuum radiative decay width \(\Gamma(V\to P+\gamma)\) through the expression

\[\Gamma(V\to P\gamma)=\frac{e^{2}}{12}\frac{g_{PV}^{2}{p_{cm}}^{3}}{\pi m_{av}^{2}}. \tag{12}\]

Here, \(p_{cm}=(m_{V}^{2}-m_{P}^{2})/(2m_{V})\) is the magnitude of the center of mass momentum in the final state, where \(m_{V}\) and \(m_{P}\) are the vacuum masses of the corresponding vector and pseudoscalar mesons, respectively. From the effective phenomenological Lagrangian [5], we obtain the masses of the pseudoscalar mesons and of the longitudinal component of the vector mesons (\(V^{\parallel}\)) after incorporating the mixing effects, given by [5; 6; 13; 33; 34]

\[\left(m_{P,V^{\parallel}}^{(PV)}\right)^{2}=\frac{1}{2}\Bigg(M_{+}^{2}+\frac{c_{PV}^{2}}{m_{av}^{2}}\mp\sqrt{M_{-}^{4}+\frac{2c_{PV}^{2}M_{+}^{2}}{m_{av}^{2}}+\frac{c_{PV}^{4}}{m_{av}^{4}}}\Bigg), \tag{13}\]

where \(M_{+}^{2}={m_{P}^{eff}}^{2}+{m_{V}^{eff}}^{2}\), \(M_{-}^{2}={m_{P}^{eff}}^{2}-{m_{V}^{eff}}^{2}\), and \(c_{PV}=g_{PV}eB\). Here \(m_{P}^{eff}\) and \(m_{V}^{eff}\) are the effective masses of the pure states of the open charm mesons and charmonia in the medium, calculated in the chiral effective model. The \(+\) and \(-\) signs are for the vector and pseudoscalar states, respectively, indicating that the mass of the longitudinal component of the vector meson increases with the magnetic field, and that of the pseudoscalar meson drops, under the effect of spin mixing. Hence spin mixing introduces a level repulsion between the mixing partners. For the neutral open charm mesons and charmonia, in the absence of a medium, the effect of the magnetic field enters through the spin mixing effect only. For the charged mesons, in the absence of a medium, the effect of the magnetic field enters through Landau quantization and the spin mixing effect. In the hadronic medium, the modifications of the scalar fields, number densities, and scalar densities of the baryons with respect to changes in the baryon density, isospin asymmetry, magnetic field, temperature, and strangeness fraction also contribute to the effective masses of the mesons, along with the spin mixing and Landau quantization effects.

## III Decay widths of \(\psi(1D)\) to \(D\bar{D}\) within the \({}^{3}P_{0}\) model

In this section, we describe the partial decay widths of the vector charmonium state \(\psi(1D)\) to \(D\bar{D}\) under strong magnetic fields using the \({}^{3}P_{0}\) model.
In this model, a light quark-antiquark pair is created in the \({}^{3}P_{0}\) state, and this light quark (antiquark) combines with the heavy charm antiquark (charm quark) of the decaying charmonium state at rest, resulting in the production of the open charm (\(D\), \(\bar{D}\)) mesons. When spin mixing is taken into account, the general expression for the partial width of the longitudinal component of \(\psi(1D)\) is given as [41]

\[\Gamma_{\psi^{\parallel}(1D)\to D\bar{D}}=2\pi\frac{p_{D}E_{D}E_{\bar{D}}}{m_{\psi^{\parallel}(1D)}^{(PV)}}\sum_{LS}\left|M_{LS}\right|^{2}, \tag{14}\]

where \(M_{LS}\) is the invariant matrix element representing the decay of the parent charmonium to the \(D\bar{D}\) pair. This matrix element involves an overlap integral of the momentum space wave functions of the parent and the daughter mesons. Here \(p_{D}\) is given as [31; 50]

\[p_{D}=\left(\frac{(m_{\psi^{\parallel}(1D)}^{(PV)})^{2}}{4}-\frac{(m_{D}^{(PV)})^{2}+(m_{\bar{D}}^{(PV)})^{2}}{2}+\frac{((m_{D}^{(PV)})^{2}-(m_{\bar{D}}^{(PV)})^{2})^{2}}{4(m_{\psi^{\parallel}(1D)}^{(PV)})^{2}}\right)^{1/2}. \tag{15}\]

Here \(m^{(PV)}_{\psi^{\parallel}(1D)}\) is the in-medium mass of the longitudinal component of the charmonium state, and \(m^{(PV)}_{D}\), \(m^{(PV)}_{\bar{D}}\) are the in-medium masses of the outgoing \(D\) and \(\bar{D}\) mesons in the magnetic field after spin mixing. \(E_{D}\) and \(E_{\bar{D}}\) denote the energies of the outgoing \(D\) and \(\bar{D}\) mesons, given as

\[E_{D}=\big(p_{D}^{2}+(m^{(PV)}_{D})^{2}\big)^{1/2},\qquad E_{\bar{D}}=\big(p_{D}^{2}+(m^{(PV)}_{\bar{D}})^{2}\big)^{1/2}. \tag{16}\]

There are two possible decay channels for the charmonium state: through the \(D^{0}\bar{D}^{0}\) channel and through the \(D^{+}D^{-}\) channel. The expression for the partial decay width of \(\psi^{\parallel}(1D)\) is given as [31; 41; 50]

\[\Gamma_{\psi^{\parallel}(1D)\to D\bar{D}}=\pi^{1/2}\frac{E_{D}E_{\bar{D}}\gamma^{2}}{2m^{(PV)}_{\psi^{\parallel}(1D)}}\frac{2^{11}\,5}{3^{2}}\Bigg(\frac{r}{1+2r^{2}}\Bigg)^{7}x^{3}\Bigg(1-\frac{1+r^{2}}{5(1+2r^{2})}x^{2}\Bigg)^{2}\exp\Bigg(-\frac{x^{2}}{2(1+2r^{2})}\Bigg). \tag{17}\]

Here \(\gamma\) is the coupling strength of the \({}^{3}P_{0}\) vertex and signifies the probability of creating a light quark-antiquark pair. The ratio \(r=\beta/\beta_{D}\) is a constant for a particular charmonium state, where \(\beta\) is the strength of the harmonic potential of the parent charmonium state and \(\beta_{D}\) is the strength of the harmonic potential of the daughter \(D(\bar{D})\) mesons. The scaled momentum \(x\) is defined as \(x=p_{D}/\beta_{D}\). The dependence of the partial decay width of the charmonium state on the magnetic field, baryon density, and isospin asymmetry is encoded in the scaled momentum \(x\) and in the masses of the charmonium state and open charm mesons. When spin mixing is not considered, the masses of the open charm mesons and charmonia in eq.(15), eq.(16) and eq.(17) are taken to be the effective masses of the pure states.
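Since eqs.(13), (15), (16) and (17) fully determine the widths once the input masses are fixed, the chain can be checked numerically. The following is a minimal Python sketch (not part of the original formalism; the helper names are ours), using the vacuum inputs quoted later in the text (\(m_{\psi(1D)}=3773\) MeV, \(m_{D^{0}}=1864.8\) MeV, \(m_{D^{\pm}}=1869.65\) MeV, \(\beta=368\) MeV, \(\beta_{D}=310\) MeV, \(\gamma=0.33\)); it reproduces the quoted vacuum widths of 16.28 MeV and 12.44 MeV to within a few percent, the residue presumably reflecting rounding of the inputs.

```python
import numpy as np

def mixed_masses(m_P, m_V, g_PV, eB):
    """Eq.(13): masses of the pseudoscalar state and the longitudinal
    vector state after spin mixing. Masses in MeV, eB in MeV^2."""
    m_av = 0.5 * (m_P + m_V)
    c = g_PV * eB                              # c_PV = g_PV * eB
    Mp2 = m_P**2 + m_V**2                      # M_+^2
    Mm2 = m_P**2 - m_V**2                      # M_-^2
    root = np.sqrt(Mm2**2 + 2.0*(c/m_av)**2*Mp2 + (c/m_av)**4)
    return (np.sqrt(0.5*(Mp2 + (c/m_av)**2 - root)),   # pseudoscalar
            np.sqrt(0.5*(Mp2 + (c/m_av)**2 + root)))   # longitudinal vector

def p_D(m_psi, m_D, m_Dbar):
    """Eq.(15): momentum of the outgoing D meson in the psi(1D) rest frame."""
    return np.sqrt(m_psi**2/4.0 - (m_D**2 + m_Dbar**2)/2.0
                   + (m_D**2 - m_Dbar**2)**2/(4.0*m_psi**2))

def width_3P0(m_psi, m_D, m_Dbar, beta=368.0, beta_D=310.0, gamma=0.33):
    """Eq.(17): partial width (MeV) of psi(1D) -> D Dbar in the 3P0 model."""
    p = p_D(m_psi, m_D, m_Dbar)
    E_D, E_Db = np.hypot(p, m_D), np.hypot(p, m_Dbar)   # eq.(16)
    r, x = beta/beta_D, p/beta_D
    pref = np.sqrt(np.pi)*E_D*E_Db*gamma**2/(2.0*m_psi) * 2**11*5/3**2
    return (pref * (r/(1.0 + 2.0*r**2))**7 * x**3
            * (1.0 - (1.0 + r**2)/(5.0*(1.0 + 2.0*r**2))*x**2)**2
            * np.exp(-x**2/(2.0*(1.0 + 2.0*r**2))))

# Vacuum check against the widths quoted in the text:
print(width_3P0(3773.0, 1864.8, 1864.8))    # ~16 MeV for D0 D0bar (quoted: 16.28)
print(width_3P0(3773.0, 1869.65, 1869.65))  # ~12 MeV for D+ D-   (quoted: 12.44)
```

With the in-medium, spin-mixed masses from eq.(13) inserted in place of the vacuum ones, the same two calls give the field-dependent widths \(\Gamma_{1}\) and \(\Gamma_{2}\) discussed in the next section.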
## IV Results and discussion

We have investigated the masses of the pseudoscalar (\(D(D^{0}\), \(D^{+})\), \(\bar{D}(\bar{D^{0}}\), \(D^{-})\)) and the vector (\(D^{*}(D^{*0}\), \(D^{*+})\), \(\bar{D}^{*}(\bar{D}^{*0}\), \(D^{*-})\)) open charm mesons, as well as the pseudoscalar (\(\eta_{c}\equiv\eta_{c}(1S)\), \(\eta^{\prime}_{c}\equiv\eta_{c}(2S)\)) and the vector charmonium states (\(J/\psi\), \(\psi(2S)\), \(\psi(1D)\equiv\psi(3770)\)), in isospin asymmetric strange hadronic medium at finite temperature in the presence of strong magnetic fields. The masses of the open charm mesons due to the modification of the scalar fields and baryons in the hot magnetized medium are obtained by solving the dispersion relation given in eq.(2). The mass modification of the vector open charm mesons from the medium modifications of the scalar fields and baryons is assumed to be similar in magnitude to that of the pseudoscalar mesons. The effect of Landau quantization on the charged mesons in the presence of magnetic fields is incorporated using eq.(6).

Figure 1: (Color online) The masses of the pseudoscalar \(D^{0}\) meson and the longitudinal component of the vector \(D^{*0}\) meson, plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}\) = 0, 0.3, 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(D^{0}\) and \(D^{*0\parallel}\) on their in-medium masses, calculated using eq.(13), are shown and compared to the case where the mixing effects are ignored (dotted lines).

Figure 2: (Color online) The masses of the pseudoscalar \(D^{+}\) meson and the longitudinal component of the vector \(D^{*+}\) meson, plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}\) = 0, 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(D^{+}\) and \(D^{*+\parallel}\), as well as Landau quantization, on their in-medium masses are shown. These plots are compared to the case where only the mixing effects are ignored (dotted lines), and where both the mixing and Landau quantization effects are ignored (dashed-dotted lines).

Figure 3: (Color online) The masses of the pseudoscalar \(\bar{D^{0}}\) meson and the longitudinal component of the vector \(\bar{D}^{*0}\) meson, plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}\) = 0, 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(\bar{D^{0}}\) and \(\bar{D}^{*0\parallel}\) on their in-medium masses are shown and compared to the case where the mixing effects are ignored (dotted lines).

Figure 4: (Color online) The masses of the pseudoscalar \(D^{-}\) meson and the longitudinal component of the vector \(D^{*-}\) meson, plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}\) = 0, 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(D^{-}\) and \(D^{*-\parallel}\), as well as Landau quantization, on their in-medium masses are shown. These plots are compared to the case where only the mixing effects are ignored (dotted lines), and where both the mixing and Landau quantization effects are ignored (dashed-dotted lines).
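As an aside on magnitudes, the Landau-level shift of eq.(6) can be evaluated directly. The short sketch below is our own illustration (assuming \(m_{\pi}\simeq 139.57\) MeV in the \(eB/m_{\pi}^{2}\) units); it shows that at \(eB=4m_{\pi}^{2}\) a \(D^{+}\) of vacuum mass 1869.65 MeV is pushed up by roughly 21 MeV.

```python
import numpy as np

m_pi = 139.57   # MeV; assumed pion mass entering the eB/m_pi^2 units

def landau_mass(m_star, eB_over_mpi2):
    """Eq.(6): lowest-Landau-level effective mass of a charged meson (MeV)."""
    return np.sqrt(m_star**2 + eB_over_mpi2*m_pi**2)

print(landau_mass(1869.65, 4.0))   # ~1890.4 MeV, i.e. a ~21 MeV upward shift
```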
The mass shifts of the charmonium states due to the modification of the gluon condensates in the medium are obtained using eq.(7) and eq.(9). Finally, the effects of the magnetically induced spin mixing of \(D^{*0}-D^{0}\), \(D^{*+}-D^{+}\), \(\bar{D}^{*0}-\bar{D}^{0}\) and \(D^{*-}-D^{-}\) on the masses of these open charm mesons, and the effects of the mixing of \(J/\psi-\eta_{c}(1S)\), \(\psi(2S)-\eta_{c}(2S)\) and \(\psi(1D)-\eta_{c}(2S)\) on the masses of the charmonia, are incorporated using eq.(13). The values of the various parameters of the chiral model and their fitting procedure are given in Ref. [29]. The vacuum masses of the open charm mesons are taken to be \(m_{D^{*+}}=m_{D^{*-}}=2010.26\) MeV, \(m_{D^{*0}}=m_{\bar{D}^{*0}}=2006.85\) MeV, \(m_{D^{+}}=m_{D^{-}}=1869.65\) MeV and \(m_{D^{0}}=m_{\bar{D^{0}}}=1864.8\) MeV. The value of the mixing coupling parameter \(g_{PV}\equiv g_{D^{+}D^{*+}}\) is taken to be 0.9089, obtained using eq.(12) from the observed vacuum value of the radiative decay width \(\Gamma(D^{*+}\to D^{+}\gamma)=1.33\) keV [33]. The value of \(g_{D^{-}D^{*-}}\) is also taken to be 0.9089, since \(D^{-}\) and \(D^{*-}\) are the charge conjugate particles of \(D^{+}\) and \(D^{*+}\). For the neutral open charm mesons, the coupling parameter \(g_{D^{0}D^{*0}}\) can be determined from the partial decay width \(\Gamma(D^{*0}\to D^{0}\gamma)\), which is 35.3 percent of the total decay width of \(D^{*0}\). However, the experimental value of the total decay width of \(D^{*0}\) is not known with sufficient accuracy. Hence, the partial decay width \(\Gamma(D^{*0}\to D^{0}\gamma)\) is obtained as 19.593 keV by first calculating the partial decay width \(\Gamma(D^{*0}\to D^{0}\pi^{0})\) and then making use of the observed branching ratios of the pionic and radiative decay modes of \(D^{*0}\), as given in Ref. [33]. From this value of the partial radiative decay width, the coupling parameter \(g_{D^{0}D^{*0}}\) is taken to be 3.426 in our investigation, which may be compared to the value of 3.6736 given in Ref. [13]. The value of the mixing coupling parameter \(g_{\bar{D^{0}}\bar{D}^{*0}}\) for the decay (\(\bar{D}^{*0}\rightarrow\bar{D^{0}}\gamma\)) is taken to be the same as the value of \(g_{D^{0}D^{*0}}\), due to charge conjugation symmetry.

In Figures 1, 2, 3, and 4, the masses of the pseudoscalar open charm mesons and of the longitudinal component of the vector open charm mesons after spin mixing are plotted as functions of the magnetic field (\(eB/{m_{\pi}}^{2}\)) in asymmetric (\(\eta=0.5\)) magnetized hadronic matter at the nuclear matter saturation density (\(\rho_{B}=\rho_{0}\)). Each panel is plotted at a particular value of the strangeness fraction (\(f_{s}\)), where \(f_{s}=0\) corresponds to pure nuclear matter and \(f_{s}=0.5\) corresponds to strange hadronic matter, and for fixed values of temperature T = 0 MeV, 100 MeV, and 150 MeV. The plots are compared to the case where the spin mixing effects are ignored, shown as dotted lines in Figures 1 and 3. In Figures 2 and 4, the dotted lines represent the masses without the mixing effects but with Landau quantization only, and the dashed-dotted lines represent the case where both spin mixing and Landau quantization are ignored.
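The extraction of \(g_{PV}\) from eq.(12) is a one-line inversion; the following sketch (our own, assuming the convention \(e^{2}=4\pi\alpha\) with \(\alpha\simeq 1/137\)) recovers the quoted value for the charged channel from \(\Gamma(D^{*+}\to D^{+}\gamma)=1.33\) keV.

```python
import numpy as np

alpha = 1.0/137.036    # fine-structure constant; e^2 = 4*pi*alpha assumed here

def g_PV_from_width(Gamma, m_P, m_V):
    """Invert eq.(12) for the mixing coupling g_PV; Gamma and masses in MeV."""
    p_cm = (m_V**2 - m_P**2)/(2.0*m_V)
    m_av = 0.5*(m_P + m_V)
    return np.sqrt(12.0*np.pi*m_av**2*Gamma/(4.0*np.pi*alpha*p_cm**3))

print(g_PV_from_width(1.33e-3, 1869.65, 2010.26))   # ~0.908, cf. quoted 0.9089
```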
At \(\rho_{B}=\rho_{0}\), the masses of all the open charm mesons decrease in the medium as compared to their vacuum values, since the scalar densities of the baryons and the fluctuations of the scalar fields (\(\sigma^{\prime}=(\sigma-\sigma_{0})\), \(\delta^{\prime}=(\delta-\delta_{0})\)) increase with the baryon density, resulting in a net attractive interaction. In the magnetized strange hadronic medium (\(f_{s}=0.5\)), the scalar fields undergo significant modifications, and the scalar densities of the hyperons also contribute to the self-energies of the open charm mesons. Hence the open charm mesons have a larger mass drop in the \(f_{s}=0.5\) case than in the \(f_{s}=0\) case. In the medium, the mass degeneracy of the pseudoscalar \(D^{+}\) and \(D^{-}\), as well as that of \(D^{0}\) and \(\bar{D^{0}}\), is broken due to the Weinberg-Tomozawa term in the interaction Lagrangian density [29; 46]. The effect of isospin asymmetry results in a further drop in the mass of \(D^{+}\), whereas \(D^{0}\) experiences a positive contribution to its mass from the second term of the Weinberg-Tomozawa interaction. Similarly, the mass degeneracy between the vector \(D^{*+}\) and \(D^{*-}\), as well as that of \(D^{*0}\) and \(\bar{D}^{*0}\), is also broken in the medium, since their mass shifts due to the modification of the nucleons and scalar fields are assumed to be similar to those of their pseudoscalar counterparts. The mass degeneracy of charge conjugate partners is also broken in the magnetized medium. When the effect of spin mixing is ignored, the masses of the neutral open charm mesons experience marginal modifications as a function of the magnetic field in the isospin asymmetric (\(\eta=0.5\)) medium at \(T\)=0 MeV and \(T\)=100 MeV. This behavior arises because the magnetic-field-induced modifications of the scalar fields (\(\sigma\) and \(\delta\)) and of the cumulative scalar densities of the baryons are marginal when the temperature is not very high [30]. Due to the Landau quantization effect, the charged open charm mesons are subject to additional positive mass modifications in the presence of the magnetic field. The dotted lines in Figures 2 and 4 represent the combined contribution of the medium effects and Landau quantization. Hence at \(T\)=0 MeV and \(T\)=100 MeV, the masses of the charged pseudoscalar mesons and of the longitudinal component of the charged vector open charm mesons increase almost linearly as a function of the magnetic field. However, at \(T\)=150 MeV, the magnitudes of the scalar fields drop significantly with a change in the magnetic field [30], making the scalar meson exchange term more attractive with increasing magnetic field. Consequently, there is a significant mass drop for the neutral open charm mesons with an increase in the magnetic field at \(T\)=150 MeV. Even for the charged mesons at \(T\)=150 MeV, the positive mass modification due to Landau quantization is subdued by the negative mass shift due to the magnetically induced modification of the scalar fields. Hence at \(T\)=150 MeV, above eB\(=6m_{\pi}^{2}\), even the masses of the charged mesons drop as the magnetic field is further increased when spin mixing is ignored. In the strange hadronic medium, the effects of the magnetic field and temperature on the masses are more significant than in the nuclear medium. The qualitative behavior of the open charm mesons as a function of the magnetic field at finite temperature without the spin mixing effect is similar to that of the open bottom mesons investigated in Ref. [30].
In the medium, without spin mixing, although the individual masses of the neutral open charm mesons decrease as compared to the vacuum, the mass splitting between \(D^{*0\parallel}\) and \(D^{0}\), as well as between \(\bar{D}^{*0\parallel}\) and \(\bar{D^{0}}\), remains the same. This constant mass splitting is due to the assumption of equal mass shifts for these vector and pseudoscalar mesons from the medium modification of the baryons and scalar fields. When, in addition to the medium effects, the effect of spin mixing is incorporated, the mass of the longitudinal component of the neutral vector open charm mesons (\(V^{\parallel}\)) increases, and that of the neutral pseudoscalar mesons (P) drops, as the magnitude of the magnetic field is increased. Hence a level repulsion between the mixing partners is observed for the neutral open charm mesons. When spin mixing is included, at \(T\)=150 MeV, the \(D^{0}\) and \(\bar{D^{0}}\) mesons experience a more significant mass drop than in the \(T\)=0 case, due to the additional mass drop from the magnetically induced modification of the scalar fields and the scalar densities of the baryons. In contrast, at \(T\)=150 MeV, the positive mass shift experienced by \(D^{*0\parallel}\) and \(\bar{D}^{*0\parallel}\) due to spin mixing is reduced by the same magnetically induced mass reduction. The in-medium masses of the open charm mesons are smaller in the magnetized strange hadronic medium than in the nuclear medium. The mixing effect is more substantial for the neutral open charm mesons due to the larger value of the mixing coupling parameter compared to that of the charged mesons. For the \(D^{*\pm\parallel}\) mesons, the mass shift due to spin mixing and the mass shift due to Landau quantization are both positive. Hence, when the spin mixing effect is incorporated, the in-medium masses of \(D^{*\pm\parallel}\) increase further compared to the case where these effects are ignored. However, for the pseudoscalar \(D^{\pm}\) mesons, although the dominant Landau quantization effect results in an overall positive mass modification, the spin mixing effect contributes negatively to the mass shift and subdues the net positive mass shift above eB = \(3m_{\pi}^{2}\). Hence the in-medium masses of \(D^{\pm}\) decrease when the spin mixing effects are incorporated, compared to the case where these effects are ignored. At \(T\)=150 MeV, the mass drop due to the magnetically induced modifications of the scalar fields and scalar densities reduces the cumulative positive mass shift experienced by the vector \(D^{*\pm}\) as well as the pseudoscalar \(D^{\pm}\). At \(T\)=150 MeV, in the case of \(D^{\pm}\), the mass shift due to spin mixing and the mass shift due to the purely medium effects point in the same direction and are negative. Their combined negative mass contribution can even nullify the positive mass shift due to Landau quantization at large magnetic fields. Hence in the strange hadronic medium at \(T\)=150 MeV, the in-medium masses of \(D^{\pm}\) initially increase marginally up to eB = \(6m_{\pi}^{2}\) and subsequently decrease. In this case, the in-medium masses of \(D^{\pm}\) at \(eB=0\) and \(eB=8m_{\pi}^{2}\) are similar in magnitude. Hence, the interplay of the various magnetically induced effects is crucial for the charged mesons at large temperatures.
Consequently, to probe the effects of the magnetic field at finite density and temperature, among the open charm mesons, \(D^{0}\) and \(\bar{D^{0}}\) would be the ideal candidates, since they have a large negative mass shift due to spin mixing (owing to the large value of \(g_{PV}\)) as well as due to the scalar field modifications. From an experimental point of view, it is also essential to quantitatively analyze the different contributions of the magnetic field. The mass splitting of \(D^{*0\parallel}\) and \(D^{0}\), and that of \(\bar{D}^{*0\parallel}\) and \(\bar{D^{0}}\), increases with the magnetic field due to the level repulsion when spin mixing is taken into account. Due to the smaller value of \(g_{PV}\), the variation of the mass splitting of \(D^{*\pm}\) and \(D^{\pm}\) as a function of the magnetic field is smaller than that of the neutral mesons. Including the spin mixing effect, the masses of \(D^{*0\parallel}\), \(D^{*+\parallel}\), \(\bar{D}^{*0\parallel}\) and \(D^{*-\parallel}\) (in MeV) in the \(f_{s}=0.5\) hadronic medium at \(\rho_{B}=\rho_{0}\) and \(T=0\) are 1969.03 (2031.98), 1937.74 (1965.12), 1993.09 (2054.86) and 1995.81 (2022.20), respectively, at eB = \(4m_{\pi}^{2}\) (\(8m_{\pi}^{2}\)). At \(T=150\) MeV, these masses in the same order are 1965.61 (2004.57), 1938.75 (1944.24), 1989.78 (2027.51), and 1996.77 (2001.10). For \(f_{s}=0.5\), the masses of the pseudoscalar \(D^{0}\), \(D^{+}\), \(\bar{D^{0}}\) and \(D^{-}\) (in MeV) at \(\rho_{B}=\rho_{0}\) and \(T=0\) are 1767.47 (1712.66), 1793.69 (1808.71), 1792.86 (1738.89) and 1851.96 (1866.66), respectively, at eB = \(4m_{\pi}^{2}\) (\(8m_{\pi}^{2}\)). At \(T=150\) MeV, these masses in the same order (in MeV) become 1763.86 (1681.09), 1794.71 (1787.48), 1789.37 (1707.52), and 1852.93 (1845.25).

In Figures 5, 6, and 7, the masses of the longitudinal component of the vector and pseudoscalar charmonium states are plotted as functions of the magnetic field (\(eB/{m_{\pi}}^{2}\)) for different values of temperature and strangeness fraction.

Figure 5: (Color online) The masses of the pseudoscalar charmonium state \(\eta_{c}\equiv\eta_{c}(1S)\) and the longitudinal component of the vector charmonium state \(J/\psi\), plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}=0\), 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(\eta_{c}\) and \(J/\psi^{\parallel}\) on their in-medium masses are shown and compared to the case where the mixing effects are ignored (dotted lines).

Figure 6: (Color online) The masses of the pseudoscalar charmonium state \(\eta_{c}^{\prime}\equiv\eta_{c}(2S)\) and the longitudinal component of the vector charmonium state \(\psi(2S)\equiv\psi(3686)\), plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}=0\), 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(\eta_{c}^{\prime}\) and \(\psi^{\parallel}(2S)\) on their in-medium masses are shown and compared to the case where the mixing effects are ignored (dotted lines).

Figure 7: (Color online) The masses of the pseudoscalar charmonium state \(\eta_{c}^{\prime}\equiv\eta_{c}(2S)\) and the longitudinal component of the vector charmonium state \(\psi(1D)\equiv\psi(3770)\), plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}=\) 0, 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing between \(\eta_{c}^{\prime}\) and \(\psi^{\parallel}(1D)\) on their in-medium masses are shown and compared to the case where the mixing effects are ignored (dotted lines).

The medium mass modifications of the charmonia account for both the spin mixing effect and the modification of the gluon condensates, calculated using the chiral effective Lagrangian model in accordance with eq.(7) and eq.(9). The values of the parameter \(\beta\), which characterizes the strength of the harmonic potential, for \(J/\psi\), \(\psi(2S)\) and \(\psi(1D)\) are taken to be 513, 384, and 368 MeV, respectively. They are obtained from a fit to the rms radii, which are \(0.47^{2}\,fm^{2}\), \(0.96^{2}\,fm^{2}\) and \(1\,fm^{2}\), respectively [52; 53]. For the \(\eta_{c}\) and \(\eta_{c}^{\prime}\) states, the values of \(\beta\) are taken to be 535 and 394.6 MeV, obtained by linear extrapolation of the vacuum mass versus \(\beta\) graph of the charmonium states \(J/\psi\) and \(\psi(2S)\) [34]. The values of the spin mixing coupling parameters \(g_{PV}\equiv g_{\eta_{c}J/\psi}\), \(g_{\eta_{c}^{\prime}\psi(2S)}\), and \(g_{\eta_{c}^{\prime}\psi(1D)}\) are taken to be 2.094, 3.184, and 7.657, obtained from the observed vacuum values of the radiative decay widths \(\Gamma(J/\psi\rightarrow\eta_{c}\gamma)\)=92.9 keV, \(\Gamma(\psi(2S)\rightarrow\eta_{c}{}^{\prime}\gamma)\)=0.2058 keV, and \(\Gamma(\psi(1D)\rightarrow\eta_{c}{}^{\prime}\gamma)\)=24.48 keV, respectively [34]. Without the spin mixing effect, all the charmonium states experience a negative mass shift at finite baryon density compared to their vacuum masses. This behavior is due to the reduction in the value of \(\chi\) from its vacuum value of 409.77 MeV, which makes the contribution of the dominant term proportional to \(\chi^{4}-{\chi_{0}}^{4}\) in eq.(9) negative. In this investigation, we have also considered the effect of the quark mass term in the modification of the gluon condensates, through the terms proportional to \(\sigma^{\prime}(=\sigma-\sigma_{0})\) and \(\zeta^{\prime}(=\zeta-\zeta_{0})\) in eq.(9). These terms, being positive, reduce the net magnitude of the negative mass shift of the charmonium states in the medium compared to the case where they are neglected. Since \(\sigma\) and \(\zeta\) undergo significant modifications in the strange hadronic medium, the quark mass term results in a smaller mass shift (larger mass) for the charmonium states in the \(f_{s}=0.5\) case as compared to the \(f_{s}=0\) case at \(\rho_{B}=\rho_{0}\) [29]. This tendency is in contrast to the open charm mesons, whose in-medium masses are smaller in the strange hadronic medium than in the nuclear medium. The excited charmonium states undergo more significant mass shifts than \(J/\psi\) and \(\eta_{c}(1S)\) in the medium, since the momentum space integral (eq.(7)) calculated for an excited state amplifies the medium dependence of the mass shift [29; 30]. The dilaton field \(\chi\) increases marginally as a function of the magnetic field at \(T\)=0 MeV. Hence at T=0, the masses of the charmonium states also increase marginally with the magnetic field in the \(f_{s}=0\) case. Although the magnetic-field-induced mass shifts of \(J/\psi^{\parallel}\) and \(\eta_{c}(1S)\) are marginal, this increase is more apparent for the excited states, as seen in panel (a) of the plots in Figs. 6 and 7.
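The amplification of the mass shift for the excited states can be seen directly from the integral in eq.(7). The sketch below is our own illustration for a 1S harmonic-oscillator profile \(\psi(k)\propto e^{-k^{2}/2\beta^{2}}\) only (the true 2S and 1D wave functions differ), with an assumed charm quark mass \(m_{c}=1.95\) GeV and a placeholder condensate change \(\Delta G\), neither of which is quoted in this excerpt; it shows that a softer (smaller-\(\beta\)) state picks up a several times larger shift for the same \(\Delta G\).

```python
import numpy as np
from scipy.integrate import quad

m_c = 1950.0   # MeV; assumed charm quark mass (not quoted in this excerpt)

def mass_shift(beta, m_vac, dG):
    """Eq.(7) for a 1S harmonic-oscillator wave function
    psi(k) = N exp(-k^2/(2 beta^2)), normalized as in the text.
    dG is the change of the scalar gluon condensate, eq.(9), in MeV^4."""
    eps = 2.0*m_c - m_vac                    # binding energy, eps = 2 m_c - m
    N2 = 8.0*np.pi**1.5/beta**3              # from int d^3k/(2 pi)^3 |psi|^2 = 1
    integrand = lambda k: k**4*np.exp(-k**2/beta**2)/(k**2/m_c + eps)
    I, _ = quad(integrand, 0.0, np.inf)
    return dG*N2*I/(9.0*beta**4)

dG = -1.0e9    # MeV^4; placeholder magnitude, for illustration only
print(mass_shift(513.0, 3097.0, dG))    # J/psi-like profile
print(mass_shift(394.6, 3637.5, dG))    # softer profile: several times larger |shift|
```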
For the \(f_{s}=0\) case at \(T\)=0 MeV, without including the spin mixing effects, the masses of \(J/\psi\), \(\psi(2S)\) and \(\psi(1D)\) (in MeV) are modified to 3094.02 (3094.23), 3645.68 (3648.48), and 3723.31 (3726.75), respectively, at \(\rho_{B}=\rho_{0}\) and eB = \(4m_{\pi}^{2}\) (\(8m_{\pi}^{2}\)), compared to their vacuum masses of 3097, 3686 and 3773 MeV. Under the same conditions, the masses of \(\eta_{c}\) and \(\eta_{c}^{\prime}\) (in MeV) drop to 2981.47 (2981.64) and 3605.00 (3607.25), respectively, at eB = \(4m_{\pi}^{2}\) (\(8m_{\pi}^{2}\)), compared to their vacuum masses of 2983.9 and 3637.5 MeV. However, in the \(f_{s}=0.5\) case, the positive mass contribution of the terms proportional to \(\sigma^{\prime}\) and \(\zeta^{\prime}\) increases as a function of the magnetic field and opposes the negative mass contribution of the term proportional to \(\chi^{4}-{\chi_{0}}^{4}\). Hence at T=0, due to the interplay of these terms, the effect of the magnetic field on the mass shifts of the charmonium states is smaller in the strange hadronic medium than in the nuclear medium. Also, at \(T\)=100 MeV, the effect of the magnetic field is insignificant without the spin mixing effect. However, at \(T\)=150 MeV, the magnitudes of \(\chi\), as well as of \(\sigma\) and \(\zeta\), drop significantly with an increase in the magnetic field. As a consequence, the contribution of the dominant term proportional to \(\chi^{4}-{\chi_{0}}^{4}\) is enhanced with the magnetic field at \(T\)=150 MeV, resulting in a larger negative mass shift for the charmonium states. The behavior of the charmonium states in the hot magnetized strange hadronic medium without incorporating the spin mixing effect is similar to that of the upsilon states investigated in Ref. [30]. When the spin mixing effects are taken into account, similarly to the neutral open charm mesons, the mass of the longitudinal component of the vector charmonium states (\(V^{\parallel}\)) increases, and that of the pseudoscalar charmonium states (P) drops, as the magnitude of the magnetic field is increased. The magnitude of the mass shift of the charmonium states purely due to spin mixing is observed to be larger at finite density than in the zero density case [34]. This behavior arises because \({m_{V}}^{eff}-{m_{P}}^{eff}\) is smaller at finite density than in vacuum, and the mass shift due to spin mixing is, at leading order, inversely proportional to the mass difference of the unmixed vector and pseudoscalar states [34]. The mixing of \(\psi(2S)\) with \(\eta_{c}\) and of \(\psi(1D)\) with \(\eta_{c}\) is neglected in our calculation due to the larger mass difference between these states. Moreover, the contribution of the mass shift from spin mixing is observed to be larger at larger magnetic fields, as is evident from the plots. The mass splittings (in MeV) between the \(J/\psi^{\parallel}\) and \(\eta_{c}\) mesons, the \(\psi(2S)^{\parallel}\) and \(\eta_{c}^{\prime}\) mesons, as well as the \(\psi(1D)^{\parallel}\) and \(\eta_{c}^{\prime}\) mesons, increase with the magnetic field when spin mixing is taken into account. The large mass splitting of \(\psi(1D)^{\parallel}\) and \(\eta_{c}^{\prime}\) from the mixing effect is mainly due to the large value of the mixing coupling parameter \(g_{\eta_{c}^{\prime}\psi(1D)}\) compared to the values of \(g_{\eta_{c}^{\prime}\psi(2S)}\) and \(g_{\eta_{c}J/\psi}\).
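As a rough vacuum-level illustration of this level repulsion, the eq.(13) helper from the earlier sketch can be applied to the \(J/\psi\)-\(\eta_{c}\) pair (vacuum masses, no medium effects, \(m_{\pi}\simeq 139.57\) MeV assumed for the \(eB\) units):

```python
# Reusing mixed_masses from the earlier sketch; vacuum masses, no medium:
m_pi = 139.57                                # MeV, assumed
eB = 4.0*m_pi**2                             # eB = 4 m_pi^2, in MeV^2
mP, mV = mixed_masses(2983.9, 3097.0, 2.094, eB)
print(mP, mV)   # ~2978 and ~3103 MeV: a roughly +/- 6 MeV repulsion in vacuum
```

This is of the same order as the in-medium values quoted in the next paragraph.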
For \(f_{s}=0.5\) and \(T=0\), including the spin mixing effect, the masses of \(J/\psi^{\parallel}\), \(\psi(2S)^{\parallel}\) and \(\psi(1D)^{\parallel}\) at \(\rho_{B}=\rho_{0}\) are (in MeV) 3101.14 (3116.94), 3677.27 (3709.22) and 3779.95 (3854.58), respectively, at eB\(=4m_{\pi}^{2}\) (\(8m_{\pi}^{2}\)). At \(T=150\) MeV, these masses in the same order are (in MeV) 3100.96 (3114.70), 3674.91 (3680.47), and 3777.13 (3821.67). Hence at \(T\)=150 MeV, the drop in the magnitude of \(\chi\) due to the magnetic field reduces the net positive mass shift experienced by \(J/\psi^{\parallel}\), \(\psi(2S)^{\parallel}\) and \(\psi(1D)^{\parallel}\) due to spin mixing. In this case, the spin mixing effect for \(\psi(2S)^{\parallel}\) becomes dominant above eB\(=1.5m_{\pi}^{2}\). This behavior of \(\chi\) as a function of the magnetic field at \(T\)=150 MeV also enhances the negative mass shift of the pseudoscalar \(\eta_{c}\) and \(\eta_{c}^{\prime}\) mesons.

In the present work, we have also investigated the partial decay widths of \(\psi(1D)\) to \(D\bar{D}\) pairs under strong magnetic fields using the \({}^{3}P_{0}\) model. In this study, the value of \(\beta_{D}\), used to calculate \(r\) in the expression for the decay width (eq.(17)), is taken as 0.31 GeV [31]. The value of \(\gamma\) is chosen to be 0.33, so as to reproduce the observed vacuum decay widths of \(\psi(1D)\) to \(D^{0}\bar{D^{0}}\) as well as to \(D^{+}D^{-}\) [31; 50]. In Figure 8, the partial decay widths of the vector charmonium state \(\psi(1D)\) to \(D^{+}D^{-}\) (indicated as \(\Gamma_{1}\)), to \(D^{0}\bar{D^{0}}\) (indicated as \(\Gamma_{2}\)), and the sum of these decay widths (indicated as \(\Gamma_{total}\)) are plotted as functions of the magnetic field in isospin asymmetric (\(\eta=0.5\)) hadronic matter for various values of \(f_{s}\) and \(T\). In these plots, the decay widths are shown with the effect of spin mixing on the masses of both \(\psi(1D)\) and the open charm mesons incorporated. The plots are compared to the case where the spin mixing effects are ignored, shown as dotted lines. In the vacuum, the partial decay width of \(\psi(1D)\) to \(D^{+}D^{-}\) (\(\Gamma_{1}\)) takes the value of 12.44 MeV, and that of \(\psi(1D)\) to \(D^{0}\bar{D^{0}}\) (\(\Gamma_{2}\)) takes the value of 16.28 MeV. Without considering the spin mixing effect, the masses of \(\psi(1D)\) and the open charm mesons drop in the medium compared to their vacuum values. At \(T=0\), the masses of the \(D^{0}\) and \(\bar{D^{0}}\) mesons and of \(\psi(1D)\) are modified only negligibly with an increase in the magnetic field. Hence the value of \(p_{D}\), and consequently \(\Gamma_{2}\), are also modified only marginally in the medium with an increase in the magnetic field. Since the masses of the \(D^{\pm}\) mesons increase with the magnetic field due to Landau quantization, the value of \(\Gamma_{1}\) decreases almost linearly as a function of the magnetic field. Hence at \(T=0\), the value of \(\Gamma_{2}\) is larger than \(\Gamma_{1}\) at large magnetic fields. In the magnetized strange hadronic medium, the mass of the parent meson \(\psi(1D)\) is larger, and the masses of the open charm mesons are smaller, than their respective values in the nuclear medium.
Figure 8: (Color online) Decay widths of \(\psi^{\parallel}(1D)\) to (1) \(D^{+}D^{-}\), (2) \(D^{0}\bar{D^{0}}\), and the total of these two channels (1+2), plotted as functions of \(eB/m_{\pi}^{2}\) for \(\rho_{B}=\rho_{0}\) in asymmetric (\(\eta\)=0.5) magnetized hadronic matter for fixed values of the strangeness fraction \(f_{s}=0\), 0.5 and temperature T = 0, 100 and 150 MeV. The effects of spin mixing of both the parent and the daughter mesons are incorporated in the decay widths shown, which are compared to the case where the mixing effects are ignored (dotted lines).

Hence the value of \(\Gamma_{2}\) in the strange hadronic medium is observed to be larger, especially at smaller magnetic fields. The spin mixing effect of \(\psi(1D)\) enhances the value of its partial decay width compared to the case where these effects are not included. The effect of spin mixing on the enhancement of the partial decay width is more evident in the nuclear medium than in the strange hadronic medium. When spin mixing is incorporated, the mass of \(\psi(1D)^{\parallel}\) increases substantially with the magnetic field, leading to a larger value of \(p_{D}\) for the decay. In Ref. [34], the partial decay width of \(\psi(1D)\) to \(D\bar{D}\) was investigated in the magnetized cold nuclear medium without considering the spin mixing effect of the daughter mesons. In the present investigation, when the spin mixing effects of both \(\psi(1D)\) and the open charm mesons are taken into account, the partial decay width is modified significantly, especially in the \(D^{0}\bar{D^{0}}\) channel, due to the significant spin mixing of the neutral \(D\) mesons. The in-medium masses of the daughter \(D^{0}\) and \(\bar{D^{0}}\) drop significantly due to spin mixing, and hence the value of the corresponding momentum \(p_{D}\) is larger at a fixed magnetic field, compared to the case when the spin mixing effect of the open charm mesons is not taken into account. For the \(D^{+}\) and \(D^{-}\) mesons, since the positive mass contribution due to Landau quantization overshoots their mass drop due to spin mixing, the value of \(\Gamma_{1}\) is modified only marginally with the magnetic field and gets enhanced only at large magnetic fields. This enhancement at large magnetic fields is due to the positive mass shift of the parent \(\psi(1D)\) from the spin mixing effects. However, the enhancement of \(\Gamma_{1}\) as a function of the magnetic field is suppressed in the strange hadronic medium. For \(f_{s}=0\), due to the increasing value of \(p_{D}\) with the magnetic field, \(\Gamma_{2}\) increases initially as a function of the magnetic field up to eB = 3.5\({m_{\pi}}^{2}\) to 4\({m_{\pi}}^{2}\), and the decay width for this channel reaches a maximum at a particular value of \(p_{D}\). Thereafter, \(\Gamma_{2}\) begins to decrease in value due to the exponential nature of the partial decay width. Hence \(\Gamma_{2}\) reaches a maximum at intermediate values of the magnetic field in this case. For the \(f_{s}=0.5\) case, when spin mixing is incorporated, \(p_{D}\) is larger and approaches the value of the maximal point even at relatively small magnetic fields, eB = 1.5\({m_{\pi}}^{2}\) to 2\({m_{\pi}}^{2}\). Hence \(\Gamma_{2}\) reaches a maximum at small magnetic fields, and subsequently the value of \(\Gamma_{2}\) decreases. Hence at large magnetic fields, due to the internal structure of the wave functions of \(\psi(1D)\), the value of \(\Gamma_{2}\) with spin mixing incorporated is small compared to the case where these effects are ignored.
Hence the dominant decay of \(\psi(1D)\) proceeds through the \(D^{+}D^{-}\) channel at large magnetic fields, above eB = 5\({m_{\pi}}^{2}\) in the nuclear medium and above eB = 3\({m_{\pi}}^{2}\) in the strange hadronic medium. The qualitative behavior of the total decay width (\(\Gamma_{total}\)) is largely determined by \(\Gamma_{2}\) rather than by \(\Gamma_{1}\). The effect of the magnetic field on the individual masses of the mesons is drastically different at \(T=150\) MeV, compared to the \(T=0\) case, due to the significant modifications of the scalar fields. However, the effects of the magnetic field on the qualitative and quantitative behavior of the decay widths at \(T=150\) MeV are quite similar to those at \(T=0\). The magnitude of the decay width depends on the momentum \(p_{D}\) (eq.(15)), which encodes a measure of the mass difference between \(\psi(1D)\) and the combined mass of the daughter mesons. Since the modification of the scalar fields by the magnetic field at \(T=150\) MeV contributes negatively to the masses of both the parent and the daughter mesons, the value of \(p_{D}\) is similar in magnitude at \(T=0\) and at \(T=150\) MeV. This results in a similar magnitude of the decay widths at large temperatures. The effect of \(f_{s}\) is more prominent than the effect of temperature, since an increase in \(f_{s}\) results in a larger mass of the parent charmonium and a smaller mass of the daughter mesons, thereby increasing the value of \(p_{D}\).

The effects of the magnetic field, baryon density, and temperature on the masses of the open charm mesons and charmonia will have experimental consequences in ultra-relativistic heavy-ion collision experiments. At the LHC and RHIC, strong magnetic fields are produced. The baryon density of the medium produced in these experiments is small, but the temperature of the medium is extremely high. To the best of our knowledge, in most previous studies investigating the effects of magnetically induced spin mixing on the properties of heavy flavor mesons, the effects of such high temperatures were not taken into account. A more baryon-dense medium with moderate temperature can be produced by reducing the collision energy; however, this results in a decrease in the magnitude of the produced magnetic field. Strange baryons will also be present in the medium; hence, the effects of the strangeness fraction are also significant. The magnetically induced spin mixing of the open charm mesons can modify their production ratios in heavy-ion collision experiments. Since the masses of the pseudoscalar \(D^{0}\) and \(\bar{D^{0}}\) are smaller than the masses of \(D^{\pm}\), the former mesons will be more copiously produced at large magnetic fields and high temperatures. The spin mixing of the charmonia will have observational consequences for their dilepton spectra and for the production of the charmonium states as well as the open charm mesons in ultra-relativistic heavy-ion collision experiments, e.g., at RHIC and the LHC. Due to the mixing of the spin eigenstates, dileptons that would originally have arisen from the vector charmonia will instead arise from the pseudoscalar charmonia, resulting in anomalous decay modes \(\eta_{c},\eta^{\prime}_{c}\to l^{+}l^{-}\) [34]. The larger masses of \(J/\psi^{\parallel}\), \(\psi(2S)^{\parallel}\) and \(\psi(1D)^{\parallel}\) in the magnetic field due to spin mixing suppress their production, whereas the smaller masses of \(\eta_{c}(1S)\) and \(\eta_{c}(2S)\) due to spin mixing enhance their production.
Moreover, due to the enhancement of the partial decay width, the spin mixing will have observable consequences for the suppression of \(\psi(1D)\) (and hence of \(J/\psi\), through the feed-down effect) at small to intermediate magnetic fields. This suppression would be more significant in the strange hadronic medium than in the nuclear medium when the magnetic fields are not very large.

## V Summary

The masses of the pseudoscalar and vector open charm mesons and of the charmonium states in the magnetized hot hadronic medium are investigated, including the effect of the magnetically induced spin mixing of these mesons. The effect of the medium modifications of the chiral condensates on the masses of the open charm mesons, and that of the gluon condensates on the masses of the charmonia, are computed using a chiral effective model. The charged open charm mesons, \(D^{\pm}(D^{*\pm\parallel})\), experience additional positive mass modifications in magnetic fields through Landau quantization. At T=150 MeV, the scalar fields \(\sigma\), \(\zeta\), \(\delta\), which mimic the chiral condensates, and \(\chi\), which mimics the gluon condensates, are modified significantly with the magnetic field, resulting in a more significant mass drop of all the mesons when spin mixing is not incorporated. The masses of the open charm mesons are smaller in the magnetized strange hadronic medium than in the nuclear medium. However, due to the presence of the quark mass term, the masses of the charmonium states are larger in the magnetized strange hadronic medium than in the nuclear medium. When the spin mixing effect is incorporated, the mass of the longitudinal component of the neutral vector meson increases, and the mass of the neutral pseudoscalar meson decreases, with the magnetic field. For the charged mesons, the effect of Landau quantization is observed to be dominant compared to the effect of spin mixing. At T=150 MeV, there is a strong interplay between the effects of the scalar field modifications, Landau quantization, and spin mixing at large magnetic fields for the charged mesons. The magnitude of the spin mixing in the medium is observed to be more significant for the charmonium states than in the vacuum, whereas for the open charm mesons such a medium dependence is weak. From the mass modifications of the charmonium states and open charm mesons, the decay widths of the \(\psi(1D)\) state to \(D\bar{D}\) are evaluated in the \({}^{3}P_{0}\) model. In general, the positive mass shift of the longitudinal component of \(\psi(1D)\) due to the spin mixing effect enhances the value of its partial decay width when the magnetic field is not very large. However, the enhancement of \(\Gamma_{1}\) is suppressed due to the positive mass shift of the pseudoscalar \(D^{\pm}\) mesons through Landau quantization and due to the smaller value of the spin mixing coupling parameter for the charged \(D\) mesons. At large magnetic fields, the reduction of the mass of the charged \(D\) mesons due to spin mixing enhances the value of \(\Gamma_{1}\) marginally. When the effect of spin mixing is considered, especially in the nuclear medium, \(\Gamma_{2}\) increases initially as a function of the magnetic field, and thereafter decreases due to the exponential nature of the decay width. The effects of the strangeness fraction and the magnetic field on the decay width are observed to be more significant than the effect of temperature.
## VI Acknowledgements A.J.C.S. acknowledges the support towards this work from the Department of Science and Technology, Government of India, via an INSPIRE fellowship (INSPIRE Code IF170745). A.M. acknowledges financial support from the Department of Science and Technology (DST), Government of India (project no. CRG/2018/002226).
2306.16082
Support varieties for finite tensor categories: the tensor product property
We show that in a finite tensor category, the tensor product property holds for support varieties if and only if it holds between indecomposable periodic objects. We apply this to certain Hopf algebras in the form of skew group algebras. In particular, we show that the tensor product property holds for all objects in a symmetric finite tensor category over an algebraically closed field of characteristic zero.
Petter Andreas Bergh, Julia Yael Plavnik, Sarah Witherspoon
2023-06-28T10:29:19Z
http://arxiv.org/abs/2306.16082v3
# Support varieties for finite tensor categories: the tensor product property ###### Abstract. We show that in a finite tensor category, the tensor product property holds for support varieties if and only if it holds between indecomposable periodic objects. We apply this to certain Hopf algebras in the form of skew group algebras. In particular, we show that the tensor product property holds for all objects in a symmetric finite tensor category over an algebraically closed field of characteristic zero. Key words and phrases: finite tensor categories; support varieties; tensor product property 2020 Mathematics Subject Classification: 16E40, 16T05, 18M05, 18M15 ## 1. Introduction Given a finite tensor category \(\mathscr{C}\), one can attach a support variety \(V_{\mathscr{C}}(X)\) to each object \(X\), using the spectrum of the cohomology ring. It has been conjectured by Etingof and Ostrik that every finite tensor category has finitely generated cohomology. As shown in [6], whenever this holds, the support varieties encode homological properties of the objects, in much the same way as do cohomological support varieties over group algebras, more general cocommutative Hopf algebras, and commutative complete intersection rings. When does the tensor product property hold for support varieties? That is, what conditions - if any - will guarantee that \[V_{\mathscr{C}}(X\otimes Y)=V_{\mathscr{C}}(X)\cap V_{\mathscr{C}}(Y)\] for all objects \(X,Y\in\mathscr{C}\)? This property always holds for support varieties over group algebras of finite groups, for example. One reason why one would seek such a property is a possible classification of thick tensor ideals in the stable category; the tensor product property is a necessary ingredient for known classifications. In this paper, we show that when \(\mathscr{C}\) is braided, the tensor product property holds for all objects if and only if it holds between _indecomposable periodic_ objects. In other words, we show that if \(V_{\mathscr{C}}(X\otimes Y)=V_{\mathscr{C}}(X)\cap V_{\mathscr{C}}(Y)\) for all indecomposable periodic objects \(X,Y\in\mathscr{C}\), then the tensor product property holds for all objects. Thus the question of whether the tensor product property holds reduces to indecomposable periodic objects, or, equivalently, to indecomposable objects of complexity one. We prove this reduction in the slightly more general setting of a module category over \(\mathscr{C}\). As a consequence of the main result, we show that the tensor product property holds for all finitely generated modules over Hopf algebras that arise as certain skew group algebras of exterior algebras. In particular, using Deligne's classification theorem, we show that the tensor product property holds for all objects in a symmetric finite tensor category over an algebraically closed field of characteristic zero.
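For a first example of the objects to which this reduction points (a standard illustration, assuming \(k\) has characteristic \(p>0\); it is not taken from this paper): over the group algebra \(k[\mathbb{Z}/p]\simeq k[x]/(x^{p})\), the nonprojective indecomposable modules are the Jordan blocks \(J_{i}=k[x]/(x^{i})\) with \(1\leq i\leq p-1\), and every one of them is periodic, since \[\Omega(J_{i})\simeq J_{p-i}\qquad\text{and hence}\qquad\Omega^{2}(J_{i})\simeq J_{i}.\]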
(4) By [12, Corollary 7.6.4], both \(\mathscr{C}\) and \(\mathscr{M}\) are quasi-Frobenius, that is, the projective objects are precisely the injective objects. (5) The category \(\mathscr{C}\) is trivially a left module category over itself, with the tensor product as the module product. Therefore everything we develop and prove for objects of \(\mathscr{M}\) holds for objects of \(\mathscr{C}\). (6) Since the unit object \(\mathbf{1}\) is simple, the \(k\)-algebra \(\operatorname{Hom}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) is a division ring, that is, all the nonzero elements are invertible. This ring is in fact commutative (see the paragraphs following this remark), and therefore a finite field extension of \(k\). In particular, when \(k\) is algebraically closed, then \(\operatorname{Hom}_{\mathscr{C}}(\mathbf{1},\mathbf{1})=k\). (7) We refer to [6, Section 2] for an overview of some of the homological properties and techniques for finite tensor categories that we use throughout. Almost all the results and concepts carry over to \(\mathscr{M}\) as well. There are many important examples of tensor categories in which the tensor product is not commutative. However, in our main results, we need this property, both the standard and a stronger version. The tensor category \(\mathscr{C}\) is called _braided_ if for all objects \(X,Y\in\mathscr{C}\), there are functorial isomorphisms \[X\otimes Y\xrightarrow{b_{X,Y}}Y\otimes X\] that satisfy the so-called hexagonal identities defined in [12, Definition 8.1.1]. If, in addition, these braiding isomorphisms satisfy \[b_{Y,X}\circ b_{X,Y}=1_{X\otimes Y}\] for all objects \(X\) and \(Y\), then \(\mathscr{C}\) is _symmetric_.
An example of the latter is the category of finitely generated left modules over a group algebra. However, in general, if \(H\) is a finite dimensional Hopf algebra, then the category \(\operatorname{mod}H\) of finitely generated left \(H\)-modules is a finite tensor category that is not necessarily braided. Given objects \(M,N\in\mathscr{M}\), we denote by \(\operatorname{Ext}^{*}_{\mathscr{M}}(M,N)\) the graded \(k\)-vector space \(\oplus_{n=0}^{\infty}\operatorname{Ext}^{n}_{\mathscr{M}}(M,N)\). The module product \(-*M\) induces a homomorphism \[\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\xrightarrow{\varphi_{M}}\operatorname{Ext}^{*}_{\mathscr{M}}(M,M)\] of graded \(k\)-algebras, making \(\operatorname{Ext}^{*}_{\mathscr{M}}(M,N)\) both a left and a right module over the cohomology algebra \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\), via \(\varphi_{N}\) and \(\varphi_{M}\) followed by Yoneda composition. In particular, for objects \(X,Y\in\mathscr{C}\), the left and right scalar actions of \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) on \(\operatorname{Ext}^{*}_{\mathscr{C}}(X,Y)\) are induced by the tensor products \(-\otimes Y\) and \(-\otimes X\), respectively, followed by Yoneda composition. Not only is the algebra \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) graded-commutative by [22, Theorem 1.7]; the following lemma and its corollary show that for objects \(M,N\in\mathscr{M}\), the left and the right scalar actions of \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) on \(\operatorname{Ext}^{*}_{\mathscr{M}}(M,N)\) coincide up to a sign, when we only consider homogeneous elements. The proof is an adaptation of the proof of [20, Theorem 1.1]. We use the symbol \(\circ\) to denote Yoneda composition, as well as ordinary composition of maps. **Lemma 2.2**.: _Given any objects \(X,Y\in\mathscr{C}\), \(M,N\in\mathscr{M}\), integers \(m,n\geq 0\) and elements \(\eta\in\operatorname{Ext}^{m}_{\mathscr{C}}(X,Y)\) and \(\theta\in\operatorname{Ext}^{n}_{\mathscr{M}}(M,N)\), the equality_ \[(\eta*N)\circ(X*\theta)=(-1)^{mn}(Y*\theta)\circ(\eta*M)\] _holds in \(\operatorname{Ext}^{m+n}_{\mathscr{M}}(X*M,Y*N)\)._ Proof.: Everything boils down to the fact that the module product \(*\) is a bifunctor \(\mathscr{C}\times\mathscr{M}\to\mathscr{M}\); this makes all the diagrams in what follows commute. In the case when \(m=n=0\), so that \(\eta\in\operatorname{Hom}_{\mathscr{C}}(X,Y)\) and \(\theta\in\operatorname{Hom}_{\mathscr{M}}(M,N)\), we use this directly. Namely, the square expressing the bifunctoriality of \(*\) on the pair \((\eta,\theta)\) commutes (the diagram is omitted in this extraction), giving the equality that we seek. Next, suppose that \(m>0\) and \(n=0\), and fix a projective resolution \((P_{\bullet},d_{\bullet})\) of \(X\) in \(\mathscr{C}\). Then \(\eta\) is represented by a morphism \(f_{\eta}\colon P_{m}\to Y\), and since \((P_{\bullet}*M,d_{\bullet}*1_{M})\) is a projective resolution of \(X*M\) in \(\mathscr{M}\), we see that the morphism \(f_{\eta}*1_{M}\colon P_{m}*M\to Y*M\) represents the element \(\eta*M\). The composition \[P_{m}*M\xrightarrow{f_{\eta}*1_{M}}Y*M\xrightarrow{1_{Y}*\theta}Y*N\] now represents \((Y*\theta)\circ(\eta*M)\). For \((\eta*N)\circ(X*\theta)\), we first lift the morphism \(1_{X}*\theta\) along the projective resolutions \((P_{\bullet}*M,d_{\bullet}*1_{M})\) and \((P_{\bullet}*N,d_{\bullet}*1_{N})\); for every \(i\), the map \(1_{P_{i}}*\theta\colon P_{i}*M\to P_{i}*N\) works.
Consequently, the composition \[P_{m}*M\xrightarrow{1_{P_{m}}*\theta}P_{m}*N\xrightarrow{f_{\eta}*1_{N}}Y*N\] represents \((\eta*N)\circ(X*\theta)\). The two compositions are equal, and the equality follows. Now suppose that \(m=0\) and \(n>0\), and fix a projective resolution \((Q_{\bullet},\partial_{\bullet})\) of \(M\) in \(\mathscr{M}\). Then \(\theta\) is represented by a map \(g_{\theta}\colon Q_{n}\to N\), and so since \((X*Q_{\bullet},1_{X}*\partial_{\bullet})\) is a projective resolution of \(X*M\) in \(\mathscr{M}\), the composition \[X*Q_{n}\xrightarrow{1_{X}*g_{\theta}}X*N\xrightarrow{\eta*1_{N}}Y*N\] represents \((\eta*N)\circ(X*\theta)\). In this case, for \((Y*\theta)\circ(\eta*M)\), we start by lifting the morphism \(\eta*1_{M}\) along the projective resolutions \((X*Q_{\bullet},1_{X}*\partial_{\bullet})\) and \((Y*Q_{\bullet},1_{Y}*\partial_{\bullet})\); for every \(j\), the map \(\eta*1_{Q_{j}}\colon X*Q_{j}\to Y*Q_{j}\) works. The composition \[X*Q_{n}\xrightarrow{\eta*1_{Q_{n}}}Y*Q_{n}\xrightarrow{1_{Y}*g_{\theta}}Y*N\] now represents \((Y*\theta)\circ(\eta*M)\), and it equals the composition above. Thus equality holds also in this case. Finally, suppose that both \(m\) and \(n\) are positive integers. In this case, we represent our elements by exact sequences in \(\mathscr{C}\) and \(\mathscr{M}\), starting with the case when \(m=n=1\). Thus \(\eta\) and \(\theta\) are given by short exact sequences \[0\to Y\xrightarrow{f}Z\xrightarrow{g}X\to 0\] and \[0\to N\xrightarrow{u}L\xrightarrow{v}M\to 0\] from which we obtain a \(3\times 3\) commutative diagram (omitted in this extraction) in which all the rows and the columns are exact. Moreover, the top row is precisely \(\eta*N\), the bottom row is \(\eta*M\), the left column is \(Y*\theta\), and the right column is \(X*\theta\). Therefore, by the abelian category version of the \(3\times 3\) splicing lemma (cf. [14, Lemma VIII.3.1]), we obtain the equality \[(\eta*N)\circ(X*\theta)=-(Y*\theta)\circ(\eta*M)\] For arbitrary \(m,n\geq 1\), we decompose \(\eta\) and \(\theta\) into Yoneda products \(\eta=\eta_{1}\circ\cdots\circ\eta_{m}\) and \(\theta=\theta_{1}\circ\cdots\circ\theta_{n}\) of short exact sequences, say with \(\eta_{i}\in\operatorname{Ext}^{1}_{\mathscr{C}}(X_{i},Y_{i})\) and \(\theta_{j}\in\operatorname{Ext}^{1}_{\mathscr{M}}(M_{j},N_{j})\), and with \(X_{m}=X\), \(Y_{1}=Y\) and \(M_{n}=M\), \(N_{1}=N\). Using repeatedly what we have just proved, we obtain \[(\eta*N)\circ(X*\theta) = ((\eta_{1}*N)\circ\cdots\circ(\eta_{m}*N))\circ((X*\theta_{1})\circ\cdots\circ(X*\theta_{n}))\] \[= (-1)^{mn}\left((Y*\theta_{1})\circ\cdots\circ(Y*\theta_{n})\right)\circ((\eta_{1}*M)\circ\cdots\circ(\eta_{m}*M))\] \[= (-1)^{mn}(Y*\theta)\circ(\eta*M)\] This concludes the proof. Specializing to the case when \(X=Y=\mathbf{1}\), we obtain what we are after, recorded in the following corollary. Note also that when we specialize even further, by taking \(\mathscr{M}=\mathscr{C}\) and \(M=N=\mathbf{1}\), we recover the graded-commutativity of \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\). **Corollary 2.3**.: _Given any objects \(M,N\in\mathscr{M}\) and elements \(\eta\in\operatorname{Ext}^{m}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) and \(\theta\in\operatorname{Ext}^{n}_{\mathscr{M}}(M,N)\), the equality_ \[\eta\cdot\theta=(-1)^{mn}\theta\cdot\eta\] _holds._ The algebra \(\operatorname{Ext}^{*}_{\mathscr{C}}(\mathbf{1},\mathbf{1})\) is the _cohomology ring_ \(\operatorname{H}^{*}(\mathscr{C})\) of \(\mathscr{C}\).
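A worked instance of Corollary 2.3, recorded here because it is used in the next paragraph (the computation is standard): take \(\mathscr{M}=\mathscr{C}\), \(M=N=\mathbf{1}\) and \(\theta=\eta\) homogeneous of odd degree \(m\). Then \[\eta\cdot\eta=(-1)^{m^{2}}\,\eta\cdot\eta=-\eta\cdot\eta,\] so \(2\eta^{2}=0\), and hence every homogeneous element of odd degree in \(\operatorname{H}^{*}(\mathscr{C})\) squares to zero whenever the characteristic of \(k\) is not two.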
It is at the center of the following conjecture from [13], a conjecture which is still open: **Conjecture**.: The cohomology ring \(\operatorname{H}^{*}(\mathscr{C})\) is finitely generated, and \(\operatorname{Ext}^{*}_{\mathscr{C}}(X,X)\) is a finitely generated \(\operatorname{H}^{*}(\mathscr{C})\)-module for all objects \(X\in\mathscr{C}\). If the characteristic of the ground field \(k\) is two, then graded-commutativity is the same as ordinary commutativity. If, on the other hand, the characteristic of \(k\) is not two, then the even part of the cohomology ring \(\operatorname{H}^{*}(\mathscr{C})\) is commutative, and the homogeneous elements of odd degrees square to zero. When we work with support varieties, nilpotent elements in the ambient commutative ring are redundant, and this motivates the first part of the following definition. **Definition**.: (1) We define \[\mathrm{H}^{\bullet}(\mathscr{C})=\left\{\begin{array}{ll}\mathrm{H}^{*}(\mathscr{C})&\text{if the characteristic of $k$ is two,}\\ \mathrm{H}^{2*}(\mathscr{C})&\text{if not.}\end{array}\right.\] (2) We say that the finite tensor category \(\mathscr{C}\) satisfies the _finiteness condition_ \(\mathbf{Fg}\) if the cohomology ring \(\mathrm{H}^{*}(\mathscr{C})\) is finitely generated, and \(\mathrm{Ext}^{*}_{\mathscr{C}}(X,X)\) is a finitely generated \(\mathrm{H}^{*}(\mathscr{C})\)-module for all objects \(X\in\mathscr{C}\). As explained in [6, Remark 3.5], the finiteness condition \(\mathbf{Fg}\) and the conjecture can be stated in terms of \(\mathrm{H}^{\bullet}(\mathscr{C})\) instead of \(\mathrm{H}^{*}(\mathscr{C})\). Namely, the condition \(\mathbf{Fg}\) holds for \(\mathscr{C}\) if and only if \(\mathrm{H}^{\bullet}(\mathscr{C})\) is finitely generated, and \(\mathrm{Ext}^{*}_{\mathscr{C}}(X,X)\) is a finitely generated \(\mathrm{H}^{\bullet}(\mathscr{C})\)-module for every object \(X\in\mathscr{C}\). Note also that when \(\mathbf{Fg}\) holds for \(\mathscr{C}\), then for all objects \(X,Y\in\mathscr{C}\), the \(\mathrm{H}^{*}(\mathscr{C})\)-module \(\mathrm{Ext}^{*}_{\mathscr{C}}(X,Y)\) is finitely generated, and not just the two modules \(\mathrm{Ext}^{*}_{\mathscr{C}}(X,X)\) and \(\mathrm{Ext}^{*}_{\mathscr{C}}(Y,Y)\). This follows from the simple fact that the \(\mathrm{H}^{*}(\mathscr{C})\)-module \(\mathrm{Ext}^{*}_{\mathscr{C}}(X\oplus Y,X\oplus Y)\) is finitely generated by assumption, and it has \(\mathrm{Ext}^{*}_{\mathscr{C}}(X,Y)\) as a direct summand. **Remark 2.4**.: When the finiteness condition \(\mathbf{Fg}\) holds for \(\mathscr{C}\), then what about the cohomology of \(\mathscr{M}\)? It turns out that it is automatically finitely generated. Namely, by [16, Proposition 3.5], if \(\mathbf{Fg}\) holds for \(\mathscr{C}\), then \(\mathrm{Ext}^{*}_{\mathscr{M}}(M,M)\) is a finitely generated \(\mathrm{H}^{*}(\mathscr{C})\)-module for every object \(M\in\mathscr{M}\). As for \(\mathscr{C}\), this implies that for all objects \(M,N\in\mathscr{M}\), the \(\mathrm{H}^{*}(\mathscr{C})\)-module \(\mathrm{Ext}^{*}_{\mathscr{M}}(M,N)\) is finitely generated. Moreover, also here we may replace \(\mathrm{H}^{*}(\mathscr{C})\) with \(\mathrm{H}^{\bullet}(\mathscr{C})\). For objects \(M,N\in\mathscr{M}\), we now define \[I_{\mathscr{M}}(M,N)=\left\{\eta\in\mathrm{H}^{*}(\mathscr{C})\ |\ \eta\cdot\theta=0\text{ for all }\theta\in\mathrm{Ext}^{*}_{\mathscr{M}}(M,N)\right\},\] that is, the annihilator ideal of \(\mathrm{Ext}^{*}_{\mathscr{M}}(M,N)\) in \(\mathrm{H}^{*}(\mathscr{C})\).
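We record one classical setting where this condition is known to hold (an orientation example, not part of the paper's argument at this point): for \(\mathscr{C}=\operatorname{mod}kG\) with \(G\) a finite group, the unit object is the trivial module \(k\), so \[\operatorname{H}^{*}(\mathscr{C})=\operatorname{Ext}^{*}_{kG}(k,k)\simeq\operatorname{H}^{*}(G,k),\] which is a finitely generated \(k\)-algebra by the Evens-Venkov theorem, and \(\operatorname{Ext}^{*}_{kG}(M,M)\) is a finitely generated module over it for every finitely generated \(kG\)-module \(M\); thus \(\mathbf{Fg}\) holds, and the annihilator ideals just defined recover the classical cohomological support varieties of modular representation theory.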
For a single object \(M\) we write just \(I_{\mathscr{M}}(M)\) instead of \(I_{\mathscr{M}}(M,M)\). Moreover, for any ideal \(I\subseteq\mathrm{H}^{*}(\mathscr{C})\), we denote by \(Z(I)\) the set of maximal ideals \(\mathfrak{m}\) of \(\mathrm{H}^{*}(\mathscr{C})\) with \(I\subseteq\mathfrak{m}\). Finally, we set \(\mathfrak{m}_{0}=\mathrm{H}^{+}(\mathscr{C})\), the ideal generated by all the homogeneous elements of positive degrees in \(\mathrm{H}^{*}(\mathscr{C})\). Then \(\mathfrak{m}_{0}\) is the unique graded maximal ideal of \(\mathrm{H}^{*}(\mathscr{C})\), since \(\mathrm{H}^{0}(\mathscr{C})\) is a field; see Remark 2.1(6). Consequently, the annihilator ideal that we just defined, which is graded, must be contained in \(\mathfrak{m}_{0}\) whenever \(\mathrm{Ext}^{*}_{\mathscr{M}}(M,N)\) is nonzero. **Definition**.: The _support variety_ of an ordered pair of objects \((M,N)\) in \(\mathscr{M}\) is \[V_{\mathscr{M}}(M,N)\stackrel{\mathrm{def}}{=}\{\mathfrak{m}_{0}\}\cup Z(I_{\mathscr{M}}(M,N))\] For a single object \(M\in\mathscr{M}\), we define its support variety as \(V_{\mathscr{M}}(M)=V_{\mathscr{M}}(M,M)\). In the definition, the explicit inclusion of the unique graded maximal ideal \(\mathfrak{m}_{0}\) has been made in order to avoid empty support varieties; if \(\operatorname{Ext}^{*}_{\mathscr{M}}(M,N)\) is nonzero, then \(\mathfrak{m}_{0}\) is automatically contained in the set \(Z(I_{\mathscr{M}}(M,N))\), since \(I_{\mathscr{M}}(M,N)\) is a graded proper ideal of \(\mathrm{H}^{*}(\mathscr{C})\). The support variety \(V_{\mathscr{M}}(M,N)\) is called _trivial_ if \(V_{\mathscr{M}}(M,N)=\{\mathfrak{m}_{0}\}\). **Remark 2.5**.: (1) When we deal with objects in the category \(\mathscr{C}\) itself, we use the notation \(I_{\mathscr{C}}(X,Y)\), \(V_{\mathscr{C}}(X,Y)\) and \(V_{\mathscr{C}}(X)\). (2) We define \(V_{\mathscr{C}}\) as \(V_{\mathscr{C}}(\mathbf{1})\); this is just the set of maximal ideals of the cohomology ring \(\operatorname{H}^{*}(\mathscr{C})\). Note that \(V_{\mathscr{M}}(M,N)\subseteq V_{\mathscr{C}}\) for all \(M,N\in\mathscr{M}\). (3) An important feature of support varieties - probably the most important - is the dimension. For objects \(M,N\in\mathscr{M}\), the dimension of \(V_{\mathscr{M}}(M,N)\), denoted \(\dim V_{\mathscr{M}}(M,N)\), is defined to be the Krull dimension of the ring \(\operatorname{H}^{*}(\mathscr{C})/I_{\mathscr{M}}(M,N)\). If this dimension is zero, then the support variety is necessarily trivial. For suppose that \(V_{\mathscr{M}}(M,N)\) contains a maximal ideal \(\mathfrak{m}\) other than \(\mathfrak{m}_{0}\), and let \(\mathfrak{m}^{*}\) be the graded ideal of \(\operatorname{H}^{*}(\mathscr{C})\) generated by all the homogeneous elements in \(\mathfrak{m}\). By [7, Lemma 1.5.6], this is a prime ideal, and so since the graded ideal \(I_{\mathscr{M}}(M,N)\) is contained in \(\mathfrak{m}\), we see that \(I_{\mathscr{M}}(M,N)\subseteq\mathfrak{m}^{*}\) (for \(\mathfrak{m}^{*}\) is the unique maximal graded ideal contained in \(\mathfrak{m}\)). As \(\mathfrak{m}\) is not graded, the inclusion \(\mathfrak{m}^{*}\subset\mathfrak{m}\) is strict, hence the Krull dimension of \(\operatorname{H}^{*}(\mathscr{C})/I_{\mathscr{M}}(M,N)\) is at least \(1\). Thus when \(\dim V_{\mathscr{M}}(M,N)=0\), then \(V_{\mathscr{M}}(M,N)=\{\mathfrak{m}_{0}\}\).
However, when the finiteness condition \(\mathbf{Fg}\) holds for \(\mathscr{M}\), then the converse is also true, so that \[\dim V_{\mathscr{M}}(M,N)=0\Longleftrightarrow V_{\mathscr{M}}(M,N)=\{\mathfrak{m}_{0}\}\] For in this case, if \(V_{\mathscr{M}}(M,N)=\{\mathfrak{m}_{0}\}\) and \(\operatorname{Ext}^{*}_{\mathscr{M}}(M,N)\) is nonzero, then the radical \(\sqrt{I_{\mathscr{M}}(M,N)}\) equals \(\mathfrak{m}_{0}\), by [15, Theorem 25]. Consequently, the Krull dimension of \(\operatorname{H}^{*}(\mathscr{C})/I_{\mathscr{M}}(M,N)\) must be zero. In the following result, we collect some of the basic properties enjoyed by support varieties for objects of \(\mathscr{M}\). For objects of \(\mathscr{C}\), these properties were listed in [6, Proposition 3.3]. In that paper, we made the assumption that the ground field \(k\) be algebraically closed, but that assumption was never needed. The proofs carry over to the general setting of exact module categories, and note that some of them rely on Corollary 2.3. Only one of the properties, number (6), requires an argument that is special to \(\mathscr{M}\). **Proposition 2.6**.: _For objects \(M,N\in\mathscr{M}\), the following hold._ (1)_\(V_{\mathscr{M}}(M\oplus N)=V_{\mathscr{M}}(M)\cup V_{\mathscr{M}}(N)\)._ (2)_\(V_{\mathscr{M}}(M,N)\subseteq V_{\mathscr{M}}(M)\cap V_{\mathscr{M}}(N)\)._ (3)_\(V_{\mathscr{M}}(M)=\cup_{i=1}^{t}V_{\mathscr{M}}(M,S_{i})=\cup_{i=1}^{t}V_{\mathscr{M}}(S_{i},M)\), where \(S_{1},\ldots,S_{t}\) are all the simple objects of \(\mathscr{M}\) (up to isomorphism)._ (4) _Given any short exact sequence_ \[0\to L_{1}\to L_{2}\to L_{3}\to 0\] _in \(\mathscr{M}\), the inclusion \(V_{\mathscr{M}}(L_{u})\subseteq V_{\mathscr{M}}(L_{v})\cup V_{\mathscr{M}}(L_{w})\) holds whenever \(\{u,v,w\}=\{1,2,3\}\)._ (5) _If there is a short exact sequence_ \[0\to M\to P\to N\to 0\] _in \(\mathscr{M}\), in which \(P\) is projective, then \(V_{\mathscr{M}}(M)=V_{\mathscr{M}}(N)\)._ (6) _For every object \(X\in\mathscr{C}\), the inclusion \(V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{C}}(X)\) holds. Moreover, if the category \(\mathscr{C}\) is braided, then \(V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\)._ Proof.: As mentioned, only (6) needs an argument, as the proof of [6, Proposition 3.3] works for the rest. The scalar action of \(\operatorname{H}^{\bullet}(\mathscr{C})\) on \(\operatorname{Ext}^{*}_{\mathscr{M}}(X*M,X*M)\) is defined in terms of the ring homomorphism \[\operatorname{H}^{\bullet}(\mathscr{C})\xrightarrow{\varphi_{X*M}} \operatorname{Ext}^{*}_{\mathscr{M}}(X*M,X*M)\] which in turn is induced by the module product \(-*(X*M)\). Now, for every object \(Y\in\mathscr{C}\) there is an isomorphism \(Y*(X*M)\simeq(Y\otimes X)*M\), functorial in \(Y\). Therefore, the ring homomorphism factors as the composition \[\operatorname{H}^{\bullet}(\mathscr{C})\xrightarrow{\varphi_{X}} \operatorname{Ext}^{*}_{\mathscr{C}}(X,X)\xrightarrow{-*M}\operatorname{Ext}^{*}_{\mathscr{M}}(X*M,X*M)\] where the ring homomorphism \(\varphi_{X}\) is induced by the tensor product \(-\otimes X\). This implies \(I_{\mathscr{C}}(X)\subseteq I_{\mathscr{M}}(X*M)\), and so the inclusion \(V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{C}}(X)\) follows by definition of support varieties. Suppose now that the category \(\mathscr{C}\) is braided, and take any homogeneous element \(\eta\in\operatorname{H}^{\bullet}(\mathscr{C})\).
By definition, for every object \(Y\in\mathscr{C}\) there is an isomorphism \(Y\otimes X\to X\otimes Y\), functorial in \(Y\), giving \[\varphi_{X*M}(\eta)=\eta*(X*M)=(\eta\otimes X)*M=(X\otimes\eta)*M=X*(\eta*M)=X*\varphi_{M}(\eta)\] as elements of \(\operatorname{Ext}^{*}_{\mathscr{M}}(X*M,X*M)\). Thus \(I_{\mathscr{M}}(M)\subseteq I_{\mathscr{M}}(X*M)\), giving \(V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{M}}(M)\). Recall from Remark 2.1(3) that every object \(M\in\mathscr{M}\) (and every object of \(\mathscr{C}\)) admits a minimal projective resolution \((P_{\bullet},d_{\bullet})\), which is unique up to isomorphism. We define the \(n\)th _syzygy_ of \(M\) to be the image of the morphism \(d_{n}\), and denote it by \(\Omega^{n}_{\mathscr{M}}(M)\) (or \(\Omega^{n}_{\mathscr{C}}(X)\) for an object \(X\in\mathscr{C}\)). As shown in [6, Lemma 2.4], the minimal projective resolution has the property that \[\operatorname{Ext}^{n}_{\mathscr{M}}(M,S)\simeq\operatorname{Hom}_{\mathscr{M}}(P_{n},S)\simeq\operatorname{Hom}_{\mathscr{M}}(\Omega^{n}_{\mathscr{M}}(M),S)\] for every \(n\geq 1\) and every simple object \(S\in\mathscr{M}\). Given a sequence \(a_{\bullet}=(a_{0},a_{1},a_{2},\dots)\) of nonnegative real numbers, we denote by \(\gamma(a_{\bullet})\) its polynomial rate of growth, that is, the infimum of integers \(c\geq 0\) for which there exists a number \(b\in\mathbb{R}\) with \(a_{n}\leq bn^{c-1}\) for all \(n\gg 0\). We now define the _complexity_ of the object \(M\), denoted \(\operatorname{cx}_{\mathscr{M}}(M)\), to be \(\gamma(\ell P_{\bullet})\), where \(\ell P_{n}\) denotes the length of the object \(P_{n}\). For objects of \(\mathscr{C}\), this is not the same definition as used in [6], where we defined the complexity to be the rate of growth of the Frobenius-Perron dimensions of the objects of the minimal projective resolution. However, the two definitions are equivalent; there are only finitely many indecomposable projective objects of \(\mathscr{C}\) (one for each simple object), and the two definitions just rely on attaching different sets of positive real numbers - all at least \(1\) - to these. As explained in [6, Remark 4.2], the complexity of \(M\) is the same as the rate of growth of the sequence \(\left(\dim_{k}\operatorname{Ext}^{n}_{\mathscr{M}}(M,S_{1}\oplus\dots\oplus S_{t})\right)_{n=0}^{\infty}\), where \(S_{1},\dots,S_{t}\) are all the simple objects of \(\mathscr{M}\). Moreover, it also equals the rate of growth of the sequence whose \(n\)th term is the number of indecomposable summands of \(P_{n}\). We end this section with a result which sums up the properties that were proved in [6] for support varieties when \(\mathbf{Fg}\) holds. These properties were proved for objects in a finite tensor category, and not objects in a module category, but as for Proposition 2.6, the proofs carry over to the more general setting, so we omit them. Moreover, as mentioned before Proposition 2.6, the assumption we made in [6] that \(k\) be algebraically closed was never needed. Recall first that if \(\zeta\) is a nonzero homogeneous element of \(\operatorname{H}^{\bullet}(\mathscr{C})\), say of degree \(n\), then it can be represented by an epimorphism \(f_{\zeta}\colon\Omega^{n}_{\mathscr{C}}(\mathbf{1})\to\mathbf{1}\) (it is necessarily an epimorphism since the unit object is simple in \(\mathscr{C}\)). We denote the kernel of this morphism by \(L_{\zeta}\); this object is known as _Carlson's \(L_{\zeta}\)-object_.
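Two instances pin down the rate of growth just introduced (immediate from the definition; they are not quoted from the paper): \[\gamma\big((1,1,1,\dots)\big)=1\qquad\text{and}\qquad\gamma\big((n+1)_{n\geq 0}\big)=2,\] since a bounded sequence that is not eventually zero satisfies \(a_{n}\leq b\,n^{0}\) but no bound of the form \(a_{n}\leq b\,n^{-1}\), while a linearly growing sequence satisfies \(a_{n}\leq b\,n^{1}\) for large \(n\) but admits no constant bound. In particular, an object whose minimal projective resolution is periodic, and therefore has bounded lengths, has complexity \(1\). Note also, in anticipation of Theorem 2.7(3) below, that applying that equality with \(\mathscr{M}=\mathscr{C}\) and \(M=\mathbf{1}\) gives \(V_{\mathscr{C}}(L_{\zeta})=Z(\zeta)\cap V_{\mathscr{C}}=Z(\zeta)\) for the Carlson object \(L_{\zeta}\) just defined: every hypersurface in \(V_{\mathscr{C}}\) arises as a support variety.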
The module version of [6, Theorem 5.2] gives an inclusion \(V_{\mathscr{M}}(L_{\zeta}\ast M)\subseteq Z(\zeta)\cap V_{\mathscr{M}}(M)\) for every object \(M\in\mathscr{M}\), even without assuming that \(\mathscr{C}\) is braided, as in Proposition 2.6(6). **Theorem 2.7**.: _If \(\mathscr{C}\) satisfies \(\mathbf{Fg}\), then the following hold for every object \(M\in\mathscr{M}\)._ (1) \(\operatorname{cx}_{\mathscr{M}}(M)=\gamma\left(\left(\dim_{k}\operatorname{Ext}^{n}_{\mathscr{M}}(M,M)\right)_{n=0}^{\infty}\right)=\dim V_{\mathscr{M}}(M)\leq\dim\operatorname{H}^{\bullet}(\mathscr{C})\)_, where \(\dim\operatorname{H}^{\bullet}(\mathscr{C})\) is the Krull dimension of the cohomology ring \(\operatorname{H}^{\bullet}(\mathscr{C})\)._ (2) _The object \(M\) is projective if and only if \(V_{\mathscr{M}}(M)\) is trivial, and if and only if \(\operatorname{cx}_{\mathscr{M}}(M)=0\)._ (3) _\(V_{\mathscr{M}}(L_{\zeta}\ast M)=Z(\zeta)\cap V_{\mathscr{M}}(M)\) for every nonzero homogeneous element \(\zeta\in\operatorname{H}^{\bullet}(\mathscr{C})\)._ (4) _Given any nonempty conical subvariety \(V\subseteq V_{\mathscr{M}}(M)\), there exists an object \(N\in\mathscr{M}\) with \(V_{\mathscr{M}}(N)=V\)._ (5) _Given any nonnegative integer \(c\leq\operatorname{cx}_{\mathscr{M}}(M)\), there exists an object \(N\in\mathscr{M}\) with \(\operatorname{cx}_{\mathscr{M}}(N)=c\)._ (6) _If \(\operatorname{cx}_{\mathscr{M}}(M)\geq 1\), then there exists a short exact sequence_ \[0\to M\to K\to\Omega^{n}_{\mathscr{M}}(M)\to 0\] _for some \(n\geq 0\) and some object \(K\in\mathscr{M}\) with \(\operatorname{cx}_{\mathscr{M}}(K)=\operatorname{cx}_{\mathscr{M}}(M)-1\)._ (7) _Given any object \(N\in\mathscr{M}\), the support variety \(V_{\mathscr{M}}(M,N)\) is trivial if and only if \(\operatorname{Ext}^{n}_{\mathscr{M}}(M,N)=0\) for all \(n\gg 0\), if and only if \(\operatorname{Ext}^{n}_{\mathscr{M}}(M,N)=0\) for all \(n\geq 1\)._ (8) _If \(V_{\mathscr{M}}(M)=V_{1}\cup V_{2}\) for conical subvarieties \(V_{1},V_{2}\) with \(V_{1}\cap V_{2}=\{\mathfrak{m}_{0}\}\), then \(M\simeq M_{1}\oplus M_{2}\) for some objects \(M_{1},M_{2}\) with \(V_{\mathscr{M}}(M_{i})=V_{i}\)._ ## 3. The module product property Recall that we have fixed a field \(k\) - not necessarily algebraically closed - together with a finite tensor \(k\)-category \((\mathscr{C},\otimes,\mathbf{1})\) and an exact left module category \((\mathscr{M},\ast)\) over \(\mathscr{C}\). Moreover, we have assumed that \(\mathscr{M}\) has a finite set of isomorphism classes of simple objects. In this section we prove the main result: the question of whether the module product property holds for support varieties reduces to the question of whether it holds if we only consider indecomposable periodic objects. By a _periodic_ object, we mean an object \(M\in\mathscr{M}\) with \(M\simeq\Omega^{n}_{\mathscr{M}}(M)\) for some \(n\geq 1\). In other words, the minimal projective resolution of \(M\) is periodic of period \(n\). We start with the following result, which, together with its corollary, characterizes the indecomposable periodic objects in terms of their complexities. **Theorem 3.1**.: _If \(\mathscr{C}\) satisfies \(\mathbf{Fg}\), then the following are equivalent for an object \(M\in\mathscr{M}\):_ 1. \(\operatorname{cx}_{\mathscr{M}}(M)=1\)_;_ 2. \(\dim V_{\mathscr{M}}(M)=1\)_;_ 3. \(M\) _is isomorphic to_ \(N\oplus Q\) _for some nonzero periodic object_ \(N\) _and projective object_ \(Q\)_._ Proof.: The equivalence of (1) and (2) is Theorem 2.7(1).
If (3) holds, then the sequence \[\left(\dim_{k}\operatorname{Ext}^{n}_{\mathscr{M}}(M,M)\right)_{n=0}^{\infty}\] is bounded and not eventually zero, and so its rate of growth is \(1\). By Theorem 2.7(1) again, this rate of growth equals the complexity of \(M\), hence (1) follows. Finally, suppose that (1) holds. Then by Theorem 2.7(6), there exists a short exact sequence \[0\to M\to K\to\Omega_{\mathscr{M}}^{n}(M)\to 0\] for some \(n\geq 0\), with \(\operatorname{cx}_{\mathscr{M}}(K)=0\). By Theorem 2.7(2), the object \(K\) is then projective, and so by Schanuel's Lemma for abelian categories (see [6, Lemma 2.2]), there is an isomorphism \(M\simeq\Omega_{\mathscr{M}}^{n+1}(M)\oplus Q\) for some projective object \(Q\). Now take \(N=\Omega_{\mathscr{M}}^{n+1}(M)\); this is a periodic object since \[N=\Omega_{\mathscr{M}}^{n+1}(M)\simeq\Omega_{\mathscr{M}}^{n+1}\left(\Omega_{ \mathscr{M}}^{n+1}(M)\oplus Q\right)=\Omega_{\mathscr{M}}^{n+1}(N\oplus Q) \simeq\Omega_{\mathscr{M}}^{n+1}(N)\] This shows that (1) implies (3). **Corollary 3.2**.: _If \(\mathscr{C}\) satisfies \(\mathbf{Fg}\), and \(M\) is a nonzero indecomposable object of \(\mathscr{M}\), then \(\operatorname{cx}_{\mathscr{M}}(M)=1\) if and only if \(M\) is periodic._ We have defined support varieties in terms of the maximal ideal spectrum of \(\operatorname{H}^{\ast}(\mathscr{C})\). However, in some of the arguments that follow, we need to consider prime ideals in general; as usual, we denote the set of prime ideals of \(\operatorname{H}^{\ast}(\mathscr{C})\) by \(\operatorname{Spec}\operatorname{H}^{\ast}(\mathscr{C})\). For an object \(M\in\mathscr{M}\), we denote the support of the \(\operatorname{H}^{\ast}(\mathscr{C})\)-module \(\operatorname{Ext}_{\mathscr{M}}^{\ast}(M,M)\) by \(\operatorname{Supp}_{\mathscr{M}}(M)\), that is, the set of prime ideals of \(\operatorname{H}^{\ast}(\mathscr{C})\) with \(\operatorname{Ext}_{\mathscr{M}}^{\ast}(M,M)_{\mathfrak{p}}\neq 0\). When the finiteness condition \(\mathbf{Fg}\) holds, then \[\operatorname{Supp}_{\mathscr{M}}(M)=\{\mathfrak{p}\in\operatorname{Spec} \operatorname{H}^{\ast}(\mathscr{C})\mid I_{\mathscr{M}}(M)\subseteq\mathfrak{ p}\}\] and hence \[V_{\mathscr{M}}(M)=\operatorname{Supp}_{\mathscr{M}}(M)\cap\operatorname{ MaxSpec}\operatorname{H}^{\ast}(\mathscr{C})\] whenever \(M\) is a nonzero object. Note also that the finiteness condition implies that the set of minimal primes of \(\operatorname{Supp}_{\mathscr{M}}(M)\) is finite, and that these are associated primes of the \(\operatorname{H}^{\ast}(\mathscr{C})\)-module \(\operatorname{Ext}_{\mathscr{M}}^{\ast}(M,M)\); see [11, Theorem 3.1.a]. Furthermore, by [7, Lemma 1.5.6], these minimal primes are in fact graded. When \(M\) is nonzero and \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{t}\) are the minimal primes of \(\operatorname{Supp}_{\mathscr{M}}(M)\), then \(V_{\mathscr{M}}(M)=Z(\mathfrak{p}_{1})\cup\cdots\cup Z(\mathfrak{p}_{t})\). The \(Z(\mathfrak{p}_{i})\) are the irreducible components of \(V_{\mathscr{M}}(M)\), hence the support variety is irreducible if and only if \(\operatorname{Supp}_{\mathscr{M}}(M)\) contains a unique minimal prime (when \(i\neq j\) then \(Z(\mathfrak{p}_{i})\neq Z(\mathfrak{p}_{j})\); see the paragraphs following [6, Remark 3.5]). The following result shows that the support variety of an indecomposable periodic object is of this form. 
**Proposition 3.3**.: _Suppose that \(\mathscr{C}\) satisfies \(\mathbf{Fg}\), and that \(X\in\mathscr{C}\) and \(M\in\mathscr{M}\) are nonzero indecomposable periodic objects. Then \(V_{\mathscr{C}}(X)\) and \(V_{\mathscr{M}}(M)\) are irreducible_ (_that is, \(\operatorname{Supp}_{\mathscr{C}}(X)\) contains a unique minimal prime, and so does \(\operatorname{Supp}_{\mathscr{M}}(M)\)_)_, and either \(V_{\mathscr{C}}(X)=V_{\mathscr{M}}(M)\), or \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)=\{\mathfrak{m}_{0}\}\)._ Proof.: Let \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{t}\) be the minimal primes of \(\operatorname{Supp}_{\mathscr{M}}(M)\), so that \(V_{\mathscr{M}}(M)=Z(\mathfrak{p}_{1})\cup\cdots\cup Z(\mathfrak{p}_{t})\); recall that these primes are all graded. Note first that none of them can be maximal, that is, equal to \(\mathfrak{m}_{0}\) (there is only one graded maximal ideal in \(\operatorname{H}^{\ast}(\mathscr{C})\), namely \(\mathfrak{m}_{0}\)). For suppose this were the case, with, say, \(\mathfrak{p}_{1}=\mathfrak{m}_{0}\). If \(t=1\), then \(V_{\mathscr{M}}(M)=Z(\mathfrak{m}_{0})=\{\mathfrak{m}_{0}\}\), and hence \(\dim V_{\mathscr{M}}(M)=0\). But \(M\) is nonzero and periodic, and so by Theorem 3.1, the dimension of \(V_{\mathscr{M}}(M)\) must be \(1\). If \(\mathfrak{p}_{1}=\mathfrak{m}_{0}\) and \(t\geq 2\) then the prime \(\mathfrak{p}_{1}\) is not minimal in \(\operatorname{Supp}_{\mathscr{M}}(M)\), since the other primes \(\mathfrak{p}_{2},\ldots,\mathfrak{p}_{t}\) are contained in \(\mathfrak{m}_{0}\). Thus none of the minimal primes \(\mathfrak{p}_{i}\) are maximal, and so the Krull dimension of \(\operatorname{H}^{\!\ast}(\mathscr{C})/\mathfrak{p}_{i}\) is at least \(1\), for each \(i\). But since \(\dim V_{\mathscr{M}}(M)=1\), and this dimension is the maximum among \(\dim\operatorname{H}^{\!\ast}(\mathscr{C})/\mathfrak{p}_{1},\ldots,\dim \operatorname{H}^{\!\ast}(\mathscr{C})/\mathfrak{p}_{t}\), it follows that \(\dim\operatorname{H}^{\!\ast}(\mathscr{C})/\mathfrak{p}_{i}=1\) for each \(i\). Of course, since \(\mathfrak{p}_{i}\) is graded, the graded maximal ideal \(\mathfrak{m}_{0}\) belongs to \(Z(\mathfrak{p}_{i})\), but this irreducible component must also contain a non-graded maximal ideal. For if \(\mathfrak{m}_{0}\) were the only maximal ideal containing \(\mathfrak{p}_{i}\), then by [15, Theorem 25] the radical \(\sqrt{\mathfrak{p}_{i}}\) of \(\mathfrak{p}_{i}\) would be \(\mathfrak{m}_{0}\), a contradiction since \(\dim\operatorname{H}^{\!\ast}(\mathscr{C})/\sqrt{\mathfrak{p}_{i}}=\dim \operatorname{H}^{\!\ast}(\mathscr{C})/\mathfrak{p}_{i}=1\). Suppose now that \(t\geq 2\), and set \(V_{1}=Z(\mathfrak{p}_{1})\) and \(V_{2}=Z(\mathfrak{p}_{2})\cup\cdots\cup Z(\mathfrak{p}_{t})=Z(\mathfrak{p}_{2 }\cdots\mathfrak{p}_{t})\). If the intersection \(V_{1}\cap V_{2}\) contains a non-graded maximal ideal \(\mathfrak{m}\), then both \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{i}\) are contained in \(\mathfrak{m}\), for some \(i\geq 2\). Now consider the graded ideal \(\mathfrak{m}^{\ast}\) generated by all the homogeneous elements of \(\mathfrak{m}\); it is a graded prime ideal by [7, Lemma 1.5.6]. Then since \(\mathfrak{p}_{1}\subseteq\mathfrak{m}^{\ast}\) and \(\mathfrak{p}_{i}\subseteq\mathfrak{m}^{\ast}\), and \(\mathfrak{m}^{\ast}\) is properly contained in \(\mathfrak{m}\), we see that \(\mathfrak{p}_{1}=\mathfrak{m}^{\ast}=\mathfrak{p}_{i}\). 
Namely, if \(\mathfrak{p}_{1}\), say, were properly contained in \(\mathfrak{m}^{\ast}\), then the Krull dimension of \(\operatorname{H}^{\ast}(\mathscr{C})/\mathfrak{p}_{1}\) would be at least \(2\), and similarly for \(\mathfrak{p}_{i}\). But \(\mathfrak{p}_{1}\neq\mathfrak{p}_{i}\), and so we conclude that \(V_{1}\cap V_{2}=\{\mathfrak{m}_{0}\}\). Then by Theorem 2.7(8), the object \(M\) admits a direct sum decomposition \(M\simeq M_{1}\oplus M_{2}\), where \(M_{1}\) and \(M_{2}\) are objects with \(V_{\mathscr{M}}(M_{i})=V_{i}\). Moreover, since \(\dim V_{i}=1\), none of these objects can be zero. This is a contradiction, since the object \(M\) is indecomposable, and so \(V_{\mathscr{M}}(M)\) must be irreducible. The same proof works for \(V_{\mathscr{C}}(X)\). For the last part of the statement, let \(\mathfrak{p}\) and \(\mathfrak{q}\) be the unique minimal primes of \(\operatorname{Supp}_{\mathscr{C}}(X)\) and \(\operatorname{Supp}_{\mathscr{M}}(M)\), respectively. Then \(V_{\mathscr{C}}(X)=Z(\mathfrak{p})\) and \(V_{\mathscr{M}}(M)=Z(\mathfrak{q})\), giving \[V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)=Z(\mathfrak{p})\cap Z(\mathfrak{q})=Z(\mathfrak{p}+\mathfrak{q})\] Suppose that \(V_{\mathscr{C}}(X)\neq V_{\mathscr{M}}(M)\), so that \(\mathfrak{p}\neq\mathfrak{q}\). Then both \(\mathfrak{p}\) and \(\mathfrak{q}\) must be properly contained in \(\mathfrak{p}+\mathfrak{q}\); if \(\mathfrak{p}=\mathfrak{p}+\mathfrak{q}\), say, then \(\mathfrak{q}\subseteq\mathfrak{p}\), and the containment would be strict since \(\mathfrak{p}\neq\mathfrak{q}\). But then the dimension of \(Z(\mathfrak{q})\) would be greater than that of \(Z(\mathfrak{p})\), a contradiction since \(\dim Z(\mathfrak{q})=\dim V_{\mathscr{M}}(M)=1=\dim V_{\mathscr{C}}(X)=\dim Z(\mathfrak{p})\) by Theorem 3.1. Now take any maximal ideal \(\mathfrak{m}\in V_{\mathscr{C}}(X)\setminus\{\mathfrak{m}_{0}\}\). Since \(\mathfrak{m}\) is not graded, the inclusion \(\mathfrak{m}^{\ast}\subset\mathfrak{m}\) is proper, where, as before, \(\mathfrak{m}^{\ast}\) is the graded (prime) ideal of \(\operatorname{H}^{\ast}(\mathscr{C})\) generated by all the homogeneous elements of \(\mathfrak{m}\). As this is necessarily the unique maximal graded ideal contained in \(\mathfrak{m}\), and \(\mathfrak{p}\) is graded, we see that \(\mathfrak{p}\) must equal \(\mathfrak{m}^{\ast}\); otherwise, the dimension of \(V_{\mathscr{C}}(X)\) would have been at least \(2\), but we know that it is \(1\). This implies that \(\mathfrak{m}\) cannot belong to \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\), for if it did, then it would have to contain \(\mathfrak{p}+\mathfrak{q}\), which is a graded ideal that strictly contains \(\mathfrak{m}^{\ast}\). In general, if \(I\) is an ideal of \(\operatorname{H}^{\ast}(\mathscr{C})\), then there is an equality \(Z(I)=Z(\sqrt{I})\), where \(\sqrt{I}\) denotes the radical of \(I\). When the finiteness condition \(\mathbf{Fg}\) holds, then by [15, Theorem 25], the radical of a proper ideal of \(\operatorname{H}^{\ast}(\mathscr{C})\) equals the intersection of all the maximal ideals containing it, a fact that we just used in the proof of Proposition 3.3, and also in Remark 2.5(3). Consequently, in this setting, whenever \(I\) and \(J\) are two proper ideals of \(\operatorname{H}^{\ast}(\mathscr{C})\), we see that \(Z(I)=Z(J)\) if and only if \(\sqrt{I}=\sqrt{J}\).
We shall use this fact in the proof of the following result, which is the key ingredient in the main theorem; it allows us to reduce the complexities of the objects when we want to establish the module product property for support varieties. A general such reduction result is provided by Theorem 2.7(6), but now we need a much stronger version. **Proposition 3.4**.: _Suppose that \(\mathscr{C}\) satisfies \(\mathbf{F}\mathbf{g}\), and that \(M\in\mathscr{M}\) is an object with \(\operatorname{cx}_{\mathscr{M}}(M)\geq 2\) and \(V_{\mathscr{M}}(M)\) irreducible_ (_so that \(\operatorname{Supp}_{\mathscr{M}}(M)\) contains a unique minimal prime_)_. Then for every \(\mathfrak{m}\in V_{\mathscr{M}}(M)\) there exists a short exact sequence_ \[0\to W\to\Omega^{n}_{\mathscr{M}}(M)\oplus Q\to M\to 0\] _in \(\mathscr{M}\), with the following properties:_ 1. _The object_ \(Q\) _is projective, and_ \(n\geq 1\)_;_ 2. \(\operatorname{cx}_{\mathscr{M}}(W)=\operatorname{cx}_{\mathscr{M}}(M)-1\)_;_ 3. \(\mathfrak{m}\in V_{\mathscr{M}}(W)\)_._ Proof.: Let \(\mathfrak{p}_{0}\) be the unique minimal (graded) prime of \(\operatorname{Supp}_{\mathscr{M}}(M)\), and denote the complexity of \(M\) by \(d\). Since \(\operatorname{H}^{*}(\mathscr{C})\) is a finitely generated \(k\)-algebra by assumption, the quotient \(\operatorname{H}^{*}(\mathscr{C})/\mathfrak{p}_{0}\) is a finitely generated integral domain. Therefore, by [11, Corollary 13.4], all the maximal ideals of \(\operatorname{H}^{*}(\mathscr{C})/\mathfrak{p}_{0}\) are of the same height, namely \(\dim\operatorname{H}^{*}(\mathscr{C})/\mathfrak{p}_{0}\). The dimension of \(\operatorname{H}^{*}(\mathscr{C})/\mathfrak{p}_{0}\) equals that of \(\operatorname{H}^{*}(\mathscr{C})/I_{\mathscr{M}}(M)\), which by definition is the dimension of \(V_{\mathscr{M}}(M)\). Thus from Theorem 2.7(1) we see that every maximal ideal of \(\operatorname{H}^{*}(\mathscr{C})/\mathfrak{p}_{0}\) is of height \(d\). Now let \(\mathfrak{m}\) be a point in \(V_{\mathscr{M}}(M)\). It follows from the above that there exists a strictly increasing chain \[\mathfrak{p}_{0}\subset\cdots\subset\mathfrak{p}_{d-1}\subset\mathfrak{m}\] in \(\operatorname{Spec}\operatorname{H}^{*}(\mathscr{C})\). However, by [7, Theorem 1.5.8], there actually exists such a chain in which all the prime ideals \(\mathfrak{p}_{i}\) are graded; if \(\mathfrak{m}\neq\mathfrak{m}_{0}\), we can take as \(\mathfrak{p}_{d-1}\) the graded ideal \(\mathfrak{m}^{*}\) of \(\operatorname{H}^{*}(\mathscr{C})\), generated by all the homogeneous elements of \(\mathfrak{m}\) (recall that by [7, Lemma 1.5.6], this is a graded prime ideal). Take any nonzero homogeneous element \(\zeta\in\mathfrak{p}_{1}\setminus\mathfrak{p}_{0}\); this is possible since \(d\geq 2\). Note that since \(\mathfrak{p}_{1}\subseteq\mathfrak{m}_{0}=\operatorname{H}^{+}(\mathscr{C})\), the degree \(n\) of \(\zeta\) is positive. From this element we obtain a short exact sequence \[0\to L_{\zeta}\to\Omega^{n}_{\mathscr{C}}(\mathbf{1})\to\mathbf{1}\to 0\] in \(\mathscr{C}\), where the object \(L_{\zeta}\) is Carlson's \(L_{\zeta}\)-object discussed right before Theorem 2.7. Since \(\mathscr{M}\) is an exact \(\mathscr{C}\)-module category, we obtain a (not necessarily minimal) projective resolution of \(M\) when we apply \(-*M\) to the minimal projective resolution of the unit object \(\mathbf{1}\). 
Consequently, by Schanuel's Lemma for abelian categories (see [6, Lemma 2.2]), there is an isomorphism \(\Omega^{n}_{\mathscr{C}}(\mathbf{1})*M\simeq\Omega^{n}_{\mathscr{M}}(M)\oplus Q\) for some projective object \(Q\in\mathscr{M}\). As a result, when we apply \(-*M\) to the above short exact sequence, we obtain a short exact sequence \[0\to L_{\zeta}*M\to\Omega^{n}_{\mathscr{M}}(M)\oplus Q\to M\to 0\] in \(\mathscr{M}\). By Theorem 2.7(3), there is an equality \(V_{\mathscr{M}}(L_{\zeta}*M)=V_{\mathscr{M}}(M)\cap Z(\zeta)\), and so in particular we see that \(\mathfrak{m}\in V_{\mathscr{M}}(L_{\zeta}*M)\). It remains to show that \(\operatorname{cx}_{\mathscr{M}}(L_{\zeta}*M)=d-1\), or, what amounts to the same thing by Theorem 2.7(1), that \(\dim V_{\mathscr{M}}(L_{\zeta}*M)=d-1\). Now \[Z\left(I_{\mathscr{M}}(M)+(\zeta)\right) = Z(I_{\mathscr{M}}(M))\cap Z(\zeta)\] \[= V_{\mathscr{M}}(M)\cap Z(\zeta)\] \[= V_{\mathscr{M}}(L_{\zeta}*M)\] \[= Z(I_{\mathscr{M}}(L_{\zeta}*M))\] hence there is an equality \(\sqrt{I_{\mathscr{M}}(L_{\zeta}*M)}=\sqrt{I_{\mathscr{M}}(M)+(\zeta)}\). The dimension of \(V_{\mathscr{M}}(L_{\zeta}*M)\) is by definition the Krull dimension of \(\operatorname{H}^{\bullet}(\mathscr{C})/I_{\mathscr{M}}(L_{\zeta}*M)\), which in turn equals that of \(\operatorname{H}^{\bullet}(\mathscr{C})/\sqrt{I_{\mathscr{M}}(L_{\zeta}*M)}\). Therefore it suffices to show that the Krull dimension of \(\operatorname{H}^{\bullet}(\mathscr{C})/\sqrt{I_{\mathscr{M}}(M)+(\zeta)}\) is \(d-1\). For this, consider the chain \[\mathfrak{p}_{0}\subset\cdots\subset\mathfrak{p}_{d-1}\subset\mathfrak{m}\] of prime ideals from the beginning of the proof. Since \(I_{\mathscr{M}}(M)\subseteq\mathfrak{p}_{0}\) and \(\zeta\in\mathfrak{p}_{1}\), the radical ideal \(\sqrt{I_{\mathscr{M}}(M)+(\zeta)}\) is contained in \(\mathfrak{p}_{1}\), giving \(\dim\operatorname{H}^{\bullet}(\mathscr{C})/\sqrt{I_{\mathscr{M}}(M)+(\zeta)}\geq d-1\). However, if the inequality were strict, then there would exist a strictly increasing chain \[\mathfrak{q}_{0}\subset\cdots\subset\mathfrak{q}_{d}\] of prime ideals in \(\operatorname{H}^{\bullet}(\mathscr{C})\), all containing the ideal \(\sqrt{I_{\mathscr{M}}(M)+(\zeta)}\). Since \(\zeta\notin\mathfrak{p}_{0}\), and \(\mathfrak{p}_{0}\) is the unique minimal prime ideal lying over the ideal \(I_{\mathscr{M}}(M)\), we would obtain a strictly increasing chain \[\mathfrak{p}_{0}\subset\mathfrak{q}_{0}\subset\cdots\subset\mathfrak{q}_{d}\] in \(\operatorname{Supp}_{\mathscr{M}}(M)\). But then \(\operatorname{cx}_{\mathscr{M}}(M)=\dim V_{\mathscr{M}}(M)\geq d+1\), a contradiction. This shows that the complexity of the object \(L_{\zeta}*M\) is \(d-1\). We are now ready to prove the main result. **Theorem 3.5**.: _Let \(k\) be a field, and \((\mathscr{C},\otimes,\mathbf{1})\) a finite braided tensor \(k\)-category satisfying \(\mathbf{Fg}\). Furthermore, let \((\mathscr{M},*)\) be an exact left \(\mathscr{C}\)-module category, whose set of isomorphism classes of simple objects is finite. Then the following are equivalent:_ 1. \(V_{\mathscr{M}}(X*M)=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) _for all objects_ \(X\in\mathscr{C},M\in\mathscr{M}\)_;_ 2. \(V_{\mathscr{M}}(X*M)=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) _for all objects_ \(X\in\mathscr{C},M\in\mathscr{M}\) _of complexity_ \(1\)_;_ 3.
\(V_{\mathscr{M}}(X*M)=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) _for all indecomposable periodic objects_ \(X\in\mathscr{C},M\in\mathscr{M}\)_._ Proof.: If the module product property holds for all objects, then in particular it holds for objects of complexity \(1\), hence (2) trivially follows from (1). Since every nonzero indecomposable periodic object has complexity \(1\) by Corollary 3.2, we see that (3) follows from (2). Moreover, (2) follows from (3) because both module products and support varieties respect direct sums. For suppose that \(X\in\mathscr{C}\) and \(M\in\mathscr{M}\) are objects of complexity \(1\), and decompose them both into direct sums \(X\simeq X_{1}\oplus\cdots\oplus X_{s}\) and \(M\simeq M_{1}\oplus\cdots\oplus M_{t}\), with all the \(X_{i}\) and \(M_{j}\) indecomposable. Then each of these summands is either projective, or periodic by Corollary 3.2. In general, if \(Y\in\mathscr{C}\) and \(N\in\mathscr{M}\) are objects, and one of them is projective, then so is \(Y*N\) by Remark 2.1(2) and the definition of an exact module category, hence both \(V_{\mathscr{M}}(Y*N)\) and \(V_{\mathscr{C}}(Y)\cap V_{\mathscr{M}}(N)\) equal \(\{\mathfrak{m}_{0}\}\). Therefore, if (3) holds, then \[V_{\mathscr{M}}(X*M) = V_{\mathscr{M}}\left(\bigoplus_{i,j}\left(X_{i}*M_{j}\right)\right)\] \[= \bigcup_{i,j}V_{\mathscr{M}}\left(X_{i}*M_{j}\right)\] \[= \bigcup_{i,j}\left(V_{\mathscr{C}}(X_{i})\cap V_{\mathscr{M}}(M_{j})\right)\] \[= \left(\bigcup_{i}V_{\mathscr{C}}(X_{i})\right)\cap\left(\bigcup_{j}V_{\mathscr{M}}(M_{j})\right)\] \[= V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\] where we have used Proposition 2.6(1). It follows that (2) holds. Finally, we will prove that (1) follows from (2). Suppose now that (2) holds, and let \(X\) and \(M\) be arbitrary objects of \(\mathscr{C}\) and \(\mathscr{M}\), respectively. As above, if one of them is projective, then so is \(X*M\), and both \(V_{\mathscr{M}}(X*M)\) and \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) equal \(\{\mathfrak{m}_{0}\}\). We may therefore suppose that both \(X\) and \(M\) are nonprojective, that is, that \(\operatorname{cx}_{\mathscr{C}}(X)\geq 1\) and \(\operatorname{cx}_{\mathscr{M}}(M)\geq 1\). We now argue by induction on the sum \(\operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(M)\); the assumption being that the module product property holds when this sum is \(2\). By Proposition 2.6(6), the inclusion \(V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) holds, hence we must only show the reverse inclusion. Suppose that \(\operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(M)>2\), and that \(\operatorname{cx}_{\mathscr{M}}(M)\geq 2\). Let \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{t}\) be the minimal primes of \(\operatorname{Supp}_{\mathscr{M}}(M)\), so that \(V_{\mathscr{M}}(M)=Z(\mathfrak{p}_{1})\cup\cdots\cup Z(\mathfrak{p}_{t})\); recall that these primes are graded, by [7, Lemma 1.5.6]. We now construct objects \(M_{1},\ldots,M_{t}\in\mathscr{M}\) with the property that \(V_{\mathscr{M}}(M_{i})=Z(\mathfrak{p}_{i})\) and \(V_{\mathscr{M}}(X*M_{i})\subseteq V_{\mathscr{M}}(X*M)\) for each \(i\). If \(t=1\), we simply take \(M_{1}=M\). If \(t\geq 2\), then fix one of the \(\mathfrak{p}_{i}\), and let \(\zeta_{1},\ldots,\zeta_{s}\) be homogeneous elements in \(\operatorname{H}^{\bullet}(\mathscr{C})\) with \(\mathfrak{p}_{i}=(\zeta_{1},\ldots,\zeta_{s})\).
Each \(\zeta_{j}\) gives a short exact sequence \[0\to L_{\zeta_{j}}\to\Omega_{\mathscr{C}}^{n_{j}}(\mathbf{1})\to\mathbf{1}\to 0\] in \(\mathscr{C}\), where \(n_{j}\) is the degree of \(\zeta_{j}\). Now take \(M_{i}=L_{\zeta_{s}}*\cdots*L_{\zeta_{1}}*M\). Then from Theorem 2.7(3) we obtain \[V_{\mathscr{M}}(M_{i}) = V_{\mathscr{M}}(M)\cap Z(\zeta_{1})\cap\cdots\cap Z(\zeta_{s})\] \[= V_{\mathscr{M}}(M)\cap Z\left((\zeta_{1},\ldots,\zeta_{s})\right)\] \[= V_{\mathscr{M}}(M)\cap Z(\mathfrak{p}_{i})\] \[= Z(\mathfrak{p}_{i})\] Next, denote the object \(L_{\zeta_{j}}*\cdots*L_{\zeta_{1}}*M\) by \(N_{j}\) for \(1\leq j\leq s\), and put \(N_{0}=M\). By applying \(-*N_{j-1}\) to the exact sequence above, we obtain an exact sequence \[0\to N_{j}\to\Omega_{\mathscr{M}}^{n_{j}}(N_{j-1})\oplus Q_{j}\to N_{j-1}\to 0\] in \(\mathscr{M}\) (for some projective object \(Q_{j}\)), on which we apply \(X*-\) and obtain an exact sequence \[0\to X*N_{j}\to\Omega^{n_{j}}_{\mathscr{M}}(X*N_{j-1})\oplus P_{j}\to X*N_{j-1}\to 0\] for some projective object \(P_{j}\). To get the middle terms in these two sequences, we have applied Schanuel's Lemma for abelian categories (see [6, Lemma 2.2]), together with the fact that the module product commutes with syzygies up to projective objects. From the properties listed in Proposition 2.6, we now obtain the inclusions \[V_{\mathscr{M}}(X*M_{i})=V_{\mathscr{M}}(X*N_{s})\subseteq\cdots\subseteq V_{ \mathscr{M}}(X*N_{0})=V_{\mathscr{M}}(X*M)\] hence the object \(M_{i}\) has the properties that we wanted. We now claim that if we can show that the inclusion \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M_{i})\subseteq V_{\mathscr{M}}(X*M_{i})\) holds for each \(i\), then we are done. Namely, if this is the case, then \[V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M) = V_{\mathscr{C}}(X)\cap\left(\bigcup_{i=1}^{t}Z(\mathfrak{p}_{i})\right)\] \[= V_{\mathscr{C}}(X)\cap\left(\bigcup_{i=1}^{t}V_{\mathscr{M}}(M_ {i})\right)\] \[= \bigcup_{i=1}^{t}\left(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M_ {i})\right)\] \[\subseteq \bigcup_{i=1}^{t}V_{\mathscr{M}}(X*M_{i})\] \[\subseteq V_{\mathscr{M}}(X*M)\] This proves the claim. What remains to show is that the inclusion \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M_{i})\subseteq V_{\mathscr{M}}(X*M_{i})\) holds for each \(i\). To do this, note first that \(\operatorname{cx}_{\mathscr{M}}(M_{i})\leq\operatorname{cx}_{\mathscr{M}}(M)\). Namely, the primes \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{t}\) are the minimal ones in \(\operatorname{Supp}_{\mathscr{M}}(M)\), whereas \(\mathfrak{p}_{i}\) is the only minimal prime in \(\operatorname{Supp}_{\mathscr{M}}(M_{i})\). Thus \(\dim V_{\mathscr{M}}(M)\) is the length of the longest chain in \(\operatorname{Spec}\operatorname{H}^{\ast}(\mathscr{C})\) starting with one of the primes \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{t}\), and \(\dim V_{\mathscr{M}}(M_{i})\) is the length of the longest chain in \(\operatorname{Spec}\operatorname{H}^{\ast}(\mathscr{C})\) starting with \(\mathfrak{p}_{i}\). Consequently, from Theorem 2.7(1) we see that \(\operatorname{cx}_{\mathscr{M}}(M_{i})\leq\operatorname{cx}_{\mathscr{M}}(M)\). If \(\operatorname{cx}_{\mathscr{M}}(M_{i})\leq\operatorname{cx}_{\mathscr{M}}(M)-1\), then \(\operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(M_{i})\leq \operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(M)-1\), and so by induction \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M_{i})\subseteq V_{\mathscr{M}}(X*M_{i})\) holds in this case. 
If on the other hand \(\operatorname{cx}_{\mathscr{M}}(M_{i})=\operatorname{cx}_{\mathscr{M}}(M)\), then since \(\operatorname{cx}_{\mathscr{M}}(M_{i})\geq 2\) and \(\operatorname{Supp}_{\mathscr{M}}(M_{i})\) contains a unique minimal prime, we can apply Proposition 3.4: for each \(\mathfrak{m}\in V_{\mathscr{M}}(M_{i})\) there exists a short exact sequence \[0\to W(\mathfrak{m})\to\Omega^{n(\mathfrak{m})}_{\mathscr{M}}(M_{i})\oplus Q(\mathfrak{m})\to M_{i}\to 0\] in which the object \(Q(\mathfrak{m})\) is projective, the complexity of the object \(W(\mathfrak{m})\) is \(\operatorname{cx}_{\mathscr{M}}(M)-1\), and \(\mathfrak{m}\in V_{\mathscr{M}}(W(\mathfrak{m}))\). Note that \(V_{\mathscr{M}}(W(\mathfrak{m}))\subseteq V_{\mathscr{M}}(M_{i})\) by Proposition 2.6, and consequently that \[\bigcup_{\mathfrak{m}\in V_{\mathscr{M}}(M_{i})}V_{\mathscr{M}}(W(\mathfrak{m}))=V_{\mathscr{M}}(M_{i})\] since \(\mathfrak{m}\in V_{\mathscr{M}}(W(\mathfrak{m}))\). As explained earlier in this proof, when we apply \(X*-\) to the sequence we just obtained, the result is a short exact sequence \[0\to X*W(\mathfrak{m})\to\Omega^{n(\mathfrak{m})}_{\mathscr{M}}(X*M_{i})\oplus P(\mathfrak{m})\to X*M_{i}\to 0\] where the object \(P(\mathfrak{m})\) is projective. Using Proposition 2.6 again, we obtain the inclusion \(V_{\mathscr{M}}(X*W(\mathfrak{m}))\subseteq V_{\mathscr{M}}(X*M_{i})\), and by induction we also see that \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(W(\mathfrak{m}))\subseteq V_{\mathscr{M}}(X*W(\mathfrak{m}))\), since \(\operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(W(\mathfrak{m}))=\operatorname{cx}_{\mathscr{C}}(X)+\operatorname{cx}_{\mathscr{M}}(M)-1\). Combining everything, we now obtain \[V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M_{i}) = V_{\mathscr{C}}(X)\cap\left(\bigcup_{\mathfrak{m}\in V_{\mathscr{M}}(M_{i})}V_{\mathscr{M}}(W(\mathfrak{m}))\right)\] \[= \bigcup_{\mathfrak{m}\in V_{\mathscr{M}}(M_{i})}\left(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(W(\mathfrak{m}))\right)\] \[\subseteq \bigcup_{\mathfrak{m}\in V_{\mathscr{M}}(M_{i})}V_{\mathscr{M}}(X*W(\mathfrak{m}))\] \[\subseteq V_{\mathscr{M}}\left(X*M_{i}\right)\] This concludes the induction proof in the case when \(\operatorname{cx}_{\mathscr{M}}(M)\geq 2\). Finally, if \(\operatorname{cx}_{\mathscr{M}}(M)=1\) and \(\operatorname{cx}_{\mathscr{C}}(X)\geq 2\), then we use virtually the same arguments to reach the conclusion. Namely, we reduce the complexity of \(X\) while keeping the object \(M\) fixed. We have shown that (1) follows from (2). Thus in order to verify that the module product property holds for support varieties, it is enough to check that it holds for the indecomposable periodic objects. The following result provides an alternative way of verifying all this, by considering whether certain module products are projective or not. **Theorem 3.6**.: _Let \(k\) be a field, and \((\mathscr{C},\otimes,\mathbf{1})\) a finite braided tensor \(k\)-category satisfying \(\mathbf{Fg}\). Furthermore, let \((\mathscr{M},*)\) be an exact left \(\mathscr{C}\)-module category, whose set of isomorphism classes of simple objects is finite. Then the following are equivalent:_ 1. \(V_{\mathscr{M}}(X*M)=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) _for all objects_ \(X\in\mathscr{C},M\in\mathscr{M}\)_;_ 2. _For all objects_ \(X\in\mathscr{C},M\in\mathscr{M}\)_, if_ \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\neq\{\mathfrak{m}_{0}\}\)_, then_ \(X*M\) _is not projective;_ 3. 
_For all objects_ \(X\in\mathscr{C},M\in\mathscr{M}\) _of complexity_ \(1\)_, if_ \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\neq\{\mathfrak{m}_{0}\}\)_, then_ \(X*M\) _is not projective;_ 4. _For all indecomposable periodic objects_ \(X\in\mathscr{C},M\in\mathscr{M}\)_, if_ \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\neq\{\mathfrak{m}_{0}\}\)_, then_ \(X*M\) _is not projective;_ 5. _For all nonzero indecomposable periodic objects_ \(X\in\mathscr{C},M\in\mathscr{M}\) _with_ \(V_{\mathscr{C}}(X)=V_{\mathscr{M}}(M)\)_, the object_ \(X*M\) _is not projective._ Proof.: Suppose that (1) holds, and let \(X\in\mathscr{C},M\in\mathscr{M}\) be objects with \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\neq\{\mathfrak{m}_{0}\}\). Then \(V_{\mathscr{M}}(X*M)\neq\{\mathfrak{m}_{0}\}\), hence \(X*M\) cannot be projective. This shows that (1) implies (2). The implication (2) \(\Rightarrow\) (3) is trivial, the implication (3) \(\Rightarrow\) (4) follows from Corollary 3.2, and the implication (4) \(\Rightarrow\) (5) follows from the fact that the support variety of a nonzero periodic object is non-trivial by Theorem 3.1. Finally, suppose that (5) holds. By Theorem 3.5, in order to show that (1) holds, it is enough to show that \(V_{\mathscr{M}}(X*M)=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\) for all indecomposable periodic objects \(X\in\mathscr{C},M\in\mathscr{M}\). This is trivially true if either \(X\) or \(M\) is zero, so suppose that they are both nonzero, indecomposable and periodic. If \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)=\{\mathfrak{m}_{0}\}\), then \(V_{\mathscr{M}}(X*M)=\{\mathfrak{m}_{0}\}\) by Proposition 2.6(6), and we are done. If \(V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\neq\{\mathfrak{m}_{0}\}\), then \(V_{\mathscr{C}}(X)=V_{\mathscr{M}}(M)\) by Proposition 3.3, and so by assumption the object \(X*M\) is not projective. Then \(\dim V_{\mathscr{M}}(X*M)\geq 1\) by (1) and (2) of Theorem 2.7; that is, the Krull dimension of \(\operatorname{H}^{*}(\mathscr{C})/I_{\mathscr{M}}(X*M)\), and therefore also of \(\operatorname{H}^{*}(\mathscr{C})/\sqrt{I_{\mathscr{M}}(X*M)}\), is at least \(1\). Now apply Proposition 3.3 once more; let \(\mathfrak{p}\) be the unique minimal prime of \(\operatorname{Supp}_{\mathscr{C}}(X)\), so that \(V_{\mathscr{C}}(X)=Z(\mathfrak{p})=V_{\mathscr{M}}(M)\). Then \[Z\left(I_{\mathscr{M}}(X*M)\right)=V_{\mathscr{M}}(X*M)\subseteq V_{\mathscr{C}}(X)=Z(\mathfrak{p})\] by Proposition 2.6(6), giving \(\mathfrak{p}\subseteq\sqrt{I_{\mathscr{M}}(X*M)}\) by [15, Theorem 25]. If this inclusion is strict, then \(\operatorname{Supp}_{\mathscr{C}}(X)\) contains a strictly increasing chain of length at least \(2\), since \(\dim\operatorname{H}^{*}(\mathscr{C})/\sqrt{I_{\mathscr{M}}(X*M)}\geq 1\). This is impossible since \(\dim V_{\mathscr{C}}(X)=1\) by Theorem 3.1, hence \(\mathfrak{p}=\sqrt{I_{\mathscr{M}}(X*M)}\). But then \[V_{\mathscr{M}}(X*M)=Z\left(\sqrt{I_{\mathscr{M}}(X*M)}\right)=Z(\mathfrak{p})=V_{\mathscr{C}}(X)\cap V_{\mathscr{M}}(M)\] and we are done.

## 4. Skew group algebras and symmetric tensor categories

In this section, we apply the results of Section 3 to finite symmetric tensor categories of a very particular form: those that are equivalent to categories of finite dimensional representations of certain skew group algebras. For such tensor categories, the finiteness condition \(\mathbf{Fg}\) holds, and we shall see that the tensor product property holds for support varieties. 
Using Deligne's classification theorem from [10], we obtain an important class of examples, namely the finite symmetric tensor categories over algebraically closed ground fields of characteristic zero. The skew group algebras in which we are interested arise from group actions on exterior algebras, so let us fix some notation that we will use throughout this section. Let \(k\) be a field, \(c\) a positive integer, and \(\Lambda\) the exterior algebra in \(c\) indeterminates \(x_{1},\ldots,x_{c}\) over \(k\): \[\Lambda=k\langle x_{1},\ldots,x_{c}\rangle/(x_{i}^{2},x_{i}x_{j}+x_{j}x_{i})\] Furthermore, let \(G\) be a finite group acting on \(\Lambda\), via a homomorphism into the group of algebra automorphisms of \(\Lambda\). We may then form the skew group algebra \(\Lambda\rtimes G\). As a \(k\)-vector space, this is just \(\Lambda\otimes_{k}kG\), which is finite dimensional, and every element is of the form \(\sum_{g\in G}w_{g}\otimes g\) for some \(w_{g}\in\Lambda\). Multiplication is defined by \[(w_{1}\otimes g_{1})(w_{2}\otimes g_{2})=w_{1}(^{g_{1}}w_{2})\otimes g_{1}g_{2}\] for \(w_{i}\in\Lambda\) and \(g_{i}\in G\). The skew group algebra is often also called the smash product algebra, and then typically denoted by \(\Lambda\#kG\). If the characteristic of \(k\) does not divide the order of \(G\), then since exterior algebras are selfinjective, it follows from [19, Theorem 1.1 and Theorem 1.3] that \(\Lambda\rtimes G\) is also selfinjective. Finally, note that the natural inclusion \(\Lambda\to\Lambda\rtimes G\) given by \(w\mapsto w\otimes e\) (where \(e\) is the identity element of \(G\)) turns \(\Lambda\rtimes G\) into a left and right \(\Lambda\)-module, in both cases free of rank \(|G|\). **Remark 4.1**.: (1) Suppose that \(\Lambda\rtimes G\) happens to be a Hopf algebra, and that the characteristic of \(k\) does not divide the order of \(G\). Then the finite tensor category \(\operatorname{mod}(\Lambda\rtimes G)\) of finitely generated left modules satisfies \(\mathbf{Fg}\). To see this, denote the algebra by \(H\). By [4, Theorem 4.1(2)], the Hochschild cohomology ring \(\operatorname{HH}^{*}(H)\) is Noetherian, and for every \(H\)-bimodule \(X\), the right \(\operatorname{HH}^{*}(H)\)-module \(\operatorname{Ext}^{*}_{H^{e}}(H,X)\) is finitely generated (here \(H^{e}\) denotes the enveloping algebra \(H\otimes_{k}H^{\operatorname{op}}\)). By [4, Lemma 3.2], this implies that \(\operatorname{Ext}^{*}_{H}(M,M)\) is a finitely generated \(\operatorname{HH}^{*}(H)\)-module, for every finitely generated left \(H\)-module \(M\). Finally, by [4, Lemma 4.2], this in turn implies that the finite tensor category \(\operatorname{mod}H\) satisfies \(\mathbf{Fg}\). (2) Given any ring \(R\) together with an automorphism \(\psi\colon R\to R\), we may twist a left module \(X\) and obtain a module \({}_{\psi}X\). The underlying abelian group is the same, but the module action becomes \(r\cdot x=\psi(r)x\) for \(r\in R\) and \(x\in X\). There is an isomorphism \({}_{\psi}X\simeq{}_{\psi}R\otimes_{R}X\), hence twisting induces an exact functor. In particular, for \(\Lambda\) and \(G\), every \(g\in G\) acts on the cohomology ring \(\operatorname{Ext}^{*}_{\Lambda}(k,k)\) by twisting of extensions. 
That is, given a homogeneous element \(\eta\) realized as an extension \[0\to k\xrightarrow{f_{0}}X_{1}\xrightarrow{f_{1}}\cdots\xrightarrow{f_{n-1}}X_{n}\xrightarrow{f_{n}}k\to 0\] we obtain the element \({}^{g}\eta\) realized as the extension \[0\to k\xrightarrow{f_{0}}{}_{g}X_{1}\xrightarrow{f_{1}}\cdots\xrightarrow{f_{n-1}}{}_{g}X_{n}\xrightarrow{f_{n}}k\to 0\] Here we have used the notation \({}_{g}X\) for the \(\Lambda\)-module obtained from \(X\) by twisting with the automorphism on \(\Lambda\) given by \(g\); note that \({}_{g}k=k\). (3) Suppose, as in (1), that \(H=\Lambda\rtimes G\) is a Hopf algebra, and that the characteristic of \(k\) does not divide the order of \(G\). Then the cohomology ring \(\operatorname{Ext}^{*}_{H}(k,k)\) is isomorphic to the \(G\)-invariant subring \(\operatorname{Ext}^{*}_{\Lambda}(k,k)^{G}\) of \(\operatorname{Ext}^{*}_{\Lambda}(k,k)\), via the restriction map \[\operatorname{Ext}^{*}_{H}(k,k)\xrightarrow{\tau^{*}_{H,\Lambda}(k,k)}\operatorname{Ext}^{*}_{\Lambda}(k,k)\] see, for example, [21, Theorem 2.17]. The following lemma shows that if we take any subalgebra of \(\Lambda\rtimes G\) containing the exterior algebra, then restriction of cohomology is injective. **Lemma 4.2**.: _If the characteristic of \(k\) does not divide the order of \(G\), then for any algebra \(A\) with \(\Lambda\subseteq A\subseteq\Lambda\rtimes G\), and any pair of \(\Lambda\rtimes G\)-modules \(M,N\), the restriction map_ \[\operatorname{Ext}^{*}_{\Lambda\rtimes G}(M,N)\xrightarrow{\tau^{*}_{\Lambda\rtimes G,A}(M,N)}\operatorname{Ext}^{*}_{A}(M,N)\] _is injective._ Proof.: Let us denote \(\Lambda\rtimes G\) by \(H\). The composition of restriction maps \[\operatorname{Ext}^{*}_{H}(M,N)\xrightarrow{\tau^{*}_{H,A}(M,N)}\operatorname{Ext}^{*}_{A}(M,N)\xrightarrow{\tau^{*}_{A,\Lambda}(M,N)}\operatorname{Ext}^{*}_{\Lambda}(M,N)\] equals the restriction map \(\tau^{*}_{H,\Lambda}(M,N)\) from \(H\) to \(\Lambda\). It therefore suffices to show that the latter is injective. If \(\theta\in\operatorname{Ext}^{n}_{H}(M,N)\) for some \(n\), then its restriction to \(\operatorname{Ext}^{n}_{\Lambda}(M,N)\) is \(H\otimes_{H}\theta\), where we view \(H\) as a \(\Lambda\)-\(H\)-bimodule. Inducing back to \(H\), we obtain the element \(H\otimes_{\Lambda}H\otimes_{H}\theta\in\operatorname{Ext}^{n}_{H}(H\otimes_{\Lambda}H\otimes_{H}M,H\otimes_{\Lambda}H\otimes_{H}N)\), where we view the leftmost \(H\) in the tensor products as an \(H\)-\(\Lambda\)-bimodule. By [19, Theorem 1.1], \(H\) is a direct summand of \(H\otimes_{\Lambda}H\) as a bimodule over itself, and so we see that \(\theta\) is a direct summand of \(H\otimes_{\Lambda}H\otimes_{H}\theta\). This shows that the restriction map from \(\operatorname{Ext}_{H}^{*}(M,N)\) to \(\operatorname{Ext}_{\Lambda}^{*}(M,N)\) is injective. Suppose now that the characteristic of \(k\) is not \(2\), and let \(C_{2}\) be a (multiplicative) group of order \(2\), say \(C_{2}=\{e,h\}\) with \(h^{2}=e\). This group acts on \(\Lambda\) by defining \({}^{h}x_{i}=-x_{i}\) for each \(i\), and we set \(A=\Lambda\rtimes C_{2}\). As a \(k\)-algebra, it is isomorphic to the algebra generated by \(h,x_{1},\dots,x_{c}\), with relations \(h^{2}=1,x_{i}^{2}=0,x_{i}x_{j}+x_{j}x_{i}=0\) and \(hx_{i}+x_{i}h=0\). 
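As a quick check that these relations do hold, the multiplication rule for the skew group algebra recorded above gives \[(1\otimes h)(x_{i}\otimes e)=({}^{h}x_{i})\otimes h=-x_{i}\otimes h=-(x_{i}\otimes e)(1\otimes h)\] so that \(hx_{i}+x_{i}h=0\) in \(A\), while the relations \(x_{i}^{2}=0\) and \(x_{i}x_{j}+x_{j}x_{i}=0\) are inherited directly from \(\Lambda\), and \(h^{2}=1\) from \(C_{2}\).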
We see that it is a Hopf algebra by defining a comultiplication \(\Delta\), antipode \(S\) and counit \(\varepsilon\) as follows: \(\Delta(h)=h\otimes h,\Delta(x_{i})=x_{i}\otimes 1+h\otimes x_{i},S(h)=h,S(x_{i})=-hx_{i},\varepsilon(h)=1\) and \(\varepsilon(x_{i})=0\). The finite tensor category \(\operatorname{mod}A\) of finitely generated left \(A\)-modules is symmetric. To see this, take two modules \(M,N\in\operatorname{mod}A\), and decompose them into subspaces \[M=M_{0}\oplus M_{1},\hskip 14.226378ptN=N_{0}\oplus N_{1}\] given by eigenspaces for the action of \(h\); this is possible since the characteristic of \(k\) is not \(2\). Thus \(hm_{0}=m_{0}\) and \(hm_{1}=-m_{1}\) whenever \(m_{i}\in M_{i}\), and similarly for \(N\). One now checks that the map \(M\otimes N\to N\otimes M\) given by \[m_{i}\otimes n_{j}\mapsto(-1)^{ij}n_{j}\otimes m_{i}\] is a functorial isomorphism, and it squares to the identity. Hence \(\operatorname{mod}A\) is symmetric. Moreover, by Remark 4.1(1), it also satisfies \(\mathbf{Fg}\). For a module \(M\in\operatorname{mod}A\), we shall denote the support variety \(V_{\operatorname{mod}A}(M)\) by just \(V_{A}(M)\); these are defined in terms of the maximal ideal spectrum of the (commutative) even degree cohomology ring \(\operatorname{Ext}_{A}^{2*}(k,k)\). We denote by \(\mathfrak{m}_{0}\) the unique graded maximal ideal of this ring. **Remark 4.3**.: By Remark 4.1(3), the cohomology ring \(\operatorname{Ext}_{A}^{*}(k,k)\) is isomorphic to \(\operatorname{Ext}_{\Lambda}^{*}(k,k)^{C_{2}}\) via the restriction map \[\operatorname{Ext}_{A}^{*}(k,k)\xrightarrow{\tau_{A,\Lambda}^{*}(k,k)}\operatorname{Ext}_{\Lambda}^{*}(k,k)\] The action of \(C_{2}\) on \(\operatorname{Ext}_{\Lambda}^{*}(k,k)\) is quite simple: the generator \(h\in C_{2}\) acts as \((-1)^{n}\) on \(\operatorname{Ext}_{\Lambda}^{n}(k,k)\). This can be seen from the action of \(h\) on the Koszul resolution of \(k\) in degree \(n\), induced by the action of \(h\) on each \(x_{i}\) as multiplication by \(-1\). Thus \(\operatorname{Ext}_{\Lambda}^{*}(k,k)^{C_{2}}\) is nothing but the even degree subspace \(\operatorname{Ext}_{\Lambda}^{2*}(k,k)\) of \(\operatorname{Ext}_{\Lambda}^{*}(k,k)\). In particular, we see that \(\operatorname{Ext}_{A}^{n}(k,k)=0\) for odd \(n\), so that \(\operatorname{Ext}_{A}^{*}(k,k)=\operatorname{Ext}_{A}^{2*}(k,k)\). Now take a \(c\)-tuple \(\lambda=(\lambda_{1},\dots,\lambda_{c})\in k^{c}\), and denote the element \(\lambda_{1}x_{1}+\dots+\lambda_{c}x_{c}\) of \(\Lambda\) by \(u_{\lambda}\). Then \(u_{\lambda}^{2}=0\), and so the subalgebra \(k[u_{\lambda}]\) generated by \(u_{\lambda}\) is isomorphic to the truncated polynomial ring \(k[y]/(y^{2})\) whenever \(\lambda\) is nonzero. For every such \(c\)-tuple \(\lambda\), the algebra \(\Lambda\) is free as a left and as a right module over the subalgebra \(k[u_{\lambda}]\); this follows, for example, from [3, Theorem 2.6]. Combining with the above, we see that the same holds for the algebra \(A\). The inclusion \(k[u_{\lambda}]\to A\) gives a restriction map \[\operatorname{Ext}_{A}^{*}(k,k)\xrightarrow{\tau_{A,\lambda}^{*}(k,k)}\operatorname{Ext}_{k[u_{\lambda}]}^{*}(k,k)\] We denote by \(\tau_{A,\lambda}^{2*}(k,k)\) the restriction of this map to the even cohomology ring \(\operatorname{Ext}_{A}^{2*}(k,k)\). Of course, since \(\operatorname{Ext}_{A}^{*}(k,k)=\operatorname{Ext}_{A}^{2*}(k,k)\), we have not in practice restricted the map \(\tau^{*}_{A,\lambda}(k,k)\) to a subalgebra. 
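Incidentally, the vanishing \(u_{\lambda}^{2}=0\) used above is a direct consequence of the defining relations of \(\Lambda\): expanding the square gives \[u_{\lambda}^{2}=\sum_{i=1}^{c}\lambda_{i}^{2}\,x_{i}^{2}+\sum_{1\leq i<j\leq c}\lambda_{i}\lambda_{j}\left(x_{i}x_{j}+x_{j}x_{i}\right)=0\] since every \(x_{i}^{2}\) and every anticommutator \(x_{i}x_{j}+x_{j}x_{i}\) vanishes in \(\Lambda\).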
The first result we prove is that when \(\lambda\) is a nonzero \(c\)-tuple, then the kernel of the restriction map \(\tau^{2*}_{A,\lambda}(k,k)\) is a graded prime ideal of \(\operatorname{Ext}_{A}^{2*}(k,k)\). Moreover, two \(c\)-tuples give rise to different prime ideals if and only if they are not on the same line. **Lemma 4.4**.: _For every nonzero \(c\)-tuple \(\lambda\in k^{c}\), the ideal \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\) is a graded prime ideal of \(\operatorname{Ext}_{A}^{2*}(k,k)\), different from \(\mathfrak{m}_{0}\). Moreover, if \(\mu\) is another nonzero \(c\)-tuple, then \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)=\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\) if and only if \(\mu=\alpha\lambda\) for some \((\)necessarily nonzero\()\) scalar \(\alpha\in k\)._ Proof.: Let \(\lambda\) be a nonzero \(c\)-tuple in \(k^{c}\). Since \(k[u_{\lambda}]\) is isomorphic to the truncated polynomial ring \(k[y]/(y^{2})\), the cohomology ring \(\operatorname{Ext}_{k[u_{\lambda}]}^{*}(k,k)\) is isomorphic to a polynomial ring \(k[z]\) with \(z\) in degree one. In particular, the even cohomology ring \(\operatorname{Ext}_{k[u_{\lambda}]}^{2*}(k,k)\) is an integral domain. It follows that if \(\eta\) and \(\theta\) are elements of \(\operatorname{Ext}_{A}^{2*}(k,k)\) with \(\eta\theta\in\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\), then either \(\eta\in\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\) or \(\theta\in\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\), since the restriction map is a ring homomorphism. Thus \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\) is a prime ideal since it is proper (it does not contain the identity element \(1\in\operatorname{Hom}_{A}(k,k)\), for example). Now take another nonzero \(c\)-tuple \(\mu\in k^{c}\). If \(\mu=\alpha\lambda\) for some \(\alpha\in k\), then \(u_{\mu}=\alpha u_{\lambda}\), and so \(k[u_{\mu}]=k[u_{\lambda}]\) as subalgebras of \(A\). Then trivially \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)=\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\). Note that when \(c=1\), then \(\mu\) must be on the same line as \(\lambda\). Conversely, suppose that \(\lambda\) and \(\mu\) are not on the same line (so \(c\) must be at least \(2\)), and consider the linear map \(\phi_{\lambda}\colon k^{c}\to k\) given by \(\rho\mapsto\langle\lambda,\rho\rangle\), where \(\langle\lambda,\rho\rangle=\lambda_{1}\rho_{1}+\cdots+\lambda_{c}\rho_{c}\). This map is surjective since \(\lambda\) is nonzero, and so \(\operatorname{Ker}\phi_{\lambda}\) is of dimension \(c-1\). Now choose a basis for \(\operatorname{Ker}\phi_{\lambda}\), and consider the \((c-1)\times c\)-matrix \(E\) whose rows are these \(c\)-tuples, in any order. The rank of \(E\) is \(c-1\), and so its null space is of dimension one, and contains \(\lambda\). Since \(\mu\) is not on the same line as \(\lambda\), it cannot belong to the null space, i.e. \(E\mu\neq 0\). Consequently, there exists a \(c\)-tuple \(\rho\in k^{c}\) with \(\langle\lambda,\rho\rangle=0\) and \(\langle\mu,\rho\rangle\neq 0\) (for example, one of the rows of \(E\) has this property). Choose one such \(c\)-tuple \(\rho\). Consider the projective cover \[0\to I\to\Lambda\to k\to 0\] of \(k\) as a left \(\Lambda\)-module, where \(I\) is the left ideal \((x_{1},\ldots,x_{c})\subseteq\Lambda\). Furthermore, look at the map \(I\to k\) given by \[\beta_{1}x_{1}+\cdots+\beta_{c}x_{c}+w\mapsto\langle\beta,\rho\rangle\] for \(w\in I^{2}\) and \(\beta=(\beta_{1},\ldots,\beta_{c})\). 
This map is a \(\Lambda\)-homomorphism mapping \(u_{\lambda}\) to zero and \(u_{\mu}\) to something nonzero, and does not factor through \(\Lambda\). Consequently, it represents a nonzero element \(\eta\in\operatorname{Ext}_{\Lambda}^{1}(k,k)\). Now for any nonzero \(c\)-tuple \(\sigma\in k^{c}\), the ideal \(I\) decomposes over \(k[u_{\sigma}]\) as \((u_{\sigma})\oplus Q_{\sigma}\), for some free \(k[u_{\sigma}]\)-module \(Q_{\sigma}\). Furthermore, the restriction map \[\operatorname{Ext}_{\Lambda}^{*}(k,k)\xrightarrow{\tau^{*}_{\Lambda,\sigma}(k, k)}\operatorname{Ext}_{k[u_{\sigma}]}^{*}(k,k)\] maps \(\eta\) to the element of \(\operatorname{Ext}_{k[u_{\sigma}]}^{1}(k,k)\) represented by the map \((u_{\sigma})\oplus Q_{\sigma}\to k\) given by \(\alpha u_{\sigma}+q\mapsto\alpha\langle\sigma,\rho\rangle\) for \(\alpha\in k\) and \(q\in Q_{\sigma}\). Then \(\tau^{*}_{\Lambda,\lambda}(k,k)(\eta)=0\), whereas \(\tau^{*}_{\Lambda,\mu}(k,k)(\eta)\neq 0\) since \(\langle\mu,\rho\rangle\neq 0\), and the \(k[u_{\mu}]\)-homomorphism \((u_{\mu})\oplus Q_{\mu}\to k\) above does not factor through \(\Lambda\). The restriction maps are ring homomorphisms, hence \(\tau^{*}_{\Lambda,\lambda}(k,k)(\eta^{2})=0\) and \(\tau^{*}_{\Lambda,\mu}(k,k)(\eta^{2})\neq 0\), the latter because \(\operatorname{Ext}^{*}_{k[u_{\mu}]}(k,k)\) is an integral domain. For every nonzero \(c\)-tuple \(\sigma\in k^{c}\), the inclusions \(k[u_{\sigma}]\to\Lambda\to A\) of \(k\)-algebras induce the sequence \[\operatorname{Ext}^{*}_{A}(k,k)\xrightarrow{\tau^{*}_{A,\Lambda}(k,k)} \operatorname{Ext}^{*}_{\Lambda}(k,k)\xrightarrow{\tau^{*}_{\Lambda,\sigma}(k, k)}\operatorname{Ext}^{*}_{k[u_{\sigma}]}(k,k)\] of restriction maps. The composition equals the restriction map \(\tau^{*}_{A,\sigma}(k,k)\). Now by Remark 4.3, the restriction map \[\operatorname{Ext}^{2*}_{A}(k,k)\xrightarrow{\tau^{2*}_{A,\Lambda}(k,k)} \operatorname{Ext}^{2*}_{\Lambda}(k,k)\] is an isomorphism, and so the element \(\eta^{2}\in\operatorname{Ext}^{2}_{\Lambda}(k,k)\) belongs to the image of \(\tau^{*}_{A,\Lambda}(k,k)\), where \(\eta\in\operatorname{Ext}^{1}_{\Lambda}(k,k)\) is the element from above. Choosing an element \(\theta\in\operatorname{Ext}^{2}_{A}(k,k)\) such that \(\tau^{*}_{A,\Lambda}(k,k)(\theta)=\eta^{2}\), we obtain \[\tau^{2*}_{A,\sigma}(k,k)(\theta)=\tau^{*}_{A,\sigma}(k,k)(\theta)=\tau^{*}_{ \Lambda,\sigma}(k,k)\left(\tau^{*}_{A,\Lambda}(k,k)(\theta)\right)=\tau^{*}_{ \Lambda,\sigma}(k,k)(\eta^{2})\] for every nonzero \(c\)-tuple \(\sigma\in k^{c}\). We showed above that \(\tau^{*}_{\Lambda,\lambda}(k,k)(\eta^{2})=0\) whereas \(\tau^{*}_{\Lambda,\mu}(k,k)(\eta^{2})\neq 0\), and so \(\theta\) is an element of \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\), but not of \(\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\). This shows that \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\) does not equal \(\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\) when \(\lambda\) and \(\mu\) are not on the same line. Finally, we prove that \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\) does not equal the graded maximal ideal \(\mathfrak{m}_{0}\) of \(\operatorname{Ext}^{2*}_{A}(k,k)\). If \(c=1\), then \(u_{\lambda}\) is just the generator \(x_{1}\) multiplied with a nonzero scalar, and so \(k[u_{\lambda}]=\Lambda\) in this case. The restriction map from \(\operatorname{Ext}^{2*}_{A}(k,k)\) to \(\operatorname{Ext}^{2*}_{\Lambda}(k,k)\) is an isomorphism, hence \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)=0\neq\mathfrak{m}_{0}\). 
When \(c\geq 2\), we proved above that the element \(\theta\in\operatorname{Ext}^{2}_{A}(k,k)\) constructed from \(\eta^{2}\) does not belong to \(\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\). Thus \(\operatorname{Ker}\tau^{2*}_{A,\mu}(k,k)\neq\mathfrak{m}_{0}\), and by switching the roles of \(\lambda\) and \(\mu\) we see that also \(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k)\neq\mathfrak{m}_{0}\). We now turn our attention to a class of \(A\)-modules whose support varieties are determined by the prime ideals of \(\operatorname{Ext}^{2*}_{A}(k,k)\) considered in the lemma. Namely, for a nonzero \(c\)-tuple \(\lambda\in k^{c}\), denote the left \(A\)-module \(A(u_{\lambda}\otimes e)\) by just \(Au_{\lambda}\). Analogues of these modules have been used earlier, in particular in connection with rank varieties; see [3, 5, 18]. In the following result, we establish the properties that we need for \(Au_{\lambda}\); see also [18, Section 2]. Recall that \(\underline{\operatorname{Hom}}\) denotes the quotient of the space of homomorphisms by the subspace of those factoring through a projective module. **Proposition 4.5**.: _For every nonzero \(c\)-tuple \(\lambda\in k^{c}\), the following hold._ \((1)\) _The \(A\)-module \(Au_{\lambda}\) is \(1\)-periodic, i.e. \(\Omega^{1}_{A}(Au_{\lambda})\simeq Au_{\lambda}\). Moreover, it is isomorphic to the induced module \(A\otimes_{k[u_{\lambda}]}k\)._ \((2)\) _A module \(M\in\operatorname{mod}A\) is free as a \(k[u_{\lambda}]\)-module if and only if \(\underline{\operatorname{Hom}}_{A}(Au_{\lambda},M)=0\)._ \((3)\)_\(\operatorname{Ext}^{n}_{A}(Au_{\lambda},k)\neq 0\) for every positive integer \(n\), and the restriction map_ \[\operatorname{Ext}^{*}_{A}\left(Au_{\lambda},k\right)\xrightarrow{\tau^{*}_{A,\lambda}(Au_{\lambda},k)}\operatorname{Ext}^{*}_{k[u_{\lambda}]}\left(Au_{\lambda},k\right)\] _is injective in every positive degree._ \((4)\)_\(V_{A}(Au_{\lambda})=Z(\operatorname{Ker}\tau^{2*}_{A,\lambda}(k,k))\), and this variety is irreducible._ Proof.: By [3, Lemma 2.14], the sequence \[\cdots\to\Lambda\xrightarrow{\cdot u_{\lambda}}\Lambda\xrightarrow{\cdot u_{\lambda}}\Lambda\xrightarrow{\cdot u_{\lambda}}\Lambda\to\cdots\] of left \(\Lambda\)-modules is exact. Applying \(A\otimes_{\Lambda}-\), we obtain an exact sequence of left \(A\)-modules, since \(A\) is free as a right \(\Lambda\)-module. The canonical isomorphism \(A\otimes_{\Lambda}\Lambda\simeq A\) then gives an exact sequence (\(\dagger\)): \[\cdots\to A\xrightarrow{\cdot(u_{\lambda}\otimes e)}A\xrightarrow{\cdot(u_{\lambda}\otimes e)}A\xrightarrow{\cdot(u_{\lambda}\otimes e)}A\to\cdots\] of left \(A\)-modules, hence \(Au_{\lambda}\) is \(1\)-periodic. The last part of (1) follows from the isomorphisms \[A\otimes_{k[u_{\lambda}]}k\simeq A\otimes_{k[u_{\lambda}]}k[u_{\lambda}]/(u_{\lambda})\simeq A/Au_{\lambda}=A/A(u_{\lambda}\otimes e)\] of left \(A\)-modules, together with the isomorphism \(A/A(u_{\lambda}\otimes e)\simeq A(u_{\lambda}\otimes e)\) which is immediate from the exact sequence (\(\dagger\)). 
For (2), we use the isomorphism from (1) together with the Eckmann-Shapiro Lemma, and obtain \[\underline{\operatorname{Hom}}_{A}\left(Au_{\lambda},M\right)\simeq\underline{\operatorname{Hom}}_{A}\left(A\otimes_{k[u_{\lambda}]}k,M\right)\simeq\underline{\operatorname{Hom}}_{k[u_{\lambda}]}\left(k,M\right)\] Since the algebra \(k[u_{\lambda}]\) is isomorphic to \(k[y]/(y^{2})\), the \(k[u_{\lambda}]\)-module \(M\) is free if and only if it does not contain \(k\) as a direct summand. Consequently, it is free if and only if \(\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(k,M)=0\). For (3), we use the periodicity of \(Au_{\lambda}\) and the fact that \(A\) is selfinjective to obtain \[\operatorname{Ext}_{A}^{n}\left(Au_{\lambda},k\right)\simeq\underline{\operatorname{Hom}}_{A}\left(\Omega_{A}^{n}(Au_{\lambda}),k\right)\simeq\underline{\operatorname{Hom}}_{A}\left(Au_{\lambda},k\right)\] for every positive integer \(n\). From (2) we see that \(\underline{\operatorname{Hom}}_{A}(Au_{\lambda},k)\neq 0\), and so \(\operatorname{Ext}_{A}^{n}(Au_{\lambda},k)\neq 0\) as well. For the restriction map, note first that since \(A\) is free as a left \(k[u_{\lambda}]\)-module, the sequence (\(\dagger\)) restricts to a sequence of free \(k[u_{\lambda}]\)-modules. Therefore \(\Omega_{k[u_{\lambda}]}^{n}(Au_{\lambda})\) is stably isomorphic to \(Au_{\lambda}\) for every \(n\geq 1\), giving \[\operatorname{Ext}_{k[u_{\lambda}]}^{n}\left(Au_{\lambda},k\right)\simeq\underline{\operatorname{Hom}}_{k[u_{\lambda}]}\left(\Omega_{k[u_{\lambda}]}^{n}(Au_{\lambda}),k\right)\simeq\underline{\operatorname{Hom}}_{k[u_{\lambda}]}\left(Au_{\lambda},k\right)\] The restriction map \(\tau_{A,\lambda}^{n}(Au_{\lambda},k)\) is compatible with the isomorphisms \(\operatorname{Ext}_{A}^{n}(Au_{\lambda},k)\simeq\underline{\operatorname{Hom}}_{A}(Au_{\lambda},k)\) and \(\operatorname{Ext}_{k[u_{\lambda}]}^{n}(Au_{\lambda},k)\simeq\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(Au_{\lambda},k)\): under these identifications it corresponds to the map \(\tau\colon\underline{\operatorname{Hom}}_{A}(Au_{\lambda},k)\to\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(Au_{\lambda},k)\) induced by restriction of scalars. It therefore suffices to show that \(\tau\) is injective. The left \(A\)-module \(Au_{\lambda}\) decomposes over \(k[u_{\lambda}]\) as a direct sum \(k\langle u_{\lambda}\otimes e\rangle\oplus N\), where \(k\langle u_{\lambda}\otimes e\rangle\) denotes the \(k\)-vector space generated by the element \(u_{\lambda}\otimes e\); this one-dimensional subspace is isomorphic to \(k\) as a \(k[u_{\lambda}]\)-module. One now checks that, under the Eckmann-Shapiro isomorphism from the proof of (2) above and the natural isomorphism \(\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(k,k)\simeq\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(k\langle u_{\lambda}\otimes e\rangle,k)\), the map \(\tau\) corresponds to the inclusion of the summand of \(\underline{\operatorname{Hom}}_{k[u_{\lambda}]}(Au_{\lambda},k)\) corresponding to \(k\langle u_{\lambda}\otimes e\rangle\). This shows that \(\tau\), and therefore also \(\tau_{A,\lambda}^{n}(Au_{\lambda},k)\), is injective. To prove (4), note first that \(A\) decomposes as a direct sum \(A=A^{+}\oplus A^{-}\), with \(A^{+}=A(1\otimes(e+h))\) and \(A^{-}=A(1\otimes(e-h))\), where \(h\) is the generator of \(C_{2}\). 
Similarly, one checks that \(Au_{\lambda}\) decomposes as \(M_{\lambda}^{+}\oplus M_{\lambda}^{-}\), where \[M_{\lambda}^{+}=\{wu_{\lambda}\otimes(e+h)\mid w\in\Lambda\},\ \ M_{\lambda}^{-}=\{wu_{\lambda}\otimes(e-h)\mid w\in\Lambda\}\] As left \(\Lambda\)-modules, the modules \(A^{+}\) and \(A^{-}\) are isomorphic to \(\Lambda\), hence they are indecomposable also as left \(A\)-modules (they represent the two isomorphism classes of indecomposable projective \(A\)-modules). As a consequence, the modules \(M_{\lambda}^{+}\) and \(M_{\lambda}^{-}\) must also be indecomposable. Now look at the exact sequence (\(\dagger\)). One checks that for \(A^{+}\), the image of the multiplication map \(\cdot(u_{\lambda}\otimes e)\) is \(M_{\lambda}^{-}\), with kernel \(M_{\lambda}^{+}\) (and vice versa), so that \(\Omega_{A}^{1}(M_{\lambda}^{-})=M_{\lambda}^{+}\). It now follows from Proposition 2.6 that \[V_{A}\left(Au_{\lambda}\right)=V_{A}\left(M_{\lambda}^{+}\right)\cup V_{A}\left(M_{\lambda}^{-}\right)=V_{A}\left(M_{\lambda}^{+}\right)\] and so since \(M_{\lambda}^{+}\) is indecomposable, we see from Proposition 3.3 that \(V_{A}(Au_{\lambda})\) is irreducible. Let us first consider the support variety \(V_{A}(Au_{\lambda},k)\), which by definition equals \(Z(I_{A}(Au_{\lambda},k))\), where \(I_{A}(Au_{\lambda},k)\) is the (graded) annihilator ideal of \(\operatorname{Ext}_{A}^{*}(Au_{\lambda},k)\) in \(\operatorname{Ext}_{A}^{2*}(k,k)\). Let \(\eta\) be a homogeneous element of \(I_{A}(Au_{\lambda},k)\), and choose an element \(\theta\in\operatorname{Ext}_{A}^{2}(Au_{\lambda},k)\) with \(\tau_{A,\lambda}^{*}(Au_{\lambda},k)(\theta)\neq 0\) in \(\operatorname{Ext}_{k[u_{\lambda}]}^{2}(Au_{\lambda},k)\); this is possible by (3). Then \(\eta\cdot\theta=0\) in \(\operatorname{Ext}_{A}^{*}(Au_{\lambda},k)\) since \(\eta\in I_{A}(Au_{\lambda},k)\), giving \[0=\tau_{A,\lambda}^{*}\left(Au_{\lambda},k\right)(\eta\cdot\theta)=\tau_{A,\lambda}^{2*}\left(k,k\right)(\eta)\cdot\tau_{A,\lambda}^{*}\left(Au_{\lambda},k\right)(\theta)\] in \(\operatorname{Ext}_{k[u_{\lambda}]}^{*}(Au_{\lambda},k)\), where \(\tau_{A,\lambda}^{2*}(k,k)\) is the restriction map from \(\operatorname{Ext}_{A}^{2*}(k,k)\) to \(\operatorname{Ext}_{k[u_{\lambda}]}^{2*}(k,k)\). We know that \(\operatorname{Ext}_{k[u_{\lambda}]}^{2*}(k,k)\) is just a polynomial ring of the form \(k[y]\) with \(y\) in degree two (see the start of the proof of Lemma 4.4), and so if \(\tau_{A,\lambda}^{2*}(k,k)(\eta)\) were nonzero it would have to equal \(\alpha y^{t}\) for some nonzero scalar \(\alpha\). It is well known that multiplication by \(y\) induces an isomorphism \[\operatorname{Ext}_{k[u_{\lambda}]}^{n}(X,k)\xrightarrow{y\cdot}\operatorname{Ext}_{k[u_{\lambda}]}^{n+2}(X,k)\] for every \(n\geq 1\) and every \(k[u_{\lambda}]\)-module \(X\) (see, for example, [18, pages 583-584]), and so since \(\tau_{A,\lambda}^{*}(Au_{\lambda},k)(\theta)\neq 0\), we see from the above equation that \(\tau_{A,\lambda}^{2*}(k,k)(\eta)\) must vanish in \(\operatorname{Ext}_{k[u_{\lambda}]}^{2*}(k,k)\). 
In other words, the element \(\eta\) belongs to \(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\), giving \(I_{A}(Au_{\lambda},k)\subseteq\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\), and then in turn \[Z\left(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\right)\subseteq Z\left(I_ {A}(Au_{\lambda},k)\right)=V_{A}\left(Au_{\lambda},k\right)\subseteq V_{A} \left(Au_{\lambda}\right)\cap V_{A}\left(k\right)=V_{A}\left(Au_{\lambda}\right)\] where the last inclusion is Proposition 2.6(2). By definition, the support variety \(V_{A}(Au_{\lambda})\) equals \(Z(I_{A}(Au_{\lambda}))\), where \(I_{A}(Au_{\lambda})\) is the annihilator ideal of \(\operatorname{Ext}_{A}^{*}(Au_{\lambda},Au_{\lambda})\) in \(\operatorname{Ext}_{A}^{2*}(k,k)\). The inclusion \(Z(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k))\subseteq Z(I_{A}(Au_{\lambda}))\) gives the inclusion \[\sqrt{I_{A}(Au_{\lambda})}\subseteq\sqrt{\operatorname{Ker}\tau_{A,\lambda}^{ 2*}(k,k)}\] by [15, Theorem 25], and so since \(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\) is a prime ideal by Lemma 4.4, we see that \(I_{A}(Au_{\lambda})\subseteq\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\). We also know, from the same lemma, that \(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\neq\mathfrak{m}_{0}\), so that the chain \[\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)\subset\mathfrak{m}_{0}\] of prime ideals containing \(I_{A}(Au_{\lambda})\) has length one. Since the \(A\)-module \(Au_{\lambda}\) is periodic, we know from Theorem 3.1 that the dimension of \(V_{A}(Au_{\lambda})\) is one. Moreover, we saw above that the support variety is irreducible, and so it follows that \(V_{A}(Au_{\lambda})=Z(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k))\); see the paragraph following Corollary 3.2. We now use the properties we just proved for the modules \(Au_{\lambda}\) to show that every non-trivial support variety contains \(V_{A}(Au_{\lambda})\) for some nonzero \(\lambda\). **Proposition 4.6**.: _Let \(M\in\operatorname{mod}A\) be a non-projective module._ (1) _There exists a nonzero \(c\)-tuple \(\lambda\in k^{c}\) with the property that \(M\) is not a free module over the subalgebra \(k[u_{\lambda}]\). Moreover, for every such \(\lambda\), the support variety \(V_{A}(M)\) contains the one-dimensional irreducible variety \(V_{A}(Au_{\lambda})\) from_ Proposition 4.5_._ (2) _If \(M\) is indecomposable and periodic, then there exists a nonzero \(c\)-tuple \(\lambda\in k^{c}\) with the following property: given a nonzero \(c\)-tuple \(\mu\in k^{c}\), the module \(M\) is not free over \(k[u_{\mu}]\) if and only if \(\mu=\alpha\lambda\) for some \((\)necessarily nonzero\()\) scalar \(\alpha\in k\). Moreover, \(V_{A}(M)=V_{A}(Au_{\lambda})\)._ Proof.: The first part of (1) follows from [3, Section 3]. Now take such a \(c\)-tuple \(\lambda\). Then \(\underline{\operatorname{Hom}}_{A}(Au_{\lambda},M)\) is nonzero by Proposition 4.5(2), and combining this with Proposition 4.5(1), we obtain \[\operatorname{Ext}_{A}^{n}\left(Au_{\lambda},M\right)\simeq\underline{ \operatorname{Hom}}_{A}\left(\Omega_{A}^{n}(Au_{\lambda}),M\right)\simeq \underline{\operatorname{Hom}}_{A}\left(Au_{\lambda},M\right)\neq 0\] for every \(n\geq 1\). It now follows from Theorem 2.7(7) that the support variety \(V_{A}(Au_{\lambda},M)\) is non-trivial, i.e. \(V_{A}(Au_{\lambda},M)\neq\{\mathfrak{m}_{0}\}\). 
The inclusion \[V_{A}\left(Au_{\lambda},M\right)\subseteq V_{A}\left(Au_{\lambda}\right) \cap V_{A}\left(M\right)\] which holds by Proposition 2.6(2), now implies that the intersection \(V_{A}(Au_{\lambda})\cap V_{A}(M)\) is also non-trivial. But \(V_{A}(Au_{\lambda})\) is irreducible by Proposition 4.5(4), and so \(V_{A}(Au_{\lambda})\subseteq V_{A}(M)\). This proves (1). To prove (2), suppose that \(M\) is indecomposable and periodic, and let \(\lambda\) be any nonzero \(c\)-tuple for which the module is not free over \(k[u_{\lambda}]\); such a tuple exists by (1). Consider the module \(M_{\lambda}^{+}\) from the proof of Proposition 4.5(4). We showed that this module is indecomposable and periodic, and that its support variety equals that of \(Au_{\lambda}\). We saw above that the intersection \(V_{A}(Au_{\lambda})\cap V_{A}(M)\) is non-trivial, hence the same is trivially true for the intersection \(V_{A}(M_{\lambda}^{+})\cap V_{A}(M)\). It now follows from Proposition 3.3 that \(V_{A}(M)=V_{A}(M_{\lambda}^{+})=V_{A}(Au_{\lambda})\). Finally, if \(\mu\) is another nonzero \(c\)-tuple for which \(M\) is not a free \(k[u_{\mu}]\)-module, then what we have just shown implies that the support varieties \(V_{A}(Au_{\lambda})\) and \(V_{A}(Au_{\mu})\) must be equal. Then \(Z(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k))=Z(\operatorname{Ker}\tau_{A, \mu}^{2*}(k,k))\) by Proposition 4.5(4), giving in turn \(\operatorname{Ker}\tau_{A,\lambda}^{2*}(k,k)=\operatorname{Ker}\tau_{A,\mu}^{2* }(k,k)\) since both ideals are prime ideals by Lemma 4.4. The very same result gives \(\mu=\alpha\lambda\) for some (nonzero) \(\alpha\in k\). Conversely, if \(\mu=\alpha\lambda\) for a nonzero \(\alpha\), then \(u_{\mu}=\alpha u_{\lambda}\). The subalgebra \(k[u_{\mu}]\) then equals \(k[u_{\lambda}]\), hence \(M\) is not free over \(k[u_{\mu}]\). In the main result of this section, we consider more general braided Hopf algebras of the form \(\Lambda\rtimes G\), for \(G\) a finite group containing \(C_{2}\), and over an algebraically closed field \(k\). We know from Remark 4.1(1) that when the characteristic of \(k\) does not divide the order of \(G\), then the finite tensor categories \(\operatorname{mod}(\Lambda\rtimes G)\) and \(\operatorname{mod}(\Lambda\rtimes C_{2})\) satisfy \(\mathbf{Fg}\). The following lemma allows us to pass from support varieties over \(\Lambda\rtimes G\) to support varieties over \(\Lambda\rtimes C_{2}\). **Lemma 4.7**.: _Let \(k\) be an algebraically closed field, \(c\) a positive integer, and \(\Lambda\) the exterior algebra on \(c\) generators over \(k\). Furthermore, let \(G\) be a finite group whose order is not divisible by the characteristic of \(k\), acting on \(\Lambda\) in such a way that the algebra \(H=\Lambda\rtimes G\) is a Hopf algebra. Finally, suppose that \(G\) contains a central subgroup \(C_{2}\) of order two, acting on \(\Lambda\) by letting its generator change the sign of the generators of \(\Lambda\), and that \(A=\Lambda\rtimes C_{2}\) is a Hopf subalgebra of \(H\). 
Then_ \[V_{H}(M)=V_{H}(N)\ \implies\ V_{A}(M)=V_{A}(N)\] _for all \(M,N\in\operatorname{mod}H\)._ Proof.: We know from Remark 4.1(3) that the cohomology ring \(\operatorname{Ext}_{H}^{*}(k,k)\) is isomorphic to \(\operatorname{Ext}_{\Lambda}^{*}(k,k)^{G}\) via the restriction map \[\operatorname{Ext}_{H}^{*}(k,k)\xrightarrow{\tau_{H,\Lambda}^{*}(k,k)}\operatorname{Ext}_{\Lambda}^{*}(k,k)\] This map is the composite \[\operatorname{Ext}_{H}^{*}(k,k)\xrightarrow{\tau_{H,A}^{*}(k,k)}\operatorname{Ext}_{A}^{*}(k,k)\xrightarrow{\tau_{A,\Lambda}^{*}(k,k)}\operatorname{Ext}_{\Lambda}^{*}(k,k)\] and from Remark 4.3 we also know that \[\operatorname{Ext}_{A}^{2*}(k,k)\xrightarrow{\tau_{A,\Lambda}^{2*}(k,k)}\operatorname{Ext}_{\Lambda}^{2*}(k,k)\] is an isomorphism. Since \(C_{2}\subseteq G\), both \(\operatorname{Ext}_{H}^{*}(k,k)\) and \(\operatorname{Ext}_{A}^{*}(k,k)\) are concentrated in even degrees. By definition, the action of \(G\) on \(\Lambda\) is defined in terms of a group homomorphism \(G\to\operatorname{Aut}(\Lambda)\). Now for an element \(a=w_{1}\otimes e+w_{2}\otimes h\) in \(A\) and \(g\in G\), we define \({}^{g}a\) to be \({}^{g}w_{1}\otimes e+{}^{g}w_{2}\otimes h\). One checks that this induces an automorphism of \(A\), using the fact that \(C_{2}\) is central in \(G\). Moreover, in this way we obtain a homomorphism \(G\to\operatorname{Aut}(A)\), with the action of \(G\) on \(A\) extending the action on \(\Lambda\). As in Remark 4.1(2), we obtain a \(G\)-action on \(\operatorname{Ext}_{A}^{*}(k,k)\), and this action commutes with the restriction map (and isomorphism) \[\operatorname{Ext}_{A}^{2*}(k,k)\xrightarrow{\tau_{A,\Lambda}^{2*}(k,k)}\operatorname{Ext}_{\Lambda}^{2*}(k,k)\] Then \(\operatorname{Ext}_{\Lambda}^{2*}(k,k)^{G}\) is the image of \(\operatorname{Ext}_{A}^{2*}(k,k)^{G}\), and so \(\operatorname{Ext}_{H}^{2*}(k,k)\) is isomorphic to \(\operatorname{Ext}_{A}^{2*}(k,k)^{G}\) via the restriction map \[\operatorname{Ext}_{H}^{2*}(k,k)\xrightarrow{\tau_{H,A}^{2*}(k,k)}\operatorname{Ext}_{A}^{2*}(k,k)\] in light of the above. Let \(M\) and \(N\) be \(H\)-modules with \(V_{H}(M)=V_{H}(N)\). Write \(\varphi_{M}^{H}\colon\operatorname{Ext}_{H}^{2*}(k,k)\to\operatorname{Ext}_{H}^{*}(M,M)\) and \(\varphi_{M}^{A}\colon\operatorname{Ext}_{A}^{2*}(k,k)\to\operatorname{Ext}_{A}^{*}(M,M)\) for the maps induced by tensoring with \(M\), and abbreviate the restriction map \(\tau_{H,A}^{2*}(k,k)\) to \(\tau_{H,A}^{2*}\) (we skip the arguments here, since we shall be using this map quite a lot in what follows). These maps fit into a commutative diagram with the restriction map \(\tau_{H,A}^{*}(M,M)\), in the sense that \[\tau_{H,A}^{*}(M,M)\circ\varphi_{M}^{H}=\varphi_{M}^{A}\circ\tau_{H,A}^{2*}\] The module \(N\) gives rise to a similar diagram. By Lemma 4.2, the restriction maps \(\tau_{H,A}^{2*}\) and \(\tau_{H,A}^{*}(M,M)\) are injective. Denote by \(I_{A}(M)\) the annihilator ideal of \(\operatorname{Ext}_{A}^{*}(M,M)\) in \(\operatorname{Ext}_{A}^{2*}(k,k)\), that is, \(I_{A}(M)=\operatorname{Ker}\varphi_{M}^{A}\), and similarly for \(I_{A}(N),I_{H}(M)\) and \(I_{H}(N)\). These are the ideals defining the four support varieties we are considering. Suppose for the moment that we can show that \(I_{A}(M)\) and \(I_{A}(N)\) are \(G\)-invariant in \(\operatorname{Ext}_{A}^{2*}(k,k)\), so that \({}^{g}I_{A}(M)=I_{A}(M)\) and \({}^{g}I_{A}(N)=I_{A}(N)\) for all \(g\in G\). Let \(\mathfrak{m}\in V_{A}(M)\); thus \(\mathfrak{m}\) is a maximal ideal of \(\operatorname{Ext}_{A}^{2*}(k,k)\) with \(I_{A}(M)\subseteq\mathfrak{m}\). Since \(k\) is algebraically closed and the algebras \(\operatorname{Ext}_{H}^{2*}(k,k)\) and \(\operatorname{Ext}_{A}^{2*}(k,k)\) are finitely generated, the ideal \((\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\) is maximal in \(\operatorname{Ext}_{H}^{2*}(k,k)\); see, for example, [2, Section 5.4]. 
The commutativity of the diagram gives \(I_{H}(M)\subseteq(\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\), and therefore \((\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\in V_{H}(M)\). Suppose, on the other hand, that \(\mathfrak{m}\notin V_{A}(N)\), so that \(I_{A}(N)\nsubseteq\mathfrak{m}\). As \(I_{A}(N)\) is \(G\)-invariant, this gives \(I_{A}(N)\nsubseteq^{g}\mathfrak{m}\) for every \(g\in G\), and so by prime avoidance there exists a homogeneous element \(\eta\in I_{A}(N)\) with \(\eta\notin{}^{g}\mathfrak{m}\) for every \(g\in G\). Consider now the element \[w=\prod_{g\in G}{}^{g}\eta\] It belongs to \(I_{A}(N)\) since \(\eta\) is one of the factors, but it cannot belong to \(\mathfrak{m}\); if it did, then \({}^{g}\eta\) would belong to \(\mathfrak{m}\) for some \(g\), giving \(\eta\in{}^{g^{-1}}\mathfrak{m}\). Furthermore, this element is \(G\)-invariant, and therefore belongs to the image of \(\tau_{H,A}^{2*}(k,k)\), i.e. \(w=\tau_{H,A}^{2*}(\theta)\) for some \(\theta\in\operatorname{Ext}_{H}^{2*}(k,k)\). The commutativity of the diagram with \(M\) replaced by \(N\) gives \[\tau_{H,A}^{*}(N,N)\circ\varphi_{N}^{H}(\theta)=\varphi_{N}^{A}\circ\tau_{H,A }^{2*}(\theta)=\varphi_{N}^{A}(w)=0\] since \(w\in I_{A}(N)\), and so since \(\tau_{H,A}^{*}(N,N)\) is injective we obtain \(\theta\in I_{H}(N)\). Now \(\theta\) does not belong to \((\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\), for if it did, then \(w\) would be an element of \(\mathfrak{m}\). Therefore \(I_{H}(N)\nsubseteq(\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\), so that \((\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\notin V_{H}(N)\). But \((\tau_{H,A}^{2*})^{-1}(\mathfrak{m})\in V_{H}(M)\) from above, and \(V_{H}(M)=V_{H}(N)\) by assumption, and so we have reached a contradiction. It must therefore be the case that \(\mathfrak{m}\in V_{A}(N)\), giving \(V_{A}(M)\subseteq V_{A}(N)\). The reverse inclusion is proved similarly, hence \(V_{A}(M)=V_{A}(N)\). It only remains to show that the ideals \(I_{A}(M)\) and \(I_{A}(N)\) are \(G\)-invariant in \(\operatorname{Ext}_{A}^{2*}(k,k)\). We prove this for \(I_{A}(M)\); the proof for \(I_{A}(N)\) is similar. It follows from [23, Theorem 9.3.9] that \(I_{A}(M)\) equals the annihilator ideal of the \(\operatorname{Ext}_{A}^{2*}(k,k)\)-module \(\operatorname{Ext}_{A}^{*}(k,M^{*}\otimes M)\), where the module action is given in terms of Yoneda composition. Now let \(\eta\) and \(\theta\) be homogeneous elements of \(I_{A}(M)\) and \(\operatorname{Ext}_{A}^{*}(k,M^{*}\otimes M)\), respectively. Given any \(H\)-module \(X\) and an element \(g\in G\), the twisted \(A\)-module \({}_{g}X\) is isomorphic to \(X\), with an isomorphism \({}_{g}X\to X\) mapping an element \(m\) to \((1\otimes g^{-1})m\). Consequently, when we twist a homogeneous element of \(\operatorname{Ext}_{A}^{*}(k,M^{*}\otimes M)\) by an element from \(G\), we obtain a new element in \(\operatorname{Ext}_{A}^{*}(k,M^{*}\otimes M)\), since \(k\) and \(M^{*}\otimes M\) are \(H\)-modules. Therefore, as \(\eta\) belongs to \(I_{A}(M)\), we obtain \[\theta\circ({}^{g}\eta)=({}^{gg^{-1}}\theta)\circ({}^{g}\eta)={}^{g}\left(({}^ {g^{-1}}\theta)\circ\eta\right)=0\] since \({}^{g^{-1}}\theta\) belongs to \(\operatorname{Ext}_{A}^{*}(k,M^{*}\otimes M)\). This shows that \({}^{g}\eta\in I_{A}(M)\), and hence \(I_{A}(M)\) is \(G\)-invariant. We now prove the main result of this section: the tensor product property holds for support varieties over braided Hopf algebras of the form we have been considering. 
**Theorem 4.8**.: _Let \(k\) be an algebraically closed field, \(c\) a positive integer, and \(\Lambda\) the exterior algebra on \(c\) generators over \(k\). Furthermore, let \(G\) be a finite group whose order is not divisible by the characteristic of \(k\), acting on \(\Lambda\) in such a way that the algebra \(H=\Lambda\rtimes G\) is a braided Hopf algebra. Finally, suppose that \(G\) contains a central subgroup \(C_{2}\) of order two, acting on \(\Lambda\) by letting its generator change the sign of the generators of \(\Lambda\), and that \(\Lambda\rtimes C_{2}\) is a Hopf subalgebra of \(H\). Then_ \[V_{H}\left(M\otimes N\right)=V_{H}\left(M\right)\cap V_{H}\left(N\right)\] _for all \(M,N\in\operatorname{mod}H\)._ Proof.: As before, denote by \(A\) the Hopf subalgebra \(\Lambda\rtimes C_{2}\) of \(H\). Let \(M\) and \(N\) be two nonzero periodic \(H\)-modules with \(V_{H}(M)=V_{H}(N)\), and decompose them as \(A\)-modules into direct sums \(M=\oplus_{i}M_{i}\), \(N=\oplus_{j}N_{j}\) of indecomposable modules. Since \(H\) is free as a left \(A\)-module (see [17, Theorem 7]), both \(M\) and \(N\) are of complexity at most one over \(A\), because the projective resolutions over \(H\) restrict to projective resolutions over \(A\). Moreover, the modules cannot be projective over \(A\); if \(M\), say, is \(A\)-projective, then it is also projective - and hence free - over \(\Lambda\), since \(A\) is free over \(\Lambda\). Inducing \(M\) (as a \(\Lambda\)-module) back up to \(H\) would then yield a free \(H\)-module, but as in the proof of Lemma 4.2, the original \(H\)-module \(M\) is a summand of this induced module. As \(M\) is not projective over \(H\), it must be the case that it is not projective over \(A\) either. Therefore both \(M\) and \(N\) are of complexity one over \(A\). In particular, at least one of the \(M_{i}\), and one of the \(N_{j}\), is not projective, and therefore periodic from Corollary 3.2. By Lemma 4.7 there is an equality \(V_{A}(M)=V_{A}(N)\), and by Theorem 2.7(2) these support varieties are non-trivial since \(M\) and \(N\) are not projective over \(A\). Consequently, by Proposition 2.6(1), there exist indices \(i\) and \(j\) for which \(V_{A}(M_{i})\cap V_{A}(N_{j})\neq\{\mathfrak{m}_{0}\}\), where \(\mathfrak{m}_{0}\) is the graded maximal ideal of \(\operatorname{Ext}_{A}^{2*}(k,k)\). Using Theorem 2.7(2) again, we see that \(M_{i}\) and \(N_{j}\) are not projective, and therefore periodic from the above. It now follows from Proposition 3.3 that \(V_{A}(M_{i})=V_{A}(N_{j})\), and so from Proposition 4.6 we see that there exists a nonzero \(c\)-tuple \(\lambda\in k^{c}\) with \(V_{A}(M_{i})=V_{A}(N_{j})=V_{A}(Au_{\lambda})\), and with \(M_{i}\) and \(N_{j}\) not free over the subalgebra \(k[u_{\lambda}]\) of \(A\). Then \(M\) and \(N\) are not free over \(k[u_{\lambda}]\), either. Since \(u_{\lambda}\) is just a linear combination of the elements \(x_{1},\ldots,x_{c}\in\Lambda\), the group \(C_{2}\) acts on \(k[u_{\lambda}]\), and we may form the four-dimensional skew group algebra \(H_{4}^{\lambda}=k[u_{\lambda}]\rtimes C_{2}\). This is a Hopf subalgebra of \(A\) (and therefore also of \(H\)), isomorphic to the Sweedler Hopf algebra \(H_{4}\), and it contains \(k[u_{\lambda}]\) as a subalgebra. Since it is free over \(k[u_{\lambda}]\), the modules \(M\) and \(N\) cannot be projective as \(H_{4}^{\lambda}\)-modules, for if they were, then they would also be free over \(k[u_{\lambda}]\). 
The algebra \(H_{4}^{\lambda}\) has two simple modules, namely the trivial module \(k\) and a module \(S\). The latter is one-dimensional, with \(u_{\lambda}S=0\), and \(h\) acting as \(-1\) (we identify \(H_{4}^{\lambda}\) with a \(k\)-algebra with basis \(1,u_{\lambda},h\) and \(hu_{\lambda}\), where \(h\) is the generator of \(C_{2}\)). It is well known that these are the only non-projective indecomposable \(H_{4}^{\lambda}\)-modules (see, for example, [9, Page 467] or [8, Corollary 2.4 and Theorem 2.5]), and so it follows that there are elements \(m\in M\) and \(n\in N\) that generate summands isomorphic to either \(k\) or \(S\) when we restrict \(M\) and \(N\) to \(H_{4}^{\lambda}\). Let \(W\) be the one-dimensional subspace of \(M\otimes N\) generated by \(m\otimes n\). This is an \(H_{4}^{\lambda}\)-submodule of \(M\otimes N\); the comultiplication on \(H_{4}^{\lambda}\) maps \(u_{\lambda}\) to \(u_{\lambda}\otimes 1+h\otimes u_{\lambda}\) and \(h\) to \(h\otimes h\), so that \(u_{\lambda}\) acts as zero on \(W\), and \(h\) as \(1\) or \(-1\). Therefore, over \(H_{4}^{\lambda}\), the module \(M\otimes N\) has a direct summand isomorphic to either \(k\) or \(S\). In particular, \(M\otimes N\) is not projective as an \(H_{4}^{\lambda}\)-module. Now since \(H_{4}^{\lambda}\) is a Hopf subalgebra of \(H\), we know from [17, Theorem 7] that \(H\) is free as an \(H_{4}^{\lambda}\)-module. This implies that \(M\otimes N\) cannot be projective over \(H\), for if it were, then it would also be projective over \(H_{4}^{\lambda}\). We have shown that for every pair of nonzero periodic \(H\)-modules whose support varieties coincide, the tensor product is not projective. It therefore follows from Theorem 3.6 that \(V_{H}(M\otimes N)=V_{H}(M)\cap V_{H}(N)\) for all \(H\)-modules \(M\) and \(N\). By Deligne's famous classification theorem (see [10]), every symmetric finite tensor category over an algebraically closed field of characteristic zero is equivalent to the category of finite-dimensional representations of some affine supergroup scheme. This means precisely that such a category is equivalent to \(\operatorname{mod}H\), where \(H\) is a Hopf algebra of the form \(\Lambda\rtimes G\) for some exterior algebra \(\Lambda\) and finite group \(G\). Furthermore, there is a central subgroup of \(G\) of order two, and all the assumptions in Theorem 4.8 are satisfied (see [1] and also [16, Section 7.1]). Thus we obtain the following result. **Theorem 4.9**.: _Suppose that \((\mathscr{C},\otimes,\mathbf{1})\) is a symmetric finite tensor category over an algebraically closed field of characteristic zero. Then_ \[V_{\mathscr{C}}(X\otimes Y)=V_{\mathscr{C}}(X)\cap V_{\mathscr{C}}(Y)\] _for all objects \(X,Y\in\mathscr{C}\)._
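As an illustration of Theorem 4.8 in the smallest case (a worked special case, not needed for the results above): take \(c=1\) and \(G=C_{2}\), so that \(H=\Lambda\rtimes C_{2}\) is precisely the Sweedler algebra \(H_{4}\) encountered in the proof. By Remark 4.3 and the case \(c=1\) in the proof of Lemma 4.4 we have \[\operatorname{Ext}_{H_{4}}^{2*}(k,k)\simeq\operatorname{Ext}_{\Lambda}^{2*}(k,k)\simeq k[y]\] with \(y\) in degree two, and the only conical subvarieties of its maximal ideal spectrum are the origin \(\{\mathfrak{m}_{0}\}\) and the whole affine line. Hence \(V_{H_{4}}(M)\) is the whole line for every nonprojective \(M\in\operatorname{mod}H_{4}\), and in this case Theorem 4.8 reduces to the statement that \(M\otimes N\) is projective if and only if \(M\) or \(N\) is projective.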
2302.04171
Non-Coplanar Magnetic Orders in Classical Square-Kagome Antiferromagnets
Motivated by the recent synthesis of a number of Mott insulating square-kagome materials, we explore the rich phenomenology of frustrated magnetism induced by this lattice geometry, also referred to as the squagome or shuriken lattice. On the classical level, square-kagome antiferromagnets are found to exhibit extensive degeneracies, order-by-disorder, and non-coplanar ordering tendencies, which we discuss for an elementary, classical Heisenberg model with nearest-neighbor and cross-plaquette interactions. Having in mind that upon introducing quantum fluctuations non-coplanar order can melt into chiral quantum spin liquids, we provide detailed information on the multitude of non-coplanar orders, including some which break rotational symmetry (possibly leading to nematic quantum orders), as well as a number of (incommensurate) spin spiral phases. Using extensive numerical simulations, we also discuss the thermodynamic signatures of these phases, which often show multi-step thermal ordering. Our comprehensive discussion of the classical square-kagome Heisenberg model, often drawing comparisons to the conventional kagome antiferromagnet, sets the stage for future explorations of quantum analogs of the various phases, either conceptually such as in quantum spin-1/2 generalizations of our model or experimentally such as in the Cu-based candidate materials.
Martin Gembé, Heinz-Jürgen Schmidt, Ciarán Hickey, Johannes Richter, Yasir Iqbal, Simon Trebst
2023-02-08T16:22:07Z
http://arxiv.org/abs/2302.04171v2
# Non-Coplanar Magnetic Orders in Classical Square-Kagome Antiferromagnets ###### Abstract Motivated by the recent synthesis of a number of Mott insulating square-kagome materials, we explore the rich phenomenology of frustrated magnetism induced by this lattice geometry, also referred to as the squagome or shuriken lattice. On the classical level, square-kagome antiferromagnets are found to exhibit extensive degeneracies, order-by-disorder, and non-coplanar ordering tendencies, which we discuss for an elementary, classical Heisenberg model with nearest-neighbor and cross-plaquette interactions. Having in mind that upon introducing quantum fluctuations non-coplanar order can melt into chiral quantum spin liquids, we provide detailed information on the multitude of non-coplanar orders, including some which break rotational symmetry (possibly leading to nematic quantum orders), as well as a number of (incommensurate) spin spiral phases. Using extensive numerical simulations, we also discuss the thermodynamic signatures of these phases, which often show multi-step thermal ordering. Our comprehensive discussion of the classical square-kagome Heisenberg model, often drawing comparisons to the conventional kagome antiferromagnet, sets the stage for future explorations of quantum analogs of the various phases, either conceptually such as in quantum spin-1/2 generalizations of our model or experimentally such as in the Cu-based candidate materials. ## I Introduction Classical Heisenberg spin models with frustrated interactions are known to host a rich variety of magnetic orders such as collinear, coplanar, or helimagnetic states [1; 2; 3]. Of particular interest are non-Bravais lattices (with more than one atom per unit cell) which offer the possibility of stabilizing _non-coplanar_ magnetic ordering [1; 4]. Such non-coplanar ground states distinguish themselves from other magnetic orders by exhibiting a scalar spin chirality. Notably, the spontaneous breaking of such a \(\mathbb{Z}_{2}\) (chiral) symmetry manifests itself in a finite-temperature phase transition - even in two spatial dimensions [5], while other types of magnetic order are, for the Heisenberg models of interest here, subject to the Mermin-Wagner theorem [6]. The latter implies that, for the thermodynamic limit of infinite system size, any fluctuation-driven phase transition in two spatial dimensions, which only breaks the continuous spin symmetry, occurs at zero temperature. Finite systems (often explored in numerical simulations) will, however, exhibit a thermal crossover to magnetic order at some finite temperature, with the accompanying entropy release leading to a peak in the specific heat. This should be distinguished from cooperative paramagnetic phases [7], which defy magnetic ordering tendencies due to the existence of substantial residual entropies, even at temperatures orders of magnitude below the coupling scales and for infinitely large systems. In classical Heisenberg models, the thermodynamics of such cooperative paramagnetic phases, also referred to as classical spin liquids, is typically signified by a plateau in the specific heat [8]. The two themes, the formation of non-coplanar magnetic order and cooperative paramagnetic phases, are conceptually tied when looking at their quantum mechanical counterparts. By melting non-coplanar magnetic order via quantum fluctuations, e.g. 
If the chiral symmetry breaking present in the parent classical magnetic order were to persist (at some finite temperature scale), one would realize a much-sought-after chiral quantum spin liquid phase [9; 10; 11]. Similarly, the inclusion of quantum fluctuations on cooperative paramagnetic ground states provides another promising route towards realizing unconventional quantum phases such as quantum spin liquids, valence bond crystals [12; 13; 14; 15], or spin and lattice nematics [16]. An ideal playground to explore this physics in experiment has arrived with materials based on the novel square-kagome lattice geometry [17], whose potential to host intricately textured magnetic ground states or quantum spin liquid phases is currently under much investigation [18]. Indeed, no sign of long-range magnetic order down to 50 mK has been observed in the spin \(S=1/2\) Cu\({}^{2+}\) based materials KCu\({}_{6}\)AlBiO\({}_{4}\)(SO\({}_{4}\))\({}_{5}\)Cl [19] and Na\({}_{6}\)Cu\({}_{7}\)BiO\({}_{4}\)(PO\({}_{4}\))\({}_{4}\)[Cl,OH]\({}_{3}\) [20], despite having large negative Curie-Weiss temperatures of \(-237\,\mathrm{K}\) and \(-212\,\mathrm{K}\), respectively. On the other hand, their sister compound KCu\({}_{7}\)(TeO\({}_{4}\))(SO\({}_{4}\))\({}_{5}\)Cl presumably orders into a non-collinear antiferromagnetic structure [21]. In general, the model Hamiltonians for these materials can host up to three symmetry-inequivalent couplings on the three sides of the elementary triangles as well as potential longer-range Heisenberg couplings across the octagonal plaquettes, see Fig. 1, whose presence could be resolved by ab-initio density functional theory calculations. The latter are, in contrast to diagonal square couplings, a key ingredient towards stabilizing non-coplanar magnetic orders on the classical level and, potentially, chiral spin liquids in the quantum realm. This is in a spirit similar to the diagonal couplings across hexagons on the kagome lattice, which are known to yield non-coplanar spin structures dubbed cuboc orders [1; 5; 22; 23]. The details of these non-coplanar states are, by their very nature, rather sensitive to the underlying lattice geometry, with unique features expected for the square-kagome lattice geometry at hand. The purpose of this manuscript is to set the staging ground for future explorations of square-kagome antiferromagnets by providing a comprehensive discussion of their physics in the classical realm and identifying its unique features. To this end, we investigate the ground state and thermodynamics of the classical Heisenberg model on the square-kagome lattice in the presence of nearest-neighbor \((J_{1},J_{2},J_{3})\) couplings as well as cross-plaquette interactions inside the octagons, \(J_{\times}\) and \(J_{+}\), as indicated in Fig. 1. Our analysis is based on extensive classical Monte Carlo simulations and an analytical construction of ground states (beyond the Luttinger-Tisza approach), which is shown to be rendered exact for some orders. As summarized in the phase diagram of Fig. 5 below, we find a rich variety of non-coplanar magnetic orders with cuboctahedral symmetry, including types of cuboc order not found on the kagome lattice (or any other known lattice geometry). In addition, we report a multitude of non-coplanar incommensurate spirals, as well as commensurate coplanar orders.
Exploring the quantum analogs of these phases in the future, either conceptually such as in quantum spin-1/2 generalizations of our model or experimentally such as in the Cu-based candidate materials, might prove fruitful in identifying chiral quantum spin liquids.

## II Nearest-neighbor model

With just nearest-neighbor Heisenberg interactions the square-kagome model shares much of the same physics as the nearest-neighbor kagome Heisenberg model. Below, we briefly summarize some of the known results for the ground states of the square-kagome model that can be inferred from the conventional kagome antiferromagnet. We then move on to discuss its finite-temperature physics and explore critical fluctuations, going beyond what has been studied for the conventional kagome scenario.

### Ground states

The Hamiltonian with nearest-neighbor couplings can be written as \[\mathcal{H}=\sum_{\langle i,j\rangle\in a}J_{a}\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}\,, \tag{1}\] where \(a=\{1,2,3\}\) runs over the three different types of bonds, as in Fig. 1. However, it can be more easily understood by rewriting it as \[\begin{split}\mathcal{H}=\sum_{i,j,k\in\triangle}&\bigg[\frac{1}{2}\left(\sqrt{\frac{J_{1}J_{3}}{J_{2}}}\mathbf{S}_{i}+\sqrt{\frac{J_{1}J_{2}}{J_{3}}}\mathbf{S}_{j}+\sqrt{\frac{J_{2}J_{3}}{J_{1}}}\mathbf{S}_{k}\right)^{2}\\ &-\frac{1}{2}\left(\frac{J_{1}J_{3}}{J_{2}}+\frac{J_{1}J_{2}}{J_{3}}+\frac{J_{2}J_{3}}{J_{1}}\right)\bigg]\,,\end{split} \tag{2}\] where the sum is now over all four types of elementary triangles of the lattice and we have assumed all couplings to be antiferromagnetic, \(J_{a}>0\) (such that all of the arguments of the square roots are positive). The Hamiltonian of the kagome Heisenberg model can be written in the same form, but with a key distinction being the number and nature of elementary triangles that are summed over. The new form of the Hamiltonian allows us to easily construct a special class of classical ground states. These are the states that satisfy the constraint \[\left(\sqrt{\frac{J_{1}J_{3}}{J_{2}}}\mathbf{S}_{i}+\sqrt{\frac{J_{1}J_{2}}{J_{3}}}\mathbf{S}_{j}+\sqrt{\frac{J_{2}J_{3}}{J_{1}}}\mathbf{S}_{k}\right)=0\,, \tag{3}\] where \(\mathbf{S}_{i},\mathbf{S}_{j}\) are the spins on the squares and \(\mathbf{S}_{k}\) is the spin on the bow-ties. To see that such states are indeed ground states, note that the Hamiltonian, for a given set of parameters, is the sum of a squared term, which is non-negative, and a constant term. The minimal possible energy is thus obtained when the squared term is precisely zero, i.e., when the constraint above is satisfied. However, there is the additional constraint that the spins at each site are all properly normalized, \(|\mathbf{S}_{i}|=1\;\forall\,i\). Satisfying both of these constraints is only possible within a restricted region of parameter space, namely \[-2\leq\frac{J_{2}J_{3}}{J_{1}^{2}}-\left(\frac{J_{2}}{J_{3}}+\frac{J_{3}}{J_{2}}\right)\leq+2\,. \tag{4}\]

Figure 1: **Square-kagome lattice and interactions.** The square-kagome lattice, also referred to as the squagome or shuriken lattice in the literature, consists of two sets of topologically distinct sites: square sites and bow-tie sites. Nearest-neighbor interactions are referred to as \(J_{1}\) (square) and \(J_{2}\), \(J_{3}\) (bow-tie), respectively. Additionally, we introduce further-neighbor cross-plaquette \(J_{+}\)-bonds (\(J_{\times}\)-bonds) as indicated in blue (red).
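Condition (4) is nothing but the triangle inequality for the three positive weights multiplying \(\mathbf{S}_{i}\), \(\mathbf{S}_{j}\), \(\mathbf{S}_{k}\) in Eq. (3): three unit vectors with these weights can sum to zero precisely when each weight lies between the difference and the sum of the other two. The following minimal numpy sketch (our illustration, not code from the original study) checks this equivalence numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    J1, J2, J3 = rng.uniform(0.1, 3.0, size=3)
    a, b, c = (np.sqrt(J1 * J3 / J2), np.sqrt(J1 * J2 / J3), np.sqrt(J2 * J3 / J1))
    # constraint (3) is solvable with unit spins iff |a - b| <= c <= a + b
    triangle = abs(a - b) <= c <= a + b
    # condition (4) from the main text
    eq4 = -2 <= J2 * J3 / J1**2 - (J2 / J3 + J3 / J2) <= 2
    assert triangle == eq4
print("Eq. (4) coincides with the triangle inequality for the weights in Eq. (3)")
```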
The unit cell contains six spins, namely the four square sites and two bow-tie sites of a single _shuriken_ star. Thus, within this parameter region, all spin configurations satisfying the constraint (3) on each and every triangle are guaranteed to be bona fide ground-state spin configurations. This includes both globally coplanar and non-coplanar spin configurations as, though the three spins in each triangle are constrained to lie within the same plane, it is not necessary that the planes for different triangles are the same. The resulting unusual and highly degenerate phase is the classical spin liquid indicated in the phase diagram of Fig. 2(a) and previously reported in Ref. [24], which shares the same qualitative physics as the classical spin liquid found in the distorted kagome version of the model [26; 27; 28; 29]. At the isotropic point, \(J_{1}=J_{2}=J_{3}\), the spins in each triangle lie within the same plane at an angle of exactly \(120^{\circ}\) away from one another. On the other hand, away from the isotropic point, e.g. along the diagonal line \(J_{2}=J_{3}\), the coplanar spin configuration for a single triangle obeying the constraint can be written as \[\begin{split}\mathbf{S}_{i}&=(-J_{2}/(2J_{1}),\,-\sqrt{4-(J_{2}/J_{1})^{2}}/2,\,0),\\ \mathbf{S}_{j}&=(-J_{2}/(2J_{1}),\,+\sqrt{4-(J_{2}/J_{1})^{2}}/2,\,0),\\ \mathbf{S}_{k}&=(1,\,0,\,0)\,,\end{split} \tag{5}\] where we have fixed the spins to lie in the \(xy\)-plane for simplicity. As one increases \(J_{2}\) from \(J_{2}=0\) to \(J_{2}=2J_{1}\), the angle \(\theta\) between \(\mathbf{S}_{k}\) (the spin on the bow-tie) and the other two spins (on the squares) increases from \(\pi/2\) to \(\pi\), passing through \(2\pi/3\) exactly at the isotropic point \(J_{2}=J_{1}\) [15]. For globally coplanar spin configurations, which are the relevant configurations at the lowest temperatures (as we will see in the next section), there is, in addition to the constraints already mentioned, one further form of constraint [26]. It is related to how the spins in the triangles around the square and octagonal plaquettes of the lattice are arranged. For globally coplanar ground-state spin configurations, we can define on each triangle a chirality variable, \(\eta_{a}=\pm 1\), which encodes whether the spins rotate clockwise or anti-clockwise as one goes from, say, \(i\) to \(j\) to \(k\) [30] (keep in mind that the angles between the spins in each triangle are fixed, e.g. all three angles are fixed to \(2\pi/3\) at the isotropic point). Now, starting from an initial spin which points in some specific direction, if one travels in a closed loop on the lattice then one must return back to that same initial spin pointing in that same specific direction. However, as one travels along each bond the chirality variables dictate in which direction the spins rotate, and so in order to get back to the same initial spin there is a constraint on the sum of the chirality variables along the closed loop. At the isotropic point, we require \(\sum_{a}\eta_{a}=0\) for the four triangles surrounding a square, and \(\sum_{a}\eta_{a}=0,\pm 6\) for the eight triangles surrounding an octagon. Away from the isotropic point there will in general be more stringent constraints, as the angle between spins is no longer \(2\pi/3\), but instead some angle incommensurate with respect to \(2\pi\). This results, for a given lattice size, in a smaller number of allowed coplanar ground-state spin configurations, just as in the analogous kagome case [26].
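At the isotropic point, these loop constraints are simple enough to enumerate directly; the short sketch below (ours) counts the allowed chirality patterns around the two types of plaquettes:

```python
from itertools import product

# Four triangles surround a square: the isotropic-point constraint is sum = 0
square_ok = [e for e in product((+1, -1), repeat=4) if sum(e) == 0]

# Eight triangles surround an octagon: allowed sums are 0 and +-6
octagon_ok = [e for e in product((+1, -1), repeat=8) if sum(e) in (0, 6, -6)]

print(len(square_ok), "of", 2**4, "patterns around a square")     # 6 of 16
print(len(octagon_ok), "of", 2**8, "patterns around an octagon")  # 86 of 256
```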
Outside of the classical spin liquid region there are two distinct Néel states in the phase diagram of Fig. 2(a), depending on whether \(J_{2}\) or \(J_{3}\) dominates. In each phase, spins are arranged antiferromagnetically along the bonds with the dominant coupling, and ferromagnetically along the weaker bonds. On the other hand, for dominant \(J_{2}=J_{3}\), there is an up-up-down (UUD) state, sometimes also referred to as a Lieb ferrimagnet [31], in which the spins on the squares point along one direction while the spins on the bow-ties point in the opposite direction (this can be seen from Eq. (5) with \(J_{2}=2J_{1}\), the critical point between the classical spin liquid and the UUD state).

Figure 2: **The nearest-neighbor model.** (a) Zero-temperature phase diagram as a function of nearest-neighbor couplings \(J_{2}\) and \(J_{3}\), reproduced from Ref. [24]. (b) Specific heat traces for three representative points along the \(J_{2}=J_{3}\) diagonal in the phase diagram of panel (a). The one for the isotropic point, \(J_{1}=J_{2}=J_{3}=1\), closely resembles the well-studied specific heat trace of the kagome AFM [8; 25]. (c) Specific heat scan across the phase diagram of panel (a) at a fixed, low temperature of \(T=0.04\) (in units of \(J_{1}=1\)).

### Finite-temperature physics

The unusual nature of the classical spin liquid phase is naturally revealed by examining how the specific heat behaves as a function of temperature. For two-dimensional Heisenberg models with finite-range exchange interactions, true long-range magnetic order cannot set in at any non-zero temperature, as laid out in the Mermin-Wagner theorem [6]. However, it is possible for quasi-long-range order to develop, signaled by a peak in the specific heat. Our discussion, in the following, of such quasi-long-range "orders" is based on an analysis of the symmetry of spin-spin correlations at distances shorter than the correlation radius. In the case of non-coplanar, i.e., chiral orders, however, a true thermal phase transition associated with the breaking of discrete symmetries occurs. At the lowest temperatures, one generically expects that, in the thermodynamic limit, the specific heat per site \(c_{V}\to 1\) as \(T\to 0\). This is because each spin is free to fluctuate about its ordered ground state in two orthogonal directions on the unit sphere. These two quadratic modes each contribute \((1/2)\,T\) to the energy, as dictated by classical equipartition, and thus \(1/2\) to the specific heat (setting \(k_{B}=1\)). However, as first discussed in the context of the kagome antiferromagnet [25], this simple counting breaks down within the classical spin liquid region, as well as in an extended finite-temperature fan about the critical lines in the phase diagram. The breakdown is due to the entropic selection of a subset of ground-state spin configurations, those which carry the largest entropy and thus the lowest free energy at finite temperature. Classical fluctuations about this favored subset include one or more zero modes at the harmonic level, which contribute \(1/4\) (i.e. they are quartic modes), rather than \(1/2\), to the specific heat. The deviation of the low-temperature specific heat from \(c_{V}\to 1\) thus serves as a signature of this phenomenon of thermal order-by-disorder. In Fig. 2(b), we show the specific heat as a function of temperature for three special parameter points along the diagonal line \(J_{2}=J_{3}\), with a number of curves in between shown in Fig. 3(a) and (b).
(i) First, starting with \(J_{2}=J_{3}=0\), we have the trivial limit of fully disconnected squares. Alternatively, one can think of this limit as consisting of decoupled four-site periodic Heisenberg chains. The spins order in a simple antiferromagnetic arrangement within each square. There are 8 quadratic modes per square, minus two due to the global rotational symmetry of the antiferromagnetic moment. This leaves us with 6 independent quadratic modes, and a contribution to the specific heat per site of \(c_{V}\rightarrow[6\cdot(1/2)]/6=1/2\) as \(T\to 0\) [32]. (ii) At the isotropic point, there are three distinct regimes, which share the same physics as the isotropic kagome model at finite temperatures [8] (the similarity even extends to the spin-\(1/2\) quantum case [33]). There is the usual high-temperature paramagnetic region, followed by a cooperative paramagnetic regime and finally a coplanar state at the lowest temperatures. These three regimes can be observed throughout the classical spin liquid phase. The cooperative paramagnet is clearly distinguished by a plateau in the specific heat with \(c_{V}\approx 1\). Within this temperature window, the system fluctuates between the full (extensive) number of states within the ground-state manifold that satisfy the constraint in Eq. (3). At lower temperatures, within the coplanar states, fluctuations select the subset of globally coplanar states within the ground-state manifold via the entropy-driven order-by-disorder mechanism. This is accompanied by \(c_{V}\rightarrow[10\cdot(1/2)+2\cdot(1/4)]/6=11/12\) as \(T\to 0\), due to the presence of one zero mode per triangle (thus two zero modes per unit cell) within the spectrum of classical harmonic fluctuations (very similar to the conventional kagome case [25]). (iii) At \(J_{2}=2J_{1}\) we have the transition between the classical spin liquid and the UUD state. Precisely at this critical point an additional zero mode per triangle leads to \(c_{V}\rightarrow[8\cdot(1/2)+4\cdot(1/4)]/6=10/12\) as \(T\to 0\). The evolution of the specific heat between these three special points, as one increases \(J_{2}=J_{3}\), is shown in Fig. 3. Starting from the trivial limit, \(J_{2}=J_{3}=0\), the width of the plateau that develops at \(c_{V}\approx 1/2\) shrinks as the interaction scale that couples squares, \(J_{2}=J_{3}\), grows. At low temperatures, the two characteristic temperature regimes emerge, with the cooperative paramagnet and coplanar regimes giving rise to plateaus at \(c_{V}=1\) and \(11/12\), respectively. The \(1/2\)-feature disappears completely at the isotropic point.

Figure 3: **The nearest-neighbor model II.** (a) Specific heat traces for \(0.0\leq J_{2}=J_{3}\leq 1.0\) (in steps of 0.2). (b) Specific heat traces for \(1.0\leq J_{2}=J_{3}\leq 2.0\). Of special interest is the curve for \(J_{2}=J_{3}=2.0\), which is the only one that converges to a value of \(c_{V}=10/12\). (c) Finite-temperature specific heat along the diagonal \(J_{2}=J_{3}\) illustrating the \(c_{V}=1\) plateau of the cooperative paramagnet (dark red area), the coplanar phase with \(c_{V}=11/12\) (light red area), the critical fan with \(c_{V}\leq 10/12\) (grey area), and the paramagnetic phase (black region).
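All plateau values quoted in (i)-(iii) follow from the same equipartition bookkeeping; a small helper function (ours, using exact arithmetic) makes the counting explicit:

```python
from fractions import Fraction

def cv_limit(n_quadratic, n_quartic, n_sites):
    """T -> 0 specific heat per site from mode counting:
    quadratic modes contribute 1/2 each, quartic (zero) modes 1/4 each."""
    return (n_quadratic * Fraction(1, 2) + n_quartic * Fraction(1, 4)) / n_sites

print(cv_limit(6, 0, 6))    # decoupled squares: 1/2
print(cv_limit(10, 2, 6))   # coplanar regime at the isotropic point: 11/12
print(cv_limit(8, 4, 6))    # critical point J2 = 2 J1: 5/6 (= 10/12)
print(cv_limit(11, 1, 6))   # one zero mode per unit cell (cf. Sec. III.1): 23/24
```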
As one moves past the isotropic point, the width of the cooperative paramagnet plateau starts to decrease, until it eventually disappears completely at the critical point \(J_{2}=2J_{1}\). At the same time, in the temperature regime above the cooperative paramagnet, there is a substantial drop in the specific heat as a result of increasing fluctuations on approaching the critical point (which damps the entropy loss in this temperature regime and thus the specific heat). This can be more clearly seen in Fig. 3(c), where the drop manifests itself as a _finite-temperature fan_ emanating from the zero-temperature critical point, reminiscent of the fan-like structure at quantum critical points [34].

Figure 4: **Structure factors of the \(J_{1}\)-\(J_{2}\)-\(J_{3}\) model.** Structure factors for a system with 864 spins (\(L=12\)) for different temperatures between \(T=0.003\) and \(T=0.2\) (left to right) and for different values of \(J_{2}\) and \(J_{3}\) as indicated in the phase diagram in the left column (top to bottom). The squares shown extend in reciprocal space from \(-4\pi\) to \(4\pi\) in both dimensions. Rows (a) to (c) show structure factors on the diagonal \(J_{2}=J_{3}\) within the classical spin liquid regime, especially for the isotropic point \(J_{1}=J_{2}=J_{3}=1.0\) in row (b), which agrees well with the result obtained via large-\(N\) analysis in [13]. Rows (d) and (e) likewise show structure factors in the classical spin liquid regime, but off the diagonal with different values for \(J_{2}\) and \(J_{3}\). All structure factors within the classical spin liquid regime show sharp pinch-point-like features upon entering the cooperative paramagnetic regime at around \(T=0.01\), which broaden with increasing temperature. The statistical noise at lower temperatures is due to freezing in Monte Carlo sampling. The lower three rows (f) to (h) display structure factors outside the classical spin liquid regime, namely directly on the transition line to the UUD phase, \(J_{2}=J_{3}=2.0\), in (f), within the UUD phase, \(J_{2}=J_{3}=2.5\), in (g), and in the \(J_{1}\)-\(J_{3}\) Néel ordered phase, \(J_{2}=0.25\) and \(J_{3}=2.5\), in (h).

### Spin-spin correlations

To explore the formation of quasi-long-range order and cooperative paramagnetic phases in the absence of any true thermal phase transitions, we turn to the static spin structure factor, i.e. the Fourier transform of the equal-time real-space spin-spin correlations. Sharp peak-like features indicate the formation of quasi-long-range order, while the tell-tale signatures of cooperative paramagnets are pinch points in momentum space, which map to algebraically decaying correlations in real space [35]. In Fig. 4, we provide a comprehensive overview of the static structure factor over a wide range of temperatures and parameter points. In some cases, such as rows (g) and (h), the ground state is quasi-long-range-ordered and sharp peaks are clearly visible at the corresponding ordering wavevectors. In other cases, i.e. within the classical spin liquid parameter region, we observe a complex redistribution of weight as the system passes through the three distinct finite-temperature regimes. For the isotropic point (\(J_{2}=J_{3}=1.0\)), the \(S(\mathbf{q})\) at the highest temperature shown in Fig. 4(b) only displays a broad diffuse profile corresponding to a weakly correlated thermal paramagnet.
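For reference, the quantity plotted in Fig. 4 can be estimated from sampled spin configurations as \(S(\mathbf{q})=\frac{1}{N}\sum_{i,j}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\,e^{i\mathbf{q}\cdot(\mathbf{r}_{i}-\mathbf{r}_{j})}\); the numpy sketch below (our notation, single configuration, no thermal averaging) evaluates it via one Fourier amplitude per spin component:

```python
import numpy as np

def structure_factor(positions, spins, qs):
    """S(q) = (1/N) sum_ij S_i . S_j exp(i q.(r_i - r_j)) for one configuration,
    computed as sum_a |F_a(q)|^2 / N with F_a(q) = sum_i S_i^a exp(i q.r_i)."""
    n = len(spins)
    phases = np.exp(1j * positions @ qs.T)        # shape (N, n_q)
    F = spins.T.astype(complex) @ phases          # shape (3, n_q)
    return np.sum(np.abs(F) ** 2, axis=0) / n     # shape (n_q,)

# toy usage: random spins on a small patch, q on a grid from -4 pi to 4 pi
rng = np.random.default_rng(0)
pos = rng.random((100, 2)) * 5
spn = rng.normal(size=(100, 3))
spn /= np.linalg.norm(spn, axis=1, keepdims=True)
qx = np.linspace(-4 * np.pi, 4 * np.pi, 41)
qs = np.array([(x, y) for x in qx for y in qx])
Sq = structure_factor(pos, spn, qs)
```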
Inside the cooperative paramagnetic regime, at \(T=0.1\) and \(0.05\), pinch points (with a finite width set by the inverse correlation length) appear between the square- and lobe-shaped regions of stronger relative intensity. Their presence signifies the approximate fulfilment of the \(\mathbf{S}_{\triangle}=0\) constraint of Eq. (3) and thus the onset of strong correlations between spins within triangular plaquettes. Upon cooling further, at \(T=0.03\) and \(0.02\), the intensity is redistributed to the centers of the squares and lobes located at \(\mathbf{q}=(4\pi,0)\) and \(\mathbf{q}=(2\pi,2\pi)\) (and symmetry-related points), respectively. Since one is still in the cooperative paramagnetic regime, characterized by a dipolar \(\sim 1/r^{2}\) decay of spin-spin correlations (at distances small compared to the correlation length), these high-intensity features do not correspond to Bragg peaks. Finally, once order-by-disorder kicks in at \(T\lesssim 0.01\), selecting coplanar states, we notice the disappearance of spectral weight at the location of the pinch points as well as the absence of narrow necks connecting the squares with the lobes. The presence of well-defined maxima at the aforementioned points indicates enhanced correlations of the \(120^{\circ}\), \(\mathbf{q}=\mathbf{0}\) type of order [13; 36], in contrast to the conventional kagome antiferromagnet, which favors \(\sqrt{3}\times\sqrt{3}\) correlations [37; 38; 39]. While the kagome antiferromagnet develops long-range dipolar magnetic order of the \(\sqrt{3}\times\sqrt{3}\) type in the limit \(T\to 0\) [39], it remains to be established on the square-kagome lattice whether true long-range \(\mathbf{q}=\mathbf{0}\) dipolar ordering of the \(120^{\circ}\) type asymptotically develops as \(T\to 0\). Away from the isotropic point, but still on the line \(J_{2}=J_{3}\), we first note that the cooperative paramagnetic and coplanar temperature regimes are pushed down to smaller \(T\) and shrink in extent [see Fig. 3]. At \(J_{2}=J_{3}=0.5\), within the cooperative paramagnetic regime (\(0.008\lesssim T\lesssim 0.05\)), the pinch points are seen to be present, with the principal spectral weight at the center of the lobes, \(\mathbf{q}=(2\pi,2\pi)\) (and symmetry-related points), which progressively increases, together with a relatively weaker signal at the centers of the squares, \(\mathbf{q}=(4\pi,0)\) (and symmetry-related points) [see Fig. 4(a)]. However, upon entering the coplanar regime, at \(T=0.003\), an equally strong maximum develops at the \(\mathbf{q}=(4\pi,0)\)-type points, but in contrast to the isotropic point, these are not indicative of enhanced \(\mathbf{q}=\mathbf{0}\) correlations of the \(120^{\circ}\) type. Similar observations hold true at \(J_{2}=J_{3}=1.5\) [see Fig. 4(c)], with the noticeable difference being the presence of a finite spectral weight at the Brillouin zone center, more pronounced in the intermediate-temperature, i.e., cooperative paramagnetic, regime. Moving away from the symmetric line, i.e., \(J_{2}\neq J_{3}\) but still inside the degenerate manifold region, one observes that the \(S(\mathbf{q})\) are no longer four-fold rotationally invariant, possibly reflective of the underlying symmetries of the incipient magnetic order in the limit \(T\to 0\). Upon cooling, a progressive redistribution of spectral weight occurs, leading to the appearance of new soft maxima in the coplanar regime at \((J_{2},J_{3})=(1.0,2.5)\) [see Fig. 4(d)], while at \((J_{2},J_{3})=(1.7,0.8)\) [see Fig. 4(e)], interestingly, a similar intensity distribution prevails across all temperatures.
Finally, inside the magnetically ordered regions, the \(S(\mathbf{q})\) become more sharply peaked, as expected, and, interestingly, the \(S(\mathbf{q})\) at \((J_{2},J_{3})=(2.0,2.5)\) [see Fig. 4(f)] and \((J_{2},J_{3})=(2.5,2.5)\) [see Fig. 4(g)] resemble those inside the disordered regime at \((J_{2},J_{3})=(1.5,1.5)\) seen in Fig. 4(c) along the \(J_{2}=J_{3}\) axis, as if pre-empting the UUD order which sets in for \(J_{2}=J_{3}\geq 2.0\). Inside the Néel phase at \((J_{2},J_{3})=(0.25,2.5)\), in Fig. 4(h), there is no spectral weight at the center of the Brillouin zone, as expected, and instead we find dominant peaks at the \((2\pi,2\pi)\) (and symmetry-related) points, and subdominant peaks at \((2\pi,0)\) (and symmetry-related) points.

## III Octagon-plaquette interactions

We now augment the nearest-neighbor model, discussed in the previous section, with the cross octagon-plaquette interactions \(J_{+}\) and \(J_{\times}\) (cf. Fig. 1), which microscopically arise upon the inclusion of longer-range Heisenberg couplings. Conceptually, they are interesting as they are expected to stabilize non-coplanar magnetic orders, akin to the cross-hexagonal interactions in the conventional kagome case, and in distinction to the square-diagonal couplings. Indeed, we find that the cross octagon-plaquette interactions induce a plethora of non-coplanar orders, as summarized in the global phase diagram of Fig. 5. One can, in fact, distinguish _ten_ different phases, indicated by the different colors in the phase diagram, as one varies the relative coupling strength of the two couplings, \(J_{+}\) and \(J_{\times}\), starting from an isotropic nearest-neighbor model (i.e. for fixed \(J_{1}=J_{2}=J_{3}=1\)). The distinct nature of these phases can be easily visualized by representative common origin plots for each phase, i.e. the collapse of an extended ground-state real-space configuration of spins to a single unit sphere by placing all spins at a joint origin. These are shown around the phase diagram. In addition, we show their respective spin structure factors. Let us briefly go through these phases here, before providing a much more detailed description in the remainder of this Section. The only phases with coplanar order come in the form of a \(120^{\circ}\) ordered phase (VI) in the lower left quadrant, i.e. for ferromagnetic \(J_{+}\) and \(J_{\times}\) (indicated by dark gray in Fig. 5), as well as a distorted version of this \(120^{\circ}\) order (VII) for slightly positive \(J_{\times}\) (indicated by light gray in the phase diagram). In the plain-vanilla \(120^{\circ}\) ordered phase, spins on elementary triangles form mutual angles of \(2\pi/3\), while this angle is increased beyond \(2\pi/3\) in the distorted phase, as discussed in Sec. III.2. All other phases of the phase diagram exhibit non-coplanar order, which comes in commensurate and incommensurate forms. The simpler, commensurate variant of such non-coplanar order is the extended phase with cuboctahedral order (I) in the upper right quadrant and a distorted version of this (II), which we discuss in depth in Sec. III.3. Somewhat more complex non-coplanar orders come in the form of incommensurate spin spiral order, which we find for the remaining six phases (III-V and VIII-X). Remarkably, however, these can still be described by a semi-analytical ground-state construction, which we describe in Sec. III.4.
### Physics on the axes

But before we delve into the various magnetic orders that can be stabilized by the combined effects of cross octagon-plaquette interactions, we first consider their exclusive effect, i.e. we consider the horizontal and vertical axes in the middle of the phase diagram of Fig. 5. When adding _only one_ of the two cross octagon-plaquette interactions (\(J_{\times}\) or \(J_{+}\)), it turns out that it is still possible to locally satisfy the constraint of Eq. (3), provided that the added interaction is ferromagnetic in nature. For FM \(J_{+}\), the spins located on the bow-ties become locked together and are all ferromagnetically aligned, meaning that \(\mathbf{S}_{k}\) in the constraint (3) is fixed in each and every triangle to be the same. Denoting this fixed direction by \(\mathbf{M}\), the remaining two spins (located on the squares) are thus subject to the local constraint on each triangle \[\mathbf{S}_{i}+\mathbf{S}_{j}=-\mathbf{M}. \tag{6}\] This results in a global spin configuration in which \(1/3\) of the spins point along \(\mathbf{M}\) while the other \(2/3\) point along a ring at an angle of \(2\pi/3\) away from \(\mathbf{M}\). This removes the possibility of having an extensive number of both globally coplanar and globally non-coplanar ground states, and thus there are only two distinct finite-temperature regimes; entropic selection of globally coplanar configurations would simply result in a regular \(120^{\circ}\) ordered state. At high temperatures there is the usual paramagnetic regime, and at low temperatures a crossover into the ground-state manifold with an accompanying specific heat \(c_{V}\to 11/12\) as \(T\to 0\), due to the continued existence of one zero mode per triangle. This is precisely what we find in finite-temperature Monte Carlo simulations, as shown in Fig. 6.

Figure 5: **Phase diagram in the presence of cross-octagonal couplings.** At the center we show the phase diagram with ten different phases (labeled I-X) as a function of \(J_{+}\) and \(J_{\times}\) for a nearest-neighbor model with isotropic, antiferromagnetic interactions, i.e. \(J_{1}=J_{2}=J_{3}=1\). (A companion phase diagram for ferromagnetic nearest-neighbor couplings is shown in Fig. 15 below.) Besides indicating the phase boundaries (solid, dashed and dotted lines), the phases are described by symmetry (with \(D_{n}\) referring to the dihedral group of order \(n\) and \(O_{h}\) to full octahedral symmetry), coplanarity, and magnetization of their ground states. As a quick visualization of their distinct nature, we show common origin plots and spin structure factors for each phase left and right around the phase diagram. The only coplanar orders are found in the form of \(120^{\circ}\) order (VI) in the lower left quadrant, i.e. ferromagnetic \(J_{+}\) and \(J_{\times}\) (darker gray phase), as well as a distorted version of this (VII) for slightly positive \(J_{\times}\) (lighter gray phase). All other phases are non-coplanar, which besides a rigid phase with cuboctahedral order (I) in the upper right quadrant and a distorted version of this (II) includes a variety of different spirals (III-V and VIII-X). Phase boundaries of special interest are found upon exiting the coplanar \(120^{\circ}\) order, indicated by the dashed and dotted lines, along which the low-temperature specific heat \(c_{V}(T\to 0)\) takes values of \(11/12\) and \(23/24\), respectively (cf. Fig. 6).
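The finite-temperature data discussed here and below come from classical Monte Carlo simulations; as a point of reference, a bare-bones single-spin Metropolis sweep for a Heisenberg Hamiltonian \(\mathcal{H}=\sum_{\langle i,j\rangle}J_{ij}\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}\) can be sketched as follows (our simplified illustration; the production runs behind the figures are more elaborate and, at the lowest temperatures, are combined with parallel tempering, cf. Sec. III.4):

```python
import numpy as np

def random_unit_vector(rng):
    # uniform point on the unit sphere (normalized Gaussian triple)
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def metropolis_sweep(spins, neighbors, couplings, T, rng):
    """One sweep of single-spin Metropolis updates for
    H = sum_<ij> J_ij S_i . S_j (couplings[i][k] pairs with neighbors[i][k])."""
    for i in rng.permutation(len(spins)):
        # local exchange field h_i = sum_j J_ij S_j; the energy of spin i is S_i . h_i
        h = np.zeros(3)
        for j, J in zip(neighbors[i], couplings[i]):
            h += J * spins[j]
        proposal = random_unit_vector(rng)
        dE = np.dot(proposal - spins[i], h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] = proposal
    return spins
```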
For FM \(J_{\times}\), the spins within each octagon coupled by \(J_{\times}\) become locally ferromagnetically aligned, but the spins from one octagon to the next are not. This locks neighboring triangles together, resulting in one zero mode per unit cell, as opposed to one zero mode per triangle, and thus a low-temperature specific heat \(c_{V}\rightarrow[11\cdot(1/2)+1\cdot(1/4)]/6=23/24\) as \(T\to 0\). This, again, is in perfect agreement with finite-temperature Monte Carlo simulations, as shown in Fig. 6. Compared to the spin structure factors of the nearest-neighbor model, summarized in Fig. 4, the spin-spin correlations discussed above lead to a deformation of \(S(\mathbf{q})\), as shown in Fig. 7. The \(S(\mathbf{q})\) for (\(J_{+}=-1.0\), \(J_{\times}=0.0\)) show maxima at positions where the Bragg peaks of the incipient \(\mathbf{q}=\mathbf{0}\) order of the \(120^{\circ}\) type would show up as \(T\to 0\) [13]. In contrast, for (\(J_{+}=0.0\), \(J_{\times}=-1.0\)), the \(S(\mathbf{q})\) display maxima at the expected locations for \(\sqrt{3}\times\sqrt{3}\) order [13].

### Coplanar \(120^{\circ}\) order

Turning to the magnetically ordered states of our phase diagram in Fig. 5, we start with the coplanar \(120^{\circ}\) order found in the lower left quadrant, where both cross octagon-plaquette couplings are ferromagnetic. For the square-kagome lattice geometry at hand, one can, in principle, distinguish two different types of \(120^{\circ}\) order, each with three sublattices where the spins on each sublattice point to a different corner of an equilateral triangle such that each neighboring pair of spins forms an angle of \(2\pi/3\). As illustrated in Fig. 8, the magnetic unit cell contains either 12 sites, two times larger than the geometric unit cell, for the type shown in Fig. 8(a), or 24 sites for the type in Fig. 8(d), with the difference between the two types coming down to the bow-tie spins. While these bow-tie spins are all pointing in the same direction in the order of Fig. 8(a), forming a bow-tie ferromagnet, the bow-tie spins in the second type of \(120^{\circ}\) order form stripy antiferromagnets with alternating rows/columns of up- and down-pointing spins, as shown in Fig. 8(d). A schematic of the static spin structure factors corresponding to these two types of real-space \(120^{\circ}\) order is shown for an extended Brillouin zone of the square-kagome lattice in Fig. 8(c) and (f), respectively. Both spin configurations exhibit the symmetry of the dihedral group \(D_{3}\).

Figure 7: **Structure factors of the extended model on the axes.** Structure factors for different temperatures between \(T=0.003\) and \(T=0.1\) (left to right) on the axes (\(J_{+}=-1.0\), \(J_{\times}=0.0\) on the top and \(J_{+}=0.0\), \(J_{\times}=-1.0\) on the bottom). Note the sharp maxima at \((-4\pi,0)\) and symmetry-related momenta in the top row (indicated by a red circle). The squares shown extend in reciprocal space from \(-4\pi\) to \(4\pi\) in both dimensions. For \(T=0.1\), the first Brillouin zone and the extended Brillouin zone are indicated.

Figure 6: **Specific heat and ground states of the extended model on the axes.** With only \(J_{+}\) or only \(J_{\times}\) turned on and ferromagnetic, one again finds special values of \(c_{V}(T\to 0)\) that differ from 1. For \(J_{\times}=0.0\), \(J_{+}<0.0\), the value is \(11/12\) (cf. purple curve with \(J_{+}=-1.0\)), and for \(J_{+}=0.0\), \(J_{\times}<0.0\), we find a value of \(23/24\) (cf. mint curve with \(J_{\times}=-1.0\)). The upper insets show common origin plots of the corresponding ground states.
Their net magnetization vanishes, \(m=0\), and the energy per site can be calculated as \[E_{120^{\circ}}=-1+\frac{1}{3}(J_{+}+J_{\times})\,, \tag{7}\] for varying strengths of the two couplings. To elucidate the thermodynamics associated with these \(120^{\circ}\) orders we show, in Fig. 9, specific heat traces for the point in the lower left corner, \(J_{+}=J_{\times}=-1\) (i.e. deep in the phase), for different system sizes between \(L=8\) and \(L=32\). Next to a sharp peak at \(T=0.27(3)\), there is a second, more subtle feature slightly above, at \(T=0.33(2)\). This smaller bump can, in fact, be associated with the build-up of quasi-long-range \(120^{\circ}\) order, but without selecting one of the two types of \(120^{\circ}\) order. Accordingly, the structure factor for the regime between the two features in the specific heat \(c_{V}\), e.g. for \(T=0.3\), can be obtained by averaging the real-space correlations of the two possible \(120^{\circ}\) order arrangements. The net magnetization of the bow-tie spins, shown in the lower panel of Fig. 9, also builds up at this higher-temperature feature. The lower-temperature peak at \(T=0.27(3)\) then corresponds to the spontaneous selection of one of the two types of \(120^{\circ}\) order, resulting in a sharp feature in the specific heat. Notably, both types of order appear with the same probability, as signified by the bow-tie magnetization. For small enough temperatures and large enough system sizes a bimodal distribution around \(m_{\text{bow-tie}}=1\) and \(m_{\text{bow-tie}}=0.5\) is sampled, whereas for higher temperatures and smaller system sizes the Monte Carlo sampling is able to overcome the free-energy barrier between these two states and samples them in an ergodic fashion, resulting in a net bow-tie magnetization of \(m_{\text{bow-tie}}=0.75\). A deformed version of the \(120^{\circ}\) order, henceforth termed \(120^{\circ}\)-d, exists in a small region touching the conventional \(120^{\circ}\)-order phase in the upper left quadrant of the phase diagram, indicated as phase VII in Fig. 5. Its deformation (two of the three mutual angles between neighboring spins take a value of \(\alpha>2\pi/3\), while the third one becomes smaller than \(2\pi/3\)) can be seen in the common origin plot of Fig. 5. It can be derived by elementary geometric considerations: for instance, let the angle between the blue and the red sublattices and between the blue and the green sublattices shown in Fig. 8 increase to \(\alpha>2\pi/3\), while the third angle, between the red and the green sublattices, becomes \(2\pi-2\alpha<2\pi/3\). The angle \(\alpha\) is found to vary as \(\cos\alpha=-(2+J_{\times})^{-1}\) and the ground-state energy per site takes the value \[E_{120^{\circ}\text{-d}}=-\frac{6+J_{\times}^{2}-2J_{+}-J_{\times}(J_{+}-4)}{3(J_{\times}+2)}\,. \tag{8}\] The ground-state symmetry is reduced to that of the dihedral group \(D_{1}\), and the state has a non-zero, yet small, magnetization \(m_{120^{\circ}\text{-d}}=\frac{J_{\times}}{3(2+J_{\times})}\), which only depends on the value of \(J_{\times}\) (see also Fig. 20 in Appendix C).

Figure 8: **120\({}^{\circ}\) orders.** (a)+(d) Real-space arrangement of spins in two different coplanar \(120^{\circ}\) ordered states. There are each three sublattices (each one corresponding to one color), with a 12-site magnetic unit cell (large gray square), which is two times larger than the geometric unit cell (small square), and a 24-site magnetic unit cell, respectively. The magnetization of the bow-tie spins, \(m_{\text{bow-tie}}\), is \(1.0\) for arrangement (a) and \(0.5\) for arrangement (d). (b)+(e) Each sublattice of spins points towards a different corner of an equilateral triangle such that each neighboring pair of spins forms an angle of \(2\pi/3\). (c)+(f) First and extended Brillouin zones of the square-kagome lattice showing the positions of the corresponding dominant and subdominant Bragg peaks. In (f), the ratio of the weight of the subdominant peaks \(\lambda\) to the weight of the dominant peaks \(\Lambda\) is \(\lambda/\Lambda\approx 85\%\).
Figure 9: **Thermodynamics of 120\({}^{\circ}\) order.** (top) The specific heat of the \(120^{\circ}\) phase shows a sharp feature at \(T=0.27(3)\) and a subtle bump slightly above, at \(T=0.33(2)\). The subtle bump can be associated with the build-up of \(120^{\circ}\) order, whereas at the sharp feature a specific \(120^{\circ}\) order is selected. In the example shown, the structure factor at \(T=0.1\) (inset) coincides with the analytical structure factor in Fig. 8(f), whereas the intermediate structure factor can be obtained by averaging the real-space correlations of all possible \(120^{\circ}\) orders, i.e. both shown in Fig. 8 plus all possible rearrangements with the same order. The bottom panel shows the magnetization of the bow-tie spins. The displayed bow-tie magnetization starts to build up right at the bump-like feature slightly above \(T=0.3\). Below \(T=0.3\), at the sharp peak in the specific heat, the bow-tie magnetization splits up into different branches that converge to the values of \(m_{\text{bow-tie}}=0.5\) and \(m_{\text{bow-tie}}=1.0\), respectively.

### Cuboc order

Let us now turn to the first instance of non-coplanar order and consider the cuboctahedral (cuboc) order found in the upper right quadrant of the phase diagram, where both cross octagon-plaquette couplings are antiferromagnetic. In real space, this order is described by 12 sublattices where the spins on each sublattice point towards a different corner of a cuboctahedron, as illustrated in Fig. 10(b). The underlying magnetic unit cell contains 24 sites and is thus four times larger than the geometric unit cell, see Fig. 10(a). All neighboring spins form an angle of \(2\pi/3\), which corresponds to the _cuboc1_ state discussed for the kagome lattice with cross-hexagonal couplings [1; 22]. Note that there are eight possible ways to arrange cuboc1 order on the square-kagome lattice, as illustrated in Fig. 17 of Appendix A, each of which breaks \(C_{4}\) rotation symmetry. The remaining symmetry of this ordered state is the full octahedral symmetry group \(O_{h}\). It has zero magnetization, \(m=0\), and its energy per site is given by \[E_{\text{cuboc}}=-1-\frac{1}{3}(J_{+}+J_{\times})\,. \tag{9}\] The real-space correlations of the cuboc1 order give rise to a spin structure factor as schematically visualized for one of the eight possible real-space configurations in Fig. 10(c), which shows the positions of the corresponding Bragg peaks within the extended Brillouin zone of the square-kagome lattice together with the associated fraction of total spectral weight. Averaging over the real-space correlations of all eight possible cuboc1 arrangements restores \(C_{4}\) symmetry in the structure factor (cf. Appendix A).
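The cuboctahedral geometry behind this order is easy to inspect directly: the twelve sublattice directions are the normalized vertices of a cuboctahedron, i.e. the permutations of \((\pm 1,\pm 1,0)/\sqrt{2}\). The sketch below (ours) lists the mutual angles available between these directions, among them the \(2\pi/3\) realized by neighboring spins in cuboc1:

```python
import numpy as np
from itertools import permutations, product

# the 12 vertices of a cuboctahedron: all permutations of (+-1, +-1, 0)
verts = {tuple(p) for s1, s2 in product((1, -1), repeat=2)
         for p in permutations((s1, s2, 0))}
verts = np.array(sorted(verts)) / np.sqrt(2)  # normalize to unit spins
assert len(verts) == 12

# mutual angles between distinct vertex directions, in units of pi
dots = np.clip(verts @ verts.T, -1, 1)
angles = np.unique(np.round(np.arccos(dots[np.triu_indices(12, k=1)]) / np.pi, 6))
print(angles)  # 1/3, 1/2, 2/3, 1 -> i.e. pi/3, pi/2, 2*pi/3, pi
```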
Turning to the thermodynamics of the cuboc phase, we show, in Fig. 11, the specific heat, the cuboc1 order parameter \(\mathcal{O}\), and its associated susceptibility \(\chi_{\mathcal{O}}\) (both introduced in Appendix A) for different system sizes between \(L=8\) and \(L=32\). The specific heat displays a clearly visible double-peak structure which, similar to our discussion of the coplanar \(120^{\circ}\) order, can be rationalized by the coexistence of multiple possible cuboc1 arrangements on the square-kagome lattice: At the high-temperature peak in \(c_{V}\) (at \(T=0.371\)), the system builds up cuboc1 order, but does not select a specific arrangement out of the eight possible realizations, as can be seen from the \(C_{4}\)-symmetric structure factor in the inset. At the low-temperature peak (at \(T=0.256\)) a specific cuboc1 order is then spontaneously selected. At this temperature, the order parameter \(\mathcal{O}\) (Eq. A1), which takes values of \(\pm 1\) for different specific cuboc1 arrangements, builds up, and the corresponding susceptibility \(\chi_{\mathcal{O}}\) (Eq. A2) diverges, as shown in the lowest panel of Fig. 11.

Figure 11: **Thermodynamics of cuboc1 phase.** The specific heat (top panel) of the cuboc1 phase displays a double-peak structure. The high-temperature peak at \(T=0.371\) can be associated with the initial build-up of coexisting cuboc1 orders. At the low-temperature peak (at \(T=0.256\)), one specific realization of cuboc1 order is selected (cf. Appendix A). This can be seen from the structure factors (inset). While the structure factor at \(T=0.1\) corresponds to one specific realization of cuboc1 order, the intermediate structure factor at \(T=0.3\) coincides with the analytical structure factor of the superposition of all possible cuboc1 order realizations (see also Fig. 17). Middle and bottom panels show the cuboc1 order parameter \(\mathcal{O}\) (Eq. A1), and the corresponding susceptibility \(\chi_{\mathcal{O}}\) (Eq. A2). The lower inset shows how \(\mathcal{O}_{ijkl}\) is calculated on a single skew bow-tie. This quantity is then averaged over all skew bow-ties in order to calculate \(\mathcal{O}\).

Figure 10: **Cuboctahedral order _cuboc1_.** (a) Real-space arrangement of spins in a cuboctahedral (cuboc) ordered state. There are 12 sublattices (each one corresponding to one color) with a 24-site magnetic unit cell (large gray square), which is four times larger than the geometric unit cell (small square). (b) Each sublattice of spins points towards a different corner of a cuboctahedron such that each neighboring pair of spins forms an angle of \(2\pi/3\). This order corresponds to the _cuboc1_ state in [1]. (c) First and extended Brillouin zones of the square-kagome lattice showing the positions and the fractions of total spectral weight of the corresponding Bragg peaks. The order breaks \(C_{4}\) symmetry.

A deformed version of cuboc1 order (denoted as phase II in our phase diagram), termed cuboc-d, extends to a part of the lower right quadrant of the phase diagram of Fig. 5 with \(J_{\times}<0\) and \(J_{+}\geq 0\); it can be derived by applying the generalized Luttinger-Tisza method of Ref. [40], see also the Mathematica files in the Supplement [41]. In this cuboc-d phase, the antipodal squares of the cuboctahedron are deformed into non-coplanar closed polygon chains with equal edges, while the square in the equatorial plane is deformed into a coplanar rectangle.
Its symmetry is thereby reduced to that of the dihedral group \(D_{4}^{s}\), where the superscript \(s\) denotes an additional mirror symmetry on the \(xy\)-plane, i.e. \(s=\text{diag}(1,1,-1)\). The ground-state energy per site can be calculated to be \[E_{\text{cuboc-d}}=\frac{3+J_{\times}^{2}+J_{+}-J_{\times}(J_{+}+3)}{3(J_{\times}-1)}. \tag{10}\] The cuboc-d ground state still has zero magnetization, \(m=0\).

### Spiral orders

The upper left and lower right quadrants of our phase diagram are occupied by spin spiral phases, coming in the form of six different variants (labeled III-V and VIII-X, respectively). The complexity of these incommensurate, non-coplanar orders becomes immediately clear when looking at their common origin plots, whose intricate patterns point to magnetic unit cells of hundreds of spins. This renders any direct analytical description of these phases rather elusive, but it turns out that one can, in fact, deduce a _semi-analytical description_ of these phases from low-temperature numerical simulation data. As we will discuss below, this approach provides us with a symmetry-optimized description of these spin spirals, including explicit expressions for their ground-state energy as a function of the coupling parameters. The latter then allows us to establish sharp phase boundaries between these complex spin spiral phases, as depicted in the phase diagram of Fig. 5.

#### Semi-analytical approach

The starting point of our semi-analytical approach is numerical data in the form of a common origin plot of the \(N\) spin vectors of a ground-state spin configuration sampled in Monte Carlo simulations at ultra-low temperatures \(T=10^{-4}\), typically explored in conjunction with a parallel tempering scheme. Example input for the spin spiral phase III in the lower right quadrant is shown on the left in the schematic illustration of Fig. 12. In a second step, we then identify a smaller number of \(M<N\) unique spin vectors by grouping spins in the initial common origin plot that point approximately in the same direction, see the middle panel of Fig. 12 (and the sketch below). In practice, we say that two spins \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\) point approximately in the same direction if \(\mathbf{S}_{i}\cdot\mathbf{S}_{j}\geq\gamma\), where the exact value of \(\gamma\) slightly varies from case to case, but typically \(\gamma\approx 0.995\). For these \(M\) spin directions we then identify all possible symmetries, which allows us to further reduce the number of unique spin vectors to \(K<M\). A symmetry in the aforementioned sense is a tuple \((R,\pi)\) with \(R\in O(3)\) and \(\pi\in S_{M}\), where \(S_{M}\) is the permutation group of \(M\) elements, such that \(R\mathbf{S}_{i}=\mathbf{S}_{\pi(i)}\) for all spin vectors \(\mathbf{S}_{i}\). From the remaining \(K\) spin vectors, all spin vectors can be generated by applying these symmetries. In total, this approach allows us to describe the ground state by at most \(2K-1\) parameters: maximally two parameters per spin, minus one parameter due to a global rotation around the symmetry axis, and fewer if some polar or azimuthal angles of the ground state assume fixed values. This compact representation is summarized in Table 1 for all six spin spiral phases of our phase diagram.
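The first, purely numerical step of this procedure, grouping the \(N\) sampled spin vectors into \(M\) unique directions, can be sketched in a few lines (our simplified greedy variant of the clustering described above):

```python
import numpy as np

def unique_directions(spins, gamma=0.995):
    """Greedily group N unit spins into M representative directions:
    S_i and S_j count as 'the same direction' if S_i . S_j >= gamma."""
    reps = []
    for s in spins:
        if not any(np.dot(s, r) >= gamma for r in reps):
            reps.append(s)
    return np.array(reps)

# toy usage: noisy copies of a few random directions collapse back onto them
rng = np.random.default_rng(1)
base = rng.normal(size=(5, 3))
base /= np.linalg.norm(base, axis=1, keepdims=True)
noisy = np.repeat(base, 50, axis=0) + 0.01 * rng.normal(size=(250, 3))
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(len(unique_directions(noisy)))  # should print 5, one per base direction
```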
Having such an analytical representation at hand, we can then explicitly calculate various observables such as the magnetization or ground-state energy for arbitrary couplings \(J_{+}\) and \(J_{\times}\) for all phases, which in turn allows us to determine the phase boundaries shown in the phase diagram of Fig. 5. Several cross-checks can be used to validate this approach, including a comparison of the analytical ground-state energy and the Monte Carlo result, as well as the determination of the phase boundaries, which we compare to scans of derivatives of the Monte Carlo energy, as shown in Appendix C. In general, we find excellent agreement.

\begin{table} \begin{tabular}{c|c|c c|c|c} spiral phase & symmetry & \(N\) & \(M\) & \(K\) & \# parameters \\ \hline \hline III & \(D_{12}^{s}\) & 864 & 72 & 4 & 3 \\ IV & \(D_{6}^{s,t}\) & 864 & 228 & 13 & 19 \\ V & \(\{\text{id},\text{s}\}\) & 864 & 48 & 24 & 47 \\ \hline VIII & \(D_{12}\) & 864 & 37 & 3 & 3 \\ IX & \(D_{3}^{s}\) & 864 & 108 & 18 & 26 \\ X & \(D_{6}^{s}\) & 864 & 32 & 4 & 3 \\ \end{tabular} \end{table} Table 1: **Semi-analytical approach.** Compression of the parametrization of the spin spirals via clustering and symmetrization (Fig. 12), starting with a common origin plot with \(N=864\) spins sampled at \(T=10^{-4}\) for a linear system size \(L=12\).

Figure 12: **Semi-analytical scheme for spin spiral phases.** The starting point is a common origin plot of the \(N\) spin vectors of a ground-state spin configuration sampled in Monte Carlo simulations at ultra-low temperatures \(T=10^{-4}\), as shown on the left. In a second step, we identify \(M<N\) unique spin directions by grouping spins that point approximately in the same direction (middle). From these unique spin vectors, we then identify symmetries that further reduce the number of unique spin vectors to \(K<M\) and allow us to describe the spiral phase analytically (right). The data shown is for spin spiral phase III of the lower right quadrant with couplings \(J_{+}=+1\) and \(J_{\times}=-0.2\). The initial common origin plot has \(N=864\) spins, corresponding to a system size of \(L=12\). The initial reduction leads to \(M=72\) points on five circles (as indicated in the middle panel). The symmetry group \(G\) of the example is generated by rotations of \(\pi/6\) about the vertical symmetry axis as well as by a reflection at the equatorial plane, i.e. \(G=D_{12}^{s}\), and therefore \(K=4\).

#### Example: Spiral phase III with \(D_{12}^{s}\) symmetry

Let us illustrate this semi-analytical approach and its validity for an explicit example, picking the spiral phase III in the lower right quadrant of the phase diagram of Fig. 5 for \(J_{+}=+1\) and \(J_{\times}=-0.2\). Our schematic illustration of the semi-analytical approach in Fig. 12 also uses this example. The common origin plot on the left consists of the \(N=864\) spin vectors of the numerical ground state (at \(T=10^{-4}\)) of a system of linear length \(L=12\). By grouping spins that point in the same direction, using the criterion \(\mathbf{S}_{i}\cdot\mathbf{S}_{j}\geq 0.999\), we find that there are only \(M=72\) unique spin directions, which are shown in the middle panel. Performing a symmetry analysis, one finds that the symmetry group \(G\) of this state is generated by rotations of \(\pi/6\) about the vertical symmetry axis as well as by a reflection at the equatorial plane, i.e. \(G=D_{12}^{s}\).
This leaves us with just \(K=4\) representative spins that describe the entire spin spiral configuration, a significant reduction compared to the \(N=864\) spins in the original real-space configuration. The \(K=4\) representative spins can be written as functions of three parameters \(\alpha,z_{1},z_{2}\) in the following way: \[\begin{split}\mathbf{S}_{1}&=\left(0,\sqrt{1-z_{1}^{2}},z_{1}\right),\\ \mathbf{S}_{2}&=\left(\frac{\sqrt{1-z_{2}^{2}}}{\sqrt{2}},\frac{\sqrt{1-z_{2}^{2}}}{\sqrt{2}},z_{2}\right),\\ \mathbf{S}_{3,4}&=\left(\cos\left(\frac{\pi}{4}\pm\alpha\right),\sin\left(\frac{\pi}{4}\pm\alpha\right),0\right).\end{split} \tag{11}\] With this compact representation at hand, the energy per site can now be explicitly calculated as \[\begin{split} E_{\text{III}}=\frac{1}{12}\bigg[&\,J_{\times}\cos 2\alpha+\sqrt{3}J_{\times}\sin 2\alpha\\ &-2\Big(\sqrt{3}J_{+}-(\sqrt{3}-2)J_{+}z_{1}^{2}+4z_{1}z_{2}\\ &\quad+(\sqrt{6}-\sqrt{2})\sqrt{(1-z_{1}^{2})(1-z_{2}^{2})}+J_{\times}(2z_{2}^{2}-1)\\ &\quad+2\big(\sqrt{3-3z_{2}^{2}}-\sqrt{2-2z_{1}^{2}}\big)\sin\alpha\\ &\quad+2\big(\sqrt{2-2z_{1}^{2}}+\sqrt{1-z_{2}^{2}}\big)\cos\alpha\Big)\bigg]\,.\end{split} \tag{12}\] Numerical minimization of this energy with \(J_{+}=+1\) and \(J_{\times}=-0.2\) then leads to \(E_{\text{III,semi-analytical}}=-1.34515\), which is in excellent agreement with, and (as expected) slightly below, the Monte Carlo result \(E_{\text{III,MC}}=-1.345\), which is shifted upwards by finite-temperature fluctuations commensurate with a temperature of \(T=10^{-4}\). We present explicit results of this semi-analytical approach for all other spin spiral phases in Appendix B.
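This minimization is straightforward to reproduce from Eq. (12) alone; a short scipy sketch (ours) is given below. The optimizer and starting points are arbitrary choices, so a few random restarts may be needed to land in the global minimum that reproduces the quoted value:

```python
import numpy as np
from scipy.optimize import minimize

def E_III(p, Jp=1.0, Jx=-0.2):
    """Energy per site of spiral phase III, Eq. (12); p = (alpha, z1, z2)."""
    a, z1, z2 = p
    s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
    return (Jx * np.cos(2 * a) + s3 * Jx * np.sin(2 * a)
            - 2 * (s3 * Jp - (s3 - 2) * Jp * z1**2 + 4 * z1 * z2
                   + (s6 - s2) * np.sqrt((1 - z1**2) * (1 - z2**2))
                   + Jx * (2 * z2**2 - 1)
                   + 2 * (np.sqrt(3 - 3 * z2**2) - np.sqrt(2 - 2 * z1**2)) * np.sin(a)
                   + 2 * (np.sqrt(2 - 2 * z1**2) + np.sqrt(1 - z2**2)) * np.cos(a))) / 12

# multi-start local minimization over alpha in [-pi, pi] and z1, z2 in [-1, 1]
best = min((minimize(E_III, x0=x0, bounds=[(-np.pi, np.pi), (-1, 1), (-1, 1)])
            for x0 in np.random.default_rng(0).uniform(-0.9, 0.9, size=(20, 3))),
           key=lambda r: r.fun)
print(best.fun)  # should come out close to the quoted -1.34515
```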
### Cuboc3 and pentagonal order for mixed interactions

Finally, we note that one could also consider variations of the model at hand where one changes the sign of the interactions in the original nearest-neighbor \((J_{1},J_{2},J_{3})\) Heisenberg model (see Fig. 1 for a reminder of the coupling geometries). Flipping the sign of the bow-tie interactions to ferromagnetic couplings, i.e. \(J_{2}=J_{3}=-1\), while keeping the square interactions antiferromagnetic, i.e. \(J_{1}=1\), yields exactly the same phase diagram as shown in Fig. 5, up to local spin transformations. Specifically, since the spins on the bow-ties are coupled via \(J_{2}\) and \(J_{3}\) to their nearest neighbors and via \(J_{+}\) to other bow-tie spins, the total energy remains unchanged if all bow-tie spins are inverted simultaneously while the sign of the triangular couplings is changed, \(J_{2}\rightarrow-J_{2}\) and \(J_{3}\rightarrow-J_{3}\). Performing these local spin transformations on the orders of our original phase diagram yields two qualitatively new types of order: the original cuboctahedral and \(120^{\circ}\) orders (Figs. 10 and 8) turn into a non-coplanar _cuboc3_ order (Fig. 13) and a coplanar _pentagonal_ order (Fig. 14), respectively. In the new cuboctahedral order _cuboc3_, each neighboring pair of spins forms an angle of \(2\pi/3\) on the squares and an angle of \(\pi/3\) on the triangles, which is different from _cuboc1_ order (where all neighboring pairs of spins form an angle of \(2\pi/3\)) and from _cuboc2_ order (where all neighboring pairs of spins form an angle of \(\pi/3\)) [1].

Figure 13: **Cuboctahedral order _cuboc3_.** This variant of a non-coplanar cuboctahedral order is found when flipping the bow-tie interactions \(J_{2}\) and \(J_{3}\) of the nearest-neighbor model to ferromagnetic (while keeping the square interactions \(J_{1}\) antiferromagnetic). It is obtained from the original cuboc order (Fig. 10) via a local spin transformation (see main text). (a) Real-space arrangement of spins in the cuboc3 ordered state. There are 12 sublattices (each one corresponding to one color) with a 24-site magnetic unit cell (large gray square), four times larger than the geometric unit cell (small square). (b) Each sublattice of spins points towards a different corner of a cuboctahedron such that each neighboring pair of spins forms an angle of \(2\pi/3\) on the squares and an angle of \(\pi/3\) on the triangles, making it different from the _cuboc1_ and _cuboc2_ orders discussed in the literature [1]. (c) First and extended Brillouin zones of the square-kagome lattice showing the positions of the corresponding Bragg peaks. The order breaks \(C_{4}\) symmetry.

Figure 14: **Pentagonal order.** Coplanar pentagonal order arises when flipping the bow-tie interactions \(J_{2}\) and \(J_{3}\) of the nearest-neighbor model to ferromagnetic (while keeping the square interactions \(J_{1}\) antiferromagnetic). It is obtained from the original \(120^{\circ}\) order (Fig. 8) via a local spin transformation (see main text). (a) Real-space arrangement of spins in the coplanar pentagonal state. There are five sublattices (each one corresponding to one color) with a 24-site magnetic unit cell (large gray square), four times larger than the geometric unit cell (small square). (b) The spins on the five sublattices point to five out of the six corners of a hexagon. (c) First and extended Brillouin zones of the square-kagome lattice showing the positions of the corresponding dominant and subdominant Bragg peaks. The ratio of the weight of the subdominant peaks \(\lambda\) to the weight of the dominant peaks \(\Lambda\) is \(\lambda/\Lambda\approx 85\%\).

### Octagonal and conical orders for FM interactions

Flipping the sign of all three couplings to ferromagnetic in the underlying nearest-neighbor Heisenberg model, i.e. \(J_{1}=J_{2}=J_{3}=-1\), the phase diagram for varying cross octagon-plaquette interactions changes its topology substantially. As depicted in Fig. 15, there are only three distinct phases. Trivially, there is a large ferromagnetic phase when \(J_{+},J_{\times}\leq 0\), which also extends to the other three quadrants in the phase diagram. Its ground-state energy per site is given by \[E_{\text{FM}}=-2+\frac{J_{\times}+J_{+}}{3}\,. \tag{13}\] It is bounded by the hyperbola defined by \((J_{\times}-1)(J_{+}-1)=\frac{1}{2}\). Bounded by a second hyperbola, given by \((J_{\times}+\frac{1}{\sqrt{2}})(J_{+}+\frac{1}{\sqrt{2}})=1\), there is a coplanar ordered phase with eight sublattices where the spins on each sublattice point to the corners of an octagon. The magnetic unit cell contains 24 sites and is four times larger than the geometric unit cell (cf. Fig. 16(d)-(f)). Its ground state has zero magnetization, \(m=0\), and its energy per site is given by \[E_{\text{octagonal}}=-\frac{1}{3}(2+2\sqrt{2}+J_{+}+J_{\times})\,. \tag{14}\] Between these two phases, which touch each other only at the point \(J_{+}=J_{\times}=1-\frac{1}{\sqrt{2}}\), there exist non-coplanar umbrella-like states that smoothly interpolate between octagonal and FM order. The magnetic unit cell of this phase coincides with that of the octagonal phase, but the spins are no longer coplanar; rather, the directions in which the spins on the sublattices point form two cones.
The first cone is formed by the four sublattices of spins located on the squares with mutual angles of \(\pi/2\), whereas the second cone is formed by the four sublattices of bow-tie spins, again with mutual angles of \(\pi/2\) but rotated by \(\pi/4\) with respect to the first cone (cf. Fig. 16 (e)). The two polar angles that describe these two cones depend on \(J_{+}\) and \(J_{\times}\); the ground state energy per site for those states can be calculated analytically and is given by
\[E_{\text{conical}}=-\frac{1}{6J_{+}J_{\times}}\bigg(J_{+}+J_{\times}+4J_{+}J_{\times}\pm(J_{+}-J_{\times})\sqrt{1-12J_{+}J_{\times}+4J_{+}^{2}J_{\times}^{2}}\bigg)\,,\tag{15}\]
where the plus sign applies to the left domain with \(J_{+}<1-\frac{1}{\sqrt{2}}\), and the minus sign to the right domain with \(J_{+}>1-\frac{1}{\sqrt{2}}\). While the ferromagnetic phase has uniform magnetization and the octagonal phase has zero magnetization, the magnetization in the conical phase depends on \(J_{+}\) and \(J_{\times}\). Analytically, one finds
\[m_{\text{conical}}=\left[(2(J_{\times}+4)J_{+}\pm w)+1\right]\left[24J_{+}|J_{\times}|\right]^{-1}\times\sqrt{\frac{(2J_{\times}^{2}+1)\,w\pm\left(-4J_{\times}^{3}J_{+}+6J_{\times}^{2}+6J_{\times}J_{+}-1\right)}{w}}\,.\tag{16}\]
Here, the plus sign applies to the right domain with \(J_{+}>1-\frac{1}{\sqrt{2}}\), the minus sign to the left domain with \(J_{+}<1-\frac{1}{\sqrt{2}}\), and \(w=\sqrt{4J_{\times}^{2}J_{+}^{2}-3J_{\times}J_{+}+1}\).

Figure 15: **FM phase diagram.** For fixed \(J_{1}=J_{2}=J_{3}=-1\), the model shows a large ferromagnetic phase (light gray), a coplanar phase with octagonal order (dark gray) and non-coplanar umbrella-like double cone states (turquoise) that are an interpolation between octagonal and ferromagnetic states. Ferromagnetic and coplanar phases touch at the single point \(J_{+}=J_{\times}=1-\frac{1}{\sqrt{2}}\). On the right hand side are corresponding common origin plots and structure factors from Monte Carlo simulations.

Figure 16: **Octagonal and double-conical orders.** (a) Real space arrangement of spins in the coplanar octagonal ordered state stabilized in the purely ferromagnetic nearest-neighbor model. There are eight sublattices with a 24-site magnetic unit cell (large gray square), four times larger than the geometric unit cell (small square). (b) Each sublattice of spins points towards a different corner of an octagon such that each pair of neighboring spins forms an angle of \(\pi/4\). (c) First and extended Brillouin zones showing the positions and the fractions of total spectral weight of the corresponding Bragg peaks. (d) Real space arrangement of spins in the non-coplanar double cone state. The eight sublattices coincide with those of the octagonal state shown in (b). (e) The directions in which the spins on the sublattices point form two cones. The first cone is formed by the four sublattices of spins located on the squares with mutual angles of \(\pi/2\), whereas the second cone is formed by the four sublattices of bow-tie spins, again with mutual angles of \(\pi/2\) but rotated \(\pi/4\) with respect to the first cone. (f) First and extended Brillouin zones showing the positions of the corresponding Bragg peaks. The ratio of the weight of the subdominant peaks \(\lambda\) to the weight of the dominant peaks \(\Lambda\) depends on the two polar angles of the cones and therefore on \(J_{+}\) and \(J_{\times}\).
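As a concrete illustration of the semi-analytical procedure described above, the following minimal Python sketch minimizes the three-parameter spin-spiral energy of Eq. (12) at \(J_{+}=+1\), \(J_{\times}=-0.2\). The energy expression is transcribed as printed; the choice of optimizer, bounds, and random restarts are our own illustrative assumptions rather than the setup actually used in this work.

```python
# Minimal sketch (our illustration): minimize the spin-spiral energy E_III of
# Eq. (12) over (alpha, z1, z2) at J_+ = +1, J_x = -0.2. The formula is
# transcribed as printed; optimizer settings and restarts are illustrative.
import numpy as np
from scipy.optimize import minimize

JP, JX = 1.0, -0.2  # J_+ and J_x

def e_iii(p):
    a, z1, z2 = p
    s1 = np.sqrt(max(1.0 - z1**2, 0.0))  # sqrt(1 - z1^2)
    s2 = np.sqrt(max(1.0 - z2**2, 0.0))  # sqrt(1 - z2^2)
    return (JX*np.cos(2*a) + np.sqrt(3)*JX*np.sin(2*a)
            - 2*(np.sqrt(3)*JP - (np.sqrt(3) - 2)*JP*z1**2 + 4*z1*z2
                 + (np.sqrt(6) - np.sqrt(2))*s1*s2 + JX*(2*z2**2 - 1)
                 + 2*(np.sqrt(3)*s2 - np.sqrt(2)*s1)*np.sin(a)
                 + 2*(np.sqrt(2)*s1 + s2)*np.cos(a))) / 12.0

# random restarts guard against local minima; bounds keep z1, z2 in [-1, 1]
rng = np.random.default_rng(1)
results = [minimize(e_iii, rng.uniform([-np.pi, -1, -1], [np.pi, 1, 1]),
                    bounds=[(-np.pi, np.pi), (-1, 1), (-1, 1)])
           for _ in range(50)]
print(min(r.fun for r in results))  # expected close to E_III ~ -1.345
```

If Eq. (12) is transcribed faithfully, the best value found should reproduce the quoted \(E_{\text{III,semi-analytical}}\approx-1.345\).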
## IV Discussion and outlook

We have unveiled a rich variety of magnetic textures that span the phase diagram of an extended classical Heisenberg model on the square-kagome lattice. Motivated by the possibility of stabilizing non-coplanar magnetic orders, we show that a minimal set of interactions needed to realize these involves introducing cross-plaquette interactions on top of the nearest-neighbor Heisenberg model, with either antiferromagnetic or ferromagnetic nearest-neighbor interactions, or a combination of both (ferromagnetic on bow-tie bonds and antiferromagnetic on square bonds). A thorough classical Monte Carlo analysis reveals a plethora of non-coplanar states including a new type of cuboc order (dubbed cuboc3, Fig. 13), as well as highly intricate incommensurate non-coplanar spirals. The underlying magnetic unit cells of these spin spiral states feature a highly complex structure and large sizes but, remarkably, we are able to provide a semi-analytical construction of these phases based on a symmetry-optimized parameterization. This makes it possible to obtain explicit expressions (depending only on a small number of parameters) for their ground-state energy as a function of the coupling strengths, which in turn enables us to establish, with high precision, the phase boundaries between these complex spiral phases. Besides the ground state, we also study the thermodynamics of non-coplanar states employing classical Monte Carlo simulations. By virtue of being chiral, non-coplanar states are expected to feature a symmetry breaking phase transition at \(T\neq 0\). In particular, for the cuboc order we present the temperature evolution of the specific heat, the chiral order parameter, and its susceptibility for different system sizes, which manifestly exhibits signatures of a chiral phase transition. For the elementary model with only three symmetry inequivalent nearest-neighbor couplings, we show that, besides the isotropic point, within the entire region occupied by an extensively degenerate manifold of ground states one can always identify three distinct temperature regimes, namely, a high-temperature thermal paramagnet, an intermediate-temperature cooperative paramagnet, and a low-temperature coplanar regime selected via an order-by-disorder mechanism. We show that upon introduction of ferromagnetic cross-plaquette interactions of just one type, i.e., either \(J_{+}\) or \(J_{\times}\), the extensive degeneracy is reduced, but there still persists one zero mode per triangle or per unit cell, respectively. This results in a \(T\to 0\) limiting value of the specific heat which is less than one; however, the intermediate-temperature cooperative paramagnetic regime disappears. Since classical non-coplanar magnetic orders are characterized by a finite scalar spin chirality, it is plausible that, if quantum fluctuations are turned on and are successful in restoring spin rotational symmetry, e.g., in the extreme quantum limit of small spin \(S=1/2\), the chiral symmetry breaking still persists and carries over into the resulting non-magnetic ground state [9; 10; 11]. Such quantum melting would give rise to novel chiral paramagnetic phases, characterized by a spontaneous breaking of time reversal and lattice symmetries, while preserving their product. One such example is the chiral quantum spin liquid, first elucidated by Kalmeyer and Laughlin [42], which hosts bulk semion excitations and a chiral gapless edge mode [43].
Furthermore, in our phase diagrams, the non-coplanar orders break lattice symmetries in such a way that simply restoring spin rotational symmetry would not fully restore all lattice symmetries [44], potentially leading to descendant nematic chiral liquids for small values of spin [45]. We thus provide a detailed symmetry analysis for the myriad of non-coplanar orders, paving the way for the systematic classification of descendant chiral spin liquid states. It should be noted here that, unlike the kagome lattice, the square-kagome lattice has an _even_ number of sites per unit cell. This leaves open the possibility of fully trivial paramagnetic phases appearing, as the Lieb-Schultz-Mattis-Hastings-Oshikawa theorem does not preclude a fully symmetric, yet topologically trivial, gapped phase [46; 47; 48]. For the non-coplanar umbrella-like double cone states with \(m\neq 0\), interpolating between the ferromagnetic and octagonal orders, the proximity to ferromagnetism opens the exciting possibility of realizing the elusive spin nematic orders in the corresponding quantum model [49; 50; 51]. The spin rotational symmetry breaking in these quadrupolar ordered states is described by a time-reversal invariant order parameter, given by a symmetric traceless rank-2 tensor, in contrast to a conventional dipolar order parameter which breaks time-reversal symmetry. The exploration of the aforementioned exotic phases in the corresponding quantum models employing state-of-the-art numerical quantum many-body approaches constitutes an important direction of future research. In the context of material realizations, it is worth noting that in Na\({}_{6}\)Cu\({}_{7}\)BiO\({}_{4}\)(PO\({}_{4}\))\({}_{4}\)[Cl,(OH)]\({}_{3}\) [20], the \(J_{\times}\) interactions are mediated by Cu-O-Na-O-Cu superexchange pathways, while the \(J_{+}\) bonds pass through a chloride group at distances that prevent any Cu-Cl hybridization, thus not triggering a superexchange. The presence of the nonmagnetic Na ions in the center of the octagons is likely to trigger a finite \(J_{\times}\) interaction akin to the scenario realized in the kagome based materials kapellasite and haydeeite [52]. One could think of possible chemical substitutions which are likely to enhance these interactions, e.g., by replacing Na with Cs, which has a larger ionic radius. Another route towards strengthening the cross-plaquette couplings would involve preparing the corresponding sulfide version instead of oxides, leading to Cu-S-Na-S-Cu type superexchange pathways, similar to the breathing chromium spinels [53]. In KCu\({}_{6}\)AlBiO\({}_{4}\)(SO\({}_{4}\))\({}_{5}\)Cl [19], it is the sulfate SO\({}_{4}^{2-}\) that occupies the octagon centers and one may consider the possibility of substituting it with a selenate group SeO\({}_{4}^{2-}\) to enhance both cross-plaquette couplings. These would constitute interesting future explorations on the material synthesis front. ###### Acknowledgements. We thank Harald O. Jeschke for helpful discussions. The Cologne group acknowledges partial funding from the DFG within Project-ID 277146847, SFB 1238 (projects C02, C03). S.T. thanks the Center for Computational Quantum Physics at the Flatiron Institute, New York, for hospitality during the initial stages of this project. M.G. thanks the Bonn-Cologne Graduate School of Physics and Astronomy (BCGS) for support. The numerical simulations were performed on the JUWELS cluster at the Forschungszentrum Juelich and the CHEOPS cluster at RRZK Cologne. Y. I.
acknowledges support from the Department of Science and Technology (DST), India through the MATRICS Grant No. MTR/2019/001042, CEFIPRA Project No. 64T3-1, the ICTP through the Associates Programme and from the Simons Foundation through grant number 284558FY19. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958, IIT Madras through the Institute of Eminence (IoE) program for establishing the QuCenDiEM group (Project No. SB20210813PHMHRD002720), the International Centre for Theoretical Sciences (ICTS), Bengaluru, India during a visit for participating in the program "Frustrated Metals and Insulators" (Code: ICTS/frumi2022/9). Y. I. acknowledges the use of the computing resources at HPCE, IIT Madras.
2307.10902
Strong Invariants Are Hard: On the Hardness of Strongest Polynomial Invariants for (Probabilistic) Programs
We show that computing the strongest polynomial invariant for single-path loops with polynomial assignments is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. While the strongest polynomial invariants are computable for affine loops, for polynomial loops the problem remained wide open. As an intermediate result of independent interest, we prove that reachability for discrete polynomial dynamical systems is Skolem-hard as well. Furthermore, we generalize the notion of invariant ideals and introduce moment invariant ideals for probabilistic programs. With this tool, we further show that the strongest polynomial moment invariant is (i) uncomputable, for probabilistic loops with branching statements, and (ii) Skolem-hard to compute for polynomial probabilistic loops without branching statements. Finally, we identify a class of probabilistic loops for which the strongest polynomial moment invariant is computable and provide an algorithm for it.
Julian Müllner, Marcel Moosbrugger, Laura Kovács
2023-07-20T14:24:15Z
http://arxiv.org/abs/2307.10902v2
# Strong Invariants Are Hard ###### Abstract. We show that computing the strongest polynomial invariant for single-path loops with polynomial assignments is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. While the strongest polynomial invariants are computable for _affine loops_, for polynomial loops the problem remained wide open. As an intermediate result of independent interest, we prove that reachability for discrete polynomial dynamical systems is Skolem-hard as well. Furthermore, we generalize the notion of invariant ideals and introduce _moment invariant ideals_ for probabilistic programs. With this tool, we further show that the strongest polynomial moment invariant is (i) uncomputable, for probabilistic loops with branching statements, and (ii) Skolem-hard to compute for polynomial probabilistic loops without branching statements. Finally, we identify a class of probabilistic loops for which the strongest polynomial moment invariant is computable and provide an algorithm for it. Strongest algebraic invariant, Point-To-Point reachability, Skolem problem, Probabilistic programs ## 1. Introduction Loop invariants describe valid program properties that hold before and after every loop iteration. Intuitively, invariants provide correctness information that may prevent programmers from introducing errors while making changes to the loop. As such, invariants are fundamental to formalizing program semantics as well as to automating the formal analysis and verification of programs. While automatically synthesizing loop invariants is, in general, an uncomputable problem, when considering only single-path loops with linear updates (linear loops), the strongest polynomial invariant is in fact computable (Hrushovski et al., 2018; Karr, 1976; Kovacs, 2008; Muller-Olm and Seidl, 2004). Yet, already for loops with "only" polynomial updates, computing the strongest invariant has been an open challenge since 2004 (Muller-Olm and Seidl, 2004). In this paper, we bridge the gap between the computability result for linear loops and the uncomputability result for general loops by providing, to the best of our knowledge, the _first hardness result for computing the strongest polynomial invariant of polynomial loops_. **Problem setting.** Let us motivate our hardness results using the two loops in Figure 1, showcasing that very small changes in loop arithmetic may significantly increase the difficulty of computing the strongest invariants. Figure 1(a) depicts an affine loop, that is, a loop where all updates are affine combinations of program variables. On the other hand, Figure 1(b) shows a polynomial loop whose updates are polynomials in program variables. An affine (polynomial) invariant is a conjunction of affine (polynomial) equalities holding before and after every loop iteration. The computability of both the strongest affine and polynomial invariant has been studied extensively. For single-path affine loops, the seminal paper [10] shows that the strongest affine invariant is computable, whereas [11] proves computability of the strongest polynomial invariant. Regarding single-path polynomial programs, for example the one in Figure 1(b), [10] gives an algorithm to compute all polynomial invariants of _bounded degree_. Based on these results, the strongest polynomial invariant of Figure 1(a) is thus computable.
Yet, the more general problem of computing the strongest polynomial invariant for _polynomial loops_ without any restriction on the degree has remained an open challenge since 2004 [10]. In this paper, we address this challenge, which we coin as the SPInv problem and define below. The SPInv Problem: Given a single-path loop with polynomial updates, compute the strongest polynomial invariant. In Section 4, we prove that SPInv is _very hard_, essentially "defending" the state-of-the-art that so far failed to derive computational bounds on computing the strongest polynomial invariants of polynomial loops. The crux of our work is based on the Skolem problem, a prominent algebraic problem in the theory of linear recurrences [11, 12], which we briefly recall below and refer to Section 2.3 for details. The Skolem Problem [11, 12]: Does a given linear recurrence sequence with constant coefficients have a zero? The decidability of the Skolem problem has been open for almost a century, and its resolution would yield far-reaching consequences in number theory [13, 14]. In Section 4, we show that SPInv is at least as hard as the Skolem problem, providing thus a computational lower bound showcasing the hardness of SPInv. To the best of our knowledge, our results from Section 4 are the first lower bounds for SPInv and provide an answer to the open challenge posed by [10]. While [15] proved that the strongest polynomial invariant is uncomputable for multi-path polynomial programs, the computability of SPInv has been left open for future work. With our results proving that SPInv is Skolem-hard (Theorem 4.2), we show that the missing computability proof of SPInv is not surprising: solving SPInv is really hard.

Figure 1. Two examples of deterministic programs.

**Connecting invariant synthesis and reachability.** A computational gap also exists in the realm of model-checking between affine and polynomial programs, similar to the computability of SPInv. Point-to-point reachability is arguably the simplest model-checking property; it asks whether a program can reach a given target state from a given initial state. For example, one may start the Van der Pol oscillator from Figure 1(b) in some initial configuration \((x_{0},y_{0})\) and certify that it will eventually reach a certain target configuration \((x_{t},y_{t})\). Reachability, and even more involved model-checking properties, are known to be decidable for affine loops (Karimov et al., 2022). However, the decidability of even mere reachability for _polynomial loops_ remains unknown, without any existing non-trivial lower bounds. We refer to this reachability quest via the P2P problem. The Point-To-Point Reachability Problem (P2P): Given a single-path loop with polynomial updates, is a given target state reachable starting from a given initial state? In Section 3, we resolve the lack of computational results on reachability in polynomial loops. In particular, we show that P2P is Skolem-hard (Theorem 3.3) as well. To the best of our knowledge, this yields the first non-trivial hardness result for P2P. In Section 4, we further show that P2P and SPInv are connected in the sense that P2P reduces to SPInv. That is, SPInv is at least as hard as P2P. Therefore, our reduction chain Skolem\(\leq\) P2P \(\leq\) SPInv implies that the decidability of P2P and/or SPInv would immediately solve the Skolem problem and longstanding conjectures in number theory.
**Beyond (non)deterministic loops and invariants.** In addition to computational limits within standard, (non)deterministic programs, we further establish computational (hardness) bounds in probabilistic loops. Probabilistic programs model stochastic processes and encode uncertainty information in standard control flow, used for example in cryptography (Barthe et al., 2012), privacy (Barthe et al., 2012), cyber-physical systems (Kofnov et al., 2022), and machine learning (Ghahramani, 2015). Because classical invariants, as in SPInv, do not account for probabilistic information, we provide a proper generalization of the strongest polynomial invariant for probabilistic loops in Section 5 (Lemma 5.5). With this generalization, we transfer the SPInv problem to the probabilistic setting. We hence refer to the probabilistic version of SPInv as the Prob-SPInv problem. The Prob-SPInv Problem: Given a probabilistic loop with polynomial updates, compute the "probabilistic analog" of the strongest polynomial invariant. In Section 5 we prove that Prob-SPInv inherits Skolem-hardness from its classical SPInv analog (Theorem 5.10). We also show that enriching the probabilistic program model with guards or branching statements renders the strongest polynomial (probabilistic) invariant uncomputable, even in the affine case (Theorem 5.8). We nevertheless provide a decision procedure when considering Prob-SPInv for a restricted class of polynomial loops: we define the class of _moment-computable_ (polynomial) loops and show that Prob-SPInv is computable for such loops (Algorithm 1). Despite being restrictive, our moment-computable loops subsume affine loops with constant probabilistic choice. As such, Section 5 shows the limits of computability in deriving the strongest polynomial (probabilistic) invariants for probabilistic polynomial loops. **Our contributions.** In conclusion, the main contributions of our work are as follows: * In Section 3, we provide a reduction from Skolem to point-to-point reachability for polynomial loops, proving that P2P is Skolem-hard (Theorem 3.3). * Section 4 gives a reduction from P2P to the problem of computing the strongest polynomial invariant of polynomial loops, establishing the connection between P2P and SPInv. As such, we prove that SPInv is Skolem-hard (Theorem 4.2). * In Section 5, we generalize the concept of strongest polynomial invariants to the probabilistic setting (Lemma 5.5). We show that Prob-SPInv is Skolem-hard (Theorem 5.10) and uncomputable for general polynomial probabilistic programs (Theorem 5.8), but it becomes computable for moment-computable polynomial probabilistic programs (Algorithm 1). ## 2. Preliminaries Throughout the paper, we write \(\mathbb{N}\) for the natural numbers, \(\mathbb{Q}\) for the rationals, \(\mathbb{R}\) for the reals, and \(\overline{\mathbb{Q}}\) for the algebraic numbers. We denote by \(\mathbb{K}[x_{1},\ldots,x_{k}]\) the polynomial ring over \(k\) variables with coefficients in some field \(\mathbb{K}\). Further, we use the symbol \(\mathbb{P}\) for probability measures and \(\mathbb{E}\) for the expected value operator. ### Program Models In accordance with (Hrushovski et al., 2023; Kovacs and Varonka, 2023), we consider _polynomial programs_\(\mathcal{P}=(Q,E,q_{0})\) over \(k\) variables, where \(Q\) is a set of locations, \(q_{0}\in Q\) is an initial location, and \(E\subseteq Q\times\mathbb{Q}[x_{1},\ldots,x_{k}]\times Q\) is a set of transitions.
The vector of _variable valuations_ is denoted as \(\vec{x}=(x_{1},\ldots,x_{k})\), where each transition \((q,f,q^{\prime})\in E\) maps a (program) configuration \((q,\vec{x})\) to some configuration \((q^{\prime},f(\vec{x}))\). A transition \((q,f,q^{\prime})\in E\) is _affine_ if the function \(f\) is affine. In case all program transitions \((q,f,q^{\prime})\in E\) are affine, we say that the polynomial program \(\mathcal{P}\) is an _affine program_. A _loop_ is a program \(\mathcal{L}=(Q,E,q_{0})\) with exactly two locations \(Q=\{q_{0},q_{1}\}\), such that the initial state \(q_{0}\) has exactly one outgoing transition to \(q_{1}\) and all outgoing transitions of \(q_{1}\) are self-loops, that is, \(E=\{(q_{0},f_{1},q_{1}),(q_{1},f_{2},q_{1}),\ldots,(q_{1},f_{n},q_{1})\}\). In a _guarded program_, each transition is additionally guarded by an equality/inequality predicate among variables of the state vector \(\vec{x}\). If in some configuration the guard of an outgoing transition holds, we say that the transition is _enabled_, otherwise the transition is _disabled_. **(Non)Deterministic programs.** If for any location \(q\in Q\) in a program \(\mathcal{P}\) there is exactly one outgoing transition \((q,f,q^{\prime})\), then \(\mathcal{P}\) is _deterministic_; otherwise \(\mathcal{P}\) is _nondeterministic_. A deterministic guarded program may have multiple outgoing transitions from each location, but for any configuration, exactly one outgoing transition must be enabled. For a guarded nondeterministic program, we require that each configuration has at least one enabled outgoing transition. Deterministic, unguarded programs are called _single-path_ programs. To capture the concept of a loop invariant, we consider the collecting semantics of \(\mathcal{P}\), associating each location \(q\in Q\) with a set of vectors \(\mathcal{S}_{q}\) that are reachable from the initial state \((q_{0},\vec{0})\). More formally, the sets \(\{\mathcal{S}_{q}\mid q\in Q\}\) are the least solution of the inclusion system \[\mathcal{S}_{q_{0}}\supseteq\{\vec{0}\}\qquad\text{and}\qquad\mathcal{S}_{q^{ \prime}}\supseteq f(\mathcal{S}_{q})\quad\text{for all }(q,f,q^{\prime})\in E.\] Definition 2.1 (Invariant).: A polynomial \(p\in\overline{\mathbb{Q}}[x_{1},\ldots,x_{k}]\) is an _invariant_ with respect to program location \(q\in Q\), if for all reachable configurations \(\vec{x}\in\mathcal{S}_{q}\) the polynomial vanishes, that is \(p(\vec{x})=0\). Moreover, for a loop \(\mathcal{L}\), the polynomial \(p\) is an _invariant of \(\mathcal{L}\)_, if \(p\) is an invariant with respect to the looping state \(q_{1}\). **Probabilistic programs.** In probabilistic programs, a probability \(pr\) is added to each program transition. That is, \(E\subseteq Q\times\mathbb{Q}[x_{1},\ldots,x_{k}]\times(0,1]\times Q\), where we require that each location has countably many outgoing transitions and that their probabilities \(pr\) sum up to \(1\). Under the intended semantics, a transition \((q,f,pr,q^{\prime})\) then maps a configuration \((q,\vec{x})\) to configuration \((q^{\prime},f(\vec{x}))\) with probability \(pr\). Again, for guarded probabilistic programs, we require that each configuration has at least one enabled outgoing transition and that the probabilities of the enabled transition sum up to \(1\). For probabilistic programs \(\mathcal{P}\), we consider moment invariants over higher-order statistical moments of the probability distributions induced by \(\mathcal{P}\) (see Section 5). 
In this respect, it is necessary to count the number of executed transitions in the semantics of \(\mathcal{P}\). Formally, the sets \(\{\mathcal{S}_{q}^{n}\mid q\in Q,n\in\mathbb{N}_{0}\}\) are defined as \[\mathcal{S}_{q_{0}}^{0}\coloneqq\{\vec{0}\}\qquad\text{and}\qquad\mathcal{S}_{q^{\prime}}^{n+1}\coloneqq f\Big(\mathcal{S}_{q}^{n}\Big)\quad\text{for all }(q,f,pr,q^{\prime})\in E\text{ and }n\in\mathbb{N}_{0}.\] In addition, the probability of a configuration \(\vec{x}\) in location \(q\) after \(n\) iterations, in symbols \(\mathbb{P}(\vec{x}\mid\mathcal{S}_{q}^{n})\), can be defined inductively: (i) in the initial state, the configuration \(\vec{0}\) after \(0\) executed transitions has probability \(1\); (ii) for any other state, the probability of reaching a specific configuration is defined by summing up the probabilities of all incoming paths. More formally, the probability \(\mathbb{P}(\vec{x}\mid\mathcal{S}_{q}^{n})\) is \[\mathbb{P}\Big(\vec{x}\mid\mathcal{S}_{q}^{0}\Big)\coloneqq\begin{cases}1&q=q_{0}\wedge\vec{x}=\vec{0}\\ 0&\text{otherwise}\end{cases}\qquad\text{and}\qquad\mathbb{P}\Big(\vec{x}\mid\mathcal{S}_{q^{\prime}}^{n+1}\Big)\coloneqq\sum_{(q,f,pr,q^{\prime})\in E}\sum_{\vec{y}\in f^{-1}(\vec{x})}pr\cdot\mathbb{P}(\vec{y}\mid\mathcal{S}_{q}^{n}).\] We then define the \(n\)th higher-order statistical moment of a monomial \(M\) in program variables as the expected value of \(M\) after \(n\) loop iterations. Namely, \[\mathbb{E}[M_{n}]\coloneqq\sum_{q\in Q}\sum_{\vec{x}\in\mathcal{S}_{q}^{n}}M(\vec{x})\cdot\mathbb{P}(\vec{x}\mid\mathcal{S}_{q}^{n}), \tag{1}\] where \(M(\vec{x})\) evaluates the monomial \(M\) in a specific configuration \(\vec{x}\). **Universality of loops.** In this paper, we focus on polynomial loops. This is justified by the universality of loops (Hrushovski et al. 2023, Section 4), as every polynomial program can be transformed into a polynomial loop that preserves the collecting semantics. Intuitively, this is done by merging all program states into the looping state and by introducing additional variables that keep track of which state is actually active while invalidating infeasible traces. It is then possible to recover the sets \(\mathcal{S}_{q}^{n}\) of the original program from the corresponding sets of the loop. ### Computational Algebraic Geometry & Strongest Invariants We study polynomial invariants \(p(\vec{x})\) of polynomial programs; here, \(p(\vec{x})\) are multivariate polynomials in program variables \(\vec{x}\). We therefore recap necessary terminology from algebraic geometry (Cox et al. 1997), to support us in reasoning whether \(p(\vec{x})=0\) is a loop invariant. In the following \(\mathbb{K}\) denotes a field, such as \(\mathbb{R}\), \(\mathbb{Q}\) or \(\overline{\mathbb{Q}}\). Definition 2.2 (Ideal).: A subset of polynomials \(I\subseteq\mathbb{K}[x_{1},\ldots,x_{k}]\) is an _ideal_ if (i) \(0\in I\); (ii) for all \(x,y\in I\): \(x+y\in I\); and (iii) for all \(x\in I\) and \(y\in\mathbb{K}[x_{1},\ldots,x_{k}]\): \(xy\in I\). For polynomials \(p_{1},\ldots,p_{l}\in\mathbb{K}[x_{1},\ldots,x_{k}]\) we denote by \(\langle p_{1},\ldots,p_{l}\rangle\) the set generated by these polynomials, that is \[\langle p_{1},\ldots,p_{l}\rangle\coloneqq\left\{\sum_{i=1}^{l}q_{i}p_{i}\;\middle|\;q_{1},\ldots,q_{l}\in\mathbb{K}[x_{1},\ldots,x_{k}]\right\}\] The set \(I=\langle p_{1},\ldots,p_{l}\rangle\) is an ideal, with the polynomials \(p_{1},\ldots,p_{l}\) being a _basis_ of \(I\).
Of particular importance to our work is the set of all polynomial invariants of a program location. It is easy to check that this set forms an ideal. Definition 2.3 (Invariant Ideal).: Let \(\mathcal{P}\) be a program with location \(q\). The set \(\mathcal{I}\) of all invariants with respect to the location \(q\) is called the _invariant ideal_ of \(q\). If \(\mathcal{P}\) is a loop and \(\mathcal{I}\) is the invariant ideal with respect to the looping state \(q_{1}\), we call \(\mathcal{I}\) the invariant ideal of the loop \(\mathcal{P}\). Footnote 1: Computing bases for invariant ideals is equivalent to computing the _Zariski closure_ of the loop: the Zariski closure is the smallest algebraic set containing the set of reachable states [20]. As the invariant ideal \(\mathcal{I}\) of a loop \(\mathcal{L}\) contains _all_ polynomial invariants, a basis for \(\mathcal{I}\) is the strongest polynomial invariant of \(\mathcal{L}\). This is further justified by the following key result, establishing that every ideal has a basis. Theorem 2.4 (Hilbert's Basis Theorem).: _Every ideal \(I\subseteq\mathbb{K}[x_{1},\ldots,x_{k}]\) has a basis. That is, \(I=\langle p_{1},\ldots,p_{l}\rangle\) for some \(p_{1},\ldots,p_{l}\in I\)._ While an ideal \(I\) may have infinitely many bases, the work of [1] proved that every ideal \(I\) has a unique (reduced) _Grobner basis_, where uniqueness is guaranteed modulo some _monomial order_. A monomial order \(<\) is a total order on all monomials such that for all monomials \(m_{1},m_{2},m_{3}\), if \(m_{1}<m_{2}\) then \(m_{1}m_{3}<m_{2}m_{3}\). For instance, assume our polynomial ring is \(\mathbb{K}[x,y,z]\), that is, over three variables \(x\), \(y\), and \(z\). A total order \(z<y<x\) over variables can be extended to a lexicographic ordering on monomials, denoted also by \(<\) for simplicity. In this case, for example, \(xyz^{3}<xy^{2}\) and \(y^{2}z<x\). For a given monomial order, one can consider the leading term of a polynomial \(p\) which we denote by \(LT(p)\). For a set of polynomials \(S\) we write \(LT(S)\) for the set of all leading terms of all polynomials. Definition 2.5 (Grobner Basis).: Let \(I\subseteq\mathbb{K}[x_{1},\ldots,x_{k}]\) be an ideal and fix a monomial order. A basis \(G=\{g_{1},\ldots,g_{l}\}\) of \(I\) is a _Grobner basis_, if \(\langle LT(g_{1}),\ldots,LT(g_{l})\rangle=\langle LT(I)\rangle\). Further, \(G\) is a _reduced Grobner basis_ if every \(g_{i}\) has leading coefficient \(1\) and for all \(g,h\in G\) with \(g\neq h\), no monomial in \(g\) is a multiple of \(LT(h)\). Grobner bases provide the workhorses to compute and implement algebraic operations over (infinite) ideals, including ideal intersections/unions, variable eliminations, and polynomial memberships. Given _any_ basis for an ideal \(I\), a unique reduced Grobner basis with respect to any monomial ordering \(<\) is computable using Buchberger's algorithm [1]. A central property of Grobner basis computation is that repeated division of a polynomial \(p\) by elements of a Grobner basis results in a unique remainder, regardless of the order in which the divisions are performed. Hence, to decide if a polynomial \(p\) is an element of an ideal \(I\), that is deciding polynomial membership, it suffices to divide \(p\) by a Grobner basis of \(I\) and check if the remainder is \(0\). Moreover, eliminating a variable \(y\) from an ideal \(I\subseteq\mathbb{K}[x,y]\) is performed by computing the Grobner basis of the elimination ideal \(I\cap\mathbb{K}[x]\) only over \(x\).
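To make this machinery concrete, the following small Python sketch (our illustration, not part of the paper) uses sympy's `groebner` routine to decide polynomial membership and to read off an elimination ideal; the toy ideal and the variable order are assumptions chosen purely for the example.

```python
# Illustrative sketch (not from the paper): deciding ideal membership with a
# Groebner basis in sympy. We use the lexicographic order extending z < y < x,
# encoded in sympy by listing the generators from highest to lowest: x, y, z.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
I_gens = [x**2 - y, x*y - z]      # generators of the toy ideal I
G = groebner(I_gens, x, y, z, order='lex')

p = x**3 - z                      # p = x*(x**2 - y) + (x*y - z), so p is in I
q = x + y + 1
print(G.contains(p))              # True: the remainder of p modulo G is 0
print(G.contains(q))              # False
# Elimination of x: the basis elements of G that are free of x generate
# the elimination ideal I ∩ K[y, z] (here: y**3 - z**2)
print([g for g in G.polys if x not in g.free_symbols])
```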
### Recurrence Equations Recurrence equations relate elements of a sequence to previous elements. There is a strong connection between recurrence equations and program loops: assignments in program loops relate values of program variables in the current iteration to the values in the next iteration. It is therefore handy to interpret a (polynomial) program loop as a recurrence. We briefly introduce linear and polynomial recurrence systems and refer to (Kauers and Paule, 2011) for details. We say that a sequence \(u(n):\mathbb{N}_{0}\rightarrow\mathbb{Q}\) is a _linear recurrence sequence_ (LRS) of order \(k\), if there are coefficients \(a_{0},\ldots,a_{k-1}\in\mathbb{Q}\), where \(a_{0}\neq 0\) and for all \(n\in\mathbb{N}_{0}\) we have \[u(n+k)=a_{k-1}u(n+k-1)+\ldots+a_{1}u(n+1)+a_{0}u(n) \tag{2}\] The recurrence equation (2) is called a _linear recurrence equation_, with the coefficients \(a_{0},\ldots,a_{k-1}\) and the initial values \(u(0),\ldots,u(k-1)\) uniquely specifying the sequence \(u(n)\). Any LRS \(u(n)\) of order \(k\) as defined via (2) can be specified by a system of \(k\) linear recurrence sequences \(u_{1}(n),\ldots,u_{k}(n)\), such that each \(u_{i}(n)\) is of order \(1\) and, for all \(n\in\mathbb{N}_{0}\), we have \(u(n)=u_{1}(n)\) and \[\begin{split}u_{1}(n+1)&=\sum_{i=1}^{k}a_{i}^{(1)}u_{i}(n)=a_{1}^{(1)}u_{1}(n)+\ldots+a_{k}^{(1)}u_{k}(n)\\ &\vdots\\ u_{k}(n+1)&=\sum_{i=1}^{k}a_{i}^{(k)}u_{i}(n)=a_{1}^{(k)}u_{1}(n)+\ldots+a_{k}^{(k)}u_{k}(n)\end{split} \tag{3}\] Again, the LRS \(u(n)\) is uniquely defined by the coefficients \(a_{i}^{(j)}\) and the initial values \(u_{1}(0),\ldots,u_{k}(0)\). _Polynomial recursive sequences_ are natural generalizations of linear recurrence sequences and allow not only linear combinations of sequence elements but also polynomial combinations (Cadilhac et al., 2020). More formally, a sequence \(u(n)\) is _polynomial recursive_, if there exist \(k\in\mathbb{N}\) sequences \(u_{1}(n),\ldots,u_{k}(n):\mathbb{N}_{0}\rightarrow\mathbb{Q}\) such that \(u(n)=u_{1}(n)\) and there are polynomials \(p_{1},\ldots,p_{k}\in\mathbb{Q}[u_{1},\ldots,u_{k}]\) such that, for all \(n\in\mathbb{N}_{0}\), we have \[\begin{split}u_{1}(n+1)&=p_{1}(u_{1}(n),\ldots,u_{k}(n))\\ &\vdots\\ u_{k}(n+1)&=p_{k}(u_{1}(n),\ldots,u_{k}(n))\end{split} \tag{4}\] The sequence \(u(n)\) from (4) is uniquely defined by the polynomials \(p_{1},\ldots,p_{k}\) and the initial values \(u_{1}(0),\ldots,u_{k}(0)\). In contrast to linear recurrence sequences (2), polynomial recursive sequences (4) _cannot_ in general be modeled using a single polynomial recurrence (Cadilhac et al., 2020). Systems of recurrences are widely used to model the evolution of dynamical systems in discrete time. We conclude this section by recalling the Skolem problem (Bilu et al., 2022; Lipton et al., 2022) related to linear recurrence sequences, whose decidability has been an open question since the 1930s. We formally revise the definition from Section 1 as: The Skolem Problem (Everest et al., 2003; Tao, 2008): Given an LRS \(u(n),n\in\mathbb{N}_{0}\), does there exist some \(m\in\mathbb{N}_{0}\) such that \(u(m)=0\)? In the upcoming sections, we show that the Skolem problem is reducible to the decidability of three fundamental problems in programming languages, namely P2P, SPInv and Prob-SPInv from Section 1. As such, we prove that the Skolem problem gives us intrinsically hard computational lower bounds for P2P, SPInv, and Prob-SPInv.
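As a concrete illustration of these definitions, the following short Python sketch (our own, with illustrative coefficients and initial values) unrolls an LRS given by (2) and searches for a zero up to a finite bound. Such a search only yields a semi-decision procedure: a zero that is found certifies a positive instance, while exhausting the bound proves nothing; this asymmetry is precisely what makes the Skolem problem hard.

```python
# Illustrative sketch (not from the paper): unrolling a linear recurrence
# sequence u(n+k) = a_{k-1} u(n+k-1) + ... + a_0 u(n) over exact rationals
# and searching for a zero up to a finite bound.
from fractions import Fraction

def find_zero(coeffs, init, bound):
    """coeffs = [a_0, ..., a_{k-1}], init = [u(0), ..., u(k-1)]."""
    u = [Fraction(v) for v in init]
    for n in range(bound):
        if u[n] == 0:
            return n                       # certificate: u(n) = 0
        # append u(n+k) computed from the k preceding entries
        u.append(sum(Fraction(a) * u[n + i] for i, a in enumerate(coeffs)))
    return None                            # inconclusive, NOT a proof of "no zero"

# hypothetical example: u(n+2) = 2 u(n+1) - u(n) with u(0) = -2, u(1) = -1,
# i.e. u(n) = n - 2, which vanishes at n = 2
print(find_zero([-1, 2], [-2, -1], bound=100))  # -> 2
```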
## 3. Hardness of Reachability in Polynomial Programs

We first address the computational limitations of reachability analysis within polynomial programs. It is decidable whether a loop with _affine_ assignments reaches a target state from a given initial state (Kannan and Lipton, 1980). Additionally, even problems generalizing reachability are known to be decidable for linear loops, such as various model-checking problems (Karimov et al., 2022). However, reachability for loops with polynomial assignments, or equivalently discrete-time polynomial dynamical systems, has been an open challenge. In this section, we address this reachability challenge via our P2P problem, showing that reachability in polynomial program loops is at least as hard as the Skolem problem (Theorem 3.3). To this end, let us revisit and formally define our P2P problem from Section 1, as follows. The Point-To-Point Reachability Problem (P2P): Given a system of \(k\) polynomial recursive sequences \(u_{1}(n),\ldots,u_{k}(n),n\in\mathbb{N}_{0}\) and a target vector \(\vec{t}=(t_{1},\ldots,t_{k})\), does there exist some \(m\in\mathbb{N}_{0}\) such that for all \(1\leq i\leq k\), it holds that \(u_{i}(m)=t_{i}\)? To the best of our knowledge, nothing is known about the hardness of P2P for polynomial recursive sequences, and hence for loops with arbitrary polynomial assignments, apart from the trivial lower bounds provided by the linear/affine cases (Kannan and Lipton, 1980; Karimov et al., 2022). Footnote 2: For linear systems, the Point-To-Point Reachability problem (P2P) is also referred to as the _Orbit problem_ in (Kannan and Lipton, 1980). In the sequel, in Theorem 3.3 we prove that the P2P problem for polynomial recursive sequences is _at least as hard_ as Skolem. Doing so, we show that Skolem can be solved by _reducing_ it to instances of P2P, written in symbols as Skolem\(\leq\) P2P. We thus establish a computational lower bound for P2P in the sense that providing a decision procedure for P2P for polynomial recursive sequences would prove the decidability of the long-lasting open decision problem given by Skolem. **Our reduction for Skolem\(\leq\) P2P.** In a nutshell, we fix an arbitrary Skolem instance, that is, a linear recurrence sequence \(u(n)\) of order \(k\). We say that the instance \(u(n)\) is _positive_, if there exists some \(m\in\mathbb{N}_{0}\) such that \(u(m)=0\); otherwise we call the instance _negative_. Our reduction Skolem\(\leq\) P2P constructs an instance of P2P that reaches the all-zero vector \(\vec{0}\) if and only if the Skolem instance is positive. Hence, a decision procedure for P2P would directly lead to a decision procedure for Skolem. Following (2), let our Skolem instance of order \(k\) be the LRS \(u(n):\mathbb{N}_{0}\rightarrow\mathbb{Q}\) specified by coefficients \(a_{0},\ldots,a_{k-1}\in\mathbb{Q}\) such that \(a_{0}\neq 0\) and, for all \(n\in\mathbb{N}_{0}\), we have \[u(n+k)=a_{k-1}\cdot u(n+k-1)+\ldots+a_{1}\cdot u(n+1)+a_{0}\cdot u(n)=\sum_{i=0}^{k-1}a_{i}\cdot u(n+i). \tag{5}\] From our Skolem instance (5), we construct a system of \(k\) polynomial recursive sequences \(x_{0},\ldots,x_{k-1}\), as given in (4).
Namely, the initial sequence values are defined inductively as \[\boxed{x_{0}(0)\coloneqq u(0)}\qquad\boxed{x_{i}(0)\coloneqq u(i)\cdot\prod_{\ell=0}^{i-1}x_{\ell}(0)\qquad(1\leq i<k)}\] With the initial values defined, the sequences \(x_{0},\ldots,x_{k-1}\) are uniquely defined via the following system of recurrence equations: \[\boxed{x_{i}(n+1)\coloneqq x_{i+1}(n)\qquad(0\leq i<k-1)}\qquad\boxed{x_{k-1}(n+1)\coloneqq\sum_{i=0}^{k-1}a_{i}\cdot x_{i}(n)\cdot\prod_{\ell=i}^{k-1}x_{\ell}(n)} \tag{6}\] Intuitively, the \(x_{i}\) sequences are a "non-linear variant" of the Skolem instance \(u(n)\) such that, once any \(x_{i}\) reaches \(0\), \(x_{i}\) remains \(0\) forever. The target vector for our P2P instance is therefore \(\vec{t}=\vec{0}\). Let us illustrate the main idea of our construction with the following example. Example 3.1: Assume our Skolem instance from (5) is given by the recurrence \(u(n+3)=2u(n+2)-2u(n+1)-12u(n)\) and the initial values \(u(0)=2,u(1)=-3,u(2)=3\). Following our reduction (6), we construct a system of polynomial recursive sequences \(x_{i}(n)\): \[x_{0}(0)=u(0)=2\qquad\qquad x_{0}(n+1)=x_{1}(n)\] \[x_{1}(0)=u(1)x_{0}(0)=-6\qquad\qquad x_{1}(n+1)=x_{2}(n)\] \[x_{2}(0)=u(2)x_{0}(0)x_{1}(0)=-36\qquad\qquad x_{2}(n+1)=2x_{2}(n)^{2}-2x_{1}(n)^{2}x_{2}(n)-12x_{0}(n)^{2}x_{1}(n)x_{2}(n)\] The first few sequence elements of \(u(n)\) and \(x_{0}(n)\) are shown in Figure 2 and illustrate the key property of our reduction: * \(x_{0}(n)\) is non-zero as long as \(u(n)\) is non-zero, which we prove in Lemma 3.2; * if there is an \(N\) such that \(u(N)=0\), it holds that for all \(n\geq N:x_{0}(n)=0\). The other sequences \(x_{1}\) and \(x_{2}\) in the system are "shifted" variants of \(x_{0}\). Hence, the constructed sequences all eventually reach the all-zero configuration and remain there. In Theorem 3.3, we prove that this is the case if and only if the Skolem instance \(u(n)\) is positive.

Figure 2: The first 15 sequence elements of \(u(n)\) and \(x_{0}(n)\) in Example 3.1.

**Correctness of Skolem\(\leq\) P2P.** To prove the correctness of our reduction Skolem\(\leq\) P2P and to assert the properties (i)-(ii) of Example 3.1 among \(u(n)\) and \(x_{i}(n)\), we introduce \(k\) auxiliary variables \(s_{0},\ldots,s_{k-1}\) defined as \[\boxed{s_{i}(0)\coloneqq\begin{cases}1&(i=0)\\ \prod_{\ell=0}^{i-1}x_{\ell}(0)&(1\leq i<k)\end{cases}}\qquad\boxed{s_{i}(n+1)\coloneqq\begin{cases}s_{i+1}(n)&(i\neq k-1)\\ s_{k-1}(n)\cdot x_{k-1}(n)&(i=k-1)\end{cases}}\] Using these auxiliary sequences \(s_{i}(n)\), we next prove two central properties of our P2P instance. **Lemma 3.2**: _For the system of polynomial recursive sequences in (6), it holds for all \(n\geq 0\) and \(0\leq i<k\) that_ \[x_{i}(n)=s_{i}(n)\cdot u(n+i),\text{ and} \tag{7}\] \[s_{i}(n)=\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{i-1}x_{\ell}(n). \tag{8}\] We prove the two properties by well-founded induction on the lexicographic order on pairs \((n,i)\), where \(n\geq 0\) and \(0\leq i<k\). Here, \((n,i)<(n^{\prime},i^{\prime})\) if and only if \(n<n^{\prime}\) or \(n=n^{\prime}\ \wedge\ i<i^{\prime}\). The order has the unique least element \((0,0)\). _Base case:_\(n=0\). If \(i=0\), then properties (7) and (8) hold by definition of \(s_{0}(0)\coloneqq 1=\prod_{\ell=0}^{-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{-1}x_{\ell}(0)\) and \(x_{0}(0)\coloneqq u(0)=s_{0}(0)\cdot u(0)\).
Also, if \(0<i<k\), then properties (7) and (8) are trivially satisfied by the definition of the initial values: \(s_{i}(0)\coloneqq\prod_{\ell=0}^{i-1}x_{\ell}(0)\) and \(x_{i}(0)\coloneqq u(i)\cdot\prod_{\ell=0}^{i-1}x_{\ell}(0)=u(i)\cdot s_{i}(0)\). _Induction step - Case 1:_\(n>0\ \wedge\ 0\leq i<k{-}1\). By the lexicographical ordering, it holds that \((n,i{+}1)<(n{+}1,i)\). Hence, we can assume that properties (7) and (8) hold for \((n,i{+}1)\). Thus, we have the induction hypothesis \[x_{i+1}(n)=s_{i+1}(n)\cdot u(n+i+1),\text{ and} \tag{9}\] \[s_{i+1}(n)=\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{i}x_{\ell}(n). \tag{10}\] To prove property (7) for \((n{+}1,i)\) means to show that \[x_{i}(n+1)=s_{i}(n+1)\cdot u(n+i+1).\] The sequences \(x_{i}\) and \(s_{i}\) are defined by \(x_{i}(n{+}1)=x_{i+1}(n)\) and \(s_{i}(n{+}1)=s_{i+1}(n)\) and hence property (7) follows from the induction hypothesis (9). To prove property (8) for \((n{+}1,i)\) means to show that \[s_{i}(n+1)=\prod_{\ell=0}^{n}x_{0}(\ell)\cdot\prod_{\ell=0}^{i-1}x_{\ell}(n+1).\] We prove the equation by using the induction hypothesis (10), the definitions \(x_{i}(n{+}1)=x_{i+1}(n)\) and \(s_{i}(n{+}1)=s_{i+1}(n)\), and index manipulation: \[s_{i}(n+1)=s_{i+1}(n)=\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{i}x_{\ell}(n)=\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot x_{0}(n)\cdot\prod_{\ell=0}^{i-1}x_{\ell+1}(n)=\prod_{\ell=0}^{n}x_{0}(\ell)\cdot\prod_{\ell=0}^{i-1}x_{\ell}(n+1)\] _Induction step - Case 2:_\(n>0\) and \(i=k{-}1\). We show that property (7) holds for \((n{+}1,k{-}1)\) by proving it to be equivalent to the definition of \(x_{k-1}(n{+}1)\). To do so, we first instantiate property (7) and replace both \(s_{k-1}(n{+}1)\) and \(u(n{+}k)\) by their defining recurrence: \[x_{k-1}(n+1)=s_{k-1}(n+1)\cdot u(n+k)=s_{k-1}(n)\cdot x_{k-1}(n)\cdot\left(\sum_{i=0}^{k-1}a_{i}\cdot u(n+i)\right)\] Next, we rearrange and apply the induction hypothesis (8) for \((n,k{-}1)\) and \((n,i)\) and obtain: \[x_{k-1}(n+1)=x_{k-1}(n)\cdot\left(\sum_{i=0}^{k-1}a_{i}\cdot u(n+i)\cdot\underbrace{\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{k-2}x_{\ell}(n)}_{s_{k-1}(n)\text{ by I.H. (8)}}\right)\] Splitting \(s_{k-1}(n)=s_{i}(n)\cdot\prod_{\ell=i}^{k-2}x_{\ell}(n)\), which follows from the induction hypothesis (8) for \((n,i)\) and \((n,k{-}1)\), and using \(s_{i}(n)\cdot u(n{+}i)=x_{i}(n)\) by the induction hypothesis (7) for \((n,i)\), we arrive at \[x_{k-1}(n+1)=x_{k-1}(n)\cdot\sum_{i=0}^{k-1}a_{i}\cdot x_{i}(n)\cdot\prod_{\ell=i}^{k-2}x_{\ell}(n)=\sum_{i=0}^{k-1}a_{i}\cdot x_{i}(n)\cdot\prod_{\ell=i}^{k-1}x_{\ell}(n),\] which is exactly the defining recurrence (6) of \(x_{k-1}\). Hence, property (7) holds for \((n{+}1,k{-}1)\). For property (8), the definition \(s_{k-1}(n{+}1)=s_{k-1}(n)\cdot x_{k-1}(n)\), the induction hypothesis (8) for \((n,k{-}1)\), and the shift \(x_{\ell}(n{+}1)=x_{\ell+1}(n)\) yield \[s_{k-1}(n+1)=\prod_{\ell=0}^{n-1}x_{0}(\ell)\cdot\prod_{\ell=0}^{k-1}x_{\ell}(n)=\prod_{\ell=0}^{n}x_{0}(\ell)\cdot\prod_{\ell=0}^{k-2}x_{\ell}(n+1).\] This completes the induction and proves Lemma 3.2.

Theorem 3.3 (Hardness of P2P).: _P2P is Skolem-hard. That is, Skolem\(\leq\)P2P._

Proof.: We show that our polynomial recursive system constructed in (6) reaches the all-zero vector from the initial value if and only if the original Skolem instance is positive. \((\Rightarrow)\): Assume the Skolem instance is positive; then there is some smallest \(N\in\mathbb{N}_{0}\) such that \(u(N)=0\). Property (7) of Lemma 3.2 implies \[x_{0}(N)=s_{0}(N)\cdot u(N)=0.\] Using this equation and property (8) of Lemma 3.2, we deduce that for all \(n>N\), each \(s_{i}(n)\) contains \(x_{0}(N)\) as a factor and hence \(s_{i}(n)=0\). Additionally, as \(x_{i}(n)=s_{i}(n)\cdot u(n{+}i)\) by property (7), we conclude that for all \(n>N\) also \(x_{i}(n)=0\). Hence, the polynomial recursive system reaches the all-zero vector. \((\Leftarrow)\): Assume that the Skolem instance is negative, meaning that the linear recurrence sequence \(u(n)\) does not have a \(0\). In particular, \(u(i)\neq 0\) for all \(0\leq i<k\). Therefore, by definition of the polynomial recursive system (6), \(x_{i}(0)\neq 0\) for all \(0\leq i<k\). Towards a contradiction, assume that the polynomial recursive system still reaches the all-zero vector. Hence, there is a smallest \(N\in\mathbb{N}_{0}\) such that \(x_{i}(N)=0\) for all \(0\leq i<k\). In particular, \(x_{0}(N)=0\). Moreover, \(x_{0}\) is the last sequence to reach \(0\), because of the recurrence equation \(x_{i}(n{+}1)=x_{i+1}(n)\) for \(0\leq i<k{-}1\). Therefore, \(N\) is also the smallest number such that \(x_{0}(N)=0\). By property (7) of Lemma 3.2, we have \[x_{0}(N)=s_{0}(N)\cdot u(N)=0.\] However, \(s_{0}(N)\) must be non-zero, because \[s_{0}(N)=\prod_{\ell=0}^{N-1}x_{0}(\ell),\] by property (8) of Lemma 3.2, and the fact that \(N\) is the smallest number such that \(x_{0}(N)=0\). Then we necessarily have \(u(N)=0\), yielding a contradiction. ∎

Theorem 3.3 shows that P2P for polynomial recursive sequences is at least as hard as the Skolem problem. Thus, reachability and model-checking of loops with polynomial assignments is Skolem-hard.
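To see the reduction at work, one can unroll both the LRS \(u(n)\) of Example 3.1 and the polynomial recursive system constructed from it, and observe that the \(x_{i}\) collapse to the all-zero vector exactly when \(u\) first hits a zero. The following short Python sketch (our illustration; the recurrences are those of Example 3.1) does this:

```python
# Illustrative check of the Skolem <= P2P reduction on Example 3.1:
# u(n+3) = 2 u(n+2) - 2 u(n+1) - 12 u(n) with u(0)=2, u(1)=-3, u(2)=3,
# together with the polynomial recursive system x_0, x_1, x_2 built from it.
from fractions import Fraction as F

u = [F(2), F(-3), F(3)]
x = [F(2), F(-6), F(-36)]  # x_0(0)=u(0), x_1(0)=u(1)x_0(0), x_2(0)=u(2)x_0(0)x_1(0)

for n in range(10):
    print(n, u[0], tuple(x))
    u = [u[1], u[2], 2*u[2] - 2*u[1] - 12*u[0]]
    x = [x[1], x[2], 2*x[2]**2 - 2*x[1]**2*x[2] - 12*x[0]**2*x[1]*x[2]]
# u(n) first vanishes at n = 5; from that iteration on the x_i are stuck at
# (0, 0, 0), so the P2P instance with target vector 0 is positive exactly
# because the underlying Skolem instance is.
```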
A decision procedure establishing decidability for P2P would lead to major breakthroughs in number theory (Lipton et al., 2022), as by Theorem 3.3 this would imply decidability of the Skolem problem. ## 4. Hardness of Computing the Strongest Polynomial Invariant This section goes beyond reachability analysis and focuses on inferring the strongest polynomial invariants of polynomial loops. As such, we turn our attention to solving the SPInv problem of Section 1, which is formally defined as given below. The SPInv Problem: Given an unguarded, deterministic loop with polynomial updates, compute a basis of its polynomial invariant ideal. We prove that finding the strongest polynomial invariant for deterministic loops with polynomial updates, that is, solving SPInv, is at least as hard as P2P (Theorem 4.2). Hence, P2P\(\leq\)SPInv. Then, by the Skolem\(\leq\)P2P hardness result of Theorem 3.3, we conclude the Skolem-hardness of SPInv, that is Skolem\(\leq\)P2P\(\leq\)SPInv. To the best of our knowledge, our Theorem 3.3 together with Theorem 4.2 provide the first computational lower bound on \(\operatorname{SPInv}\), when focusing on loops with arbitrary polynomial updates (see Table 1). **Our reduction for \(\operatorname{P2P}\leq\operatorname{SPInv}\).** We fix an arbitrary \(\operatorname{P2P}\) instance of order \(k\), given by a system of polynomial recursive sequences \(u_{1},\ldots,u_{k}:\mathbb{N}_{0}\to\mathbb{Q}\) and a target vector \(\vec{t}=(t_{1},\ldots,t_{k})\in\mathbb{Q}^{k}\). This \(\operatorname{P2P}\) instance is positive if and only if there exists an \(N\in\mathbb{N}_{0}\) such that \((u_{1}(N),\ldots,u_{k}(N))=\vec{t}\). For reducing \(\operatorname{P2P}\) to \(\operatorname{SPInv}\), we construct the following deterministic loop with polynomial updates over \(k{+}2\) variables: \[\begin{bmatrix}f&g&x_{1}&\ldots&x_{k}\end{bmatrix}\leftarrow\begin{bmatrix}1&0&u_{1}(0)&\ldots&u_{k}(0)\end{bmatrix} \tag{11}\] **while \(\star\) do** \[\begin{bmatrix}x_{1}\\ \vdots\\ x_{k}\\ f\\ g\end{bmatrix}\leftarrow\begin{bmatrix}p_{1}(x_{1},\ldots,x_{k})\\ \vdots\\ p_{k}(x_{1},\ldots,x_{k})\\ f\cdot\left((x_{1}-t_{1})^{2}+\ldots+(x_{k}-t_{k})^{2}\right)\\ g+1\end{bmatrix}\] **end while** The polynomial recursive sequences \(u_{1},\ldots,u_{k}\) are fully determined by their initial values and the polynomials \(p_{1},\ldots,p_{k}\in\mathbb{Q}[u_{1},\ldots,u_{k}]\) defining the respective recurrence equations \(u_{i}(n\!+\!1)=p_{i}(u_{1}(n),\ldots,u_{k}(n))\). Hence, by the construction of the \(\operatorname{SPInv}\) instance (11), every program variable \(x_{i}\) models the sequence \(u_{i}\). As such, for any number of loop iterations \(n\in\mathbb{N}_{0}\), we have \(x_{i}(n)=u_{i}(n)\). Moreover, the variable \(g\) models the loop counter \(n\), meaning \(g(n)=n\) for all \(n\in\mathbb{N}_{0}\). The motivation behind using the program variable \(f\) is that \(f\) becomes \(0\) as soon as all sequences \(u_{i}\) reach their target \(t_{i}\); moreover, \(f\) remains \(0\) afterward. More precisely, for \(n\in\mathbb{N}_{0}\), \(f(n)=0\) if and only if there is some \(N\leq n\) such that \(x_{1}(N)=t_{1}\wedge\ldots\wedge x_{k}(N)=t_{k}\). Hence, the sequence \(f\) has a \(0\) value, and subsequently, all its values are \(0\), if and only if the original instance of \(\operatorname{P2P}\) is positive. Let us illustrate the main idea of our \(\operatorname{P2P}\leq\operatorname{SPInv}\) reduction via the following example.
**Example 4.1**.: Consider the recursive sequences \(x(n\!+\!1)=x(n)+2\) and \(y(n\!+\!1)=y(n)+3\), with initial values \(x(0)=y(0)=0\). It is easy to see that the system \(S=(x(n),y(n))\) reaches the target \(\vec{t}_{1}=(4,6)\) but does not reach the target \(\vec{t}_{2}=(5,7)\). Following is the SPInv instance produced by our reduction for the P2P instance \((S,\vec{t}_{1})\); the instance for \((S,\vec{t}_{2})\) is identical, except that the update of \(f\) uses the target term \((x-5)^{2}+(y-7)^{2}\).

\(\operatorname{SPInv}\) **instance for \((S,\vec{t}_{1})\):**

\[\begin{bmatrix}f&g&x&y\end{bmatrix}\leftarrow\begin{bmatrix}1&0&0&0\end{bmatrix}\]

**while \(\star\) do**

\[\begin{bmatrix}x\\ y\\ f\\ g\end{bmatrix}\leftarrow\begin{bmatrix}x+2\\ y+3\\ f\cdot\left((x-4)^{2}+(y-6)^{2}\right)\\ g+1\end{bmatrix}\]

**end while**

Invariant ideal: \(\langle x-2g,y-3g,g(g-1)f\rangle\)

The invariant ideal above is given in terms of a Grobner basis with respect to the lexicographic order for the variable order \(g<f<y<x\). For the instance with the reachable target \(\vec{t}_{1}\), we have \(f(n)=0\) for \(n\geq 2\). Hence, \(g(g-1)f\) is a polynomial invariant and must be in the invariant ideal of this SPInv instance; in fact, \(g(g-1)f\) is not only in the invariant ideal but even a basis element for the Grobner basis with the chosen order. However, \(g(g-1)f\) is not in the invariant ideal of the SPInv instance with the unreachable target \(\vec{t}_{2}\). These two SPInv instances thus illustrate how a basis of the invariant ideal can be used to decide P2P. While, for simplicity, our recursive sequences \(x(n)\) and \(y(n)\) are linear, our approach to reducing P2P to \(\operatorname{SPInv}\) also applies to polynomial recursive sequences. In Theorem 4.2, we show that a polynomial such as \(g(g-1)f\) is an element of the basis of the invariant ideal (with respect to a specific monomial order) if and only if the original P2P instance is positive.

**Correctness of \(\operatorname{P2P}\leq\operatorname{SPInv}\).** To show that it is decidable whether \(f(n)\) has a \(0\) given a basis of the invariant ideal, we employ Grobner bases and an argument introduced in [Kauers 2005] for recursive sequences defined by rational functions, adjusted to our setting using recursive sequences defined by polynomials. Theorem 4.2 (Hardness of \(\operatorname{SPInv}\)).: _SPInv is at least as hard as P2P. That is, P2P \(\leq\) SPInv._ Proof.: Assume we are given an oracle for \(\operatorname{SPInv}\), computing a basis \(B\) of the polynomial invariant ideal \(\mathcal{I}=\langle B\rangle\) of our loop (11). We show that given such a basis \(B\), it is decidable whether \(f(n)\) has a root, which is equivalent to the fixed P2P instance being positive. Note that by the construction of the loop (11), if \(f(N)=0\) for some \(N\in\mathbb{N}_{0}\), then \(\forall n\geq N:f(n)=0\). Moreover, such an \(N\) exists if and only if the P2P instance is positive. This is true if and only if there exists an \(N\in\mathbb{N}_{0}\) such that the sequence \[n\mapsto f(n)\cdot n\cdot(n-1)\cdot(n-2)\cdot\ldots\cdot(n-N+1)\] is \(0\) for all \(n\in\mathbb{N}_{0}\). Consequently, the polynomial invariant ideal \(\mathcal{I}\) contains a polynomial \[P\coloneqq f\cdot g\cdot(g-1)\cdot\ldots\cdot(g-N+1) \tag{12}\] for some \(N\in\mathbb{N}_{0}\) only if the P2P instance (11) is positive. It is left to show that, given a basis \(B\) of \(\mathcal{I}\), it is decidable whether \(\mathcal{I}\) contains a polynomial (12).
**Correctness of \(\operatorname{P2P}\leq\operatorname{SPInv}\).** To show that it is decidable whether \(f(n)\) has a \(0\) given a basis of the invariant ideal, we employ Gröbner bases and an argument introduced in [Kauers 2005] for recursive sequences defined by rational functions, adjusted to our setting of recursive sequences defined by polynomials.

Theorem 4.2 (Hardness of SPInv).: _SPInv is at least as hard as P2P. That is, P2P \(\leq\) SPInv._

Proof.: Assume we are given an oracle for SPInv, computing a basis \(B\) of the polynomial invariant ideal \(\mathcal{I}=\langle B\rangle\) of our loop (11). We show that, given such a basis \(B\), it is decidable whether \(f(n)\) has a root, which is equivalent to the fixed P2P instance being positive. Note that by the construction of the loop (11), if \(f(N)=0\) for some \(N\in\mathbb{N}_{0}\), then \(\forall n\geq N:f(n)=0\). Moreover, such an \(N\) exists if and only if the P2P instance is positive. This is true if and only if there exists an \(N\in\mathbb{N}_{0}\) such that the sequence

\[n\mapsto f(n)\cdot n\cdot(n-1)\cdot(n-2)\cdot\ldots\cdot(n-N+1)\]

is \(0\) for all \(n\in\mathbb{N}_{0}\). Consequently, the polynomial invariant ideal \(\mathcal{I}\) contains a polynomial

\[P\coloneqq f\cdot g\cdot(g-1)\cdot\ldots\cdot(g-N+1) \tag{12}\]

for some \(N\in\mathbb{N}_{0}\) if and only if the P2P instance is positive. It is left to show that, given a basis \(B\) of \(\mathcal{I}\), it is decidable whether \(\mathcal{I}\) contains a polynomial of the form (12).

Using Buchberger's algorithm [Buchberger 2006], \(B\) can be transformed into a Gröbner basis with respect to any monomial order. We choose a total order among program variables such that \(g<f<x_{1},\ldots,x_{k}\). Without loss of generality, we assume that \(B\) is a Gröbner basis with respect to the lexicographic order extending this variable order. In what follows, we argue that if a polynomial \(P\) as in (12) is an element of \(\mathcal{I}\), then \(P\) must be an element of the basis \(B\). As the leading term of \(P\) is \(g^{N}\cdot f\), there must be some polynomial \(Q\) in \(B\) whose leading term divides \(g^{N}\cdot f\). By the choice of the lexicographic order, this polynomial must be of the form \(Q=Q_{1}(g)\cdot f-Q_{2}(g)\): any other term occurring in \(Q\) would necessarily appear in its leading term. As both \(P\in\mathcal{I}\) and \(Q\in\mathcal{I}\), it holds that

\[P\cdot Q_{1}-g\cdot(g-1)\cdot\ldots\cdot(g-N+1)\cdot Q\in\mathcal{I}.\]

By expanding \(P\) and \(Q\), we see that the above polynomial is equal to

\[Q_{2}\cdot g\cdot(g-1)\cdot\ldots\cdot(g-N+1).\]

As this polynomial is in the ideal \(\mathcal{I}\), it follows that for all \(n\in\mathbb{N}_{0}\):

\[Q_{2}(n)\cdot n\cdot(n-1)\cdot\ldots\cdot(n-N+1)=0.\]

However, this implies that \(Q_{2}(n)\) has infinitely many zeros, a property unique to the zero polynomial. Therefore, we conclude that \(Q_{2}\equiv 0\). Hence, if the original P2P instance is positive, there necessarily exists a basis polynomial of the form \(Q_{1}(g)\cdot f\). We show that this basis polynomial \(Q_{1}(g)\cdot f\) actually has the form (12): choose the basis polynomial of the form \(Q_{1}(g)\cdot f\) such that \(Q_{1}\) has minimal degree. Note that \(Q_{1}(g)\cdot f\) must divide \(P\). Assume \(Q_{1}(g)\) is not of the form \(g\cdot(g{-}1)\cdot\ldots\cdot(g{-}N{+}1)\). Then, at least one factor \((g{-}m)\) is not a factor of \(Q_{1}\), or equivalently \(Q_{1}(m)\neq 0\). Then, necessarily \(f(m)=0\), and \(g\cdot(g{-}1)\cdot\ldots\cdot(g{-}m{+}1)\cdot f\) must be in the ideal \(\mathcal{I}\), contradicting the minimality of the degree of \(Q_{1}\). Therefore, we conclude that the P2P instance is positive if and only if the Gröbner basis contains a polynomial of the form (12). As the basis \(B\) is finite, this property can be checked by enumerating the basis elements of \(B\). Hence, given an oracle for SPInv, we can decide if the P2P instance is positive or negative.

Theorem 4.2 shows that SPInv is at least as hard as the P2P problem. Together with Theorem 3.3, we conclude that SPInv is Skolem-hard.
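The enumeration in the final proof step is mechanical. The following sketch (Python with sympy; the helper name is ours) scans a Gröbner basis for an element of the form (12), i.e., a constant multiple of \(f\cdot g\cdot(g-1)\cdot\ldots\cdot(g-N+1)\):

```python
from sympy import symbols, groebner, Poly, expand

x, y, f, g = symbols('x y f g')

def is_form_12(q):
    """Is q = c * f * g*(g-1)*...*(g-N+1) for some N >= 0 and c != 0?"""
    p = Poly(q, f)                        # view q as a polynomial in f
    if p.degree() != 1 or p.nth(0) != 0:  # must be exactly Q1(g) * f
        return False
    q1 = Poly(p.nth(1), g)                # Q1 as a polynomial in g
    falling = 1
    for m in range(q1.degree()):          # g * (g-1) * ... * (g-N+1)
        falling *= (g - m)
    return expand(q1.as_expr() - q1.LC() * falling) == 0

G = groebner([x - 2*g, y - 3*g, g*(g - 1)*f], x, y, f, g, order='lex')
print(any(is_form_12(b) for b in G.exprs))  # True -> instance is positive
```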
**An improved direct reduction from Skolem to SPInv.** Theorem 4.2 together with Theorem 3.3 yields the chain of reductions

\[\textsc{Skolem}\leq\textsc{P2P}\leq\textsc{SPInv}.\]

Within these reductions, a Skolem instance of order \(k\) yields a P2P instance with \(k\) sequences, which in turn reduces to a SPInv instance over \(k{+}2\) variables. We conclude this section by noting that, if the linear recurrence sequence of the Skolem instance is an _integer sequence_, then a reduction directly from Skolem to SPInv can be established using only \(k{+}1\) variables. A slight modification of the Skolem\(\leq\)P2P reduction of Section 3 results in a reduction from Skolem instances of order \(k\) directly to SPInv instances with \(k{+}1\) variables. Any system of polynomial recursive sequences can be encoded in a loop with polynomial updates. Hence, the instance produced by the Skolem\(\leq\)P2P reduction can be interpreted as a loop. It is sufficient to modify the resulting loop in the following way:

\[\begin{array}{|c|c|}\hline x_{k-1}\leftarrow\sum_{i=0}^{k-1}a_{i}\cdot x_{i} \cdot\prod_{\ell=i}^{k-1}x_{\ell}\\ s_{k-1}\gets x_{k-1}\cdot s_{k-1}\end{array}\quad\to\quad\begin{array}{|c|c|} \hline x_{k-1}\leftarrow\sum_{i=0}^{k-1}a_{i}\cdot x_{i}\cdot\prod_{\ell=i}^{ k-1}2\cdot x_{\ell}\\ s_{k-1}\gets 2\cdot x_{k-1}\cdot s_{k-1}\end{array}\]

As in the reduction of Section 3, the equation \(u_{0}(n)=\frac{x_{0}(n)}{s_{0}(n)}\) still holds, and the resulting loop reaches the all-zero configuration if and only if the original Skolem instance is positive (the integer sequence has a \(0\)). Additionally, the resulting loop runs through infinitely many _different_ configurations if and only if the Skolem instance is negative, as the additional factor in the updates forces a strict increase in \(|s_{k-1}|\) as long as the all-zero configuration is not reached. Assuming a solution to SPInv for the constructed loop, that is, a basis of the polynomial invariant ideal, it is decidable whether the set of reachable program configurations (more precisely, its algebraic closure) is finite or not [10]. Therefore, an oracle for SPInv implies the decidability of Skolem for _integer sequences_, while the chain of reductions Skolem\(\leq\)P2P\(\leq\)SPInv is also valid for rational sequences.

Summary of computability results in polynomial (non)deterministic loops. We conclude this section by overviewing our computability results in Table 1, focusing on the strongest polynomial invariants of (non)deterministic loops and in relation to the state-of-the-art.

**Table 1.** Summary of computability results for strongest invariants of _nonprobabilistic_ polynomial loops, including our own results (Theorems 3.3 & 4.2). With '✓' we denote decidable problems, while '✗' denotes undecidable problems.

| Program model | Updates | Strongest affine invariant | Strongest polynomial invariant |
|---|---|---|---|
| Det., unguarded | Affine | ✓ (Karr, 1976) | ✓ (Kovács, 2008) |
| Det., unguarded | Poly. | ✓ (Müller-Olm and Seidl, 2004a) | Skolem-hard (Theorems 3.3 & 4.2) |
| Det., guarded (=, <) | Affine, Poly. | ✗ (Halting Problem) | ✗ (Halting Problem) |
| Nondet., unguarded | Poly. | | ✗ (Hrushovski et al., 2023) |
| Nondet., guarded (=, <) | Affine, Poly. | ✗ (Müller-Olm and Seidl, 2004b) | ✗ (Müller-Olm and Seidl, 2004b) |

## 5. Strongest Invariant for Probabilistic Loops

In this section, we finally go beyond (non)deterministic programs and address computational challenges in probabilistic programming, in particular loops. Unlike the programming models of Sections 3-4, probabilistic loops follow different transitions with different probabilities. Recall that the standard definition of an invariant \(I\), as given in Definition 2.1, demands that \(I\) holds in _every reachable_ configuration and location. As such, when using Definition 2.1 to define an invariant \(I\) of a probabilistic loop, the information provided by the probabilities of reaching a configuration within the respective loop is omitted in \(I\). However, Definition 2.1 captures an invariant \(I\) of a probabilistic loop when every probabilistic loop transition is replaced by a nondeterministic transition. Nevertheless, for incorporating probability-based information in loop invariants, Definition 2.1 needs to be revised to consider expected values and higher (statistical) moments describing the value distributions of probabilistic loop variables (Kozen, 1983; McIver and Morgan, 2005). Therefore, in Definition 5.2 we introduce _polynomial moment invariants_ to reason about value distributions of probabilistic loops. We do so by utilizing higher moments of the probability distributions induced by the value distributions of loop variables during execution (Section 5.1). We prove that polynomial moment invariants generalize classical invariants (Lemma 5.5) and show that the strongest moment invariants up to moment order \(\ell\) are computable for the class of so-called moment-computable polynomial loops (Section 5.2). In this respect, in Algorithm 1 we give a complete procedure for computing the strongest moment invariants of moment-computable polynomial loops.
When considering _arbitrary_ polynomial probabilistic loops, we prove that the strongest moment invariants are (i) not computable for guarded probabilistic loops (Section 5.3) and (ii) Skolem-hard to compute for unguarded probabilistic loops (Section 5.4).

### Polynomial Moment Invariants

Higher moments capture expected values of monomials over loop variables; for example, \(\mathbb{E}[x^{2}]\) and \(\mathbb{E}[xy]\) respectively yield the second-order moment of \(x\) and a second-order mixed moment. Such higher moments are necessary to characterize, and potentially recover, the value distribution of probabilistic loop variables, allowing us to reason about statistical properties, such as variance or skewness, over probabilistic value distributions. When reasoning about moments of probabilistic program variables, note that in general neither \(\mathbb{E}[x^{\ell}]=\mathbb{E}[x]^{\ell}\) nor \(\mathbb{E}[xy]=\mathbb{E}[x]\mathbb{E}[y]\) holds, due to potential dependencies among the (random) loop variables \(x\) and \(y\). Therefore, describing all polynomial invariants among all higher moments by finitely many polynomials is futile. A natural restriction, and the one we undertake in this paper, is to consider polynomials over finitely many moments, which we do as follows.

Definition 5.1 (Moments of Bounded Degree): Let \(\ell\) be a positive integer. Then the _set of program variable moments of order at most \(\ell\)_ is given by

\[\mathbb{E}^{\leq\ell}\coloneqq\left\{\mathbb{E}\big{[}x_{1}^{\alpha_{1}}x_{2} ^{\alpha_{2}}\cdots x_{k}^{\alpha_{k}}\big{]}\ |\ \alpha_{1}+\ldots+\alpha_{k}\leq\ell\right\}.\]

While Definition 5.1 uses a bound \(\ell\) to define the set of moments of bounded degree, our subsequent results apply to any _finite_ set of moments of program variables.
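For concreteness, the finite index set behind \(\mathbb{E}^{\leq\ell}\) can be enumerated directly; the following sketch (Python; the helper is our own, and we omit the trivial moment \(\mathbb{E}[1]\)) lists the exponent tuples for \(k=2\) variables and \(\ell=2\):

```python
from itertools import product

def moments_up_to(k, l):
    """Exponent tuples (a_1,...,a_k) with 1 <= a_1+...+a_k <= l."""
    return [a for a in product(range(l + 1), repeat=k) if 0 < sum(a) <= l]

print(moments_up_to(2, 2))
# [(0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
#  -> E[y], E[y^2], E[x], E[xy], E[x^2]: the five moments used in Example 5.3
```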
Recall that Section 2.1 defines the semantics \(\mathcal{S}_{q}^{n}\) of a probabilistic loop with respect to the location \(q\in Q\) and the number of executed transitions \(n\geq 0\). The set \(\mathcal{S}_{q}^{n}\), in combination with the probability of each configuration, allows us to define the moments of program variables after \(n\) transitions. Further, for a monomial \(M\) in program variables, we defined \(\mathbb{E}[M_{n}]\) in (1) to be the expected value of \(M\) after \(n\) transitions. For example, \(\mathbb{E}[x_{n}]\) denotes the expected value of the program variable \(x\) after \(n\) transitions. With this, we define the set of polynomial invariants among moments of program variables as follows.

Definition 5.2 (Moment Invariant Ideal): Let \(\mathbb{E}^{\leq\ell}\) be the set of program variable moments of order less than or equal to \(\ell\) and \(k=|\mathbb{E}^{\leq\ell}|\). The _moment invariant ideal_ \(\mathbb{I}^{\leq\ell}\) is defined as

\[\mathbb{I}^{\leq\ell}=\left\{p(M_{1},\ldots,M_{k})\in\overline{\mathbb{Q}} \big{[}\mathbb{E}^{\leq\ell}\big{]}\ |\ p(M_{1,n},\ldots,M_{k,n})=0\text{ for all }n\in \mathbb{N}_{0}\right\}.\]

We refer to elements of \(\mathbb{I}^{\leq\ell}\) as _polynomial moment invariants_. For example, using Definition 5.2, a polynomial \(p(\mathbb{E}[x],\mathbb{E}[y])\) in the expected values of the variables \(x\) and \(y\) is a _polynomial moment invariant_ if \(p(\mathbb{E}[x_{n}],\mathbb{E}[y_{n}])=0\) for all numbers of transitions \(n\in\mathbb{N}_{0}\). Note that, although \(\mathbb{E}^{\leq\ell}\) is a finite set, the moment invariant ideal \(\mathbb{I}^{\leq\ell}\) is, in general, an infinite set.

Example 5.3: Consider two asymmetric random walks \(x_{n}\) and \(y_{n}\) that both start at the origin. Both random walks increase or decrease with probability \(\nicefrac{{1}}{{2}}\), respectively. The random walk \(x_{n}\) either increases by \(2\) or decreases by \(1\), while \(y_{n}\) behaves conversely, meaning \(y_{n}\) either increases by \(1\) or decreases by \(2\). Following is a probabilistic loop encoding this process together with the moment invariant ideal \(\mathbb{I}^{\leq 2}\). The loop is given as program code. The intended meaning of the expression \(e_{1}\,[pr]\,e_{2}\) is that it evaluates to \(e_{1}\) with probability \(pr\) and to \(e_{2}\) with probability \(1-pr\).

\[\begin{bmatrix}x&y\end{bmatrix}\leftarrow\begin{bmatrix}0&0\end{bmatrix}\]

**while** \(\star\) **do**

\[\begin{bmatrix}x\\ y\end{bmatrix}\leftarrow\begin{bmatrix}x+2\ [\nicefrac{{1}}{{2}}]\ x-1\\ y+1\ [\nicefrac{{1}}{{2}}]\ y-2\end{bmatrix}\]

**end while**

Moment invariant ideal: \(\mathbb{I}^{\leq 2}=\big{\langle}\,\mathbb{E}[x]+\mathbb{E}[y],\ \mathbb{E}[x^{2}]-\mathbb{E}[y^{2}],\ \mathbb{E}[xy]+\mathbb{E}[x]^{2},\ \mathbb{E}[x^{2}]-\mathbb{E}[x]^{2}-\tfrac{9}{2}\mathbb{E}[x]\,\big{\rangle}\)

This ideal \(\mathbb{I}^{\leq 2}\) contains all algebraic relations that hold among \(\mathbb{E}[x_{n}]\), \(\mathbb{E}[y_{n}]\), \(\mathbb{E}\big{[}x_{n}^{2}\big{]}\), \(\mathbb{E}\big{[}y_{n}^{2}\big{]}\) and \(\mathbb{E}[(xy)_{n}]\) for every number of iterations \(n\in\mathbb{N}_{0}\). The ideal provides information about the stochastic process encoded by the loop. For instance, using the basis, it can be automatically checked that \(\mathbb{E}[xy]-\mathbb{E}[x]\mathbb{E}[y]\) is an element of \(\mathbb{I}^{\leq 2}\). Hence, \(\mathbb{E}[xy]=\mathbb{E}[x]\mathbb{E}[y]\) is an invariant, witnessing \(x\) and \(y\) being uncorrelated.
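The moment recurrences behind Example 5.3 can be checked mechanically. The sketch below (Python; the recurrences are derived by hand from the loop, assuming the two probabilistic choices are drawn independently, which the uncorrelatedness claim presupposes) propagates the exact first and second moments and verifies \(\mathbb{E}[xy]=\mathbb{E}[x]\mathbb{E}[y]\):

```python
from fractions import Fraction as F

# Moment recurrences of the loop in Example 5.3:
#   E[x]'   = E[x] + 1/2             E[y]'   = E[y] - 1/2
#   E[x^2]' = E[x^2] + E[x] + 5/2    E[y^2]' = E[y^2] - E[y] + 5/2
#   E[xy]'  = E[xy] - E[x]/2 + E[y]/2 - 1/4
Ex = Ey = Ex2 = Ey2 = Exy = F(0)
for n in range(1, 20):
    Ex, Ey, Ex2, Ey2, Exy = (
        Ex + F(1, 2),
        Ey - F(1, 2),
        Ex2 + Ex + F(5, 2),
        Ey2 - Ey + F(5, 2),
        Exy - Ex / 2 + Ey / 2 - F(1, 4),
    )
    assert Ex + Ey == 0 and Ex2 == Ey2    # two basis elements above
    assert Exy == Ex * Ey                 # E[xy] - E[x]E[y] = 0
print("all moment invariants hold up to n = 19")
```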
Moment invariant ideals of Definition 5.2 generalize the notion of classical invariant ideals of Definition 2.3 for nonprobabilistic loops. For a program variable \(x\) of a nonprobabilistic loop, the expected value of \(x\) after \(n\) transitions is just the value of \(x\) after \(n\) iterations, that is, \(\mathbb{E}[x_{n}]=x_{n}\). Furthermore, \(\mathbb{E}[x_{n}\cdot y_{n}]=x_{n}\cdot y_{n}\) for all program variables \(x\) and \(y\). Hence, a moment invariant such as \(\mathbb{E}[x^{2}]^{3}-\mathbb{E}[y]\mathbb{E}[y^{2}]\) corresponds to the classical invariant \(x^{6}-y^{3}\). To formalize this observation, we introduce a function \(\psi\) mapping invariants involving moments to classical invariants.

Footnote 3: If the loop contains nondeterministic choice, this property holds with respect to every scheduler resolving nondeterminism. For readability and simplicity, we omit the treatment of schedulers and refer to [Barthe et al. 2020] for details on schedulers.

Definition 5.4 (From Moment Invariants to Invariants).: Let \(\mathcal{P}\) be a program with variables \(x_{1},\ldots,x_{k}\). We define the natural _ring homomorphism_ \(\psi\colon\overline{\mathbb{Q}}[\mathbb{E}^{\leq\ell}]\to\overline{\mathbb{Q }}[x_{1},\ldots,x_{k}]\) extending \(\psi(\mathbb{E}[M]):=M\). That means, for all \(p,q\in\overline{\mathbb{Q}}[\mathbb{E}^{\leq\ell}]\) and \(c\in\overline{\mathbb{Q}}\), the function \(\psi\) satisfies the properties (i) \(\psi(p+q)=\psi(p)+\psi(q)\); (ii) \(\psi(p\cdot q)=\psi(p)\cdot\psi(q)\); and (iii) \(\psi(c\cdot p)=c\cdot\psi(p)\).

The function \(\psi\) maps polynomials over moments to polynomials over program variables, for example, \(\psi(\mathbb{E}[x^{2}]^{3}-\mathbb{E}[y]\mathbb{E}[y^{2}])=\psi(\mathbb{E}[x^ {2}])^{3}-\psi(\mathbb{E}[y])\psi(\mathbb{E}[y^{2}])=x^{6}-y^{3}\). If \(p\) is a polynomial moment invariant of a _probabilistic_ program, \(\psi(p)\) is in general _not_ a classical invariant. However, for nonprobabilistic programs, \(\psi(p)\) is necessarily an invariant for every moment invariant \(p\), as we show in the next lemma.

Lemma 5.5 (Moment Invariant Ideal Generalization).: _Let \(\mathcal{L}\) be a nonprobabilistic loop. Let \(\mathcal{I}\) be the classical invariant ideal and \(\mathbb{I}^{\leq\ell}\) the moment invariant ideal of order \(\ell\). Then, \(\mathbb{I}^{\leq\ell}\) and \(\mathcal{I}\) are identical under \(\psi\), that is,_

\[\psi\big{(}\mathbb{I}^{\leq\ell}\big{)}\coloneqq\big{\{}\psi(p)\mid p\in \mathbb{I}^{\leq\ell}\big{\}}=\mathcal{I}.\]

Proof.: We show that \(\psi(\mathbb{I}^{\leq\ell})\subseteq\mathcal{I}\). The reasoning for \(\mathcal{I}\subseteq\psi(\mathbb{I}^{\leq\ell})\) is analogous. Let \(q\in\psi(\mathbb{I}^{\leq\ell})\). Then, there is a \(p(\mathbb{E}[M^{(1)}],\ldots,\mathbb{E}[M^{(m)}])\in\mathbb{I}^{\leq\ell}\) for some monomials \(M^{(i)}\) in program variables such that \(\psi(p)=p(M^{(1)},\ldots,M^{(m)})=q\). The polynomial \(p\) in moments of program variables is an invariant because it is an element of \(\mathbb{I}^{\leq\ell}\). Moreover, because the loop \(\mathcal{L}\) is nonprobabilistic, we have \(\mathbb{E}[M_{n}]=M_{n}\) for all numbers of transitions \(n\in\mathbb{N}_{0}\) and all monomials \(M\) in program variables (see Footnote 3). Hence, \(q=p(M^{(1)},\ldots,M^{(m)})\) necessarily is a classical invariant as in Definition 2.1, and therefore \(q\in\mathcal{I}\).

Lemma 5.5 hence proves that Definition 5.2 generalizes the notion of invariant ideals of nonprobabilistic loops.
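As a quick illustration of \(\psi\) from Definition 5.4, the following sketch (Python with sympy; the symbol names standing for the formal moments are ours) realizes the map as a substitution and reproduces the \(x^{6}-y^{3}\) example:

```python
from sympy import symbols, expand

x, y = symbols('x y')
Ex2, Ey, Ey2 = symbols('Ex2 Ey Ey2')   # stand-ins for E[x^2], E[y], E[y^2]

psi = {Ex2: x**2, Ey: y, Ey2: y**2}    # psi(E[M]) := M, extended as a hom.
p = Ex2**3 - Ey*Ey2
print(expand(p.subs(psi)))             # x**6 - y**3
```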
### Computability of Moment Invariant Ideals

We next consider a special class of probabilistic loops, called _moment-computable polynomial loops_. For such loops, we prove that the bases for the moment invariant ideals \(\mathbb{I}^{\leq\ell}\) are computable for any order \(\ell\). Moreover, in Algorithm 1 we give a decision procedure computing moment invariant ideals of moment-computable polynomial loops. Let us recall the notion of _moment-computable loops_ [Moosbrugger et al. 2022], which we adjusted to our setting of polynomial probabilistic loops.

```
Require: a moment-computable polynomial loop L and an order ℓ ∈ ℕ
Ensure:  a basis B for the moment invariant ideal I^{≤ℓ}
  ▷ Closed forms of moments as exponential polynomials
  C ← compute_closed_forms(L, E^{≤ℓ})
  ▷ A basis for the ideal of all algebraic relations among the sequences in C
  B ← compute_algebraic_relations(C)
  return B
```
**Algorithm 1** Computing moment invariant ideals

Definition 5.6 (Moment-Computable Polynomial Loops).: A polynomial probabilistic loop \(\mathcal{L}\) is _moment-computable_ if, for any monomial \(M\) in loop variables of \(\mathcal{L}\), we have that \(\mathbb{E}[M_{n}]\) exists and is computable as \(\mathbb{E}[M_{n}]=f(n)\), where \(f(n)\) is an exponential polynomial in \(n\), that is, a sum of polynomials multiplied by exponential terms in \(n\): \(f(n)=\sum_{i=0}^{k}p_{i}(n)\cdot\lambda_{i}^{n}\), where all \(p_{i}\in\overline{\mathbb{Q}}[n]\) are polynomials and \(\lambda_{i}\in\overline{\mathbb{Q}}\).

As stated in [10], we note that any LRS (2) has an exponential polynomial as closed form. As proven in [11], when considering loops with affine assignments, probabilistic choice with constant probabilities, and drawing from probability distributions with constant parameters and existing moments, all moments of program variables follow linear recurrence sequences. Moreover, one may also consider polynomial (and not just affine) loop updates such that non-linear dependencies among variables are acyclic. If-statements can also be supported if the loop guards contain only program variables with a finite domain. Under such structural considerations, the resulting probabilistic loops are moment-computable loops [11]: expected values \(\mathbb{E}[M_{n}]\) for monomials \(M\) over loop variables are exponential polynomials in \(n\). Furthermore, a basis for the polynomial relations among exponential polynomials is computable [10]. We thus obtain a decision procedure computing the bases of moment invariant ideals of moment-computable polynomial loops, as given in Algorithm 1 and discussed next.

The procedure compute_closed_forms(\(\mathcal{L},S\)) in Algorithm 1 takes as inputs a moment-computable polynomial loop \(\mathcal{L}\) and a set \(S\) of moments of loop variables, and computes exponential polynomial closed forms of the moments in \(S\); here, we adjust results of [11] to implement compute_closed_forms(\(\mathcal{L},S\)). Further, compute_algebraic_relations(\(C\)) in Algorithm 1 denotes a procedure that takes a set \(C\) of exponential polynomial closed forms as input and computes a basis for all algebraic relations among them; in our work, we use [10] to implement compute_algebraic_relations(\(C\)). Soundness of Algorithm 1 follows from the soundness arguments of [10, 11]. We implemented Algorithm 1 in our tool called Polar, allowing us to automatically derive the strongest polynomial moment invariants of moment-computable polynomial loops.

Footnote 4: [https://github.com/probing-lab/polar](https://github.com/probing-lab/polar)

Example 5.7.: Using Algorithm 1 for the probabilistic loop of Example 5.3, we compute a basis for the moment invariant ideal \(\mathbb{I}^{\leq 2}\) in approximately 0.4 seconds and for \(\mathbb{I}^{\leq 3}\) in roughly 0.8 seconds, on a machine with a 2.6GHz Intel i7 processor and 32GB of RAM.
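To illustrate the compute_algebraic_relations step on the first moments of Example 5.3, one can eliminate the loop counter from the closed forms \(\mathbb{E}[x_{n}]=\nicefrac{{n}}{{2}}\), \(\mathbb{E}[y_{n}]=-\nicefrac{{n}}{{2}}\), \(\mathbb{E}[(xy)_{n}]=-\nicefrac{{n^{2}}}{{4}}\), which follow from the moment recurrences checked earlier. The sketch below (Python with sympy) is our own elimination-based stand-in for the procedure of [10], sufficient here because these closed forms are plain polynomials in \(n\):

```python
from sympy import symbols, groebner

n, Ex, Ey, Exy = symbols('n Ex Ey Exy')

# Closed forms as defining equations, with the loop counter n to eliminate.
defs = [Ex - n/2, Ey + n/2, Exy + n**2/4]

# In lex order with n first, basis elements free of n generate exactly
# the algebraic relations among the moments themselves.
G = groebner(defs, n, Ex, Ey, Exy, order='lex')
relations = [p for p in G.exprs if n not in p.free_symbols]
print(relations)   # generates the same ideal as [Ex + Ey, Exy - Ex*Ey]
```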
### Hardness for Guarded Probabilistic Loops

As Algorithm 1 provides a decision procedure for moment-computable polynomial loops, a natural question is whether the moment invariant ideals remain computable if we relax (C1) the restrictions on the guards, or (C2) the structural requirements on the polynomial assignments of moment-computable polynomial loops.

We first focus on (C1), that is, lifting the restriction on guards, and show that in this case a basis for the moment invariant ideal of any order becomes uncomputable (Theorem 5.8). We recall the seminal result of [12] proving that the strongest polynomial invariant for _nonprobabilistic_ loops with affine updates, nondeterministic choice, and guarded transitions is uncomputable. Interestingly, nondeterministic choice can be replaced by uniform probabilistic choice, allowing us to also establish the uncomputability of the strongest polynomial moment invariants, which means a basis for the ideal \(\mathbb{I}^{\leq\ell}\), for any order \(\ell\).

Theorem 5.8 (Uncomputability of Moment Invariant Ideal).: _For the class of guarded probabilistic loops with affine updates, a basis for the moment invariant ideal \(\mathbb{I}^{\leq\ell}\) is uncomputable for any order \(\ell\)._

Proof.: The proof is by reduction from Post's correspondence problem (PCP), which is undecidable [13]. A PCP instance consists of a finite alphabet \(\Sigma\) and a finite set of tuples \(\{(x_{i},y_{i})\mid 1\leq i\leq N,\ x_{i},y_{i}\in\Sigma^{*}\}\). A solution is a sequence of indices \((i_{k})\), \(1\leq k\leq K\), where \(i_{k}\in\{1,\ldots,N\}\), such that the concatenations of the substrings indexed by the sequence are identical, written in symbols as

\[x_{i_{1}}\cdot x_{i_{2}}\cdot\ldots\cdot x_{i_{K}}=y_{i_{1}}\cdot y_{i_{2}} \cdot\ldots\cdot y_{i_{K}}.\]

Note that the tuple elements may be of different lengths. Moreover, any instance of the PCP over a finite alphabet \(\Sigma\) can be equivalently represented over the alphabet \(\{0,1\}\) by a binary encoding. Now, given an instance of the (binary) PCP, we construct the guarded probabilistic loop with affine updates shown in Figure 3. We encode the binary strings as integers and denote a transition with probability \(pr\), guard \(g\), and updates \(f\) as \([pr]:g:\vec{x}\leftarrow f(\vec{x})\). The idea is to pick a pair of integer-encoded strings uniformly at random and append them to the string built so far. This is done by left-shifting the existing bits of the string (by multiplying by a power of \(2\)) and adding the randomly selected string.

Figure 3. A guarded probabilistic loop with affine updates simulating the PCP.

If the PCP instance does not have a solution, we have \(t=0\) after every transition. Hence, \(\mathbb{E}[t]=0\) must be an invariant. Therefore, \(\mathbb{E}[t]\) is necessarily an element of \(\mathbb{I}^{\leq\ell}\) for any order \(\ell\). If the PCP instance does have a solution \((i_{k})\), \(1\leq k\leq K\), then after exactly \(n=K+2\) transitions it holds that \(\mathbb{P}(x_{n}=y_{n})\geq\big{(}\frac{1}{N}\big{)}^{K}\), as this is the probability of choosing the correct sequence uniformly at random. Because \(t\) is an indicator variable, \(\mathbb{E}[t_{n}]=\mathbb{P}(t_{n}=1)=\mathbb{P}(x_{n}=y_{n})\geq\big{(}\frac{ 1}{N}\big{)}^{K}>0\). Hence, \(\mathbb{E}[t_{n}]\neq 0\) after \(n\) transitions, and \(\mathbb{E}[t]\) cannot be an element of \(\mathbb{I}^{\leq\ell}\) for any order \(\ell\). Consequently, for all orders \(\ell\), the PCP instance has a solution if and only if \(\mathbb{E}[t]\) is not an element of \(\mathbb{I}^{\leq\ell}\). However, given a basis, checking ideal membership is decidable (cf. Section 2.2). Hence, a basis for the moment invariant ideal \(\mathbb{I}^{\leq\ell}\) must be uncomputable for any order \(\ell\).
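The integer encoding used in the proof can be made concrete. In the sketch below (Python; the leading-\(1\) sentinel guarding against lost leading zeros is our own encoding detail, and the instance is the standard solvable PCP example with \(a=0\), \(b=1\)), we replay a known solution sequence and observe the equality that would set the indicator \(t\):

```python
def append(code, w):
    """Append bitstring w to an integer-encoded string: left-shift the
    existing bits by len(w), then add the bits of w."""
    return code * 2**len(w) + int(w, 2)

# Solvable PCP instance (a=0, b=1): tiles (a,baa), (ab,aa), (bba,bb),
# with solution index sequence 3, 2, 3, 1.
tiles = [('0', '100'), ('01', '00'), ('110', '11')]
x = y = 1                       # leading-1 sentinel keeps leading zeros
for i in [3, 2, 3, 1]:
    a, b = tiles[i - 1]
    x, y = append(x, a), append(y, b)
print(x == y)                   # True: both sides spell b b a a b b b a a
```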
Note that the PCP reduction within the proof of Theorem 5.8 requires only affine updates and affine invariants. Therefore, allowing loop guards renders even the problem of finding the strongest affine invariant for a finite set of moments uncomputable for probabilistic loops with affine updates.

### Hardness for Unguarded Polynomial Probabilistic Loops

In this section we address challenge (C2), that is, we study computational lower bounds for computing a basis of moment invariant ideals for probabilistic loops that lack guards and nondeterminism but feature arbitrary polynomial updates. We show that addressing (C2) boils down to solving the Prob-SPInv problem of Section 1, which in turn we prove to be Skolem-hard (Theorem 5.10). As such, computing the moment invariant ideals of probabilistic loops with arbitrary polynomial updates as stated in (C2) is Skolem-hard.

We restrict our attention to moment invariant ideals of order \(1\). Intuitively, a basis for \(\mathbb{I}^{\leq 1}\) is easier to compute than one for \(\mathbb{I}^{\leq\ell}\) with \(\ell>1\). A formal justification in this respect is given by the following lemma.

Lemma 5.9 (Moment Invariant Ideal of Order 1).: _Given a basis for the moment invariant ideal \(\mathbb{I}^{\leq\ell}\) for any order \(\ell\in\mathbb{N}\), a basis for \(\mathbb{I}^{\leq 1}\) is computable._

Proof.: The moment invariant ideal \(\mathbb{I}^{\leq\ell}\) is an ideal in the polynomial ring with variables \(\mathbb{E}^{\leq\ell}\). Moreover, \(\mathbb{E}^{\leq 1}\subseteq\mathbb{E}^{\leq\ell}\). Hence, \(\mathbb{I}^{\leq 1}=\mathbb{I}^{\leq\ell}\cap\overline{\mathbb{Q}}[\mathbb{E}^{ \leq 1}]\), meaning \(\mathbb{I}^{\leq 1}\) is an elimination ideal of \(\mathbb{I}^{\leq\ell}\). Given a basis for a polynomial ideal, bases for elimination ideals are computable (Cox et al., 1997).
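On the running example, the elimination step of Lemma 5.9 looks as follows (Python with sympy; we use two moment invariants of Example 5.3 as generators, ordering the higher moment \(\mathbb{E}[xy]\) first so that it gets eliminated):

```python
from sympy import symbols, groebner

Ex, Ey, Exy = symbols('Ex Ey Exy')       # Exy stands for the moment E[xy]

gens = [Ex + Ey, Exy - Ex*Ey]            # invariants from Example 5.3
G = groebner(gens, Exy, Ex, Ey, order='lex')
basis_le1 = [p for p in G.exprs if Exy not in p.free_symbols]
print(basis_le1)                         # [Ex + Ey]: a basis for I^{<=1}
```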
Using Lemma 5.9, we translate challenge (C2) into the Prob-SPInv problem of Section 1, formally defined as follows.

The Prob-SPInv Problem: Given an unguarded, probabilistic loop with polynomial updates and without nondeterministic choice, compute a basis of the moment invariant ideal of order \(1\).

Recall that computing a basis for the classical invariant ideal of nonprobabilistic programs with arbitrary polynomial updates, that is, deciding SPInv, is Skolem-hard (Theorem 3.3 and Theorem 4.2). We next show that SPInv reduces to Prob-SPInv, thus implying Skolem-hardness of Prob-SPInv as a direct consequence of Lemma 5.5.

Theorem 5.10 (Hardness of Prob-SPInv).: _Prob-SPInv is at least as hard as SPInv, in symbols SPInv\(\leq\)Prob-SPInv._

Proof.: Assume \(\mathcal{L}\) is an instance of SPInv. That is, \(\mathcal{L}\) is a deterministic loop with polynomial updates. Let \(x_{1},\ldots,x_{k}\) be the program variables and \(\mathcal{I}\) the classical invariant ideal of \(\mathcal{L}\). Note that \(\mathcal{L}\) is also an instance of Prob-SPInv, and assume \(B\) is a basis for the moment invariant ideal \(\mathbb{I}^{\leq 1}\). From Lemma 5.5 we know that \(\psi(\mathbb{I}^{\leq 1})=\mathcal{I}\). For order \(1\), the function \(\psi\) is a ring isomorphism between the polynomial rings \(\overline{\mathbb{Q}}[\mathbb{E}[x_{1}],\ldots,\mathbb{E}[x_{k}]]\) and \(\overline{\mathbb{Q}}[x_{1},\ldots,x_{k}]\). Hence, the set \(\{\psi(b)\mid b\in B\}\) is a basis for \(\mathcal{I}\). Therefore, given a basis for \(\mathbb{I}^{\leq 1}\), a basis for \(\mathcal{I}\) is computable.

Theorem 5.10 shows that Prob-SPInv is at least as hard as the SPInv problem. Together with Theorem 3.3 and Theorem 4.2, we conclude the following chain of reductions:

\[\textsc{Skolem}\leq\textsc{P2P}\leq\textsc{SPInv}\leq\textsc{Prob-SPInv}.\]

**On attempting to prove uncomputability of Prob-SPInv: a remaining open challenge.** While Theorem 5.10 asserts that Prob-SPInv is Skolem-hard, it could be that Prob-SPInv is uncomputable. Recall that for proving the uncomputability of moment invariant ideals for guarded probabilistic programs in Theorem 5.8, we replaced nondeterministic choice with probabilistic choice. The "nondeterministic version" of Prob-SPInv refers to computing the strongest polynomial invariant for nondeterministic polynomial programs, which has recently been established as uncomputable [Hrushovski et al. 2023]. Therefore, it is natural to consider transferring the uncomputability results of [Hrushovski et al. 2023] to Prob-SPInv by replacing nondeterministic choice with probabilistic choice. However, such a generalization of [Hrushovski et al. 2023] to the probabilistic setting poses considerable problems and ultimately fails to establish the potential uncomputability of Prob-SPInv, for the reasons discussed next.

The proof in [Hrushovski et al. 2023] reduces the Boundedness problem for Reset Vector Addition Systems with States (Reset VASS) to the problem of finding the strongest polynomial invariant for nondeterministic polynomial programs. A Reset VASS is a nondeterministic program where any transition may increment, decrement, or reset a vector of unbounded, non-negative variables. Importantly, a transition can _only be executed if no zero-valued variable is decremented_. The _Boundedness Problem for Reset VASS_ asks, given a Reset VASS and a specific program location, whether the set of reachable program configurations is finite. The Boundedness Problem for Reset VASS is undecidable [Dufourd et al. 1998] and therefore instrumental in the reduction of [Hrushovski et al. 2023]. Namely, in the reduction of [Hrushovski et al. 2023] proving uncomputability of the strongest polynomial invariant for nondeterministic polynomial programs, an arbitrary Reset VASS \(\mathcal{V}\) with \(n\) variables \(a_{1},\ldots,a_{n}\) is simulated by a nondeterministic polynomial program \(\mathcal{P}\) with \(n{+}1\) variables \(b_{0},\ldots,b_{n}\). Note that the programming model is purely nondeterministic, that is, without equality guards, since introducing guards would render the problem immediately undecidable [Müller-Olm and Seidl 2004b]. To avoid zero-testing the variables before executing a transition, the crucial point in the reduction of [Hrushovski et al. 2023] is to map invalid traces to the vector \(\vec{0}\) and faithfully simulate valid executions. By properties of the reduction, it holds that the configuration \((b_{0},\ldots,b_{n})\) is reachable in \(\mathcal{P}\) if and only if there exists a corresponding configuration \(\nicefrac{{1}}{{b_{0}}}\cdot(b_{1},\ldots,b_{n})\) in \(\mathcal{V}\).
Essential to the reduction of [Hrushovski et al. 2023] is that, even though there may be multiple configurations in \(\mathcal{P}\) for each configuration in \(\mathcal{V}\), all these configurations are only scaled by the factor \(b_{0}\) and hence collinear. By collinearity, the variety of the invariant ideal can be covered by a finite set of lines if and only if the set of reachable VASS configurations is finite. Testing this property is decidable, and hence finding the invariant ideal must be undecidable.

Transferring the reduction of [Hrushovski et al. 2023] to the probabilistic setting of Prob-SPInv by replacing nondeterministic choice with probabilistic choice poses the following problem: in the nondeterministic setting, any path is independent of all other paths. However, this does not hold in the probabilistic setting of Prob-SPInv. The expected value operator \(\mathbb{E}[x_{n}]\) aggregates all possible valuations of \(x\) in iteration \(n\) across all possible paths through the program. Specifically, the expected value is a linear combination of the possible configurations of \(\mathcal{V}\), which is not necessarily limited to a collection of lines but may span a higher-dimensional subspace. This is the step where a reduction similar to (Hrushovski et al., 2023) fails for the Prob-SPInv problem of probabilistic programs.

It is, however, worth noting how well-suited the Boundedness Problem for Reset VASS is for proving the undecidability of problems for unguarded programs. A Reset VASS is not powerful enough to determine if a variable is zero, yet the Boundedness Problem is still undecidable. The vast majority of other undecidable problems that may be used in a reduction are formulated in terms of counter machines, Turing machines, or other automata that rely on explicitly determining if a given variable is zero, hindering a straightforward simulation as unguarded programs. Therefore, we conjecture that any attempt towards proving (un)computability of Prob-SPInv would require a new methodology, unrelated to (Hrushovski et al., 2023). We leave this task as an open challenge for future work.

### Summary of Computability Results for Probabilistic Polynomial Loop Invariants

We finally conclude this section by summarizing our computability results on the strongest polynomial (moment) invariants of probabilistic loops. We overview our results in Table 2.

**Table 2.** Our computability results for strongest polynomial (moment) invariants of polynomial _probabilistic_ loops. The symbol '✓' denotes computable problems, '?' shows open problems, and '✗' marks uncomputable problems.

| Program model | Updates | Strongest affine invariant | Strongest polynomial invariant |
|---|---|---|---|
| Prob., unguarded & guarded (finite) | Affine | ✓ (Algorithm 1) | ✓ (Algorithm 1) |
| Prob., unguarded & guarded (finite) | Poly. | ? | Skolem-hard (Theorem 5.10) |
| Prob., guarded (=, <) | Affine, Poly. | ✗ (Theorem 5.8) | ✗ (Theorem 5.8) |

## 6. Related Work

We discuss our work in relation to the state-of-the-art in computing strongest (probabilistic) invariants and analyzing point-to-point reachability.

**Strongest Invariants.** Algebraic invariants were first considered for unguarded deterministic programs with affine updates (Karr, 1976). Here, a basis for both the ideal of affine invariants and for the ideal of polynomial invariants is computable (Karr, 1976; Kovács, 2008). For unguarded deterministic programs with polynomial updates, all invariants of _bounded degree_ are computable (Müller-Olm and Seidl, 2004), while the more general task of computing a basis for the ideal of _all_ polynomial invariants, that is, solving our SPInv problem, was stated as an open problem. In Section 4 we proved that SPInv is at least as hard as Skolem and P2P. Strengthening these results by proving computability for SPInv would result in major breakthroughs in number theory (Bilu et al., 2022; Lipton et al., 2022).
For guarded deterministic programs, the strongest affine invariant is uncomputable, even for programs with only affine updates. This is a direct consequence of the fact that this model is sufficient to encode Turing machines, and hence the Halting problem (Hopcroft and Ullman, 1969). Nevertheless, there exists a multitude of incomplete methods capable of extracting useful invariants even for non-linear programs, for example, based on abstract domains (Kincaid et al., 2018), over-approximation in combination with recurrences (Farzan and Kincaid, 2015; Kincaid et al., 2019), or using consequence finding in tractable logical theories of non-linear arithmetic (Kincaid et al., 2023). For nondeterministic programs with affine updates, a basis for the invariant ideal is computable (Karr, 1976). Furthermore, the set of invariants of bounded degree is computable for nondeterministic programs with polynomial updates, while bases for the ideal of all invariants are uncomputable (Hrushovski et al., 2023; Müller-Olm and Seidl, 2004a). Additionally, even a single transition guarded by an equality or inequality predicate renders the problem uncomputable, already for affine updates (Müller-Olm and Seidl, 2004b).

**Point-To-Point Reachability.** The Point-To-Point reachability problem formalized by our P2P problem appears in various areas dealing with discrete systems, such as dynamical systems, discrete mathematics, and program analysis. For linear dynamical systems, P2P is known as the _Orbit problem_ (Chonev et al., 2013), with a significant amount of work on analyzing and proving decidability of P2P for linear systems (Baier et al., 2021; Chonev et al., 2013, 2015; Kannan and Lipton, 1980). In contrast, for polynomial systems, the P2P problem remained open regarding decidability or computational lower bounds. Existing techniques in this respect resorted to approximate methods (Dang and Testylier, 2012; Dreossi et al., 2017). Contrary to these works, in Section 3 we rigorously proved that P2P for polynomial systems is at least as hard as the Skolem problem. The P2P problem is essentially undecidable already for affine systems that additionally include nondeterministic choice (Finkel et al., 2013; Ko et al., 2018).

**Probabilistic Invariants.** Invariants for probabilistic loops can be defined in various incomparable ways, depending on the context and use case. Dijkstra's weakest-precondition calculus for classical programs was generalized to the weakest-preexpectation (wp) calculus in the seminal works (Kozen, 1983, 1985; McIver and Morgan, 2005).
In the wp-calculus, the semantics of a loop can be described as the least fixed point of the _characteristic_ function of the loop in the lattice of so-called _expectations_ (Kaminski et al., 2019). Invariants are expectations that over- or under-approximate this fixed point and are called super- or sub-invariants, respectively. One line of research is to synthesize such invariants using templates and constraint-solving methods (Batz et al., 2023, 2021; Gretz et al., 2013). A calculus analogous to the wp-calculus has been introduced for expected runtime analysis (Kaminski et al., 2018) and amortized expected runtime analysis (Batz et al., 2023). The work of (Chatterjee et al., 2017) introduces the notion of _stochastic invariants_, that is, expressions that are violated with bounded probability. Other notions of probabilistic invariants involve martingale theory (Barthe et al., 2016) or utilize bounds on the expected value of program variable expressions (Chakarov and Sankaranarayanan, 2014). The techniques presented in (Bartocci et al., 2019; Moosbrugger et al., 2022) compute closed forms for moments of program variables parameterized by the loop counter. The different notions of probabilistic invariants, in general, do not form ideals or are relative to some other expression. Furthermore, the existing procedures to compute invariants are heuristics-driven and hence incomplete. Contrary to these, our _polynomial moment invariants_ presented in Section 5 form ideals and relate all variables. Moreover, our Algorithm 1 computes a basis for _all_ moment invariants and is complete for the class of moment-computable polynomial loops. Going beyond such loops, we showed that Prob-SPInv is Skolem-hard and/or uncomputable (Theorem 5.10 and Theorem 5.8).

## 7. Conclusion

We prove that computing the strongest polynomial invariant for single-path loops with polynomial assignments (SPInv) is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. As such, we provide the first non-trivial lower bound for computing the strongest polynomial invariant for deterministic polynomial loops, a quest introduced in (Müller-Olm and Seidl, 2004b). As an intermediate result, we show that point-to-point reachability in deterministic polynomial loops (P2P), or equivalently in discrete-time polynomial dynamical systems, is Skolem-hard. Further, we devise a reduction from P2P to SPInv. We generalize the notion of invariant ideals from classical programs to the probabilistic setting by introducing _moment invariant ideals_ and addressing the Prob-SPInv problem. We show that the strongest polynomial moment invariant, and hence Prob-SPInv, is (i) computable for the class of _moment-computable_ probabilistic loops, but becomes (ii) uncomputable for probabilistic loops with branching statements and (iii) Skolem-hard for polynomial probabilistic loops without branching statements. Going beyond the Skolem-hardness of Prob-SPInv and SPInv are open challenges we aim to further study.

###### Acknowledgements.

This research was supported by the Vienna Science and Technology Fund WWTF 10.47379/ICT19018 grant ProbInG, the European Research Council Consolidator Grant ARTIST 101002685, the Austrian Science Fund (FWF) project W1255-N23, and the SecInt Doctoral College funded by TU Wien. We thank Manuel Kauers for providing details on sequences and algebraic relations and Toghrul Karimov for inspiring us to consider the orbit problem.
We thank the McGill Bellairs Research Institute for hosting the Bellairs 2023 workshop, whose fruitful discussions influenced parts of this work.
2301.06236
Search for Extremely Metal-Poor Stars with GEMINI-N/GRACES I. Chemical-Abundance Analysis
We present stellar parameters and abundances of 13 elements for 18 very metal-poor (VMP; [Fe/H] $<$ -2.0) stars, selected as extremely metal-poor (EMP; [Fe/H] $<$ -3.0) candidates from the SDSS and LAMOST surveys. High-resolution spectroscopic observations were performed using GEMINI-N/GRACES. We find ten EMP stars among our candidates, and we newly identify three carbon-enhanced metal-poor (CEMP) stars with [Ba/Fe] $<$ 0. Although chemical abundances of our VMP/EMP stars generally follow the overall trend of other Galactic halo stars, there are a few exceptions. One Na-rich star ([Na/Fe] = +1.14) with low [Mg/Fe] suggests a possible chemical connection with second-generation stars in a globular cluster. The progenitor of an extremely Na-poor star ([Na/Fe] = -1.02) with an enhancement of K- and Ni-abundance ratios may have undergone a distinct nucleosynthesis episode, associated with core-collapse supernovae (CCSNe) having a high explosion energy. We have also found a Mg-rich star ([Mg/Fe] = +0.73) with slightly enhanced Na and extremely low [Ba/Fe], indicating that its origin is not associated with neutron-capture events. On the other hand, the origin of the lowest Mg abundance ([Mg/Fe] = -0.61) star could be explained by accretion from a dwarf galaxy, or formation in a gas cloud largely polluted by SNe Ia. We have also explored the progenitor masses of our EMP stars by comparing their chemical-abundance patterns with those predicted by Population III SNe models, and find a mass range of 10 - 26 $M_\odot$, suggesting that such stars were primarily responsible for the chemical enrichment of the early Milky Way.
Miji Jeong, Young Sun Lee, Timothy C. Beers, Vinicius M. Placco, Young Kwang Kim, Jae-Rim Koo, Ho-Gyu Lee, Soung-Chul Yang
2023-01-16T02:42:22Z
http://arxiv.org/abs/2301.06236v1
# Search for Extremely Metal-Poor Stars with GEMINI-N/GRACES I. Chemical-Abundance Analysis

###### Abstract

We present stellar parameters and abundances of 13 elements for 18 very metal-poor (VMP; [Fe/H] \(<\) -2.0) stars, selected as extremely metal-poor (EMP; [Fe/H] \(<\) -3.0) candidates from the SDSS and LAMOST surveys. High-resolution spectroscopic observations were performed using GEMINI-N/GRACES. We find ten EMP stars among our candidates, and we newly identify three carbon-enhanced metal-poor (CEMP) stars with [Ba/Fe] \(<\) 0. Although chemical abundances of our VMP/EMP stars generally follow the overall trend of other Galactic halo stars, there are a few exceptions. One Na-rich star ([Na/Fe] = +1.14) with low [Mg/Fe] suggests a possible chemical connection with second-generation stars in a globular cluster. The progenitor of an extremely Na-poor star ([Na/Fe] = -1.02) with an enhancement of K- and Ni-abundance ratios may have undergone a distinct nucleosynthesis episode, associated with core-collapse supernovae (CCSNe) having a high explosion energy. We have also found a Mg-rich star ([Mg/Fe] = +0.73) with slightly enhanced Na and extremely low [Ba/Fe], indicating that its origin is not associated with neutron-capture events. On the other hand, the origin of the lowest Mg abundance ([Mg/Fe] = -0.61) star could be explained by accretion from a dwarf galaxy, or formation in a gas cloud largely polluted by SNe Ia. We have also explored the progenitor masses of our EMP stars by comparing their chemical-abundance patterns with those predicted by Population III SNe models, and find a mass range of 10 - 26 \(M_{\odot}\), suggesting that such stars were primarily responsible for the chemical enrichment of the early Milky Way.

Keywords: Unified Astronomy Thesaurus concepts: Chemical abundances (224); Galaxy chemical evolution (580); Milky Way Galaxy (1054); Stellar abundances (1577); Stellar populations (1622)

## 1 Introduction

Population III (Pop III) stars are believed to be responsible for the chemical enrichment of the early Universe, influencing the formation of subsequent generations of stars (Bromm & Loeb, 2003). The characterization of their physical properties is indispensable to draw a complete picture of the origin of the chemical elements and the chemical-evolution history of the early Milky Way (MW). Cosmological simulations predict that the Pop III stars were predominantly very massive, with a characteristic mass of \(\sim\) 100 \(M_{\odot}\) (e.g., Bromm & Larson, 2004). More recent sophisticated high-resolution simulations equipped with detailed physical processes are able to produce lower-mass stars with a few tens of \(M_{\odot}\) (Stacy & Bromm, 2013; Hirano et al., 2014; Susa et al., 2014; Stacy et al., 2016). Given that such stars no longer exist, owing to their high masses, we are not able to directly observe and study them. At present, the most practical observational probe of the physical properties of the first-generation stars relies on detailed elemental abundances of old, very metal-poor (VMP; [Fe/H] \(<\) -2.0) stars.
Various chemical elements in the atmospheres of VMP stars enable investigation of the underlying physical processes originating in the different nucleosynthetic pathways that produced them (e.g., Beers & Christlieb, 2005; Norris et al., 2013; Frebel & Norris, 2015; Yoon et al., 2016), as well as the chemical yields from the supernovae (SNe) of their progenitors, including Pop III SNe (e.g., Heger & Woosley, 2010; Ishigaki et al., 2014; Nordlander et al., 2019). In turn, this allows us to provide important constraints not only on the nucleosynthesis of Pop III stars, by comparing with theoretical model predictions (Nomoto et al., 2013), but also on the chemical-evolution history of the MW, by examining the elemental-abundance trends (Frebel & Norris, 2015). Among the VMP stars, extremely metal-poor (EMP; [Fe/H] \(<-3.0\)) stars are the most suitable objects for the studies mentioned above. Previous investigations have revealed that, while EMP stars generally exhibit similar abundance trends for most elements at similar metallicities, they also show over- or under-abundances of C, N, O, the \(\alpha\)-elements, and light elements such as Na and Al (e.g., Caffau et al., 2013; Norris et al., 2013; Frebel & Norris, 2015; Bonifacio et al., 2018). The diversity of the chemical-abundance ratios among the EMP stars suggests that a range of core-collapse supernovae (CCSNe) with various progenitor masses and explosion energies have contributed to the stochastic chemical-enrichment history of the MW. One remarkable feature found from studies of low-metallicity stars is that the fraction of so-called carbon-enhanced metal-poor (CEMP; originally specified as metal-poor stars with [C/Fe] \(>+1.0\), more recently by [C/Fe] \(>+0.7\)) stars dramatically increases with decreasing metallicity (Rossi et al., 1999; Lucatello et al., 2006; Lee et al., 2013, 2014, 2017, 2019; Yong et al., 2013; Placco et al., 2014; Yoon et al., 2018; Arentsen et al., 2022). CEMP stars account for about 20% of all stars with [Fe/H] \(<-2.0\), over 30% at [Fe/H] \(<-3.0\), and approach 100% for [Fe/H] \(<-4.0\). The clear implication is that prodigious amounts of carbon were produced in the early history of the MW. CEMP stars can be divided into four major categories: CEMP-\(s\), CEMP-\(r\), CEMP-\(r/s\), and CEMP-no, according to the level of enhancement of their neutron-capture elements (Beers & Christlieb, 2005). CEMP-\(s\) stars exhibit over-abundances of \(s\)-process elements such as Ba. CEMP-\(r\) stars are strongly enhanced with \(r\)-process elements such as Eu, and CEMP-\(r/s\) objects are mildly enhanced with both \(r\)-process and \(s\)-process elements. CEMP-no stars lack enhancements of neutron-capture elements. Recent studies (e.g., Hampel et al., 2016; Cowan et al., 2021) have reported that the production of the CEMP-\(r/s\) stars is associated with an intermediate neutron-capture process (the "\(i\)-process"), first suggested by Cowan & Rose (1977). The diversity of the nature of CEMP stars implies that the formation of the various sub-classes is closely linked to different specific astrophysical sites. CEMP-no stars are the dominant fraction of stars with [Fe/H] \(<-3.0\) (e.g., Aoki et al., 2007; Yoon et al., 2016).
Because of their low-metallicity nature and low abundances of neutron-capture elements, they are regarded as the likely direct descendants of Pop III stars (Christlieb et al., 2002; Frebel et al., 2005; Caffau et al., 2011; Keller et al., 2014; Placco et al., 2016; Yoon et al., 2016; Hartwig et al., 2018; Starkenburg et al., 2018; Placco et al., 2021). The C and O enrichment of CEMP-no stars is expected to have a profound impact on the formation of low-mass stars, since these species play a major role as efficient gas coolants, so that low-mass stars can form even in extremely low-metallicity environments (Bromm & Loeb, 2003; Norris et al., 2013; Frebel & Norris, 2015). Upon recognizing the importance of the EMP stars, several large-scale surveys have been undertaken over the past few decades to discover such iron-poor objects. In the early stages of this effort, a large number of metal-poor stars were identified by objective-prism based searches such as the HK survey (Beers et al., 1985, 1992) and the Hamburg/ESO survey (HES; Wisotzki et al., 1996; Frebel et al., 2006; Christlieb et al., 2008). Later on, the effort was led by large spectroscopic surveys such as the legacy Sloan Digital Sky Survey (SDSS; York et al., 2000), the Sloan Extension for Galactic Understanding and Exploration (SEGUE; Yanny et al., 2009; Rockosi et al., 2022), and the Large sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST; Cui et al., 2012) survey, which are equipped with the capability for multi-object observation using hundreds to thousands of fibers in their focal planes. Recently, narrow-band photometric surveys such as SkyMapper (Keller et al., 2007), Pristine (Starkenburg et al., 2017), the Southern Photometric Local Universe Survey (S-PLUS; Mendes de Oliveira et al., 2019; Placco et al., 2022), and the Javalambre-Photometric Local Universe Survey (J-PLUS; Cenarro et al., 2019) are dramatically increasing the number of EMP candidates. Despite the extensive searches for EMP stars in the last few decades, thus far only several hundred such stars have been confirmed by high-resolution spectroscopic analysis from which their detailed elemental abundances have been derived (e.g., Ryan et al., 1996; Norris et al., 2001; Aoki et al., 2005, 2013; Cayrel et al., 2004; Yong et al., 2013; Matsuno et al., 2017; Aguado et al., 2019; Yong et al., 2021; Li et al., 2022). This is mainly because of their rarity and the difficulty of obtaining high-quality, high-resolution spectra. Due to the stochastic nature of the chemical enrichment of the early MW, and considering the importance of EMP stars to constrain the properties of the first-generation stars and early Galactic chemical evolution, the confirmation, analysis, and interpretation of the varied chemical-abundance patterns of more EMP stars are clearly required. In this study, we report on 18 newly identified VMP stars, of which 10 are EMP stars and 3 are CEMP stars. We derive chemical-abundance ratios for 13 elements in these objects, and discuss the overall abundance trends and possible origins of the chemically peculiar objects. In addition, we use stellar explosion models to predict the progenitor masses of our EMP stars in order to constrain the mass distribution of Pop III stars. This paper is organized as follows. The selection of the EMP candidates and their observation are covered in Section 2. The derivation of stellar parameters and elemental abundances is presented in Sections 3 and 4, respectively.
In Section 5, we discuss the derived abundance trends of our EMP candidates by comparing with other Galactic field stars in previous studies, and estimate the progenitor masses of our confirmed EMP stars. We close with a summary in Section 6.

## 2 Target Selection and High-Resolution Observations

### Target Selection

We selected EMP candidates from the low-resolution (\(R\sim\) 1800) spectroscopic surveys of SDSS and LAMOST for follow-up observations with high-resolution spectroscopy. Using an updated version of the SEGUE Stellar Parameter Pipeline (SSPP; Allende Prieto et al., 2008; Lee et al., 2008, 2011; Smolinski et al., 2011; Lee et al., 2013), which now has the capability of deriving [N/Fe] (Kim et al., 2022) and [Na/Fe] (Koo et al., 2022) (in addition to [C/Fe] and [Mg/Fe]), we analyzed the stellar spectra obtained by the legacy SDSS survey, SEGUE, the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al., 2013), and the extended Baryon Oscillation Spectroscopic Survey (eBOSS; Blanton et al., 2017), and determined stellar atmospheric parameters such as effective temperature (\(T_{\rm eff}\)), surface gravity (\(\log~{}g\)), and metallicity ([Fe/H]). Similarly, we utilized the SSPP to estimate the stellar parameters and abundances from the LAMOST stellar spectra, made feasible due to their similar spectral coverage (3700 Å - 9000 Å) and resolution (\(R\sim\) 1800) to those of the SDSS. We refer interested readers to Lee et al. (2015) for details of this application. After obtaining the stellar parameters from stars in both surveys, we selected as EMP candidates the objects meeting the following criteria: \(g<\) 16, [Fe/H] \(<\) -2.8, and 4000 K \(<T_{\rm eff}<\) 7000 K. The relaxed cut on metallicity was adopted because the Ca ii K line, which plays an important role in determining the metallicity for VMP stars, could be blended with interstellar calcium in low-resolution spectra, causing over-estimation of the metallicity. We eliminated stars that had already been observed with high-dispersion spectrographs, and carried out a visual inspection of the low-resolution spectra to make sure that their estimated metallicities did not arise from defects in the spectrum.

### High-resolution Follow-up Observations

We carried out high-resolution spectroscopic observations for twenty stars (18 EMP candidates and two reference stars), making use of the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES; Chené et al., 2014) on the 8 m Gemini-North telescope during the 2016A (GN-2016A-Q-17), 2018B (GN-2018B-Q-122), and 2019B (GN-2019B-Q-115, Q-219, and Q-310) semesters. We used the 2-fiber mode (sky + target) of the GRACES Echelle spectrograph, which yields a maximum resolving power of \(R\sim\) 45,000 in the spectral range of 4,000 Å - 10,000 Å. This mode provides for better handling of sky subtraction. One limitation of the GRACES approach is a significant reduction of blue photons, owing to the 270 m-long fiber cable, which guides light from the focal plane of the Gemini telescope to the ESPaDOnS spectrograph. This produces a low signal-to-noise ratio (S/N) for wavelengths shorter than 4700 Å, in which numerous metallic lines are present, precluding abundance measurements for many atomic species. The 2D cross-dispersed Echelle spectra were reduced with standard calibration images (bias, flat-field, and arc lamp), using the DRAGRACES pipeline (Chené et al., 2021), which is a reduction pipeline written in IDL to extract the wavelength-calibrated 1D spectrum for the science target and sky.
After subtracting the sky background, we co-added the spectra of each Echelle order, using a signal-weighted average over the multiple exposures to boost the S/N, and obtained one continuous spectrum for a given object by stitching together adjacent orders after normalizing each order. The overlapping wavelength regions of adjacent orders were averaged with signal weighting. We used this co-added spectrum for the abundance analysis.

Footnote 8: [http://drforum.gemini.edu/topic/graces-pipeline-dragraces/](http://drforum.gemini.edu/topic/graces-pipeline-dragraces/)

We measured the radial velocity of each star through cross-correlation of a synthetic template spectrum with the co-added observed spectrum in the region of 5160 Å - 5190 Å, where the Mg i \(b\) triplet lines are located. The synthetic spectrum was generated by considering the evolutionary stage of each target, assuming [Fe/H] = -3.0 for our EMP candidates. Heliocentric corrections were made with the astropy package. Table 1 lists observation details, heliocentric radial velocities, and the average S/N around 5500 Å of the co-added spectra for our program stars. The two reference stars (denoted as J0226 and J1522 in the second column) were observed to validate our approach to the abundance analysis (see Section 4 for additional details).

### Distance and Photometry of Our Targets

To provide better constraints on the determination of \(T_{\rm eff}\) and \(\log~{}g\), we made use of the distance and photometric information for our EMP candidates. We primarily adopted parallaxes from the Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al., 2016, 2021) for the distance estimates, applying the systematic offset of -0.017 mas reported by Lindegren et al. (2021). In cases where the parallax of a star is not available in Gaia EDR3, or its parallax uncertainty is larger than 25%, we adopted its photometric distance derived with the methodology of Beers et al. (2000, 2012).

The \(V\) and \(K_{\rm s}\) magnitudes of our program stars used to derive photometry-based \(T_{\rm eff}\) estimates were obtained from the AAVSO Photometric All-Sky Survey (APASS; Henden et al., 2016) and the Two Micron All Sky Survey (2MASS; Skrutskie et al., 2006), respectively. The extinction value \(E(B-V)\) was estimated from the dust map provided by Schlafly & Finkbeiner (2011), applying the relations reported by Schlegel et al. (1998) to correct the interstellar extinction in each bandpass. The reddening obtained from the dust map is an upper limit along the full line-of-sight to a star, so the appropriate \(E(B-V)\) must be computed taking into account the star's distance; a difference of 0.01 mag in \(E(B-V)\) leads to a 50 K shift in the temperature estimate (Casagrande et al., 2010). We therefore computed the \(E(B-V)\) value using the reddening-fraction equation proposed by Anthony-Twarog & Twarog (1994), in the same way as Ito et al. (2013), and corrected the reddening for each target. Table 2 lists the Gaia EDR3 ID, distance, \(V\), \(K_{\rm s}\), computed reddening, and absolute \(V\) magnitude of the observed EMP candidates.
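Both of these bookkeeping steps are straightforward to script. The sketch below is illustrative only: the astropy call shows one common way to obtain a heliocentric correction (the `gemini_north` site identifier and the additive approximation are our assumptions), and `rescale_reddening` encodes one standard exponential dust-layer form of the Anthony-Twarog & Twarog (1994) fraction, with an assumed scale height of 125 pc rather than a value quoted in this paper.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

def heliocentric_velocity(v_measured_kms, ra, dec, obstime_utc):
    """Apply a heliocentric correction with astropy (simple additive form)."""
    coord = SkyCoord(ra=ra, dec=dec, unit=(u.hourangle, u.deg))
    site = EarthLocation.of_site('gemini_north')  # assumes this site ID is registered
    vcorr = coord.radial_velocity_correction(kind='heliocentric',
                                             obstime=Time(obstime_utc),
                                             location=site)
    return v_measured_kms + vcorr.to(u.km / u.s).value

def rescale_reddening(ebv_los, d_kpc, b_deg, h_pc=125.0):
    """Scale the full line-of-sight E(B-V) to a star's distance.

    Assumes an exponential dust layer: E(B-V)_d = E(B-V)_inf *
    (1 - exp(-|d sin b| / h)); the scale height h is an assumed value.
    """
    z_pc = abs(d_kpc * 1000.0 * np.sin(np.radians(b_deg)))
    return ebv_los * (1.0 - np.exp(-z_pc / h_pc))
```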
**Table 1.** Observation Details of Our Program Stars

| Object ID | Short ID | Date (UT) | RA | Dec | \(g\) (mag) | Exposure (sec) | S/N (pixel\(^{-1}\)) | \(V_{\rm H}\) (km s\(^{-1}\)) |
|---|---|---|---|---|---|---|---|---|
| **2016A (GN-2016A-Q-17)** | | | | | | | | |
| SDSS J075824.42+433643.4 | J0758 | 2016-04-07 | 07 58 24.42 | +43 36 43.4 | 16.3 | 1500×3 | 38 | +67.2 |
| SDSS J092503.50+434718.4 | J0925 | 2016-04-05 | 09 25 03.50 | +43 47 18.4 | 16.5 | 1150×3 | 31 | +28.5 |
| SDSS J131116.58+001237.7 | J1311 | 2016-04-06 | 13 11 16.58 | +00 12 37.7 | 16.3 | 1150×3 | 34 | –8.3 |
| SDSS J131708.66+664356.8 | J1317 | 2016-04-08 | 13 17 08.66 | +66 43 56.8 | 15.9 | 1500×3 | 44 | –185.8 |
| SDSS J152202.10+305526.3 | J1522 | 2016-04-05 | 15 22 02.10 | +30 55 26.3 | 16.5 | 1400×3 | 38 | –353.9 |
| **2018B (GN-2018B-Q-122)** | | | | | | | | |
| LAMOST J001032.66+055759.1 | J0010 | 2018-09-03 | 00 10 32.66 | +05 57 59.1 | 14.6 | 900×2 | 77 | –151.3 |
| LAMOST J010235.03+105245.5 | J0102 | 2018-09-06 | 01 02 35.03 | +10 52 45.5 | 15.5 | 1400×3 | 47 | –130.9 |
| LAMOST J015857.38+382834.7 | J0158 | 2018-09-05 | 01 58 57.38 | +38 28 34.7 | 15.2 | 1200×3 | 68 | –44.5 |
| BD+44 493 | J0226 | 2018-09-05 | 02 26 49.73 | +44 57 46.5 | 9.12 | 20×12 | 75 | –149.3 |
| LAMOST J035724.49+324304.3 | J0357 | 2018-09-06 | 03 57 24.49 | +32 43 04.3 | 13.8 | 600×1 | 62 | +114.5 |
| LAMOST J042245.27+180824.3 | J0422 | 2018-09-05 | 04 22 45.27 | +18 08 24.3 | 15.8 | 1750×3 | 76 | +76.7 |
| LAMOST J165056.88+480240.6 | J1650 | 2018-09-03 | 16 50 56.88 | +48 02 40.6 | 16.3 | 600×1 | 43 | –26.1 |
| LAMOST J170529.80+085559.2 | J1705 | 2018-09-03 | 17 05 29.80 | +08 55 59.2 | 14.3 | 1200×2 | 76 | +43.5 |
| SDSS J224145.05+292426.1 | J2241 | 2018-09-03 | 22 41 45.05 | +29 24 26.1 | 14.9 | 1200×2 | 49 | –218.1 |
| LAMOST J224245.51+272024.5 | J2242 | 2018-09-03 | 22 42 45.51 | +27 20 24.5 | 13.5 | 600×1 | 60 | –378.8 |
| LAMOST J234117.38+273557.7 | J2341 | 2018-09-03 | 23 41 17.38 | +27 35 57.7 | 15.1 | 1200×3 | 77 | –182.3 |
| **2019B (GN-2019B-Q-115, Q-219, Q-310)** | | | | | | | | |
| LAMOST J071349.17+550029.6 | J0713 | 2020-01-20 | 07 13 49.17 | +55 00 29.6 | 14.1 | 900×3 | 93 | –56.8 |
| LAMOST J081413.16+330557.5 | J0814 | 2020-01-21 | 08 14 13.16 | +33 05 57.5 | 15.3 | 1800×3 | 57 | +67.2 |
| LAMOST J090852.87+311941.2 | J0908 | 2020-01-19 | 09 08 52.87 | +31 19 41.2 | 15.4 | 1600×3 | 58 | +144.8 |
| LAMOST J103745.92+253134.2 | J1037 | 2020-01-20 | 10 37 45.92 | +25 31 34.2 | 15.3 | 1500×3 | 46 | +28.7 |

Note. – The S/N per pixel is the average value around 5500 Å. \(V_{\rm H}\) is the heliocentric radial velocity. The short-named objects J0226 and J1522 are the reference stars studied by Ito et al. (2013) and Matsuno et al. (2017), respectively.

## 3 Stellar Atmospheric Parameters

The determination of stellar parameters is of central importance for deriving abundance estimates of chemical elements.
In this section, we describe how we derived \(T_{\rm eff}\), \(\log~{}g\), [Fe/H], and the microturbulence velocity \(\xi_{\rm t}\).

### Initial Guess of Stellar Parameters

Any high-resolution spectroscopic analysis to determine the stellar parameters requires a model atmosphere as a starting point. We obtained initial stellar-atmospheric parameters, which are needed to generate a model atmosphere, by fitting an observed spectrum of each EMP target to a synthetic template. We refer the interested reader to Kim et al. (2016) for a more detailed description of this approach. In this procedure, we used the spectral range of 4800 Å - 5500 Å, in which several metallic lines and the temperature-sensitive H\(\beta\) line are present. In addition, we degraded the original spectrum to \(R\) = 10,000 for fast and efficient spectral-template matching; the smoothing also has the benefit of increasing the spectral S/N.

Figure 1 shows an example of our spectral-matching technique for one of our targets (J0226). In this figure, the black and red lines represent the observed spectrum and the best-fit synthetic spectrum, respectively. The strongest feature is the H\(\beta\) line. The bottom panel of the figure is a close-up view of the Mg i \(b\) triplet lines and a few iron lines. We performed this spectral fitting on our entire sample of stars, and obtained initial estimates of the stellar parameters, which were used as starting points in the process of determining more accurate estimates, as described in Section 3.3.

Figure 1: An example (J0226, one of the reference stars) of our spectral-matching technique. The black line is the observed spectrum, whereas the red line is the best-matching synthetic spectrum. The bottom panel is a close-up view of the Mg i \(b\) triplet region and a few iron lines.

**Table 2.** Distance and Photometric Information

| Short ID | Gaia EDR3 ID | Distance (kpc) | \(V\) | Error | \(K_{\rm s}\) | Error | \(V-K_{\rm s}\) | \(E(B-V)\) | \(M_{\rm V}\) |
|---|---|---|---|---|---|---|---|---|---|
| J0010 | 2742456847516899328 | 16.94\(^a\) | 13.93 | 0.04 | 10.82 | 0.02 | 3.11 | 0.03 | –2.33 |
| J0102 | 2583190938965264256 | 0.93\(^b\) | 15.31 | 0.11 | 13.75 | 0.05 | 1.56 | 0.04 | 5.24 |
| J0158 | 34035198441295616 | 0.71\(^a\) | 14.95 | 0.02 | 12.78 | 0.02 | 2.17 | 0.05 | 0.53 |
| J0226 | 3415101064663637376 | 0.20\(^b\) | 9.11 | 0.02 | 7.20 | 0.01 | 1.91 | 0.09 | 2.45 |
| J0357 | 170370808491343360 | 0.72\(^b\) | 13.37 | 0.10 | 11.09 | 0.02 | 2.28 | 0.26 | 3.39 |
| J0422 | 477413114473216 | 0.52\(^a\) | 15.08 | 0.03 | 11.79 | 0.02 | 3.29 | 0.42 | 0.11 |
| J0713 | 98803125048312792 | 10.6\(^b\) | 13.40 | 0.09 | 10.42 | 0.02 | 2.98 | 0.08 | –2.00 |
| J0758 | 98803260853360267136 | 1.60\(^b\) | 16.20 | 0.25 | 14.88 | 0.10 | 1.32 | 0.04 | 5.06 |
| J0814 | 902428295962842496 | 3.18\(^b\) | 15.04 | 0.03 | 13.38 | 0.03 | 1.67 | 0.05 | 2.65 |
| J0908 | 700085583420082176 | 18.18\(^a\) | 14.92 | 0.05 | 12.42 | 0.03 | 2.50 | 0.02 | –1.46 |
| J0925 | 81767220080135152 | 0.64\(^a\) | 16.12 | 0.09 | 14.43 | 0.08 | 1.70 | 0.02 | 2.02 |
| J1037 | 72468165485454368 | 1.19\(^b\) | 15.08 | 0.05 | 13.85 | 0.04 | 1.23 | 0.02 | 4.63 |
| J1311 | 86771871807620949520 | 2.73\(^b\) | 16.28 | 0.07 | 14.99 | 0.14 | 1.29 | 0.03 | 3.97 |
| J1317 | 1678584136808089344 | 2.1\(^b\) | 15.85 | 0.06 | 14.74 | 0.11 | 1.11 | 0.01 | 4.15 |
| J1522 | 127688247044162688 | 4.97\(^a\) | 16.46 | 0.15 | 14.80 | ⋯ | 1.66 | 0.02 | 2.91 |
| J1650 | 1408719281332527616 | 0.84\(^b\) | 13.14 | 0.01 | 12.01 | 0.02 | 1.13 | 0.02 | 3.44 |
| J1705 | 44432716963951021028 | 0.75\(^b\) | 14.12 | 0.03 | 12.66 | 0.03 | 1.44 | 0.11 | 4.39 |
| J2241 | 1887491436808430808 | 1.2\(^b\) | 15.06 | 0.09 | 13.73 | 0.05 | 1.32 | 0.06 | 3.55 |
| J2242 | 1880093992004192 | 6.42\(^b\) | 13.12 | 0.06 | 10.68 | 0.02 | 2.44 | 0.06 | –1.10 |
| J2341 | 2865251577418971392 | 15.59\(^a\) | 14.49 | 0.09 | 11.79 | 0.02 | 2.70 | 0.08 | –1.75 |

Note. – The \(V\) and \(K_{\rm s}\) magnitudes come from APASS and 2MASS, respectively. The reddening value was rescaled according to the distance of the star (see text for details). \(M_{\rm V}\) is the absolute magnitude in the \(V\) band. \(^a\) Based on spectroscopic parallax. \(^b\) Based on Gaia EDR3 parallax.

### Measurement of Equivalent Widths

To determine accurate stellar parameters and abundances of various chemical elements, we need to measure the equivalent widths (EWs) of Fe lines, as well as of the other metallic lines that are detectable in a spectrum. For this, we collected the information for various atomic lines from several literature sources (Aoki et al., 2013, 2018; Spite et al., 2018; Placco et al., 2020; Rasmussen et al., 2020). The EWs were then measured by fitting a Gaussian profile, using the Image Reduction and Analysis Facility (IRAF; Tody, 1986, 1993) task splot. We measured EWs only for well-isolated lines, not for blended lines. The line information for Li, C, and Ba was produced with the linemake code (Placco et al., 2021), and their abundances were determined through spectral synthesis rather than the EW analysis. Table 6 in the Appendix lists the line information used for the abundance analysis.

Footnote 9: [https://github.com/vmplaceo/linemake](https://github.com/vmplaceo/linemake)

### An Iterative Procedure for Determining Stellar Parameters

Because our targets are low-metallicity stars with weak metallic absorption lines, and the GRACES spectra have relatively low S/N in the wavelength region shorter than 4700 Å, which includes many iron lines that play a crucial role in constraining the stellar parameters, we did not follow the traditional ionization-equilibrium technique to derive \(T_{\rm eff}\), \(\log~{}g\), and \(\xi_{\rm t}\). Instead, we devised an iterative procedure to determine the stellar parameters, as illustrated in Figure 2. It begins with the information on \(V\), \(K_{\rm s}\), \(M_{\rm V}\), and the initial stellar parameters obtained by the spectral-matching procedure described above. Note that for brevity we express the effective temperature as T, surface gravity as G, metallicity as M, and microturbulence velocity (\(\xi_{\rm t}\)) as V\({}_{\rm t}\). A detailed description of the procedure is as follows:

**Step a:** Preparation of required information. We gather information on \(V\), \(K_{\rm s}\), \(M_{\rm V}\), G\({}_{\rm init}\), and M\({}_{\rm init}\) for each star, where \(M_{\rm V}\) is the absolute \(V\) magnitude, and G\({}_{\rm init}\) and M\({}_{\rm init}\) are the gravity and the metallicity, respectively, estimated from the spectral matching. The adopted \(V\), \(K_{\rm s}\), and \(M_{\rm V}\) for our program stars are listed in Table 2.

**Step b:** Estimation of effective temperature.
We input \(V\), \(K_{\rm s}\), G\({}_{\rm i}\) (= G\({}_{\rm init}\)), and M\({}_{\rm i}\) (= M\({}_{\rm init}\)), and estimate three \(T_{\rm eff}\) values for each star by following the procedure described in Section 3.3.1. The gravity G\({}_{\rm i}\) is used to separate giants from dwarfs. We obtain a bi-weight average \(T_{\rm eff}\) (T\({}_{\rm o}\)) from the three estimates.

**Step c:** Estimation of surface gravity. Following the prescription in Section 3.3.2, we estimate \(\log~{}g\) (G\({}_{\rm o}\)), based on isochrone fitting. We assume a stellar age of 13 Gyr and [\(\alpha\)/Fe] = +0.3, appropriate for our EMP candidates. In this step, the inputs are T\({}_{\rm o}\), M\({}_{\rm i}\), and \(M_{\rm V}\).

**Step d:** Estimation of metallicity and microturbulence velocity. While fixing T\({}_{\rm o}\) and G\({}_{\rm o}\) determined in **Step b** and **Step c**, we estimate M\({}_{\rm o}\) and V\({}_{\rm t,o}\) by the prescription described in Section 3.3.3.

**Step e:** Convergence check. We check whether M\({}_{\rm i}\) and M\({}_{\rm o}\) agree with each other within \(\pm\)0.02 dex. If the convergence criterion is satisfied, the routine proceeds to **Step f**; if not, it goes back to **Step b**, with G\({}_{\rm o}\) and M\({}_{\rm o}\) used as G\({}_{\rm i}\) and M\({}_{\rm i}\), repeating until it converges to the tolerance level of 0.02 dex in the metallicity difference between input and output.

**Step f:** Determination of adopted stellar parameters. If the convergence criterion is met in **Step e**, the derived T\({}_{\rm o}\), G\({}_{\rm o}\), M\({}_{\rm o}\), and V\({}_{\rm t,o}\) are taken as the adopted stellar parameters.

Figure 2: Flow chart of our iterative procedure to determine \(T_{\rm eff}\), \(\log~{}g\), [Fe/H], and \(\xi_{\rm t}\). For convenience, the effective temperature, surface gravity, metallicity, and microturbulence velocity are expressed as T, G, M, and V\({}_{\rm t}\), respectively. \(M_{\rm V}\) is the absolute \(V\) magnitude.

#### 3.3.1 Effective Temperature

We employed three different methods to derive accurate and precise effective temperature estimates. The underlying principle of these methods is the InfraRed Flux Method (IRFM), which here rests on extinction-corrected \(V-K_{\rm s}\) color-\(T_{\rm eff}\) relations. We adopted the color-temperature relations provided by Alonso et al. (1999), Gonzalez Hernandez & Bonifacio (2009), and Casagrande et al. (2010). These studies reported different relations over a range of surface gravities, and following their prescriptions, we made use of Equations 8 and 9 in Table 2 of Alonso et al. (1999) for giants, Equation 10 of Gonzalez Hernandez & Bonifacio (2009) for giants and dwarfs, and Equation 3 of Casagrande et al. (2010) for subgiants and dwarfs. All of these equations have some dependency on the metallicity, with valid ranges of [Fe/H] and \(V-K_{\rm s}\).

We require \(V-K_{\rm s}\), \(\log~{}g\), and [Fe/H] information to work with the above equations. The criteria for subdividing the luminosity class of a star are \(\log~{}g\leq\) 3.0 for a giant, 3.0 \(<\log~{}g\leq\) 3.75 for a subgiant, and \(\log~{}g>\) 3.75 for a dwarf. As Equations 8 and 9 of Alonso et al. (1999) cover the same color range for a giant, we derived the temperature from both equations and took an average. If the \(V-K_{\rm s}\) color is outside the range specified in a color-temperature relation, the effective temperature was determined using the color range closest to the observed one.
Through this process, we obtained two estimates of \(T_{\rm eff}\) for each object. In addition, we introduced the method of Frebel et al. (2013) to correct the spectroscopically determined \(T_{\rm eff}\) from the spectral fitting; they reported a procedure for removing the systematic offset between photometric and spectroscopic \(T_{\rm eff}\) scales. This provides a third \(T_{\rm eff}\) estimate. We then calculated a bi-weight \(T_{\rm eff}\) estimate from the three determinations for each star, and followed the iterative process laid out in Figure 2 until convergence occurred, yielding the final adopted \(T_{\rm eff}\). Note that for the first trial of the \(T_{\rm eff}\) estimate from the color-temperature relations, we adopted the \(\log~{}g\) and [Fe/H] derived by the spectral-fitting method. We used the final adopted \(T_{\rm eff}\) for the abundance analysis. The standard error of the three temperatures is reported as the uncertainty in \(T_{\rm eff}\) for each star.

#### 3.3.2 Surface Gravity

Iron abundances estimated from Fe i lines are known to suffer from large non-local thermodynamic equilibrium (NLTE) effects (e.g., Lind et al., 2012; Amarsi et al., 2016), and the number of available Fe ii lines is very limited in the GRACES spectra of our EMP candidates. This makes it difficult to determine the surface gravity by the traditional approach of ionization equilibrium (forcing a balance between the abundances estimated from neutral and singly ionized lines). Instead, we determined the surface gravity by fitting isochrones. For this technique, we first generated one hundred isochrones with metallicities resampled from a normal distribution centered on the estimated [Fe/H], with a conservative error of 0.2 dex. In this process, we employed the Yonsei-Yale (\(Y^{2}\); Kim et al., 2002; Demarque et al., 2004) isochrones, and assumed a stellar age of 13 Gyr and [\(\alpha\)/Fe] = +0.3. Since the minimum value of [Fe/H] in the \(Y^{2}\) grid is -3.75, any resampled metallicity below this limit was set to [Fe/H] = -3.75. In the \(M_{\rm V}\)-\(T_{\rm eff}\) plane of the generated isochrones, we searched for the gravity that most closely reproduced the observables (\(M_{\rm V}\) and \(T_{\rm eff}\)) of our target, and took the median of the one hundred gravity estimates. The errors were taken as the 34th percentiles on either side of the median of the \(\log~{}g\) distribution. When the metallicity tolerance is satisfied, as illustrated in Figure 2, the final estimate of \(\log~{}g\) is adopted.
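Schematically, this Monte Carlo isochrone fit reduces to the sketch below. It assumes a hypothetical `isochrone_points` helper standing in for the \(Y^{2}\) grid interpolation, and the normalizations used to compare \(T_{\rm eff}\) and \(M_{\rm V}\) are our illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_logg(teff_obs, mv_obs, feh, isochrone_points, feh_err=0.2, n=100):
    """Median log g with 16th/84th-percentile errors from isochrone resampling.

    isochrone_points(feh) is a hypothetical helper returning (Teff, M_V, logg)
    arrays along a 13 Gyr, [alpha/Fe] = +0.3 isochrone at that metallicity.
    """
    draws = []
    for feh_i in rng.normal(feh, feh_err, n):
        teff, mv, logg = isochrone_points(max(feh_i, -3.75))  # grid floor
        # nearest isochrone point to the observables in the M_V-Teff plane;
        # the 100 K / 0.1 mag scalings are illustrative normalizations only
        d2 = ((teff - teff_obs) / 100.0) ** 2 + ((mv - mv_obs) / 0.1) ** 2
        draws.append(logg[np.argmin(d2)])
    lo, med, hi = np.percentile(draws, [16, 50, 84])
    return med, med - lo, hi - med
```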
**Table 3.** Determined Atmospheric Parameters

| Short ID | \(T_{\rm eff}\) (K) | \(\log~{}g\) | [Fe/H] | \(\xi_{\rm t}\) (km s\(^{-1}\)) |
|---|---|---|---|---|
| J0010 | 4309±81 | 0.34\(^{+0.23}_{-0.26}\) | –2.48±0.11 | 2.3 |
| J0102 | 5974±35 | 4.55\(^{+0.02}_{-0.02}\) | –3.09±0.06 | 0.7 |
| J0158 | 5165±44 | 2.48\(^{+0.15}_{-0.24}\) | –3.04±0.05 | 1.7 |
| J0226 | 5461±98 | 3.00\(^{+0.23}_{-0.23}\) | –3.78±0.05 | 1.9 |
| J0357 | 5631±27 | 3.49\(^{+0.00}_{-0.04}\) | –2.75±0.05 | 1.2 |
| J0422 | 5027±48 | 2.07\(^{+0.17}_{-0.15}\) | –3.33±0.06 | 2.3 |
| J0713 | 4380±91 | 0.22\(^{+0.10}_{-0.10}\) | –3.15±0.08 | 2.3 |
| J0758 | 6453±254 | 4.33\(^{+0.10}_{-0.20}\) | –2.96±0.13 | 1.0 |
| J0814 | 5853±237 | 3.52\(^{+0.16}_{-0.22}\) | –3.39±0.05 | 1.2 |
| J0908 | 4713±145 | 1.21\(^{+0.40}_{-0.32}\) | –3.67±0.06 | 2.3 |
| J0925 | 5631±111 | 3.39\(^{+0.00}_{-0.14}\) | –3.53±0.06 | 1.8 |
| J1037 | 6569±423 | 4.37\(^{+0.18}_{-0.28}\) | –2.50±0.05 | 1.3 |
| J1311 | 6510±237 | 4.23\(^{+0.19}_{-0.30}\) | –2.74±0.04 | 1.0 |
| J1317 | 6810±336 | 4.25\(^{+0.11}_{-0.17}\) | –2.37±0.05 | 1.3 |
| J1522 | 5698±27 | 3.43\(^{+0.01}_{-0.04}\) | –3.70±0.06 | 1.5 |
| J1650 | 6800±335 | 3.95\(^{+0.12}_{-0.05}\) | –2.17±0.05 | 1.3 |
| J1705 | 6580±328 | 4.29\(^{+0.16}_{-0.16}\) | –2.64±0.05 | 1.3 |
| J2241 | 6608±152 | 3.94\(^{+0.05}_{-0.02}\) | –2.71±0.13 | 0.7 |
| J2242 | 4857±144 | 1.46\(^{+0.52}_{-0.39}\) | –3.40±0.08 | 2.3 |
| J2341 | 4628±46 | 0.96\(^{+0.15}_{-0.15}\) | –3.17±0.07 | 2.3 |

Note. – The objects named J0226 and J1522 are the reference stars studied by Ito et al. (2013) and Matsuno et al. (2017), respectively.

#### 3.3.3 Metallicity and Microturbulence Velocity

The metallicity was determined by minimizing the slope of log(EW) of the Fe i lines as a function of excitation potential. In this process, a total of 129 Fe i lines were used, and a starting model atmosphere was generated with the \(T_{\rm eff}\), \(\log~{}g\), and [Fe/H] estimated in **Step b**, **Step c**, and the spectral fitting, respectively. The \(\xi_{\rm t}\) value was assumed according to the evolutionary stage (\(\xi_{\rm t}\) = 0.5 km s\(^{-1}\) for dwarfs, 1.0 km s\(^{-1}\) for turnoff stars, and 2.0 km s\(^{-1}\) for giants). We adopted Kurucz model atmospheres (Castelli & Kurucz, 2003), using newly calculated opacity distribution functions with updated opacities and abundances (Castelli & Kurucz, 2004). The desired model atmosphere was created by interpolating the existing grid at the given stellar parameters, using the ATLAS9 code (Castelli et al., 1997).

Footnote 10: [https://www.user.oats.inaf.it/castelli/grids.html](https://www.user.oats.inaf.it/castelli/grids.html)

Note that, because Fe ii lines are less subject to NLTE effects (e.g., Lind et al., 2012; Amarsi et al., 2016), it would be preferable to use their abundances. However, because detectable Fe ii lines are very limited for our EMP targets, we used the abundance derived from the Fe i lines. The uncertainty of [Fe/H] is given by the standard error of the mean of the Fe i abundances.
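The slope-flattening condition used here, and the analogous condition for \(\xi_{\rm t}\) described next, can be sketched as follows. This is a schematic only: `slope_for_param` is a hypothetical stand-in for a model-atmosphere/MOOG call that recomputes the line diagnostics at each trial parameter value.

```python
import numpy as np

def fit_slope(x, y):
    """Least-squares slope of y against x (e.g., log EW vs. excitation potential)."""
    return np.polyfit(x, y, 1)[0]

def flatten(param_grid, slope_for_param):
    """Return the parameter value whose diagnostic slope is closest to zero.

    slope_for_param(p) must recompute the Fe i line diagnostics at trial
    parameter p (hypothetically via a MOOG driver) and return their slope.
    """
    slopes = np.array([slope_for_param(p) for p in param_grid])
    return param_grid[np.argmin(np.abs(slopes))]

# [Fe/H]: flatten the slope vs. excitation potential over a metallicity grid.
# xi_t:   flatten the slope vs. reduced EW, log(EW/lambda), over a velocity grid.
```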
The microturbulence velocity (\(\xi_{\rm t}\)) of a star was estimated by seeking a flat slope of log(EW) for the Fe i lines as a function of the reduced equivalent width, log(EW/\(\lambda\)). We used the line-analysis program MOOG (Sneden, 1973; Sobeck et al., 2011). As shown in Figure 2, if the metallicity M\({}_{\rm o}\) agrees with M\({}_{\rm i}\) within \(\pm\)0.02 dex, then T\({}_{\rm o}\), G\({}_{\rm o}\), M\({}_{\rm o}\), and V\({}_{\rm t,o}\) are adopted as the final stellar parameters. When the absolute difference between M\({}_{\rm i}\) and M\({}_{\rm o}\) is larger than 0.02 dex, M\({}_{\rm i}\) is replaced by M\({}_{\rm o}\), and the routine is run again until the metallicity tolerance is satisfied. Note that we do not update the \(\xi_{\rm t}\) value when repeating the routine.

Table 3 lists the stellar parameters derived by the iterative procedure. Excluding the two objects J0226 and J1522, which were already analyzed by Ito et al. (2013) and Matsuno et al. (2017), respectively, we have confirmed 8 VMP and 10 EMP stars. We observed these two objects as benchmark stars, and re-analyzed them to validate our approach to determining the stellar parameters and chemical abundances. In Section 4.4, we present a detailed comparison between our estimates and those of the previous studies.

Figure 3 shows the position of our program stars in the \(\log~{}g\)-\(T_{\rm eff}\) plane, together with isochrones of age 13 Gyr and [\(\alpha\)/Fe] = +0.3, for [Fe/H] = -2.5, -3.0, and -3.5, represented by blue-dotted, orange-dashed, and green-solid lines, respectively. The turnoff stars are indicated by squares, the giants by circles, and the dwarfs and subgiants by triangles. The two benchmark stars are represented by diamonds (black for J0226 and red for J1522). It is clear that our program stars populate a wide range of luminosity classes and spectral types.

## 4 Elemental Abundances

To derive the abundances of individual elements, we carried out a one-dimensional (1D), local thermodynamic equilibrium (LTE) abundance analysis and spectral synthesis using MOOG. We adopted the solar atmospheric abundances of Asplund et al. (2009) to determine the abundance ratios relative to the Sun. Whenever difficulties in measuring the EWs of a line arose, we also performed spectral synthesis. The atomic line data were compiled from several literature sources (Aoki et al., 2013, 2018; Spite et al., 2018; Placco et al., 2020; Rasmussen et al., 2020); the line information is listed in Table 6 in the Appendix. In this section, we describe how we derived the chemical abundances.

Figure 3: Adopted surface gravity, as a function of effective temperature, for our program stars. They are grouped into three different symbols according to \(T_{\rm eff}\). The squares represent turnoff stars, while the circles represent cool giants. The dwarfs and subgiants are denoted by triangles. The two comparison stars are shown as diamonds (black for J0226 and red for J1522). The three isochrones shown, with age 13 Gyr and [\(\alpha\)/Fe] = +0.3, represent [Fe/H] = -2.5 (blue dotted), [Fe/H] = -3.0 (orange dashed), and [Fe/H] = -3.5 (green solid).

### Chemical Abundances by Equivalent Width Analysis

We derived the abundances of the odd-\(Z\) elements (Na, K, and Sc), the \(\alpha\)-elements (Mg and Ca), and the iron-peak elements (Cr, Mn, Ni, and Zn) by measuring the EWs of their neutral or singly ionized lines.
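For orientation, a Gaussian-profile EW measurement of the kind performed with the IRAF splot task amounts to fitting a depth, center, and width to a continuum-normalized line and integrating the profile analytically. The sketch below is illustrative and is not the IRAF implementation; the initial-guess values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, center, sigma):
    """Continuum-normalized absorption line modeled as 1 minus a Gaussian."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, center_guess, width_guess=0.1):
    """EW in the units of wl (e.g., Angstroms) from a Gaussian fit."""
    p0 = [0.3, center_guess, width_guess]  # arbitrary starting values
    (depth, _, sigma), _ = curve_fit(gaussian_line, wl, flux, p0=p0)
    return depth * abs(sigma) * np.sqrt(2.0 * np.pi)  # analytic Gaussian area
```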
The Na abundance was determined using the Na i doublet located at 5889 Å and 5895 Å, which are the only available sodium lines in low-metallicity stars. Only one line, at 7699 Å, was used to measure the K abundance, while five Sc ii lines were used to measure [Sc/Fe]. We used four Mg i and 12 Ca i lines to derive the Mg and Ca abundance ratios, respectively. Concerning the abundances of the iron-peak elements, we utilized 10 Cr i, 3 Mn i, 15 Ni i, and 2 Zn i lines to determine [Cr/Fe], [Mn/Fe], [Ni/Fe], and [Zn/Fe], respectively. Depending on the effective temperature, metallicity, and S/N of a spectrum, the number of measured EWs for each element differs from star to star.

### Chemical Abundances by Spectral Synthesis

The abundance ratios of Li, C, and Ba were determined by spectral synthesis, using atomic line data from linemake. For the Li abundance, we used the Li i resonance doublet around 6707 Å. Figure 4 provides an example of the Li spectral synthesis (J0357). The black-solid line is the observed spectrum; the red-dashed line is the best-matching synthetic spectrum. The two dotted lines show the upper and lower limits of the spectral fits, deviating by \(\pm\)0.2 dex in \(A\)(Li) from the best-matching one. We conservatively take these limits as the error of the determined [Li/Fe].

Figure 4: Top panel: Example of Li spectral synthesis for one of our program stars (J0357). The black-solid line is the observed spectrum, and the red-dashed line is the best-matching synthetic spectrum. The two dotted lines are the upper and lower limits of the spectral fits, which deviate by \(\pm\)0.2 dex. Bottom panel: Residual plot of the top panel.

We estimated the C-abundance ratio by spectral synthesis of the CH \(G\)-band around 4310 Å. While estimating [C/Fe], we adopted \({}^{12}\)C/\({}^{13}\)C = 89 (\(\log~{}g>\) 3.75), 30 (3.0 \(<\log~{}g\leq\) 3.75), or 20 (\(\log~{}g\leq\) 3.0), according to the luminosity class of our program stars (Asplund et al., 2009), and we used spectra degraded to \(R\) = 10,000 in order to increase the S/N around this feature. Figure 5 exhibits an example of the spectral synthesis for the carbon-abundance determination (J0226). As before, the observed spectrum is represented by the black line, and the best-matching synthetic spectrum by the red-dashed line. The upper and lower limits of the spectral fits are denoted by the two dotted lines, located \(\pm\)0.2 dex away from the determined value; this limit is taken as the uncertainty of the estimated [C/Fe]. We corrected the determined [C/Fe] following the prescription of Placco et al. (2014) to restore the natal carbon abundance at the surface of stars for which it has been altered by evolutionary effects (the depletion of C as stars climb the giant branch). Table 7 in the Appendix summarizes the measured [C/Fe] and its corrected value for each object.

Figure 5: Same as in Figure 4, but for C and J0226.

We were also able to measure the abundance of one neutron-capture element (Ba), using the two Ba ii lines at 6141 Å and 6497 Å, through spectral synthesis. Isotopic splitting, using the values from Sneden et al. (2008), and hyperfine structure were considered when the line information was generated with linemake. An example is displayed in Figure 6; the lines are the same as in Figure 4. The error estimate comes from the standard error of the two estimates, or is conservatively set to 0.2 dex for the objects with only one line available.
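In outline, choosing the best-matching synthetic spectrum is a one-parameter \(\chi^{2}\) scan over abundance. The sketch below assumes the synthetic grid has already been computed externally (hypothetically with MOOG plus the linemake line lists) on the observed wavelength grid; it is a schematic, not the analysis code used here.

```python
import numpy as np

def best_abundance(obs_flux, synth_grid, abund_grid):
    """Return the A(X) whose synthetic spectrum best matches the observation.

    synth_grid: (n_abund, n_pix) synthetic spectra, one row per trial A(X),
    resampled onto the observed pixels; abund_grid: the trial A(X) values.
    """
    chi2 = np.sum((synth_grid - obs_flux[None, :]) ** 2, axis=1)
    best = abund_grid[np.argmin(chi2)]
    # Envelopes at best +/- 0.2 dex can then be overplotted, as in
    # Figures 4-6, to visualize a conservative abundance uncertainty.
    return best
```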
### Errors of Derived Abundances

When computing the error in the derived abundance of an individual element, we considered both the random error arising from line-to-line scatter and the systematic error caused by the uncertainties of the adopted stellar parameters. The random error represents the variation among the individual lines of a given element, calculated as \(\sigma/\sqrt{N}\), where \(N\) is the number of lines and \(\sigma\) is the standard deviation of the derived abundances. This has the advantage of including the uncertainty from the oscillator strengths of the lines considered. For an element with fewer than three detectable lines, we took \(\sigma\) from the Fe i lines and computed the standard error as \(\sigma\)(Fe i)/\(\sqrt{N}\). To derive the systematic error due to the stellar atmospheric-parameter uncertainties, we perturbed the stellar parameters by \(\pm\)100 K in \(T_{\rm eff}\), \(\pm\)0.2 dex in \(\log~{}g\), and \(\pm\)0.2 km s\({}^{-1}\) in \(\xi_{\rm t}\), one at a time, and estimated the abundance change for each case; the systematic error for each parameter is the average of the two values derived from the \(\pm\) perturbations. The final reported error on the abundance of each element is the quadratic sum of the random and systematic errors, \(\sigma_{\rm tot}=\sqrt{\sigma_{\rm rand}^{2}+\sigma_{\rm sys}^{2}}\). Table 4 summarizes the estimated abundances of our program stars; Table 8 in the Appendix provides more detailed abundances and their associated errors for our VMP stars.

Figure 6: Same as in Figure 4, but for Ba and J0713.

Figure 7: Differences in the chemical abundances between our study and the work by Ito et al. (2013) (top panel) and Matsuno et al. (2017) (bottom panel). The red-solid line shows the zero point, while the gray-dashed lines indicate deviations of \(\pm\)0.2 dex.

### Abundance Comparison with Previous Studies

Among our sample of stars, two objects (J0226 and J1522) were previously studied by Ito et al. (2013) and Matsuno et al. (2017), respectively. Those studies observed the stars with Subaru/HDS, obtained high-resolution (\(R\sim\) 60,000) spectra, and carried out detailed abundance analyses. We observed these stars to validate our strategy for the stellar-parameter and abundance determinations by comparing our estimates with their values. We intentionally observed the bright (\(V\sim\) 9.1) object J0226 multiple times with different exposure times to obtain spectra with a range of S/N. This provides an opportunity to check how the S/N of a spectrum affects the derivation of the stellar parameters and abundances. By comparing the literature stellar parameters and chemical abundances with the ones derived from our spectrum of similar S/N to the rest of our program stars, we can assess whether the stellar parameters and abundances are reliable at S/N \(\sim\) 60, the mean S/N of the relatively bright (\(g\leq\) 15.9) objects among our program stars. Similarly, the relatively faint object J1522 can be used to validate the derived abundances for the faint (\(g>\) 15.9) objects with S/N \(\sim\) 40.

We obtained \(T_{\rm eff}\) = 5461 K, \(\log~{}g\) = 3.0, and [Fe/H] = -3.78 for the object J0226 from a GRACES spectrum with S/N = 75. Comparison with the stellar parameters estimated by Ito et al. (2013) reveals that our estimates are 30 K higher in \(T_{\rm eff}\), 0.4 dex lower in \(\log~{}g\), and 0.01 dex higher in [Fe/H], indicating good agreement.
In addition, we compared the elemental differences in common between their study and our work, as shown in the top panel of Figure 7. The gray-dashed lines indicate the abundance deviation of \(\pm\)0.2 dex. The plot clearly indicates a good match within \(\pm\)0.2 dex, validating our abundance analysis. The stellar parameters derived from our GRACES spectrum for J1522 are \(T_{\rm eff}\) = 5698 K, \(\log\)\(g\) = 3.43, and [Fe/H] = -3.7. This object is relatively faint, and has a low S/N of 38. Comparing with the values reported by Matsuno et al. (2017), our estimates are about 200 K higher for \(T_{\rm eff}\), 0.3 dex lower for \(\log\)\(g\), and 0.25 dex more metal-rich. We ascribe the [Fe/H] difference primarily to the temperature difference. We note that they adopted \(T_{\rm eff}\) = 5505 K determined by fitting the H\(\beta\) profile for their abundance analysis, instead of \(T_{\rm eff}\) derived from \(V-K_{\rm s}\), which is 5813 K, and much closer to our estimate. Nonetheless, in the bottom panel of Figure 7, we see that Li and Mg abundance agree within \(\pm\)0.2 dex, again validating our abundance determinations even at lower S/N. Note that in the figure, we did not include the carbon-abundance ratio because both theirs and ours are the upper limit estimate. ## 5 Discussion In this section, we compare the chemical abundances of our program stars with those from other previously studied Galactic halo stars. We find that most of our program stars follow the general trends of the previous data for VMP stars. However, there are a few exceptional stars with peculiar abundances. We first discuss the overall abundance \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ ID} & Li i & C & Na i & Mg i & K i & Ca i & Sc ii & Cr i & Mn i & Ni i & Zn i & Ba ii \\ \hline J0010 & \(\cdots\) & 0.37 & 0.24\(\pm\)0.07 & 0.73\(\pm\)0.04 & 0.13\(\pm\)0.09 & 0.19\(\pm\)0.05 & \(-\)0.33\(\pm\)0.11 & \(-\)0.10\(\pm\)0.05 & 0.09\(\pm\)0.06 & 0.13\(\pm\)0.05 & 0.00\(\pm\)0.12 & \(-\)1.72\(\pm\)0.06 \\ J0102 & 3.79 & 0.75 & \(-\)0.60\(\pm\)0.12 & 0.21\(\pm\)0.12 & \(\cdots\) & \(\cdots\) & \(-\)0.04\(\pm\)0.17 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J0158 & 2.78 & 0.11 & 1.14\(\pm\)0.08 & 0.06\(\pm\)0.06 & \(\cdots\) & 0.41\(\pm\)0.04 & \(\cdots\) & \(-\)0.09\(\pm\)0.06 & \(-\)0.01\(\pm\)0.10 & 0.30\(\pm\)0.06 & 0.61\(\pm\)0.10 & \(<\)\(-\)1.60\(\pm\)0.20 \\ J0226 & \(\cdots\) & 1.33 & 0.29\(\pm\)0.04 & 0.59\(\pm\)0.05 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J0357 & 3.53 & 0.40 & 0.13\(\pm\)0.07 & 0.34\(\cdots\) & 0.33\(\pm\)0.03 & 0.50\(\pm\)0.10 & \(-\)0.16\(\pm\)0.05 & 0.01\(\pm\)0.09 & 0.37\(\pm\)0.07 & 0.81\(\pm\)0.10 & \(-\)0.70\(\pm\)0.04 \\ J0422 & \(\cdots\) & \(<\)0.11 & \(-\)0.46\(\pm\)0.08 & 0.18\(\pm\)0.04 & \(\cdots\) & 0.21\(\pm\)0.05 & \(\cdots\) & \(-\)0.36\(\pm\)0.08 & \(\cdots\) & 0.23\(\pm\)0.08 & \(\cdots\) & \(-\)1.15\(\pm\)0.20 \\ J0713 & \(\cdots\) & 0.08 & 0.37\(\pm\)0.08 & 0.31\(\pm\)0.06 & 0.18\(\pm\)0.10 & 0.00\(\pm\)0.05 & \(-\)0.26\(\pm\)0.10 & \(-\)0.33\(\pm\)0.03 & \(-\)0.02\(\pm\)0.10 & \(-\)0.01\(\pm\)0.04 & 0.31\(\pm\)0.13 & \(-\)2.08\(\pm\)0.04 \\ J0758 & \(<\)1.20 & \(-\)0.75\(\pm\)0.24 & 0.06\(\pm\)0.17 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J0814 & 4.10 & \(<\)1.10 & \(-\)0.05\(\pm\)0.05 & 0.33\(\pm\)0.04 & \(\cdots\) & 0.61\(\pm\)0.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J0908 & 
\(\cdots\) & \(<\)0.01 & \(-\)0.19\(\pm\)0.08 & 0.17\(\pm\)0.07 & 0.96\(\pm\)0.11 & 0.35\(\pm\)0.04 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J0925 & 3.80 & \(-\)0.28\(\pm\)0.05 & 0.35\(\pm\)0.05 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J1037 & 3.66 & 1.10 & \(-\)0.41\(\pm\)0.06 & 0.30\(\pm\)0.03 & \(\cdots\) & 0.12\(\pm\)0.03 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J1311 & \(\cdots\) & 1.10 & \(-\)0.12\(\pm\)0.03 & \(-\)0.22\(\pm\)0.03 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J1317 & 3.73 & \(<\)0.60 & \(-\)0.08\(\pm\)0.08 & 0.30\(\pm\)0.06 & \(\cdots\) & 0.24\(\pm\)0.05 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J1522 & 4.30 & \(<\)1.03 & \(\cdots\) & 0.42\(\pm\)0.07 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ J1650 & 3.47 & \(<\)0.30 & \(-\)0.60\(\pm\)0.10 & 0.13\(\pm\)0.07 & 0.69\(\pm\)0.14 & 0.35\(\pm\)0.04 & \(-\)0.22\(\pm\)0.10 & \(\cdots\) & 0.73\(\pm\)0.10 & \(\cdots\) & \(<\)\(-\)0.64\(\pm\)0

trends of individual elements, and then focus on the chemically peculiar objects. These stars will be valuable for providing constraints on rare astrophysical production sites or nucleosynthetic channels. Note that in the following discussion we only consider 1D LTE abundances, to be consistent with other literature abundances, which are generally not corrected for either 3D or NLTE effects. We do not include the two benchmark stars in the following discussion.

### Overall Abundance Trends

#### 5.1.1 Lithium

Lithium is one of the most important elements, as its abundance can be used to constrain Big Bang nucleosynthesis. Figure 8 exhibits the behavior of the absolute Li abundances, \(A\)(Li), as a function of [Fe/H] (left panel) and \(T_{\rm eff}\) (right panel). The colored symbols are our program stars, while the gray circles are adopted from Roederer et al. (2014); neither set is corrected for NLTE effects. The stellar sequence located at \(A\)(Li) \(\sim\) 2.2 in the region -2.9 \(<\) [Fe/H] is known as the Spite plateau (Spite & Spite, 1982). Although the location of the Spite plateau varies over \(A\)(Li) = 2.10 - 2.35 from study to study (e.g., Ryan et al., 1999, 2000; Bonifacio et al., 2010; Sbordone et al., 2010; Simpson et al., 2021), we indicate it by the red-dashed line at \(A\)(Li) = 2.2, the most commonly accepted value. The primordial Li abundance predicted by Big Bang nucleosynthesis is in the range \(A\)(Li) = 2.67 - 2.74 (Spergel et al., 2007; Cyburt et al., 2016; Coc & Vangioni, 2017); the difference between the primordial Li and the Spite plateau is the longstanding cosmological lithium problem.

Inspection of the figure reveals that our sample of stars follows the general trend of the other literature abundances, with some subtle differences. The object J0158 (magenta circle), with \(A\)(Li) \(<\) 1.0, is a giant. When stars leave the main sequence (MS), their surface Li is reduced by dilution caused by the first dredge-up (FDU). Observationally, we can see the difference in Li abundances between the turnoff (TO) stars and giants; after the FDU, the Li abundance of giants does not change much across a large range of [Fe/H] until the red giant branch (RGB) bump (e.g., Lind et al., 2009).
The Li abundance decreases further as a star ascends above the RGB bump (\(\log~{}g<\) 2.0), due to poorly understood extra mixing. This characteristic abundance pattern due to stellar evolution is clearly seen in Figure 8. Among our program stars, the four stars hotter than 6500 K have \(A\)(Li) values closely scattered around the Spite plateau, as can be seen in the right panel. These objects can be very useful for checking a possible extension of the Li plateau to hotter temperatures, where relatively few data exist at present. Three stars (black, red, and yellow triangles) are subgiants close to the giant phase (see Figure 3); their Li abundances are slightly lower than the overall trend, probably due to their deepening convection zones. The object J0102 (cyan triangle) is a warm dwarf with [Fe/H] \(<\) -3.0. Its expected convective zone is not very deep; hence its surface Li should be preserved without much destruction since its birth. However, its \(A\)(Li) is slightly lower than the Spite plateau. One possible cause is the temperature scale we have adopted: a temperature difference of 100 K results in a change of 0.08 dex in \(A\)(Li), so our temperature scale may be slightly lower for dwarf stars than the ones used in the literature. An analysis of a larger number of stars in a uniform and consistent manner is clearly critical when discussing the observed Li trend. A follow-up study of our TO and dwarf stars can provide useful constraints on the primordial lithium problem, considering their evolutionary stage and metallicity.

One more interesting aspect is that our sample also gives a clue to a behavior known as the "double sequence" in the Spite plateau, noted by Melendez et al. (2010), whereby stars with [Fe/H] \(<\) -2.5 possess slightly lower \(A\)(Li) than those with [Fe/H] \(>\) -2.5. Similar behavior was reported in other studies (e.g., Ryan et al., 1999, 2000; Sbordone et al., 2010), and we notice this characteristic in the figure as well.

Figure 8: \(A\)(Li) distribution, as a function of [Fe/H] (left panel) and of \(T_{\rm eff}\) (right panel). The colored symbols are our program stars, while the gray circles are gathered from Roederer et al. (2014). The red-dashed line indicates the Spite plateau at \(A\)(Li) = 2.2.

#### 5.1.2 Odd-\(Z\) Elements: Na, K, and Sc

Figure 9 compares our derived abundances with those of previously studied Galactic VMP stars for the odd-\(Z\) (Na, K, and Sc), \(\alpha\)- (Mg and Ca), iron-peak (Cr, Mn, Ni, and Zn), and neutron-capture (Ba) elements. Our program stars are represented by the large colored symbols, while the objects compiled from various literature sources are indicated by gray circles (Venn et al., 2004; Aoki et al., 2013; Yong et al., 2013; Roederer et al., 2014). Note that, even though NLTE-corrected values for some elements are available in some studies, we used LTE values for this comparison. The colors and symbols for our program stars are the same as in Figure 3. The pink-solid line and error bars represent the 3\(\sigma\)-clipped mean trend and standard deviation of the literature sample, using a bin size of 0.2 dex in [Fe/H]; our estimates are not included in these calculations. The black-solid line denotes the solar abundance ratio.
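Such a running trend is simple to reproduce. A minimal sketch using astropy's sigma clipping follows; the bin edges and the minimum-count cut are our illustrative choices, not values taken from the paper beyond the 0.2 dex bin width and 3\(\sigma\) clipping.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

def binned_trend(feh, xfe, lo=-4.2, hi=-1.0, width=0.2, min_stars=3):
    """3-sigma-clipped mean and std of [X/Fe] in [Fe/H] bins of `width` dex."""
    centers, means, stds = [], [], []
    for edge in np.arange(lo, hi, width):
        sel = (feh >= edge) & (feh < edge + width)
        if sel.sum() < min_stars:
            continue  # skip sparsely populated bins
        mean, _, std = sigma_clipped_stats(xfe[sel], sigma=3.0)
        centers.append(edge + width / 2.0)
        means.append(mean)
        stds.append(std)
    return np.array(centers), np.array(means), np.array(stds)
```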
Some selection biases may underlie the individual literature samples, depending on their science goals. Nevertheless, Figure 9 conveys useful information for understanding the early chemical evolution of the MW, as well as for identifying chemically peculiar objects.

Among the odd-\(Z\) elements, we were able to derive Na, K, and Sc abundance ratios. The Na-abundance ratios of our program stars in Figure 9 show that many of them lie consistently below the main trend (the pink line), even though some are in the range of other VMP stars. Sodium is formed during C-burning in massive stars, as well as during hydrogen burning via the Ne-Na cycle (e.g., Romano et al., 2010). It is dispersed into the interstellar medium (ISM) by CCSNe, as well as by mass loss from asymptotic giant branch (AGB) stars. Hence, the objects with [Na/Fe] \(<\) -0.5 may have formed in isolated gas clouds that were not well-mixed, or were chemically enriched by only a few CCSNe or AGB stars. Two objects stand out in the figure (J0158 and J2242), with [Na/Fe] \(>\) +1.0 and [Na/Fe] \(<\) -1.0, respectively. We discuss these objects in more detail in Section 5.3.1.

Unlike Na, K and Sc are products of explosive Si-burning and/or O-burning in the CCSN stage (Woosley & Weaver, 1995). They are also created by the \(\nu\)-process in CCSNe (Kobayashi et al., 2011). Hence, they are good probes for tracing nucleosynthesis in explosive events. We measured K abundances for six stars. Figure 9 indicates that the K-abundance trend increases with decreasing [Fe/H] for [Fe/H] \(<\) -3.2; one of our program stars (J0908; cyan circle) lies on this increasing trend. There are two objects (J1650 and J2242), at [Fe/H] \(\sim\) -2.2 and -3.4, respectively, which possess much higher [K/Fe] than the other objects at the same metallicity, as further discussed in Section 5.3.1.

The Sc-abundance ratio was derived for five objects, and two of their abundances exhibit somewhat larger scatter than the other Galactic field stars, as can be seen in Figure 9. Studies (e.g., Chieffi & Limongi, 2002) show that Sc production depends on the mass of the CCSN progenitor. If stars formed in gas clouds enriched by a few SNe from stars of different masses, we would expect to observe a large scatter in the Sc abundances, which may be the case for our program stars. It is known that Galactic chemical-evolution models do not reproduce the evolution of the observed K and Sc abundances; the model predictions are consistently lower than the observations for these two elements (e.g., Kobayashi et al., 2020).

#### 5.1.3 Alpha Elements: Mg and Ca

The so-called \(\alpha\)-elements are mostly produced by hydrostatic and explosive nucleosynthesis in massive stars, and they are ejected into interstellar space by CCSN explosions. Specifically, Mg is produced through hydrostatic nucleosynthesis during the C-burning phase of a massive star, while Ca is mainly created during the O-burning of CCSNe. Although the \(\alpha\)-elements are mainly formed by CCSNe, some amount of Ca is also generated by SNe Ia (Iwamoto et al., 1999). Thus, the Mg abundance, which has a single production channel, is frequently used as an important tracer of the contribution of CCSNe. In addition, thanks to its strong absorption lines, the Mg abundance is readily measured from the spectra of VMP stars, and it has proven to be a powerful tool for tracking star-formation history.
Among the \(\alpha\)-elements, we were able to derive the abundances of Mg for all our EMP candidates and of Ca for 13 objects from the GRACES spectra. In Figure 9, we can observe that most of our targets exhibit a similar [Mg/Fe] trend at [Mg/Fe] \(\sim\) +0.3, with small scatter relative to other VMP stars, but a few objects (J0010, J1311, and J2241) distinguish themselves from the rest; we discuss these objects in Section 5.3.2. The Ca abundances of most of our program stars exhibit a very small dispersion, following the trends of other VMP stars. The behavior of the observed \(\alpha\)-element abundances indicates that CCSNe had a dominant role in the chemical enrichment of our program stars.

Figure 9: [X/Fe], as a function of [Fe/H], for the odd-\(Z\) (Na, K, and Sc), \(\alpha\)- (Mg and Ca), iron-peak (Cr, Mn, Ni, and Zn), and neutron-capture (Ba) elements. Our program stars are represented by colored symbols, while the objects compiled from various literature sources (Venn et al., 2004; Aoki et al., 2013; Yong et al., 2013; Roederer et al., 2014) are shown with gray circles. The pink-solid line and error bars represent the 3\(\sigma\)-clipped mean trend and standard deviation of the literature sample for a bin size of 0.2 dex in [Fe/H], respectively. Our estimates are not included in these calculations. The black-dashed line denotes the solar abundance ratio. The colors and symbols for our program stars are the same as in Figure 3. The down arrow indicates an upper limit.

#### 5.1.4 Iron-peak Elements: Cr, Mn, Ni, and Zn

Iron-peak elements are produced during Si-burning, and can be synthesized in both SNe Ia and CCSNe (Kobayashi & Nomoto, 2009; Kobayashi et al., 2020). Since the innermost region of C+O white dwarfs is sufficiently hot to burn Si, iron-peak elements can be synthesized by this pathway (Hoyle & Fowler, 1960; Arnett et al., 1971). CCSNe contribute explosive nucleosynthesis of iron-peak elements in two distinct regions: the incomplete and complete Si-burning regions (e.g., Hashimoto et al., 1989; Woosley & Weaver, 1995; Thielemann et al., 1996; Umeda & Nomoto, 2002). Chromium and Mn are synthesized in the incomplete Si-burning region of the CCSN ejecta, while Ni and Zn are produced in the complete Si-burning region, the deeper inner portion of the ejecta.

We were able to determine the abundances of Cr, Mn, Ni, and Zn among the iron-peak elements for some of our program stars. Inspection of Figure 9 reveals that the overall trends of Cr, Mn, and Ni for the VMP/EMP stars exhibit a relatively small dispersion, but the Zn abundance shows a rather large scatter among the iron-group elements. Similar to other Galactic VMP stars, the Cr abundances of our targets exhibit ordinary behavior, consistent with other literature results. The very small dispersion over a range of metallicities suggests that the formation of Cr and Fe is closely related (e.g., Reggiani et al., 2017). We also notice a declining trend with decreasing [Fe/H], suggesting chemical evolution driven by CCSNe in the early epochs of the MW.

Figure 9 indicates that most of the Galactic field stars exhibit a decreasing trend of the Mn abundance with decreasing metallicity, similar to Cr, supporting the claim of early chemical enrichment from CCSNe. We measured the Mn abundance for four stars. It can be seen in Figure 9 that their abundances are near the solar value, independent of their metallicity, and appear elevated compared to the main locus.
However, the absorption lines of Mn are often too weak in VMP/EMP stars, causing systematic uncertainty. This might be the case for our targets, and three of the four objects rely on a single line, resulting in uncertain measurements. An abundance analysis based on higher-quality spectra is required to confirm the Mn enhancements in our four program stars.

The Ni abundances of the previously studied VMP stars are generally close to the solar level, as are those of some of our program stars. Notably, we observe three objects (J1317, J1650, and J2242) that have [Ni/Fe] \(>\) +0.5. We further discuss the objects J1650 and J2242 in Section 5.3.1. The abundances of elements other than Ni in J1317 appear normal, so it may have been enriched by SNe Ia, considering its relatively high metallicity. However, its Ni abundance is derived from one absorption line and its uncertainty is rather large; confirmation of its chemical peculiarity with additional lines is desirable.

In Figure 9, the average Zn-abundance pattern tends to increase with decreasing metallicity. This trend has been argued to be caused by problems in detecting the Zn lines at low metallicity (see Yong et al., 2021 for a more detailed discussion), which is particularly problematic for EMP stars. These factors may explain the larger dispersion of Zn compared to the other iron-peak elements. Notwithstanding, the Zn-abundance trend can be used to understand the physics of CCSNe. Zinc is generated in the deepest region of hypernovae (HNe; Umeda & Nomoto, 2002), and a more significant explosion energy leads to higher [Zn/Fe] ratios (Nomoto et al., 2013). Consequently, HNe may be responsible for the higher values of [Zn/Fe] at the lowest metallicities.

#### 5.1.5 Neutron-capture Element: Ba

Heavier elements beyond the iron peak are created by capturing neutrons and their subsequent \(\beta\)-decay. At least two processes - the slow (\(s\)-) and rapid (\(r\)-) neutron-capture processes - are thought to be responsible for the synthesis of these elements. An intermediate neutron-capture process (the \(i\)-process; Cowan & Rose, 1977) may also be involved. The slow neutron-capture process occurs in an environment where the neutron flux is much lower than in the rapid neutron-capture process. The main \(r\)-process elements are created during violent events such as CCSNe, neutron star mergers, and gamma-ray bursts (Nishimura et al., 2015; Drout et al., 2017; Cote et al., 2019; Siegel et al., 2019), whereas the main \(s\)-process elements are produced during the AGB phase of low-mass stars (Suda et al., 2004; Herwig, 2005; Komiya et al., 2007; Masseron et al., 2010; Lugaro et al., 2012). A number of sites have been suggested for the \(i\)-process, including AGB stars (Hampel et al., 2016; Cowan et al., 2021; Choplin et al., 2022) and rapidly accreting white dwarfs (Denissenkov et al., 2017, 2019; Cote et al., 2018).

In the GRACES spectra of our program stars, the only measurable lines for neutron-capture elements are the two Ba ii lines, and we took an average of the abundances derived from them. The bottom panel of Figure 9 displays [Ba/Fe] as a function of [Fe/H] for our sample (colored symbols) and other field stars. We clearly observe a very large scatter, especially in the low-metallicity region, and all our program stars have [Ba/Fe] \(<\) 0. The large spread of the Ba abundance at low metallicity is a well-known pattern (Ryan et al., 1996; Aoki et al., 2005; Roederer, 2013).
Because the favored astrophysical site for the operation of the \(s\)-process is AGB stars, stars with the lowest metallicities, and hence the oldest ages, did not have sufficient time to be polluted by progenitors during their thermally pulsing AGB phase. Instead, in the early Universe, Ba could be produced by the \(r\)-process in massive stars (Travaglio et al., 1999; Cescutti et al., 2006), or by fast-rotating, low-metallicity stars (Frischknecht et al., 2016; Choplin et al., 2018); in both of the latter models, the main neutron source is the \({}^{22}\)Ne(\(\alpha\),\(n\))\({}^{25}\)Mg reaction. Thus, we expect that the Ba in our EMP stars was probably produced by the main \(r\)-process. In addition, due to inefficient mixing of the ISM in the early Galaxy, stars born in giant molecular clouds polluted by a single SN may have unusually high Ba. However, we do not see evidence for this in our sample, as all our stars exhibit low Ba abundances ([Ba/Fe] \(<\) 0.0).

Although we are not able to measure Sr abundances for our program stars, other studies (e.g., Spite et al., 2014; Cowan et al., 2021; Matas Pinto et al., 2021) reported a large scatter of [Sr/Ba] among VMP stars, indicating that a Ba-poor star can still be Sr-rich. This leads to invoking other processes, such as a non-standard \(s\)-process (e.g., Frischknecht et al., 2016) and the \(i\)-process (Cowan & Rose, 1977; Hampel et al., 2016); these mechanisms are believed to be relatively dominant only in Ba-deficient stars. Additional follow-up studies to determine the Sr abundance ratios of our program stars will be valuable for confirming these characteristics.

Recently, Li et al. (2022) reported two different behaviors of the Ba abundance ratio in their VMP giant sample: the stars with [Fe/H] \(<\) -3.0 exhibit much lower [Ba/Fe] than the ones with [Fe/H] \(>\) -3.0. Although we do not observe this pattern among our giants alone, the inclusion of the TO stars reveals a similar behavior. Furthermore, the plot of [Ba/Fe] indicates that most of the stars with [Ba/Fe] \(<\) -1.0 have [Fe/H] \(<\) -3.0 and are giants; Li et al. (2022) also found such a feature in their sample. The variety of Ba abundances among VMP stars suggests stochastic pollution by neutron-capture elements in the chemical evolution of the early MW. One intriguing object is J0713, which has the lowest [Ba/Fe] in our sample. Given that there is nothing unusual in its other abundances, this object may have been born in a natal cloud with no association with a neutron-capture event.
The stars with peculiar abundances are mostly non-CEMP stars in our sample. We have examined where our target stars are located in the \(A\)(C)-[Fe/H] diagram, the so-called Yoon-Beers diagram, as shown in Figure 10. In the figure, the light purple, green, and yellow circles indicate the morphological regions of the Group I, II, and III stars, respectively, as described by Yoon et al. (2016, 2019). The black solid, dashed, and dotted lines denote [C/Fe] \(=\) +0.7, \(A\)(C) \(=\) 7.1, and \(A\)(C) \(=\) 6.3, respectively. The legend for colors and symbols is the same as in Figure 3, and the down arrow indicates an upper limit. Note that the \(A\)(C) values are corrected for evolutionary effects following the prescription of Placco et al. (2014); the uncorrected values and the sizes of the corrections are listed in Table 7 in the Appendix.

Figure 10: Yoon-Beers diagram of \(A\)(C) as a function of [Fe/H].

Inspection of Figure 10 reveals that, among our three confirmed CEMP stars, two objects (black and cyan triangles) may belong to Group II, taking into account their low metallicities ([Fe/H] \(<\) -3.0) and \(A\)(C) levels, while one star (J1037, yellow square) with [Fe/H] \(>\) -3.0 occupies Group I. According to the study by Yoon et al. (2016), stars in Group I are dominated by CEMP-\(s\) or \(r/s\) stars, which exhibit enhancements of \(s\)- or \(r\)-process elements (and may in fact be CEMP-\(i\) stars), while Groups II and III contain mostly CEMP-no stars, which exhibit low abundances of neutron-capture elements. Considering its classification as a CEMP-no star (due to its low Ba abundance), J1037 is likely to be in Group III rather than Group I. A recent study by Norris & Yong (2019) also reported that about 14% of the Group I CEMP objects belong to the CEMP-no category. Since most CEMP-\(s\) stars show radial-velocity variations (e.g., Starkenburg et al., 2014; Placco et al., 2015; Hansen et al., 2016, 2016; Jorissen et al., 2016), indicative of their binarity, radial-velocity monitoring of these stars can further confirm their assigned CEMP sub-class.

As mentioned above, CEMP-no stars can be further subdivided into two groups: one (Group II) with a good correlation between \(A\)(C) and [Fe/H], and the other (Group III) without a clear correlation between them (Yoon et al., 2016). The fact that CEMP-no objects exhibit excesses in carbon together with low neutron-capture-element abundances implies that the sources of their chemical patterns are unlikely to be mass transfer from a binary companion, as in the CEMP-\(s\) stars. Rather, they have likely been enriched through distinct nucleosynthesis channels. Two channels for the formation of the CEMP-no stars have been widely considered. One is pollution from faint SNe associated with Pop III stars.
This type of SN experiences mixing and fallback (Umeda & Nomoto, 2003; Tominaga et al., 2007, 2014; Heger & Woosley, 2010; Nomoto et al., 2013; Ezzeddine et al., 2019), and ejects less iron due to its small explosion energy. Thus, only the outer layers, which have copious amounts of lighter elements, including carbon, are ejected, whereas the inner part, which includes a large amount of Fe, falls back onto the neutron star or black hole, increasing the [C/Fe] ratio. Another mechanism is the so-called spinstar. A rapidly rotating, massive, ultra metal-poor ([Fe/H] \(<-4.0\)) star can have large amounts of carbon at its surface (due to efficient mixing with carbon produced deeper in the star), and the surface material is blown off by a stellar wind to pollute the ISM (Meynet et al., 2006; Hirschi, 2007; Frischknecht et al., 2012; Maeder et al., 2015). Additional formation mechanisms for CEMP-no stars are discussed in detail by Norris et al. (2013). These two models cannot completely account for the observed chemical patterns of the CEMP-no stars. Nevertheless, one can infer from the different levels of \(A\)(C) and the distinct behaviors in the \(A\)(Na)-\(A\)(C) and \(A\)(Mg)-\(A\)(C) spaces that the Group II and Group III subgroups may be associated with different formation mechanisms (Yoon et al., 2016). However, a much larger sample of CEMP-no stars (especially Group III stars) with accurate elemental-abundance estimates is required to better distinguish between these two channels. In this respect, the CEMP-no stars identified through our work certainly help increase the sample size.

### Chemically Peculiar Stars

#### 5.3.1 Sodium-Peculiar Stars

In Figure 9, we identified one object (J0158) with [Na/Fe] = +1.14. This high Na-abundance ratio is a typical property of the second population (P2) in globular clusters (GCs). It is known that material synthesized in stars of the primordial population (P1) of GCs chemically enriched the ISM with light elements such as Na and Al, so that the P2 stars exhibit chemical characteristics distinct from the P1 objects, establishing anti-correlations between Na-O and Al-Mg among stars in GCs (e.g., Gratton et al., 2004; Martell et al., 2011; Carretta et al., 2012; Pancino et al., 2017). This object thus may have originated in a GC. Another piece of evidence for the chemical signature of a GC P2 star in J0158 is its low Mg-abundance ratio, [Mg/Fe] = +0.06, yielding a very high [Na/Mg] ratio of +1.08. Its normal carbon content ([C/Fe] = +0.11) also points to P2, as P2 stars mostly exhibit normal to low values of carbon. Even though a further detailed abundance analysis is required, if this object is indeed a P2 star from a GC, such a cluster may once have belonged to a dwarf galaxy, because its metallicity ([Fe/H] = -3.04) is quite low compared to GCs in the MW. In line with this, Fernandez-Trincado et al. (2017) argued that their low-Mg stars with P2 chemical abundances could originate from outside the MW. An additional kinematic study will help confirm whether or not it has been accreted from a dwarf satellite galaxy. We plan to carry out a thorough kinematic analysis of our program stars in a forthcoming paper.

The object J2242 is extremely Na-poor ([Na/Fe] = -1.02); J1650 also has a relatively low [Na/Fe] = -0.6. However, both of these stars are enhanced in K and Ni.
As K is produced by CCSNe, and Ni is formed in the inner region of the explosion, it is plausible to infer that the gas clouds which formed these objects may have been polluted by CCSNe with high explosion energies. In particular, considering its metallicity of [Fe/H] = -3.4, the progenitor of J2242 is unlikely to have been enriched by AGB stars. Because the chemical abundances in the early MW were likely established by a limited number of chemical-enrichment events, this particular object may have undergone a peculiar nucleosynthesis episode. Its enhanced K and Ni abundances support the distinct-nucleosynthesis hypothesis. However, recall that the K and Ni abundances for this star were estimated from single absorption lines only; thus, an additional high-resolution spectroscopic study of this object is necessary to confirm its peculiarity. It is worth mentioning that, in the case of the globular clusters NGC 2419 (Cohen & Kirby, 2012; Mucciarelli et al., 2012) and NGC 2808 (Mucciarelli et al., 2015), it has been reported that some of their member stars exhibit strong anti-correlations between their K and Mg abundance ratios. Kemp et al. (2018) also reported that a large number of their metal-poor stars selected from LAMOST are K-rich, with relatively low Mg-abundance ratios, and concluded that an anomalous nucleosynthesis event might be associated with the progenitors of these stars. Even though the metallicities of our two K-rich objects are much lower than those of the stars from Kemp et al. (2018) ([Fe/H] \(>\) -1.5), because they are not strongly enhanced in Mg, they could belong to the same category.

#### 5.3.2 Magnesium-Peculiar Stars

The object J0010 (black circle) in Figure 9 has [Mg/Fe] = +0.73, much higher than other halo stars near its metallicity ([Fe/H] = -2.48). This Mg-rich star is slightly enhanced in Na, while [Ca/Fe] looks normal (+0.19), resulting in an elevated [Mg/Ca] ratio (+0.54). The existence of such high-[Mg/Ca] objects has been suggested in numerous studies (Norris et al., 2002; Andrievsky et al., 2007; Frebel et al., 2008; Aoki et al., 2018). Its carbon enhancement is mild ([C/Fe] = +0.37). This star also stands out as a very low-Ba object with respect to other halo stars at [Fe/H] = -2.5; its [Ba/Fe] of -1.72 is over 1 dex lower than that of other objects at the same metallicity. Judging from its extremely low [Ba/Fe] ratio, this object may not be associated with events that produced large amounts of neutron-capture elements, but more likely with CCSNe, since most of its Ca and iron-peak elements show low or normal abundances, with the exception of Mn, whose abundance is uncertain due to weak Mn I lines.

J2241, indicated in the Mg-abundance panel of Figure 9, has [Mg/Fe] = -0.61, which is much lower than that of the other VMP stars near its metallicity ([Fe/H] = -2.71). Interestingly, this star's Na-abundance ratio is also somewhat deficient, with [Na/Fe] = -0.6. J1311 (red square) also has a relatively low [Mg/Fe], but nothing abnormal among the other abundances. Mg-poor halo stars have also been reported in other studies (e.g., Ivans et al., 2003; Aoki et al., 2014). According to Ivans et al. (2003), the origin of these objects can be explained by larger pollution from SNe Ia compared to other halo objects with similar metallicities. Magnesium is primarily produced by massive stars, while Ca is created by both SNe Ia and CCSNe, resulting in a deficiency of Mg relative to Ca. Unfortunately, we do not have measured Ca abundances for the two low-[Mg/Fe] stars to confirm this scenario.
Ivans et al. (2003) also reported very low [Na/Fe] and [Ba/Fe] ratios for their Mg-poor stars. One of our Mg-poor objects also exhibits this signature. Stars with low \(\alpha\)-abundance ratios are sometimes explained by an enhancement of their Fe (Cayrel et al., 2004; Yong et al., 2013; Jacobson et al., 2015). In this case, the abundance ratios of other elements are expected to be relatively low as well. It will be worthwhile to carry out higher-S/N, high-resolution follow-up observations of these low-[Mg/Fe] objects to see if other elements behave in this manner. Another plausible explanation for the Mg-poor stars is that they came from classical or ultra-faint dwarf galaxies. These systems have had very low star-formation rates, and the contribution of SNe Ia started occurring at much lower metallicities (e.g., Shetrone et al., 2003; Tolstoy et al., 2009). In order to test this, a kinematic analysis of the Mg-deficient stars is presently being pursued.

### Progenitor Masses of EMP Stars

Extremely metal-poor stars are regarded as fossil probes for understanding the chemical evolution of the early MW, because they preserve the chemical information of their natal gas clouds, permitting constraints on their predecessors, presumably massive Pop III stars. We explore the characteristics (especially mass and explosion energy) of the progenitors of our EMP stars by comparing their abundance patterns with the theoretical predictions of the Pop III SN models of Heger & Woosley (2010). Their SN models consist of a grid of 16,800 combinations, covering explosion energies in the range 0.3 - 10 \(\times\)\(10^{51}\) erg and progenitor masses between 10 and 100 \(M_{\odot}\). There exist 120 initial masses, and the grid includes mixing efficiencies from no mixing to nearly complete mixing. Using this grid, one can retrieve the progenitor properties of an EMP star by finding the SN chemical yield that best matches the observed abundance pattern. We have made use of the starfit online tool\({}^{11}\) to carry out this exercise. We only considered the EMP stars among our program stars, and assumed that our EMP stars were formed out of gas polluted by a single Pop III SN. If a measured abundance was derived from only one line, we treated it as an upper limit, and we attempted to fit with all available abundances. In this exercise, we did not include the two reference stars (J0226 and J1522) and one program star (J0925), because the abundances of only a few elements are available for them to constrain their progenitor masses.

Figure 11: Left panel: Example of the abundance-pattern matching for J0713. Magenta symbols are the measured abundances. The solid lines represent the five best-fit models with different masses and mixing efficiencies; in this particular example, the explosion energy is the same for all five models. The percentage in the legend indicates the occurrence rate of the model among 10,000 predicted models. Right panel: Histogram (black) of the predicted progenitor masses of our EMP stars. The red and blue histograms are the mass distributions of the UMP stars predicted by Placco et al. (2015) and the EMP stars derived by Ishigaki et al. (2018), respectively. Each histogram is normalized by the total number of stars to compare the mass distributions.
Footnote 11: [http://starfit.org](http://starfit.org)

To confidently recover the progenitor masses of our EMP stars, we generated 10,000 different abundance patterns for each object by resampling each abundance from a normal error distribution, thereby obtaining distributions over 10,000 possible masses and explosion energies.
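The following short Python sketch illustrates this resampling scheme; the abundance values, errors, and the chi-square matching against a stand-in model grid are illustrative assumptions only (the actual fits were performed with the starfit tool and the Heger & Woosley 2010 yields).

```python
# Schematic sketch of the resampling procedure described above (not the actual
# starfit code): each observed abundance is perturbed by its Gaussian error,
# the perturbed pattern is matched to a model grid by chi-square, and the most
# frequent best-fit model is adopted. The model grid here is a random stand-in.
import numpy as np

rng = np.random.default_rng(0)

obs = np.array([-3.2, -0.4, 0.3, 0.2])    # hypothetical abundance pattern
err = np.array([0.15, 0.20, 0.10, 0.15])  # hypothetical 1-sigma errors
grid_masses = np.array([10.9, 15.2, 18.6, 25.5, 60.0])  # model masses (M_sun)
# Stand-in model yields; in practice these come from the SN model grid.
grid_yields = rng.normal(obs, 0.3, size=(grid_masses.size, obs.size))

counts = np.zeros(grid_masses.size, dtype=int)
for _ in range(10_000):
    sample = rng.normal(obs, err)                        # one resampled pattern
    chi2 = ((grid_yields - sample) ** 2 / err**2).sum(axis=1)
    counts[np.argmin(chi2)] += 1                         # tally the best-fit model

best = np.argmax(counts)
print(f"most frequent model: {grid_masses[best]} M_sun "
      f"({100 * counts[best] / counts.sum():.1f}% of resamples)")
```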
The left panel of Figure 11 shows an example of the best-matching chemical yield for J0713. The magenta symbols represent the observed abundances. Each solid line represents a theoretically predicted abundance pattern produced by a combination of different masses, explosion energies, and mixing efficiencies, as indicated in the legend at the top of the panel. In this particular example, all five best-fit models have an explosion energy of 0.3 \(\times\)\(10^{51}\) erg, and the best-fit model has a mass of 15.2 \(M_{\odot}\) with a mixing efficiency of -0.6, which accounts for 44.5% of the 10,000 predicted models. The other four top models follow, with masses in the range 15 - 20 \(M_{\odot}\) and mixing efficiencies of -0.6 or -0.8. We chose the most frequently occurring model, and adopted its mass as the progenitor mass of the EMP star. The right panel of Figure 11 displays a histogram of the predicted progenitor masses of our EMP stars. The histogram implies that, except for one object (60 \(M_{\odot}\)), our EMP objects have progenitor masses of less than 26 \(M_{\odot}\). Table 5 lists the most probable masses and explosion energies for our EMP stars.

| Short ID | Mass (\(M_{\odot}\)) | Energy (\(10^{51}\) erg) |
| :---: | :---: | :---: |
| J0102 | 10.9 | 0.9 |
| J0158 | 21.0 | 0.3 |
| J0422 | 60.0 | 3.0 |
| J0713 | 15.2 | 0.3 |
| J0758 | 11.9 | 0.9 |
| J0814 | 25.5 | 0.9 |
| J0908 | 12.8 | 0.3 |
| J2242 | 18.6 | 10.0 |
| J2341 | 15.2 | 0.3 |

Table 5: Predicted Progenitor Masses and SN Explosion Energies of Our EMP Stars. Note. – We did not attempt to determine the progenitor masses of the two reference stars (J0226 and J1522) and one program star (J0925), because only a few elements are available to constrain their progenitor masses.

Placco et al. (2015a) used the same SN models of Heger & Woosley (2010) to determine the progenitor masses of 21 UMP stars, and found that most of their progenitors have masses in the range 20 - 28 \(M_{\odot}\) and explosion energies of 0.3 - 0.9 \(\times\)\(10^{51}\) erg (see also Placco et al., 2016 for the progenitor masses of additional UMP stars). The red histogram in the right panel of Figure 11 shows their mass estimates of the UMP progenitors, which are mostly less than 40 \(M_{\odot}\). The mass range of our EMP stars is somewhat lower than that of Placco et al. (2015a), but generally in good agreement. The majority of our EMP stars have progenitor SN explosion energies between 0.3 and 1.0 \(\times\)\(10^{51}\) erg, as can be read off from Table 5, again consistent with Placco et al. (2015a). Placco et al. pointed out that the estimated mass and explosion energy are very sensitive to the presence of carbon and nitrogen abundances. The relatively lower masses of our EMP progenitors compared to theirs may thus be due to the absence of measured N abundances for our EMP stars.

Ishigaki et al. (2018) also carried out similar work to derive the mass function of the first stars, using about 200 EMP stars, but with different SN models. They used mixing-and-fallback SN models, which combine five initial masses (13, 15, 25, 40, and 100 \(M_{\odot}\)) and explosion energies of 0.5, 1.0, 10, 30, and 60 \(\times\)\(10^{51}\) erg to represent low-, normal-, and high-energy explosions. They found that the progenitor masses of their sample are mostly less than 40 \(M_{\odot}\), as can be seen in the blue histogram of the right panel of Figure 11. Our predicted masses also agree well with theirs. They also suggested that the C, N, and O abundances are sensitive to the progenitor mass, and that the Na, Mg, and Al abundance ratios are also useful tracers of the progenitor mass when ignoring the impact of stellar rotation. To sum up, the (relatively) low-mass range of the progenitors found by our sample and other studies suggests that stars with \(M<\) 40 \(M_{\odot}\) were likely primarily responsible for the chemical enrichment of the early MW.

## 6 Summary and Future Work

We have carried out high-resolution spectroscopic follow-up observations using GEMINI-N/GRACES of 20 stars (including two reference stars), selected as EMP candidates from SDSS and LAMOST medium-resolution spectra. We have presented stellar parameters and abundance estimates for Li, C, Na, Mg, K, Ca, Sc, Cr, Mn, Fe, Ni, Zn, and Ba, derived from a 1D LTE abundance analysis. The chemical abundances of our EMP candidates are compared with those of two benchmark stars from previous studies, and they show good agreement, validating our measurements of the program-star chemical abundances.

Based on our chemical abundances, we have found that all our candidates are VMP stars, including 10 objects that are EMP stars. In addition, three CEMP stars are newly identified, and their low Ba-abundance ratios ([Ba/Fe] \(<\) 0.0) indicate that they are all CEMP-no objects. As a result, our work contributes to increasing the sample size of VMP/EMP stars as well as of CEMP-no objects.

The Li abundance of our warm dwarf is slightly lower than the Spite plateau, possibly due to the different temperature scale adopted. On the other hand, the \(A\)(Li) values of our TO stars are distributed around the Spite plateau. Consequently, they can be used in follow-up studies to constrain the Spite plateau, and the TO stars in particular may contribute to extending the plateau to the warmer temperature region.

Comparison with other Galactic halo VMP stars reveals that the chemical abundances of our VMP objects generally follow similar abundance trends as a function of [Fe/H]. However, there exist a few objects that stand out from the majority of the VMP stars. We have identified one star (J0158) with [Na/Fe] = +1.14. Its low [Fe/H], [Mg/Fe], and [C/Fe] values imply that this object is closely connected to a second-generation star from a GC that once belonged to a dwarf galaxy. The object J2242 has an extremely low sodium-abundance ratio of [Na/Fe] = -1.02. This object also exhibits enhancements of the K and Ni abundances. Taking into account its metallicity of [Fe/H] = -3.4, this star may have formed in a gas cloud that was chemically enriched by CCSNe with high explosion energies. We also found a Mg-rich star ([Mg/Fe] = +0.73) that is also slightly enhanced in Na, while its [Ca/Fe] is normal. These abundance characteristics, together with its extremely low [Ba/Fe], suggest that this VMP star is not likely to be associated with CCSNe that produce a large amount of neutron-capture elements. The star J2241 exhibits the lowest [Mg/Fe] (-0.61).
The origin of this Mg-poor star can be explained by larger pollution from SNe Ia, or it could have been accreted from a dwarf galaxy that experienced a low star-formation efficiency.

We have also explored the progenitor characteristics (mass and explosion energy) of our EMP stars by comparing their chemical-abundance patterns with those predicted by Pop III SN models. Except for one object (60 \(M_{\odot}\)), our estimated masses of the EMP-star progenitors are in the range of 10 - 26 \(M_{\odot}\), in good agreement with the previous studies by Placco et al. (2015) and Ishigaki et al. (2018). This suggests that the chemical evolution of the early MW was driven primarily by stars with masses \(M<\) 40 \(M_{\odot}\).

Since there are some uncertainties in the derived abundances of some elements due to weak metallic lines, low-S/N spectra, or a small number of measured lines, we plan to carry out high-resolution, high-S/N spectroscopic observations of the chemically peculiar objects to determine more reliable measurements of their elemental abundances. We also plan to supplement the available chemical information for our program stars with a chemodynamical analysis in an upcoming paper.

Software: Astropy (Robitaille et al., 2013; Astropy Collaboration et al., 2018, 2022), matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), SciPy (Virtanen et al., 2020).

We thank Christopher Sneden for his suggestions and advice on the spectral synthesis of the CH \(G\) band. Y.S.L. acknowledges support from the National Research Foundation (NRF) of Korea grant funded by the Ministry of Science and ICT (NRF-2021R1A2C1008679). Y.S.L. also gratefully acknowledges partial support for his visit to the University of Notre Dame from OISE-1927130: The International Research Network for Nuclear Astrophysics (IRENA), awarded by the US National Science Foundation. Y.K.K. acknowledges support from the Basic Science Research Program through the NRF of Korea funded by the Ministry of Education (NRF-2021R1A6A3A01086446). T.C.B. acknowledges partial support for this work from grant PHY 14-30152; Physics Frontier Center/JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the U.S. National Science Foundation. The work of V.M.P. is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This work was supported by the K-GMT Science Program (PIDs: GN-2016A-Q-17, GN-2018B-Q-122, GN-2019B-Q-115, GN-2019B-Q-219, and GN-2019B-Q-310) of the Korea Astronomy and Space Science Institute (KASI). This research is based on observations obtained through the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES). ESPaDOnS is located at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawai'i. ESPaDOnS is a collaborative project funded by France (CNRS, MENESR, OMP, LATT), Canada (NSERC), CFHT and ESA.
ESPaDOnS was remotely controlled from the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
2307.04977
Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks
Maneuvering target tracking will be an important service of future wireless networks to assist innovative applications such as intelligent transportation. However, tracking maneuvering targets by cellular networks faces many challenges. For example, the dense network and high-speed targets make the selection of the sensing nodes (SNs) and the associated power allocation very challenging. Existing methods demonstrated engaging performance, but with high computational complexity. In this paper, we propose a model-driven deep learning (DL)-based approach for SN selection. To this end, we first propose an iterative SN selection method by jointly exploiting the majorization-minimization (MM) framework and the alternating direction method of multipliers (ADMM). Then, we unfold the iterative algorithm as a deep neural network and prove its convergence. The proposed method achieves lower computational complexity, because the number of layers is less than the number of iterations required by the original algorithm, and each layer only involves simple matrix-vector additions/multiplications. Finally, we propose an efficient power allocation method based on fixed point (FP) water filling and solve the joint SN selection and power allocation problem under the alternative optimization framework. Simulation results show that the proposed method achieves better performance than the conventional optimization-based methods with much lower computational complexity.
Lei Xie, Hengtao He, Shenghui Song, Yonina C. Eldar
2023-07-11T02:33:43Z
http://arxiv.org/abs/2307.04977v2
Model-Driven Sensing-Node Selection and Power Allocation for Tracking Maneuvering Targets in Perceptive Mobile Networks

###### Abstract

Maneuvering target tracking will be an important service of future wireless networks to assist innovative applications such as intelligent transportation. However, tracking maneuvering targets by cellular networks faces many challenges. For example, the dense network and high-speed targets make the selection of the sensing nodes (SNs), e.g., base stations, and the associated power allocation very difficult, given the stringent latency requirement of sensing applications. Existing methods have demonstrated engaging tracking performance, but with very high computational complexity. In this paper, we propose a model-driven deep learning approach for SN selection to meet the latency requirement. To this end, we first propose an iterative SN selection method by jointly exploiting the majorization-minimization (MM) framework and the alternating direction method of multipliers (ADMM). Then, we unfold the iterative algorithm as a deep neural network (DNN) and prove its convergence. The proposed model-driven method has a low computational complexity, because the number of layers is less than the number of iterations required by the original algorithm, and each layer only involves simple matrix-vector additions/multiplications. Finally, we propose an efficient power allocation method based on fixed point (FP) water filling (WF) and solve the joint SN selection and power allocation problem under the alternative optimization framework. Simulation results show that the proposed method achieves better performance than the conventional optimization-based methods with much lower computational complexity.

_Index Terms_: Maneuvering target tracking, perceptive mobile network, model-driven deep learning, sensing node selection, power allocation.

## I Introduction

Innovative applications such as intelligent transportation systems require high-precision sensing capabilities, which are unavailable from current cellular networks. To this end, the recently proposed integrated sensing and communication (ISAC) paradigm offers a promising way to share spectrum, hardware, and software between sensing and communication [1, 2]. The perceptive mobile network (PMN) was proposed as a special type of ISAC system that adds high-precision sensing capability to cellular networks [3, 4, 5, 6]. There are many favorable properties of cellular networks that can facilitate sensing. For instance, the large number of sensing nodes (SNs) in PMNs enables collaborative sensing, where multiple perspectives from different SNs are exploited to sense the same target. The SNs can be base stations (BSs) [1], road side units [2], remote radio units [3], or target monitoring terminals [4]. However, tracking maneuvering targets by PMNs faces many challenges. For example, due to the dense cellular network, selecting a proper set of SNs to track a moving target can be very difficult, because the handover from one group of SNs to another faces very stringent latency requirements.

There have been engaging results on SN selection and power allocation for tracking maneuvering targets [7, 8, 9, 10, 11, 12, 13, 14, 15]. The authors of [7] proposed two SN selection methods in wireless networks to minimize the posterior Cramer-Rao lower bound (PCRLB) and to maximize the mutual information between the target location and the measurements of the selected SNs, respectively.
In [8], a cooperative game-theoretic approach was utilized to allocate power for tracking targets in a radar network. The authors of [9] proposed two strategies for resource allocation with given SNs, where one maximizes the tracking accuracy with limited power budgets, and the other minimizes the power consumption with required tracking performance. To achieve better performance, joint SN selection and power allocation schemes were also considered [14, 15]. In [14], a distributed multi-target tracking method was proposed for the networked multiple-input multiple-output (MIMO) radar system, where an alternative optimization (AO)-based method was utilized to solve the bi-variable optimization problem. The Boolean constraint on the SN selection vector is one of the most critical challenges for the joint SN selection and power allocation problem. To handle this issue, a typical method is to relax the Boolean constraint to allow continuous and sparse variables [15, 16, 17]. In [14, 15], the relaxed SN selection was formulated as a semi-definite programming (SDP) problem and solved by the CVX toolbox [18]. Unfortunately, the complexity of the existing methods increases exponentially with the number of SNs, which may violate the stringent latency requirement of sensing applications when a large number of SNs exist.

To this end, model-driven deep learning (DL) offers a promising solution. By unfolding an iterative algorithm as a neural network, where each iteration is implemented by one layer with learnable parameters, model-driven methods have the potential to offer better performance with reduced computational complexity. Some research efforts have been made to utilize model-driven deep neural networks (DNNs) to find sparse solutions with better performance and lower computational costs. In [19], an unfolded vector-approximate message passing network with random initialization was proposed to learn a denoiser identical to the statistically matched one. The authors of [20] unfolded the iterative algorithm used to solve a problem with \(l_{0}\) sparse regularization into a feed-forward neural network for faster inference and better scalability. In [21], a generalized DNN was proposed to learn a sparse solution by unfolding the alternating direction method of multipliers (ADMM) with better accuracy and lower computational cost. The authors of [22] designed an ADMM-Net for interference removal in radar imaging, which exhibited much lower imaging error and computational cost than ADMM and CVX. However, the inversion of high-dimensional matrices is involved in the existing ADMM-based unfolding methods, which causes high storage and computational costs.

In this paper, to meet the stringent latency requirement of sensing applications, we propose a model-driven method for SN selection to track multiple maneuvering targets. For that purpose, we first derive an iterative algorithm for SN selection, leveraging the majorization-minimization (MM) framework and ADMM. Then, the MM-ADMM algorithm is unfolded into a DNN, where the technical challenges lie in the large number of learnable parameters and the uncertain convergence property. To this end, we design a new model-driven DNN with an additional module to exploit the first- and second-order momentum, and refer to it as the deep alternating network (DAN), which has fewer learnable parameters than the directly-unfolded MM-ADMM. The convergence proof of the proposed DAN is also given.
The computational complexity of DAN is low, because the number of layers is less than the number of iterations required by the original algorithm, and each layer of DAN only involves simple matrix-vector additions/multiplications without high-dimensional matrix inversions. Finally, we propose a fixed-point (FP) water-filling (WF)-based method for power allocation, which is derived based on the Lagrange multiplier method. The joint SN selection and power allocation problem is solved by combining the proposed DAN and FP-WF algorithms under the AO framework. Experimental results show that the proposed method can achieve better performance than the optimization-based methods with remarkably lower computational costs.

The contributions of this paper are summarized as follows:

1. We propose an iterative method based on MM and ADMM for SN selection. In particular, we exploit the MM approach to handle the non-convexity of the penalized cost functions. For each iteration of ADMM, we derive explicit expressions for the solution to the constrained optimization problem by exploiting the KKT conditions, which facilitates the development of the model-driven method.

2. We design a new model-driven DNN, named DAN, by adding an additional module to the directly-unfolded MM-ADMM method, which exploits the momentum for accelerating the convergence. Moreover, we provide the convergence proof for DAN, which achieves a similar SN selection performance as the exhaustive search method with significantly lower computational cost.

3. Inspired by the classic WF-based power allocation strategies, we propose an iterative FP-WF power allocation method. Specifically, in each water-filling step, the water level is obtained by solving an FP equation. This approach not only reduces the computational complexity, but also provides an interesting physical insight: the power allocation strategy depends on the ratio between the Fisher information of the predictions and that of the measurements.

The remainder of this paper is organized as follows. Section II introduces the system model and formulates the problem. Section III derives the joint SN selection and power allocation algorithm. Section IV provides the simulation results to validate the advantage of the proposed model-driven method. Section V concludes this paper.

## II System Model and Problem Formulation

In Fig. 1, we show a PMN consisting of one BS serving as the sensing-signal transmitter and \(N\) SNs serving as the receivers of the echoes, which can be BSs or other types of SNs [1, 2, 3, 4]. In each tracking frame, the BS will transmit sensing signals to the predicted positions of multiple targets, and the selected SNs will collaboratively estimate the location and velocity of the targets (motion state). The estimation results will be utilized to predict the motion state in the next tracking frame\({}^{1}\). In this paper, the SN selection and power allocation will be formulated as an optimization problem to minimize the PCRLB for the estimation error of the target motion state. To this end, we first introduce the target motion model and the signal model, which are the foundation for deriving the PCRLB.

Footnote 1: The tracked targets are initialized and the number of targets is known in advance. This assumption can be realized by communication or some available detection approaches, e.g., radio access technology [23], PDA [24], or multi-frame detection [25], before target tracking. The targets are widely separated and each of them moves independently in the monitoring area [13].
Fig. 1: Illustration of the system.

### _Target Motion Model_

The target motion model describes the motion behavior of the targets and affects the Fisher information of the prediction. Assume that the target motion follows a near-constant-velocity model and the transition matrix \(\mathbf{G}\) is given by [11, 12, 13, 15]

\[\mathbf{G}=\mathbf{I}_{2}\otimes\begin{bmatrix}1&\Delta T\\ 0&1\end{bmatrix} \tag{1}\]

where \(\mathbf{I}_{2}\) denotes the \(2\times 2\) identity matrix, \(\otimes\) represents the Kronecker product, and \(\Delta T\) denotes the time between two adjacent tracking frames. In the \(k\)th tracking frame, there are \(Q\) point-like targets, where the \(q\)th target is located at \(\mathbf{r}_{q}^{(k)}=(r_{x,q}^{(k)},r_{y,q}^{(k)})\) with a velocity \(\mathbf{v}_{q}^{(k)}=(v_{x,q}^{(k)},v_{y,q}^{(k)})\). The target motion state is updated by \(\mathbf{x}_{q}^{(k)}=\mathbf{G}\mathbf{x}_{q}^{(k-1)}+\mathbf{z}_{q}^{(k-1)}\), where \(\mathbf{x}_{q}^{(k)}=[r_{x,q}^{(k)},v_{x,q}^{(k)},r_{y,q}^{(k)},v_{y,q}^{(k)}]^{\mathrm{T}}\) includes the parameters to be estimated. Here, \(\mathbf{z}_{q}^{(k)}\) denotes the state noise, which is assumed to be a zero-mean Gaussian vector with covariance matrix [11, 12]

\[\mathbf{Q}=q_{s}\mathbf{I}_{2}\otimes\begin{bmatrix}\frac{1}{3}(\Delta T)^{3}&\frac{1}{2}(\Delta T)^{2}\\ \frac{1}{2}(\Delta T)^{2}&\Delta T\end{bmatrix} \tag{2}\]

where \(q_{s}\) is the intensity of the process noise.

### _Signal Model_

In the \(k\)th tracking frame, the BS will transmit the sensing signal \(\mathbf{s}^{(k)}(t)\) to the targets, and the echoes will be captured by the selected SNs for sensing purposes. The locations of the BS and the \(n\)th SN are given by \(\mathbf{r}_{BS}\) and \(\mathbf{r}_{n}\), respectively. Given the motion state, we can determine the measurements, i.e., the angle of arrival (AOA), the time delay, and the Doppler frequency of the \(q\)th target with respect to the \(n\)th SN as

\[\theta_{q,n}^{(k)} =\arccos\frac{\mathbf{e}_{n}^{\mathrm{T}}(\mathbf{r}_{q}^{(k)}-\mathbf{r}_{n})}{\|\mathbf{r}_{q}^{(k)}-\mathbf{r}_{n}\|}, \tag{3}\]
\[\tau_{q,n}^{(k)} =\frac{1}{c}\left(\|\mathbf{r}_{n}-\mathbf{r}_{q}^{(k)}\|+\|\mathbf{r}_{BS}-\mathbf{r}_{q}^{(k)}\|\right), \tag{4}\]
\[\mu_{q,n}^{(k)} =\frac{\mathbf{v}_{q}^{\mathrm{T}}(\mathbf{r}_{q}^{(k)}-\mathbf{r}_{n})}{\lambda\|\mathbf{r}_{q}^{(k)}-\mathbf{r}_{n}\|}+\frac{\mathbf{v}_{q}^{\mathrm{T}}(\mathbf{r}_{q}^{(k)}-\mathbf{r}_{BS})}{\lambda\|\mathbf{r}_{q}^{(k)}-\mathbf{r}_{BS}\|}, \tag{5}\]

where \(\mathbf{e}_{n}\) represents the unit vector parallel to the line formed by all antennas of the uniform linear array, \(c\) is the speed of light, \(\lambda\) is the wavelength, and \(||\cdot||\) denotes the \(l_{2}\) norm. Define the power allocation vector \(\mathbf{p}^{(k)}=[p_{1}^{(k)},\cdots,p_{Q}^{(k)}]\in\mathbb{R}^{Q\times 1}\), where \(p_{q}^{(k)}\) denotes the power allocated to the \(q\)th target. The baseband echo of the \(q\)th target received by the \(n\)th SN is given by

\[\mathbf{y}_{q,n}^{(k)}(t)=\sqrt{p_{q}^{(k)}}\beta_{q,n}^{(k)}e^{j2\pi\mu_{q,n}^{(k)}t}\mathbf{b}_{q,n}^{(k)}\mathbf{a}_{q,k}^{\mathrm{H}}\mathbf{s}^{(k)}(t-\tau_{q,n}^{(k)})+\mathbf{n}_{n}^{(k)}(t), \tag{6}\]

where \(\mathbf{n}_{n}^{(k)}(t)\) denotes the complex additive white Gaussian noise with zero mean and variance \(\sigma^{2}\).
The transmit and receive steering vectors are given by \(\mathbf{b}_{q,n}^{(k)}=\mathbf{b}(\theta_{q,n}^{(k)})\) and \(\mathbf{a}_{q,k}=\mathbf{a}(\psi_{q}^{(k)})\), respectively, where \(\psi_{q}^{(k)}\) represents the angle of departure (AOD) of the \(q\)th target from the BS. \(\beta_{q,n}^{(k)}\) represents the complex gain of the BS-target-SN (\(q\)th target and \(n\)th SN) path, which accounts for the array gain, the propagation loss, and the target radar cross section (RCS) [26]. Following [11, 12, 13, 14, 15], the local estimation error is modeled as a zero-mean Gaussian vector with the covariance matrix

\[\mathbf{\Sigma}_{q,n}^{(k)}=\mathrm{diag}\left[\sigma_{\theta_{q,n}^{(k)}}^{2},\sigma_{\tau_{q,n}^{(k)}}^{2},\sigma_{\mu_{q,n}^{(k)}}^{2}\right], \tag{7}\]

where \(\sigma_{\theta_{q,n}^{(k)}}^{2}\), \(\sigma_{\tau_{q,n}^{(k)}}^{2}\), and \(\sigma_{\mu_{q,n}^{(k)}}^{2}\) denote the CRLBs for the estimation of the direction, range, and Doppler shift, respectively. The local estimation error affects the Fisher information of the measurement, which will be utilized to derive the PCRLB in the next section.

### _Posterior Cramer-Rao Lower Bound_

Based on the above-mentioned target motion model and signal model, we will derive the PCRLB, which gives the lower bound on the estimation error of the target motion state. Define \(\mathbf{U}^{(k)}=[\mathbf{u}_{1}^{(k)},\cdots,\mathbf{u}_{Q}^{(k)}]\in\mathbb{R}^{N\times Q}\) as the SN selection matrix, whose \((n,q)\)th entry \(u_{q,n}^{(k)}\) is 1 if the \(q\)th target is associated with the \(n\)th SN, and 0 otherwise. The Fisher information matrix (FIM) for the \(q\)th target is given by [27]

\[\mathbf{J}_{q}^{(k)}(p_{q}^{(k)},\mathbf{u}_{q}^{(k)})=\mathbf{J}_{P,q}^{(k)}+\mathbf{J}_{Z,q}^{(k)}, \tag{8}\]

where \(\mathbf{J}_{P,q}^{(k)}\) and \(\mathbf{J}_{Z,q}^{(k)}\) denote the prior and data information matrices, respectively. In particular, the prior information matrix is given by

\[\mathbf{J}_{P,q}^{(k)}=\left(\mathbf{Q}+\mathbf{G}(\mathbf{J}_{q}^{(k-1)})^{-1}\mathbf{G}^{\mathrm{H}}\right)^{-1}. \tag{9}\]

The data information matrix \(\mathbf{J}_{Z,q}^{(k)}\) is given by

\[\mathbf{J}_{Z,q}^{(k)}=\sum_{n=1}^{N}u_{q,n}^{(k)}(\mathbf{H}_{q,n}^{(k)})^{\mathrm{T}}(\mathbf{\Sigma}_{q,n}^{(k)})^{-1}\mathbf{H}_{q,n}^{(k)}, \tag{10}\]

where

\[\mathbf{H}_{q,n}^{(k)}=\frac{\partial\mathbf{g}_{n}^{(k)}}{\partial\mathbf{x}_{q}^{(k)}}\bigg{|}_{\mathbf{x}_{q}^{(k)}=\hat{\mathbf{x}}_{q}^{(k|k-1)}}, \tag{11}\]

with \(\frac{\partial\mathbf{g}_{n}^{(k)}}{\partial\mathbf{x}_{q}^{(k)}}\) denoting the derivative of the measurements \(\mathbf{g}_{n}^{(k)}=[\theta_{q,n}^{(k)}(\mathbf{x}_{q}^{(k)}),\tau_{q,n}^{(k)}(\mathbf{x}_{q}^{(k)}),\mu_{q,n}^{(k)}(\mathbf{x}_{q}^{(k)})]^{\mathrm{T}}\) with respect to the motion state \(\mathbf{x}_{q}^{(k)}\). The predicted motion state of the \(q\)th target in the \(k\)th frame is updated by \(\hat{\mathbf{x}}_{q}^{(k|k-1)}=\mathbf{G}\hat{\mathbf{x}}_{q}^{(k-1)}\), where \(\hat{\mathbf{x}}_{q}^{(k-1)}\) represents the estimated motion state of the \(q\)th target in the \((k-1)\)th frame. Note that \(\mathbf{\Sigma}_{q,n}^{(k)}\) is inversely proportional to the SNR at the SN [11, 12, 13, 14, 15]. Thus, we can rewrite the measurement covariance in (7) as

\[\mathbf{\Sigma}_{q,n}^{(k)}=(p_{q}^{(k)})^{-1}\overline{\mathbf{\Sigma}}_{q,n}^{(k)}, \tag{12}\]

where \(\overline{\mathbf{\Sigma}}_{q,n}^{(k)}\) contains the part of \(\mathbf{\Sigma}_{q,n}^{(k)}\) that is independent of \(p_{q}^{(k)}\). Then, we have \(\mathbf{J}_{Z,q}^{(k)}=p_{q}^{(k)}\sum_{n=1}^{N}u_{q,n}^{(k)}\overline{\mathbf{M}}_{q,n}^{(k)}\), where \(\overline{\mathbf{M}}_{q,n}^{(k)}=(\mathbf{H}_{q,n}^{(k)})^{\mathrm{T}}(\overline{\mathbf{\Sigma}}_{q,n}^{(k)})^{-1}\mathbf{H}_{q,n}^{(k)}\). Note that \(\mathbf{M}_{q,n}^{(k)}=p_{q}^{(k)}\overline{\mathbf{M}}_{q,n}^{(k)}\), i.e., the data information contributed by each selected SN scales linearly with the allocated power. The PCRLB matrix for the \(q\)th target is given by the inverse of the FIM, \(\mathbf{C}_{q}(p_{q}^{(k)},\mathbf{u}_{q}^{(k)})=\left(\mathbf{J}_{q}^{(k)}(p_{q}^{(k)},\mathbf{u}_{q}^{(k)})\right)^{-1}\), which lower-bounds the covariance of any unbiased estimator of the motion state \(\mathbf{x}_{q}^{(k)}\).
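As a quick sanity check of this bookkeeping, the following Python sketch evaluates \(\log\det\mathbf{C}_{q}\) for a given selection vector and power. The per-SN matrices are random positive semi-definite stand-ins rather than the actual Jacobians of (3)-(5), so the numbers are illustrative only.

```python
# Minimal numerical sketch of the PCRLB bookkeeping above: the FIM is the sum
# of the prior information and the power-weighted data information of the
# selected SNs, and the cost is log det of its inverse. The per-SN matrices
# M_bar are random positive semi-definite stand-ins, not real measurements.
import numpy as np

rng = np.random.default_rng(1)
N, dim = 8, 4                               # SNs, motion-state dimension

A = rng.normal(size=(N, 3, dim))            # stand-ins for H_n (3 measurements)
M_bar = np.einsum("nij,nik->njk", A, A)     # M_bar_n = H_n^T H_n  (PSD)
J_prior = np.eye(dim)                       # stand-in for J_P in (9)

def log_det_pcrlb(u, p):
    """u: 0/1 selection vector of length N, p: transmit power."""
    J = J_prior + p * np.einsum("n,njk->jk", u, M_bar)   # (8) with (10)
    sign, logdet = np.linalg.slogdet(J)
    return -logdet                          # log det C = -log det J

u = np.zeros(N); u[:3] = 1                  # select the first 3 SNs
print("log det C =", log_det_pcrlb(u, p=2.0))
```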
Then, we have \(\mathbf{J}_{Z,q}^{(k)}=p_{q}^{(k)}\sum_{n=1}^{N}u_{q,n}^{(k)}\overline{ \mathbf{M}}_{q,n}^{(k)}\), where \(\overline{\mathbf{M}}_{q,n}^{(k)}=(\mathbf{H}_{q,n}^{(k)})^{\mathrm{T}}( \mathbf{\Sigma}_{q,n}^{(k)})^{-1}\mathbf{H}_{q,n}^{(k)}\). Note that \(\mathbf{M}_{q,n}^{(k)}=p_{q}^{(k)} ### _Problem Formulation_ We want to minimize the PCRLB through SN selection and power allocation. In the \(k\)th frame, the problem is modeled as \[\min_{\mathbf{p}^{(k)},\mathbf{U}^{(k)}} \sum_{q=1}^{Q}\log\det\mathbf{C}_{q}(p_{q}^{(k)},\mathbf{u}_{q}^{(k )})\] s.t. \[\sum_{q=1}^{Q}p_{q}^{(k)}\leq P_{T}, \tag{15a}\] \[p_{q}^{(k)}\geq P_{\min},\] (15b) \[\mathbf{1}^{\mathrm{T}}\mathbf{u}_{q}^{(k)}\leq N_{\max},q=1,2, \cdots,Q,\] (15c) \[\mathbf{U}^{(k)}\in\{0,1\}^{N\times Q}, \tag{15d}\] where constraint (15a) limits the total transmit power. Constraint (15b) indicates the minimum power allocated to each target, constraint (15c) limits the maximum number of SNs to track one target [30], and (15d) gives the binary constraint on \(\mathbf{u}_{q}^{(k)}\). The main reasons to select \(\log\det(\mathbf{C}_{q})\) as the performance metric include: 1) the determinant of \(\mathbf{C}_{q}\) is proportional to the volume of the minimum achievable covariance ellipsoid, which is widely used as an important metric for parameter estimation [28, 29]; and 2) if the determinant is directly used, the original problem (15) is not convex, but the monotonic logarithmic transformations can render this problem convex. ## III Model-Driven Sensing Node Selection and Power Allocation Scheme Note that the problem in (15) has two variables. To handle this issue, we propose to update the variables alternatively based on the AO theory. With a given feasible starting point \(\left\{\mathbf{p}^{(k,0)},\{\mathbf{u}_{q}^{(k,0)}\}_{q=1}^{Q}\right\}\), we iteratively perform the following two operations: 1) updating \(\{\mathbf{u}_{q}^{(k,j+1)}\}_{q=1}^{Q}\) with fixed \(\mathbf{p}^{(k,j)}\) via \[\mathbf{u}_{q}^{(k,j+1)}= \arg\min_{\mathbf{u}_{q}^{(k)}}\log\det\mathbf{C}_{q}(p_{q}^{(k,j )},\mathbf{u}_{q}^{(k)}), \tag{16}\] 2) updating \(\mathbf{p}^{(k,j+1)}\) with fixed \(\{\mathbf{u}_{q}^{(k,j+1)}\}_{q=1}^{Q}\) via \[\mathbf{p}^{(k,j+1)}=\arg\min_{\mathbf{p}^{(k)}}\sum_{q=1}^{Q}\log\det\mathbf{ C}_{q}(p_{q}^{(k)},\mathbf{u}_{q}^{(k,j+1)}), \tag{17}\] which decouple the SN selection and power allocation problem. In the following, we will first derive an iterative method for SN selection by jointly exploiting the MM framework and ADMM. To further reduce the computational complexity, we will develop a model-driven approach to solve (16). Finally, we will propose an FP-based WF method to solve (17), which has much lower complexity but offers comparable performance as the traditional CVX-based method. ### _MM-ADMM based Sensing Node Selection_ Given \(\mathbf{p}^{(k,j)}\), the problem in (16) can be formulated as \[\min_{\mathbf{u}_{q}^{(k)}} \mathcal{F}_{u}(\mathbf{u}_{q}^{(k)})\] (18) s.t. \[\mathbf{1}^{\mathrm{T}}\mathbf{u}_{q}^{(k)}\leq N_{\max},\; \mathbf{u}_{q}^{(k)}\in\{0,1\}^{N\times 1},\] where \(\mathcal{F}_{u}(\mathbf{u}_{q}^{(k)})=\log\det\mathbf{C}_{q}(\mathbf{u}_{q}^{ (k)}|p_{q}^{(k,j)})\). In order to enforce a binary solution and simplify the problem, we introduce a \(l_{0}\) pseudo-norm penalty to the objective function and relax the binary constraint [31]. Then, the problem in (18) is relaxed as \[\min_{\mathbf{u}_{q}^{(k)}} \mathcal{F}_{u}(\mathbf{u}_{q}^{(k)})+\rho_{q}\|\mathbf{u}_{q}^{( k)}\|_{0}\] (19) s.t. 
where \(\|\cdot\|_{0}\) denotes the \(l_{0}\) pseudo-norm. In general, a larger \(\rho_{q}\) leads to a sparser \(\mathbf{u}_{q}^{(k)}\). Due to the non-convex, non-continuous, and combinatorial nature of the \(l_{0}\) pseudo-norm, the problem (19) is NP-hard. To simplify the notation, we omit the index \(q\) hereafter unless doing so creates confusion. Inspired by [32], we approximate the \(l_{0}\) pseudo-norm by the function \(\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})=\sum_{m=1}^{N}(1-e^{-\gamma u_{m}^{(k)}})\), where \(\gamma\) is a sufficiently large constant. \(\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})\) is utilized due to several favorable properties: 1) it is asymptotically equivalent to \(\|\mathbf{u}^{(k)}\|_{0}\), i.e., \(\lim_{\gamma\rightarrow\infty}\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})=\sum_{m=1}^{N}(1-\delta(u_{m}^{(k)}))=\|\mathbf{u}^{(k)}\|_{0}\); 2) it is continuous, concave, and non-decreasing in the feasible set; and 3) it is differentiable and its gradient is easy to obtain.

#### III-A1 MM framework for solving (19)

The problem in (19) can be approximated by

\[\min_{\mathbf{u}^{(k)}\in\mathcal{S}_{\mathbf{u}}}\mathcal{F}_{u}(\mathbf{u}^{(k)})+\rho\mathcal{P}_{\gamma}(\mathbf{u}^{(k)}) \tag{20}\]

where \(\mathcal{S}_{u}=\{\mathbf{u}^{(k)}|\mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k)}=N_{\max},\mathbf{0}\leq\mathbf{u}^{(k)}\leq\mathbf{1}\}\). Though \(\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})\) is continuous w.r.t. \(\mathbf{u}^{(k)}\), the problem in (20) is still hard to solve, due to the complicated form of \(\mathcal{F}_{u}(\mathbf{u}^{(k)})\) w.r.t. \(\mathbf{u}^{(k)}\). To handle this difficulty, we propose to utilize the MM framework [33], based on which (20) can be solved in an iterative process. At each iteration, the MM framework updates the optimization variable by minimizing a tight upper bound of the cost function, known as the surrogate function. The next question is how to construct a surrogate function for the objective in (20). Since \(\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})\) is differentiable and concave with respect to \(\mathbf{u}^{(k)}\), it is upper-bounded by its first-order Taylor expansion, i.e.,

\[\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})\leq\widetilde{\mathcal{P}}_{\gamma}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)})\triangleq\mathcal{P}_{\gamma}(\mathbf{u}^{(k,l)})+(\mathbf{d}_{\gamma}^{(k,l)})^{\mathrm{T}}(\mathbf{u}^{(k)}-\mathbf{u}^{(k,l)}), \tag{21}\]

where \(\mathbf{u}^{(k,l)}\) denotes the optimized result at the \(l\)th iteration, \(\mathbf{d}_{\gamma}^{(k,l)}=\gamma[e^{-\gamma u_{1}^{(k,l)}},e^{-\gamma u_{2}^{(k,l)}},\cdots,e^{-\gamma u_{N}^{(k,l)}}]^{\mathrm{T}}\) represents the gradient of \(\mathcal{P}_{\gamma}(\mathbf{u}^{(k)})\) at \(\mathbf{u}^{(k,l)}\), and \(u_{m}^{(k,l)}\) denotes the \(m\)th entry of \(\mathbf{u}^{(k,l)}\).
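As a quick numerical illustration (a sketch, not code from the paper), the following Python snippet evaluates \(\mathcal{P}_{\gamma}\) and its gradient \(\mathbf{d}_{\gamma}\), showing that the surrogate indeed approaches the \(l_{0}\) pseudo-norm as \(\gamma\) grows:

```python
# Quick check of the smooth l0 surrogate P_gamma(u) = sum_m (1 - exp(-gamma*u_m))
# and its gradient, which appears in the linear upper bound (21).
import numpy as np

def P_gamma(u, gamma):
    return np.sum(1.0 - np.exp(-gamma * u))

def grad_P_gamma(u, gamma):
    return gamma * np.exp(-gamma * u)       # entrywise gradient d_gamma

u = np.array([0.0, 0.0, 0.4, 1.0])          # two zero entries -> ||u||_0 = 2
for gamma in (10.0, 100.0, 1000.0):
    print(gamma, P_gamma(u, gamma))          # approaches 2 as gamma grows
print("gradient at gamma = 100:", grad_P_gamma(u, 100.0))
```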
An appropriate upper bound of \(\mathcal{F}_{u}(\mathbf{u}^{(k)})\) can be obtained by

\[\widetilde{\mathcal{G}}_{1}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)})\triangleq\mathcal{F}_{u}(\mathbf{u}^{(k,l)})+\mathbf{d}_{u}^{\mathrm{T}}(\mathbf{u}^{(k,l)})(\mathbf{u}^{(k)}-\mathbf{u}^{(k,l)})+\frac{1}{2}(\mathbf{u}^{(k)}-\mathbf{u}^{(k,l)})^{\mathrm{T}}\mathbf{T}^{(k,l)}(\mathbf{u}^{(k)}-\mathbf{u}^{(k,l)}), \tag{22}\]

where \(\mathbf{d}_{u}^{(k,l)}=\mathbf{d}_{u}(\mathbf{u}^{(k,l)})\) and \(\mathbf{d}_{u}(\mathbf{u}^{(k)})=\frac{\partial\mathcal{F}_{u}(\mathbf{u}^{(k)})}{\partial\mathbf{u}^{(k)}}\) denotes the gradient of \(\mathcal{F}_{u}(\mathbf{u}^{(k)})\) w.r.t. \(\mathbf{u}^{(k)}\), whose \(n\)th entry is given by \(d_{u,n}(\mathbf{u}^{(k)})=\frac{\partial\mathcal{F}_{u}(\mathbf{u}^{(k)})}{\partial u_{n}^{(k)}}=-\mathrm{tr}\left((\mathbf{J}^{(k)}(\mathbf{u}^{(k)}|p^{(k,j)}))^{-1}\mathbf{M}_{n}^{(k)}\right).\) The positive-definite matrix \(\mathbf{T}^{(k,l)}\) should satisfy

\[\mathbf{T}^{(k,l)}\succeq\mathbf{H}_{u}(\mathbf{u}^{(k,l)}), \tag{23}\]

where \(\mathbf{H}_{u}(\mathbf{u}^{(k)})=\frac{\partial^{2}\mathcal{F}_{u}(\mathbf{u}^{(k)})}{\partial\mathbf{u}^{(k)}\partial(\mathbf{u}^{(k)})^{\mathrm{T}}}\) denotes the Hessian matrix of \(\mathcal{F}_{u}(\mathbf{u}^{(k)})\) w.r.t. \(\mathbf{u}^{(k)}\), whose \((m,n)\)th entry is given by \(H_{u,m,n}(\mathbf{u}^{(k)})=\frac{\partial^{2}\mathcal{F}_{u}(\mathbf{u}^{(k)})}{\partial u^{(k)}_{m}\partial u^{(k)}_{n}}=\mathrm{tr}\left(\mathbf{M}^{(k)}_{m}\left(\mathbf{J}^{(k)}(\mathbf{u}^{(k)}|p^{(k,j)})\right)^{-2}\mathbf{M}^{(k)}_{n}\right)\). Then, at the \((l+1)\)th iteration, the selection vector can be updated by solving the problem

\[\min_{\mathbf{u}^{(k)}\in\mathcal{S}_{u}}\mathcal{G}(\mathbf{u}^{(k)}), \tag{24}\]

where the surrogate function \(\mathcal{G}(\mathbf{u}^{(k)})\) is defined by

\[\mathcal{G}(\mathbf{u}^{(k)})=\widetilde{\mathcal{G}}_{1}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)})+\rho\widetilde{\mathcal{P}}_{\gamma}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)}). \tag{25}\]

The problem in (24) is convex and can be solved by using the general CVX toolbox based on the interior-point method [18]. However, the computational complexity of CVX is about \(\mathcal{O}(N^{3.5})\), which is not suitable for PMNs with a large \(N\).

#### III-A2 ADMM-based method for solving (24)

To solve (24) efficiently, we exploit the ADMM, which splits the problem into two distinct parts and handles them separately [34]. Since (25) is Lipschitz continuous, the convergence of the ADMM can be guaranteed. By introducing an auxiliary variable \(\mathbf{v}^{(k)}\), (24) is equivalent to

\[\min_{\mathbf{u}^{(k)},\mathbf{v}^{(k)}}\ \widetilde{\mathcal{G}}_{1}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)})+\rho\widetilde{\mathcal{P}}_{\gamma}(\mathbf{v}^{(k)}|\mathbf{u}^{(k,l)})\]
\[\text{s.t.}\ \mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k)}=N_{\mathrm{max}},\;\mathbf{0}\leq\mathbf{v}^{(k)}\leq\mathbf{1},\;\mathbf{u}^{(k)}=\mathbf{v}^{(k)}, \tag{26}\]

which leads to the augmented Lagrangian function [34]

\[\mathcal{L}(\mathbf{u}^{(k)},\mathbf{v}^{(k)},\mathbf{z}^{(k)})=\widetilde{\mathcal{G}}_{1}(\mathbf{u}^{(k)}|\mathbf{u}^{(k,l)})+\rho\widetilde{\mathcal{P}}_{\gamma}(\mathbf{v}^{(k)}|\mathbf{u}^{(k,l)})+\frac{\rho_{a,l}}{2}\|\mathbf{u}^{(k)}-\mathbf{v}^{(k)}+\mathbf{z}^{(k)}\|^{2}, \tag{27}\]

where \(\mathbf{z}^{(k)}\) is the dual variable and \(\rho_{a,l}\) is a penalty parameter at the \(l\)th iteration.
Then, at the \(m\)th ADMM iteration, the optimization variables are updated as

\[\mathbf{u}^{(k,l)}_{m+1}=\arg\min_{\mathbf{u}^{(k)}}\mathcal{L}(\mathbf{u}^{(k)},\mathbf{v}^{(k,l)}_{m},\mathbf{z}^{(k,l)}_{m})\quad\text{s.t.}\ \mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k)}=N_{\mathrm{max}}, \tag{28a}\]
\[\mathbf{v}^{(k,l)}_{m+1}=\arg\min_{\mathbf{v}^{(k)}}\mathcal{L}(\mathbf{u}^{(k,l)}_{m+1},\mathbf{v}^{(k)},\mathbf{z}^{(k,l)}_{m})\quad\text{s.t.}\ \mathbf{0}\leq\mathbf{v}^{(k)}\leq\mathbf{1}, \tag{28b}\]
\[\mathbf{z}^{(k,l)}_{m+1}=\mathbf{z}^{(k,l)}_{m}+\mathbf{u}^{(k,l)}_{m+1}-\mathbf{v}^{(k,l)}_{m+1}, \tag{28c}\]

where \(\mathbf{u}^{(k,l)}_{m}\), \(\mathbf{v}^{(k,l)}_{m}\), and \(\mathbf{z}^{(k,l)}_{m}\) denote \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{z}\) at the \(m\)th ADMM iteration, respectively.

_Update \(\mathbf{u}^{(k,l)}_{m+1}\) via (28a):_ By utilizing the Lagrange multiplier method, (28a) can be reformulated as an unconstrained problem, whose Lagrange function is given by \(\mathcal{L}_{u}(\mathbf{u}^{(k)})=\mathcal{L}(\mathbf{u}^{(k)},\mathbf{v}^{(k,l)}_{m},\mathbf{z}^{(k,l)}_{m})+\nu_{l}(N_{\mathrm{max}}-\mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k)})\), where \(\nu_{l}\) is a Lagrange multiplier. The closed-form solution to (28a) is

\[\mathbf{u}^{(k,l)}_{m+1}=\mathbf{u}^{(k,l)}-\mathbf{\Phi}^{-1}_{l}(\mathbf{d}^{(k,l)}_{m}-\nu_{l}\mathbf{1}), \tag{29}\]

where \(\mathbf{\Phi}_{l}=\mathbf{T}^{(k,l)}+\rho_{a,l}\mathbf{I}\) and \(\mathbf{d}^{(k,l)}_{m}=\mathbf{d}^{(k,l)}_{u}-\rho_{a,l}(\mathbf{v}^{(k,l)}_{m}-\mathbf{z}^{(k,l)}_{m})\). By substituting (29) into the constraint of (28a), we have

\[\nu_{l}=\frac{N_{\mathrm{max}}-\mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k,l)}+\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{d}^{(k,l)}_{m}}{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{1}}=\frac{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{d}^{(k,l)}_{m}}{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{1}}, \tag{30}\]

which follows from the fact that \(N_{\mathrm{max}}=\mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k,l)}\). Therefore, the closed-form solution to (28a) is given by

\[\mathbf{u}^{(k,l)}_{m+1}=\mathbf{u}^{(k,l)}-\mathbf{\Phi}^{-1}_{l}\left(\mathbf{d}^{(k,l)}_{m}-\frac{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{d}^{(k,l)}_{m}}{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}^{-1}_{l}\mathbf{1}}\mathbf{1}\right). \tag{31}\]

One remaining problem is how to determine \(\mathbf{\Phi}_{l}\), which is equivalent to choosing a proper \(\mathbf{T}^{(k,l)}\). Indeed, it is not difficult to find a matrix \(\mathbf{T}^{(k,l)}\) that satisfies (23), such as \(\mathbf{T}^{(k,l)}=\mathbf{H}_{u}(\mathbf{u}^{(k,l)})+\epsilon\mathbf{I}\), where \(\epsilon\) is a positive constant that makes \(\mathbf{T}^{(k,l)}\) positive definite. However, the matrix inversion of \(\mathbf{\Phi}_{l}\) is involved in (31) when updating \(\mathbf{u}^{(k,l)}_{m+1}\), which may be computationally expensive due to the large number of SNs. To tackle this issue, \(\mathbf{T}^{(k,l)}\) is desired to be a diagonal matrix. One feasible solution is to make \(\mathbf{T}^{(k,l)}\) proportional to the identity matrix, i.e., [35]

\[\mathbf{T}^{(k,l)}=C^{(k,l)}_{T}\mathbf{I}, \tag{32}\]

where \(C^{(k,l)}_{T}\) is a positive constant chosen to satisfy (23). For example, one feasible choice is \(C^{(k,l)}_{T}=\lambda_{\mathrm{max}}\left(\mathbf{H}_{u}(\mathbf{u}^{(k,l)})\right)\), where \(\lambda_{\mathrm{max}}(\mathbf{X})\) denotes the principal eigenvalue of \(\mathbf{X}\).
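The following Python sketch implements the closed-form update (31) for a diagonal \(\mathbf{\Phi}_{l}\), using random stand-ins for the inputs; note that only vector operations are required and that the equality constraint \(\mathbf{1}^{\mathrm{T}}\mathbf{u}^{(k)}=N_{\max}\) is preserved by construction:

```python
# Sketch of the closed-form u-update (31) with a diagonal Phi_l, as advocated
# above: only vector operations are needed, and the equality constraint
# 1^T u = N_max is preserved by construction. All inputs are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
N, N_max = 10, 3

u = np.full(N, N_max / N)                   # feasible starting point
d = rng.normal(size=N)                      # stand-in for d_m^(k,l)
phi_inv = 1.0 / (np.abs(rng.normal(size=N)) + 1.0)  # diag(Phi_l)^{-1}

nu = phi_inv @ d / phi_inv.sum()            # Lagrange multiplier (30)
u_next = u - phi_inv * (d - nu)             # update (31)

print("1^T u before:", u.sum(), " after:", u_next.sum())  # both N_max
```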
_Update \(\mathbf{v}^{(k,l)}_{m+1}\) via (28b):_ Since (28b) is convex, the closed-form solution \(\mathbf{v}^{(k,l)}_{m+1}\) to (28b) can be obtained based on the KKT conditions, whose \(n\)th entry is given by

\[v^{(k,l)}_{m+1,n}=\begin{cases}\widetilde{v}_{n},&\text{if }0\leq\widetilde{v}_{n}\leq 1,\\ 0,&\text{if }\widetilde{v}_{n}<0,\\ 1,&\text{if }\widetilde{v}_{n}>1,\end{cases} \tag{33}\]

where \(\widetilde{v}_{n}\) denotes the \(n\)th entry of \(\widetilde{\mathbf{v}}\), given by

\[\widetilde{\mathbf{v}}=-\frac{\rho}{\rho_{a,l}}\mathbf{d}^{(k,l)}_{\gamma}+\mathbf{u}^{(k,l)}_{m+1}+\mathbf{z}^{(k,l)}_{m}. \tag{34}\]

**Remark 1**: _The augmented Lagrangian function in (27) will not increase after the updates (28a)-(28c), which guarantees the convergence of the ADMM iterations._

When unfolding the above MM-ADMM iterations into a DNN, a natural question is how to parameterize \(\mathbf{T}^{(k,l)}\). One feasible way is to treat the diagonal elements of \(\mathbf{T}^{(k,l)}\) as the learnable parameters. In this case, the number of learnable parameters is \(N\) at each layer, which will be large due to the dense SNs. Moreover, the trained \(\mathbf{T}^{(k,l)}\) may break the convergence condition (23). These issues motivate us to consider another design with three desirable properties: 1) the number of learnable parameters is moderate; 2) the convergence property is guaranteed; and 3) the proposed method is restricted to first-order methods that only require gradients, since higher-order optimization methods may cost a large amount of computing and storage resources.

### _Deep-Alternative-Network: DNN Based Sensing Node Selection_

To derive a DNN with the above-mentioned properties, we unfold the MM-ADMM-based SN selection method and introduce an additional module. The new DNN is called DAN. As shown in Fig. 2, DAN consists of \(L\) cascaded layers with some learnable parameters, where the \((l+1)\)th layer takes the first- and second-order momentum \(\hat{\mathbf{m}}_{l-1}\) and \(\hat{\mathbf{v}}_{l-1}\), the gradients \(\mathbf{d}_{u}^{(k,l)}\) and \(\mathbf{d}_{\gamma}^{(k,l)}\), and the output of the previous layer \(\mathbf{u}^{(k,l)}\) as inputs, and outputs the update \(\mathbf{u}^{(k,l+1)}\). In particular, the \((l+1)\)th layer updates \(\mathbf{u}_{m}^{(k,l)}\), \(\mathbf{v}_{m}^{(k,l)}\), and \(\mathbf{z}_{m}^{(k,l)}\) alternately, as shown by the blue, green, and orange blocks in Fig. 2, respectively. The update of \(\mathbf{u}_{m+1}^{(k,l)}\) is of the same form as (31), but we make the following two modifications, as shown by the red block in Fig. 2:

1) \(\mathbf{d}_{m}^{(k,l)}\) is constructed as

\[\mathbf{d}_{m}^{(k,l)}=\hat{\mathbf{m}}_{l}-\rho_{a,l}(\mathbf{v}_{m}^{(k,l)}-\mathbf{z}_{m}^{(k,l)}), \tag{35}\]

where

\[\hat{\mathbf{m}}_{l}=\beta_{1,l}\hat{\mathbf{m}}_{l-1}+(1-\beta_{1,l})\mathbf{d}_{u}^{(k,l)}. \tag{36}\]

Here, we define \(\beta_{1,l}=\beta_{1}\eta_{1}^{l}\), where \(\eta_{1}\in(0,1)\) is a constant and \(\beta_{1}\in(0,1)\) is a learnable hyper-parameter, so that the momentum does not diverge severely. When \(\beta_{1,l}=0\), the first-order momentum \(\hat{\mathbf{m}}_{l}\) reduces to the gradient \(\mathbf{d}_{u}^{(k,l)}\). The momentum terms introduced by a non-zero \(\beta_{1,l}\) may improve the performance significantly, especially in deep learning applications.
2) \(\mathbf{\Phi}_{l}\) is constructed as \[\mathbf{\Phi}_{l}=\hat{\mathbf{T}}^{(k,l)}+\rho_{a,l}\mathbf{I}, \tag{37}\] where \(\hat{\mathbf{T}}^{(k,l)}\triangleq\mathrm{diag}\left(\left[\frac{\sqrt{|\hat{v}_{l,1}|}}{\alpha_{1,l}},\cdots,\frac{\sqrt{|\hat{v}_{l,N}|}}{\alpha_{1,l}}\right]\right)\) and \(\rho_{a,l}=\rho_{a}\eta_{a}^{l}\) with \(\eta_{a}\in(0,1)\). Here, \(\hat{v}_{l,i}\) denotes the \(i\)th entry of the second-order momentum \(\hat{\mathbf{v}}_{l}\), which is defined by \[\hat{\mathbf{v}}_{l}=\beta_{2}\hat{\mathbf{v}}_{l-1}+(1-\beta_{2})(\mathbf{d}_{u}^{(k,l)})^{2}, \tag{38}\] where \(\beta_{2}\) denotes a constant to control the second-order momentum, and \(\alpha_{1,l}=\frac{\bar{\alpha}_{1,l}}{\sqrt{l}}\) with \(\bar{\alpha}_{1,l}\in[\alpha_{1}^{-},\alpha_{1}^{+}]\) representing a set of learnable parameters to control the update step size. Here, the positive constants \(\alpha_{1}^{-}\) and \(\alpha_{1}^{+}\) are the lower and upper bounds of \(\bar{\alpha}_{1,l}\). We refer to the diagonal elements of \(\mathbf{\Phi}_{l}^{-1}\) as the learning rates of this algorithm, the \(i\)th of which is given by \(\phi_{l,i}^{-1}=\left(\sqrt{|\hat{v}_{l,i}|}/\alpha_{1,l}+\rho_{a,l}\right)^{-1}.\) Learning rate decay is critical for training neural networks. In the early training stage, a large learning rate can accelerate training and help the network escape spurious local minima. Toward the end of the iterations, a small learning rate helps the network converge to a local minimum and avoid oscillation. Therefore, we desire a set of \(\rho_{a,l}\) and \(\alpha_{1,l}\) such that, for any \(l\in\{2,\cdots,L\}\) and \(i\in\{1,\cdots,N\}\), we have \(\phi_{l,i}^{-1}\leq\phi_{l-1,i}^{-1}\). The updates are inspired by the adaptive momentum (Adam) method [37], i.e., an algorithm for first-order gradient-based optimization. Adam is chosen due to its favorable properties: 1) simple implementation, computational efficiency, and low memory requirements; 2) adaptability to large-scale problems; and 3) adaptation to sparse gradients [37]. Based on the adaptive estimates of the first- and second-order momenta, we propose a novel construction of \(\mathbf{d}_{m}^{(k,l)}\) and \(\hat{\mathbf{T}}^{(k,l)}\) as well as the resultant \(\mathbf{\Phi}_{l}\), which can meet the constraint in (15c) and the diagonal requirement simultaneously. But different from Adam, the update has additional terms resulting from the original MM-ADMM and one learnable step size \(\alpha_{1,l}\) to control the iteration process. Compared with training all diagonal elements of \(\hat{\mathbf{T}}^{(k,l)}\), the learnable parameters in the DAN are changed to \(\bar{\alpha}_{1,l}\) and \(\beta_{1}\). The total number of learnable parameters over all layers is reduced from \(LN\) to \(L+1\). The updates of \(\mathbf{v}_{m+1}^{(k,l)}\) and \(\mathbf{z}_{m+1}^{(k,l)}\) are the same as (33) and (28c), respectively. With given \(\hat{\mathbf{m}}_{l}\) and \(\mathbf{\Phi}_{l}\), the Lagrange function \(\mathcal{L}(\mathbf{u}^{(k)},\mathbf{v}^{(k)},\mathbf{z}^{(k)}|\hat{\mathbf{m}}_{l},\mathbf{\Phi}_{l})\) defined in (27) will not increase after updating \(\mathbf{u}_{m}^{(k,l)}\), \(\mathbf{v}_{m}^{(k,l)}\) and \(\mathbf{z}_{m}^{(k,l)}\) by (31), (33), and (28c), respectively. The modified ADMM iteration will also converge to a set of stationary points denoted by \(\mathbf{u}_{(\star)}^{(k)}\), \(\mathbf{v}_{(\star)}^{(k)}\), and \(\mathbf{z}_{(\star)}^{(k)}\). Therefore, we have \[\mathbf{u}^{(k,l+1)}=\mathbf{u}_{\star}^{(k,l)}=\mathbf{u}^{(k,l)}-\mathbf{\Phi}_{l}^{-1}\left(\mathbf{d}_{\star}^{(k,l)}-\nu_{l}\mathbf{1}\right), \tag{39}\] where \[\mathbf{d}_{\star}^{(k,l)}=\hat{\mathbf{m}}_{l}-\rho_{a,l}(\mathbf{v}_{\star}^{(k,l)}-\mathbf{z}_{\star}^{(k,l)}),\ \nu_{l}=\frac{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}_{l}^{-1}\mathbf{d}_{\star}^{(k,l)}}{\mathbf{1}^{\mathrm{T}}\mathbf{\Phi}_{l}^{-1}\mathbf{1}}. \tag{40}\]
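Putting the two modifications together, one DAN layer implementing (35)-(40) can be sketched in Python as follows; the treatment of \(\bar{\alpha}_{1,l}\) as a plain float and all variable names are illustrative assumptions rather than the trained network.

```python
import numpy as np

def dan_layer(u, v, z, d_u, d_v, m_hat, v_hat, l,
              beta1, eta1, beta2, alpha_bar, rho, rho_a, eta_a):
    """One DAN layer: Adam-style momenta replace the gradient and the
    Hessian surrogate of MM-ADMM (a sketch, not the trained model)."""
    beta1_l = beta1 * eta1**l                        # beta_{1,l}
    rho_a_l = rho_a * eta_a**l                       # rho_{a,l}
    m_hat = beta1_l * m_hat + (1 - beta1_l) * d_u    # eq. (36)
    v_hat = beta2 * v_hat + (1 - beta2) * d_u**2     # eq. (38)
    alpha1_l = alpha_bar / np.sqrt(l)                # learnable step size
    t_diag = np.sqrt(np.abs(v_hat)) / alpha1_l       # diagonal of T-hat, eq. (37)
    phi_inv = 1.0 / (t_diag + rho_a_l)               # per-coordinate learning rates
    d_m = m_hat - rho_a_l * (v - z)                  # eq. (35)
    nu = (phi_inv * d_m).sum() / phi_inv.sum()       # eq. (40)
    u_new = u - phi_inv * (d_m - nu)                 # eq. (39)
    v_new = np.clip(-(rho / rho_a_l) * d_v + u_new + z, 0.0, 1.0)  # eqs. (33)-(34)
    z_new = z + u_new - v_new                        # eq. (28c)
    return u_new, v_new, z_new, m_hat, v_hat
```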
### _Convergence of DAN_

Until now, we have developed a new model-driven method for SN selection. However, the obtained \(\hat{\mathbf{T}}^{(k,l)}\) may not satisfy (23), which indicates that the convergence property of the MM framework is questionable. To address this issue, we next analyze the convergence of the proposed DAN.

Fig. 2: Illustration of DAN.

For any sequence \(\{\mathbf{u}^{(k,l)}\}_{l=1}^{L}\) generated by the proposed DAN, the regret function is defined as \[R_{L}\triangleq\sum_{l=1}^{L}\left(\mathcal{G}(\mathbf{u}^{(k,l)})-\mathcal{G}(\mathbf{u}^{(k,\star)})\right), \tag{41}\] where \(\mathbf{u}^{(k,\star)}=\arg\min_{\mathbf{u}^{(k)}\in\mathcal{S}_{u}}\mathcal{G}(\mathbf{u}^{(k)})\) denotes the best stationary point in the feasible set \(\mathcal{S}_{u}\). Generally speaking, the regret function is the sum of the differences between \(\mathcal{G}(\mathbf{u}^{(k,l)})\) and \(\mathcal{G}(\mathbf{u}^{(k,\star)})\), and is widely used in convergence proofs [37]. Note that the feasible set has a bounded diameter, i.e., for all \(\mathbf{u},\mathbf{v}\in\mathcal{S}_{u}\), \(||\mathbf{u}-\mathbf{v}||^{2}\leq D_{\Delta}\). Define \(D_{u,1}\triangleq\max\limits_{l}||\mathbf{d}_{u}^{(k,l)}||_{1}\), \(D_{\phi}\triangleq\max\limits_{l}\max\limits_{i}\phi_{l,i}^{-1}\), \(D_{b,1}\triangleq\max\limits_{l}||\hat{\mathbf{b}}_{l}||_{1}\), and \(D_{b,2}\triangleq\max\limits_{l}||\hat{\mathbf{b}}_{l}||^{2}\), where \(\hat{\mathbf{b}}_{l}=\mathbf{v}_{\star}^{(k,l)}-\mathbf{z}_{\star}^{(k,l)}\). Then, we have the following theorem for the convergence analysis. **Theorem 1**: _Assume that, for all \(l\in[2,L]\), \(\phi_{l,i}^{-1}\leq\phi_{l-1,i}^{-1}\). The regret is bounded by_ \[R_{L}\leq C_{1}\sqrt{L}+C_{2}, \tag{42}\] _where \(C_{1}=\frac{\sqrt{1-\beta_{2}}D_{u,1}D_{\Delta}}{\alpha_{1}^{-}(1-\sqrt{\beta_{2}})(1-\beta_{1})}\) and \(C_{2}\) is defined by (43), given at the top of this page._ _Proof:_ See Appendix A. Since \(C_{1}\) and \(C_{2}\) are constants independent of \(L\), _Theorem 1_ indicates that the DAN has a regret of \(\mathcal{O}(L^{\frac{1}{2}})\), which guarantees that the sequence \(\{\mathcal{G}(\mathbf{u}^{(k,l)})\}_{l=1}^{L}\) will converge to \(\mathcal{G}(\mathbf{u}^{(k,\star)})\) with a convergence rate on the order of \(\mathcal{O}(L^{-\frac{1}{2}})\).

### _Transmit Power Allocation For Multiple Targets_

Given \(\{\mathbf{u}_{q}^{(k,j+1)}\}_{q=1}^{Q}\), the problem in (17) can be expressed as \[\min_{\mathbf{p}^{(k)}\in\mathcal{S}_{p}}\,\sum_{q=1}^{Q}\mathcal{F}_{\rm pa}(p_{q}^{(k)}), \tag{44}\] where \(\mathcal{F}_{\rm pa}(p_{q}^{(k)})=\log\det\mathbf{C}_{q}(p_{q}^{(k)}|\mathbf{u}_{q}^{(k,j)})\) is the cost function and \(\mathcal{S}_{p}=\{\mathbf{p}^{(k)}|\sum_{q=1}^{Q}p_{q}^{(k)}\leq P_{T},p_{q}^{(k)}\geq P_{\rm min},q=1,2,\cdots,Q\}\) denotes the feasible set of \(\mathbf{p}^{(k)}\).
This problem is convex and can be reformulated as an SDP problem, i.e., \[\max_{\mathbf{p}^{(k)}}\,\sum_{q=1}^{Q}\log\det(\mathbf{Q}_{q}), \tag{45}\] \[\text{s.t.}\,\sum_{q=1}^{Q}p_{q}^{(k)}\leq P_{T},\quad p_{q}^{(k)}\geq P_{\rm min},\] \[\quad\mathbf{J}_{q}^{(k)}(p_{q}^{(k)}|\mathbf{u}_{q}^{(k,j)})\succeq\mathbf{Q}_{q},\ q=1,2,\cdots,Q,\] where \(\{\mathbf{Q}_{q}\}_{q=1}^{Q}\) denotes a set of auxiliary symmetric matrices. Then, this problem can be solved by the CVX toolbox. However, the CVX toolbox is generally time-consuming, especially when the number of targets is large. To reduce the computational complexity and reveal more physical insights, we propose an iterative water-filling-based power allocation method. First, we merge the total power constraint into the cost function by the Lagrange multiplier method, i.e., \[\mathcal{L}_{\rm pa}(\mathbf{p}^{(k)})=\sum_{q=1}^{Q}\mathcal{F}_{\rm pa}(p_{q}^{(k)})+\lambda_{\rm pa}(P_{T}-\sum_{q=1}^{Q}p_{q}^{(k)}), \tag{46}\] where \(\lambda_{\rm pa}\) is the Lagrange multiplier. The derivative of (46) w.r.t. \(p_{q}^{(k)}\) is given by \[\frac{\partial\mathcal{L}_{\rm pa}(\mathbf{p}^{(k)})}{\partial p_{q}^{(k)}}={\rm tr}\left((\mathbf{J}_{P,q}^{(k)}+p_{q}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\widetilde{\mathbf{\Sigma}}_{q}^{(k)}\right)-\lambda_{\rm pa}, \tag{47}\] where \(\widetilde{\mathbf{\Sigma}}_{q}^{(k)}=\sum_{n=1}^{N}u_{q,n}^{(k)}\overline{\mathbf{\Omega}}_{q,n}^{(k)}\). By setting \(\frac{\partial\mathcal{L}_{\rm pa}(\mathbf{p}^{(k)})}{\partial p_{q}^{(k)}}=0\), we have the following fixed-point equation, i.e., \[p_{q}^{(k)}=\frac{1}{\lambda_{\rm pa}}-\frac{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}}{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\widetilde{\mathbf{\Sigma}}_{q}^{(k)}}. \tag{48}\] If \(\mathbf{J}_{P,q}^{(k)}\) and \(\widetilde{\mathbf{\Sigma}}_{q}^{(k)}\) reduce to one-dimensional constants denoted by \(J_{q}^{(k)}\) and \(\widetilde{\Sigma}_{q}^{(k)}\), respectively, the closed-form solution of \(p_{q}^{(k)}\) can be directly obtained from (48), i.e., \(p_{q}^{(k)}=\mu_{\rm wf}-J_{q}^{(k)}/\widetilde{\Sigma}_{q}^{(k)}\), where \(\mu_{\rm wf}=\frac{1}{\lambda_{\rm pa}}\) denotes the water level. For the matrix-valued \(\mathbf{J}_{P,q}^{(k)}\) and \(\widetilde{\mathbf{\Sigma}}_{q}^{(k)}\), we propose to obtain \(p_{q}^{(k)}\) and the water level \(\mu_{\rm wf}\) by an iterative process. In particular, at the \(i\)th iteration, \(p_{q,i+1}^{(k)}\) is obtained by \[p_{q,i+1}^{(k)}=\left\lfloor\mu_{\rm wf}-\frac{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q,i}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}}{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q,i}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\widetilde{\mathbf{\Sigma}}_{q}^{(k)}}\right\rfloor_{P_{\rm min}}, \tag{49}\] where \(p_{q,i}^{(k)}\) denotes the power for the \(q\)th target at the \(i\)th iteration and \(\lfloor a\rfloor_{b}=\max\{a,b\}\). Then, the water level \(\mu_{\rm wf}\) is updated by setting \(\sum_{q=1}^{Q}p_{q,i+1}^{(k)}(\mu_{\rm wf})=P_{T}\).
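The following minimal NumPy sketch illustrates the resulting FP-WF procedure, using a bisection search for the water level \(\mu_{\rm wf}\); the function and variable names, the bisection bounds, and the iteration counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fp_wf(J_list, Sigma_list, P_T, P_min, n_outer=30, n_bisect=60):
    """Fixed-point water-filling for (48)-(49). J_list holds the
    prediction FIMs J_{P,q}; Sigma_list holds the measurement matrices
    Sigma_q."""
    Q = len(J_list)
    p = np.full(Q, P_T / Q)                        # uniform initialization

    def level(q, p_q):
        # tr((J+p*Sigma)^{-1} J) / tr((J+p*Sigma)^{-1} Sigma), cf. (48)
        M = np.linalg.inv(J_list[q] + p_q * Sigma_list[q])
        return np.trace(M @ J_list[q]) / np.trace(M @ Sigma_list[q])

    for _ in range(n_outer):                       # fixed-point iteration (49)
        levels = np.array([level(q, p[q]) for q in range(Q)])
        lo, hi = 0.0, P_T + levels.max()           # bracket for the water level
        for _ in range(n_bisect):                  # enforce sum_q p_q = P_T
            mu = 0.5 * (lo + hi)
            p_new = np.maximum(mu - levels, P_min)
            lo, hi = (mu, hi) if p_new.sum() < P_T else (lo, mu)
        p = p_new
    return p
```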
**Remark 3**: _According to the Rayleigh quotient, we have \(\tilde{\lambda}_{\rm min}\leq\frac{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}}{{\rm tr}\;(\mathbf{J}_{P,q}^{(k)}+p_{q}^{(k)}\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\widetilde{\mathbf{\Sigma}}_{q}^{(k)}}\leq\tilde{\lambda}_{\rm max}\), where \(\tilde{\lambda}_{\rm min}\) and \(\tilde{\lambda}_{\rm max}\) denote the minimum and maximum eigenvalues of \((\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}\), respectively. Note that \(\mathbf{J}_{P,q}^{(k)}\) and \(\widetilde{\mathbf{\Sigma}}_{q}^{(k)}\) denote the FIM of the prediction and the measurement, respectively. Thus, the eigenvalues of \((\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}\) represent the ratio between the prediction and the measurement. Recalling (48), if the eigenvalues of \((\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}\) are smaller, \(p_{q}^{(k)}\) will be higher. This indicates that more power will be allocated to a target if 1) the measurement provides more information than the prediction, which enables the system to improve the accuracy of the prediction, or 2) the prediction of this target is so bad that the system needs to allocate more power for better motion state estimation. In turn, if the eigenvalues of \((\widetilde{\mathbf{\Sigma}}_{q}^{(k)})^{-1}\mathbf{J}_{P,q}^{(k)}\) are larger, \(p_{q}^{(k)}\) will be lower. This indicates that a target will be assigned a lower power if 1) the prediction is good enough, or 2) the measurement is too bad._

## IV Simulation

In the simulation, we will show the efficiency and effectiveness of the proposed DAN and FP-WF algorithms. In the following, we first introduce the system parameters, the training details of DAN, and the benchmark algorithms.

**System parameters**: We consider a mmWave system operating at a carrier frequency of 28 GHz. There is one BS acting as the transmitter, which is located at \([0,0]\) m. The number of SNs is \(N=32\). These SNs are uniformly distributed in an area of \(400\times 400\) m\({}^{2}\). On average, there is one SN within an area of 5000 m\({}^{2}\). The measurement covariance defined in (7) is generated by \(\mathbf{\Sigma}_{q,n}^{(k)}=\frac{1}{\mathrm{SNR}_{q}^{(k)}}\tilde{\mathbf{\Sigma}}_{q,n}^{(k)}\), where \(\tilde{\mathbf{\Sigma}}_{q,n}^{(k)}=\mathrm{diag}[\hat{\sigma}_{\theta_{q,n}^{(k)}}^{2},\hat{\sigma}_{\tau_{q,n}^{(k)}}^{2},\hat{\sigma}_{\mu_{n}^{(k)}}^{2}]\) with \(\hat{\sigma}_{\theta_{q,n}^{(k)}}=2\), \(\hat{\sigma}_{\tau_{q,n}^{(k)}}=1\), \(\hat{\sigma}_{\mu_{n}^{(k)}}=1\). The SNR is defined by \(\mathrm{SNR}_{q}^{(k)}=\frac{\gamma_{q}^{(k)}}{\sigma^{2}\hat{\sigma}_{\mu_{n}^{(k)}}^{2}}\), where \(\gamma_{0}=-61.4\) dB denotes the pathloss at the reference distance. We set the total power at the BS \(P=30\) dBm, the minimum power for a single target \(P_{\rm min}=20\) dBm, the noise power \(\sigma^{2}=-90\) dBm, the intensity of the process noise \(q_{s}=5\), and \(\Delta T=0.5\) s.

**Initialization of motion state**: There are three targets to be tracked, i.e., \(Q=3\), unless otherwise specified. The initial velocities of the targets are given as \(\mathbf{v}_{1}=[-10,0]^{\mathrm{T}}\) m/s, \(\mathbf{v}_{2}=[0,-10]^{\mathrm{T}}\) m/s, \(\mathbf{v}_{3}=[10,0]^{\mathrm{T}}\) m/s, respectively.
The initial locations of the targets are given as \(\mathbf{x}_{1}^{(0)}=[124,124]^{\mathrm{T}}\) m, \(\mathbf{x}_{2}^{(0)}=[-134,134]^{\mathrm{T}}\) m, and \(\mathbf{x}_{3}^{(0)}=[-144,-144]^{\mathrm{T}}\) m, respectively.

**Training details**: During training, the learnable parameters are optimized by the SGD optimizer in PyTorch with a learning rate of \(5\times 10^{-5}\). In our experiment, the loss function for training is selected as \(f_{loss}=\frac{1}{L}\sum_{l=1}^{L}||\mathbf{u}_{ES}-\hat{\mathbf{u}}^{l}||^{2}\), where \(\mathbf{u}_{ES}\) denotes the selection vector obtained by exhaustive search (ES). The number of training samples is set as \(N_{\text{train}}=500\). The network parameters are set as \(\rho=1\), \(\rho_{a}=10^{2}\), \(\gamma=10^{4}\), \(\beta_{2}=0.999\), \(\eta_{1}=0.99\), and \(\eta_{a}=0.99\). The learnable parameters are initialized as \(\beta_{1}=0.99\) and \(\alpha_{1}=0.15\) for all layers. The number of layers is set as \(L=10\). The maximum number of ADMM iterations is set as \(200\).

**Benchmark methods**: The proposed methods are compared with the following algorithms for SN selection and power allocation. #### Iv-A1 SN selection We compare DAN with the following methods: \(\bullet\) 'Nearest SN Selection': this method selects the subset of SNs nearest to the target; \(\bullet\) 'Exhaustive Search (ES)': this method selects the subset of SNs which minimizes the cost function; \(\bullet\) 'MM-CVX': this method solves (24) by the CVX toolbox; \(\bullet\) 'MM-ADMM': the optimization-based method proposed in Sec. III-A. To show the impact of \(\mathbf{T}^{(k,l)}\) in MM-ADMM, we use two different choices of \(\mathbf{T}^{(k,l)}\). Specifically, the first choice is \(\mathbf{T}_{1}^{(k,l)}=\mathrm{tr}(\mathbf{H}_{F}(\mathbf{u}^{(k,l)}))\mathbf{I}\), and the second choice is \(\mathbf{T}_{2}^{(k,l)}=\lambda_{\max}(\mathbf{H}_{F}(\mathbf{u}^{(k,l)}))\mathbf{I}\), which are denoted by 'MM-ADMM-I' and 'MM-ADMM-II', respectively. The parameters of MM-ADMM and MM-CVX are the same as those for DAN. The maximum number of MM iterations for MM-ADMM and MM-CVX is set as \(30\) and \(50\), respectively, unless specified otherwise. #### Iv-A2 Power allocation We compare FP-WF with 'CVX', which represents the method for solving (45) by CVX.

### _Computational Cost_

Table I shows the running time\({}^{2}\) of the algorithms composed of different power allocation and SN selection methods. It can be observed that the running time of DAN \(\&\) FP-WF is 0.7724 s, which is the lowest among all combinations. Meanwhile, we can observe that the running time of ES \(\&\) CVX is 18.6242 s, which is about 24.11 times more than that of DAN \(\&\) FP-WF. To further demonstrate the low computational complexity provided by DAN and FP-WF, we study the computational cost of the SN selection and power allocation methods, respectively. Footnote 2: Configuration of this computer: CPU: Intel Core i9-9900 @ 3.10GHz; RAM: 16GB; Software: Python 3.10.9 in Microsoft Visual Studio Code and Matlab 2020b. **Running time of the SN selection methods**: Table II shows the running time of the SN selection algorithms with different \(N\). DAN achieves the lowest computational cost among the candidates with different \(N\). The computational consumption of ES is extremely large, especially when \(N\) is large. For example, when \(N=128\), DAN is about 443 times faster than ES. MM-CVX is more time-consuming than MM-ADMM. Meanwhile, the running time of DAN is less than that of MM-ADMM.
There are two main reasons: 1) one layer of DAN has a lower computational cost than one iteration of MM-ADMM. In particular, DAN only requires the gradient, while MM-ADMM requires both the gradient and the Hessian matrix, which needs more computational cost; and 2) owing to the well-trained \(\mathbf{T}^{(k,l)}\), DAN can converge faster than MM-ADMM, which will be shown in the following. **Convergence of the SN selection methods**: The running time of MM-CVX, MM-ADMM and DAN is proportional to the required number of iterations/layers to converge. Fig. 3 shows the cost function over the number of iterations (optimization-based methods) or layers (DAN). First, MM-CVX needs about 50 iterations to converge, which is more than MM-ADMM and DAN. Meanwhile, we can observe that DAN can converge within 3 layers, while MM-ADMM needs about 15-20 iterations to converge, which leads to more running time. This is because, unlike MM-ADMM, DAN utilizes the momentum, which accumulates the gradients of the past layers and can thus speed up the convergence [37]. Meanwhile, we see that MM-ADMM-II can converge faster than MM-ADMM-I, which indicates that the convergence of MM-ADMM highly depends on the choice of \(\mathbf{T}^{(k,l)}\). This is also the motivation to learn \(\mathbf{T}^{(k,l)}\) in DAN. **Running time of the power allocation methods**: Table III shows the running time of the power allocation algorithms versus different \(Q\). We can observe that the running time of FP-WF is much lower than that of CVX for the different cases. This is because FP-WF is derived based on the Lagrange multiplier method, which can solve (45) more efficiently than the interior point method used by CVX.

### _Tracking Accuracy_

The average root mean square error (RMSE) over \(Q\) targets and \(K\) frames is selected as the performance metric for multiple-target tracking, which is defined as \(\frac{1}{QK}\sum_{q=1}^{Q}\sum_{k=1}^{K}\sqrt{\frac{1}{N_{mc}}\sum_{i=1}^{N_{mc}}\|\mathbf{x}_{q}^{(k)}-\hat{\mathbf{x}}_{q}^{(k,i)}\|^{2}}\), where \(\hat{\mathbf{x}}_{q}^{(k,i)}\) denotes the estimated position of target \(q\) at the \(k\)th time frame in the \(i\)th Monte-Carlo trial, and \(N_{mc}\) denotes the number of Monte-Carlo trials. The number of tracking frames is set as \(K=10\). Fig. 4 shows the average RMSE with different power budgets \(P\). We have several observations. First, associated with different SN selection methods, FP-WF can achieve the same performance as CVX. Recalling the results in Table III, compared to CVX, FP-WF can reduce the computational cost without performance loss. Second, we can observe that ES achieves the best performance among the SN selection methods. However, from Table I, it can be observed that the running time of ES is extremely high, which limits its real application. Third, MM-CVX and MM-ADMM can achieve similar performance, but as shown in Table I, the computational cost of MM-CVX is higher than that of MM-ADMM. Furthermore, DAN can outperform MM-ADMM, which is because a more suitable \(\mathbf{T}\) is learned by DAN. Finally, the performance of the nearest SN selection is worse than DAN. This is because the tracking performance is affected by both the distance and the angle from the target to the SNs. DAN takes both of them into consideration, while the nearest SN selection only considers the distance. This will be further demonstrated in the next part.
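For reference, the average RMSE metric defined above can be computed as in the short sketch below, where the array shapes are an assumption made for illustration.

```python
import numpy as np

def average_rmse(x_true, x_hat):
    """Average RMSE over Q targets and K frames.
    x_true: (Q, K, 2) true positions; x_hat: (Q, K, N_mc, 2) estimates."""
    err = np.linalg.norm(x_hat - x_true[:, :, None, :], axis=-1)  # (Q, K, N_mc)
    rmse_qk = np.sqrt((err ** 2).mean(axis=-1))                   # Monte-Carlo RMSE per (q, k)
    return rmse_qk.mean()                                         # average over targets and frames
```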
**Illustration of SN selection**: To better understand the effect of SN selection, we focus on the single-target case in this section. The power allocated to the target is set as \(p=25\) dBm. The initial state of the target is given by \(\mathbf{v}=[-10,0]^{\mathrm{T}}\) m/s and \(\mathbf{x}^{(0)}=[124,124]^{\mathrm{T}}\) m. Fig. 5 shows the SN selection result by DAN in 4 consecutive frames. The selection depends on the geometric relation between the target and the SNs. DAN does not always choose the nearest SNs because, besides the distance, the different perspectives to observe the target provided by different SNs also affect the tracking performance. Fig. 6 shows the corresponding RMSE over the tracking frames. It can be observed that DAN consistently outperforms the nearest SN selection and achieves comparable performance to ES.

**Effect of noise power**: One of the biggest drawbacks of DL-based approaches is the performance degradation when the features (such as the noise power) in the test data differ from those in training. This leads to the study of generalization in this part. Fig. 7 shows the performance under different noise power with \(N=32\). When the noise power is different from that of the training data, DAN can provide a near-ES RMSE. It indicates that DAN can adapt to the change of \(\sigma^{2}\), which makes DAN attractive in real applications.

### _Accuracy-Complexity Tradeoff_

By adjusting the termination tolerance and the maximum number of iterations, a tradeoff between computational cost and accuracy can be achieved by MM-ADMM. Meanwhile, the proposed DAN requires a fixed number of layers and thus has a fixed running time. Fig. 8 shows the RMSE performance of different algorithms versus the running time. It is observed that DAN can always outperform MM-ADMM in terms of both computational cost and RMSE. Moreover, though MM-ADMM-II can converge faster than MM-ADMM-I, \(\mathbf{T}_{2}^{(k,l)}\) requires more computational cost than \(\mathbf{T}_{1}^{(k,l)}\). Thus, given the same time cost, MM-ADMM-I outperforms MM-ADMM-II.

## V Conclusion

In this paper, we considered the joint SN selection and power allocation problem for tracking multiple maneuvering targets in PMNs. To meet the stringent latency requirement of sensing applications, we proposed a model-driven approach for SN selection by unfolding the optimization-based MM-ADMM method. A novel DNN architecture was derived to speed up the convergence by exploiting the momentum, whose convergence property was also guaranteed by deriving the regret bound. Furthermore, we proposed an efficient power allocation method based on fixed-point water-filling and revealed some physical insights. Simulation results demonstrated that the proposed method can achieve better performance than the existing methods with much lower computational cost. This work demonstrated that, by reducing the number of iterations and improving the effectiveness of each layer, model-driven approaches offer a promising solution to meet the stringent latency requirement of sensing applications.

Fig. 3: Cost function over the number of iterations/layers.

Fig. 4: Average RMSE versus the total power budget \(P\).

Fig. 5: SN selection result by DAN at 4 consecutive frames: (a) Frame 2; (b) Frame 4; (c) Frame 6; (d) Frame 8.

Fig. 6: RMSE over tracking frames.

Fig. 7: RMSE versus different noise power \(\sigma^{2}\).

Fig. 8: RMSE versus the running time.

## Appendix A Proof of Theorem 1

Given that \(\mathcal{G}(\mathbf{u}^{(k)})\) is convex, we have \[\mathcal{G}(\mathbf{u}^{(k,l)})-\mathcal{G}(\mathbf{u}^{(k,\star)})\leq\left\langle\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle, \tag{50}\]
where \(\Delta\mathbf{u}^{(k,l)}=\mathbf{u}^{(k,l)}-\mathbf{u}^{(k,\star)}\). Since \(R_{L}\leq\sum_{l=1}^{L}\left\langle\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle\), the main idea of the proof is to find an upper bound of \(\sum_{l=1}^{L}\left\langle\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle\). Recalling from (39), we have \[\begin{split}&\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l+1)}\|^{2}=\|\mathbf{\Phi}_{l}^{\frac{1}{2}}(\mathbf{u}^{(k,l+1)}-\mathbf{u}^{(k,\star)})\|^{2}\\ &\stackrel{{(a)}}{{=}}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}(\mathbf{u}^{(k,l)}-\mathbf{\Phi}_{l}^{-1}(\mathbf{d}_{\star}^{(k,l)}-\nu_{l}\mathbf{1})-\mathbf{u}^{(k,\star)})\|^{2}\\ &\stackrel{{(b)}}{{=}}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}-\mathbf{\Phi}_{l}^{-\frac{1}{2}}(\hat{\mathbf{m}}_{l}-\rho_{a,l}\hat{\mathbf{b}}_{l}-\nu_{l}\mathbf{1})\|^{2}\\ &\stackrel{{(c)}}{{=}}\left\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\right\|^{2}-2\left\langle(1-\beta_{1,l})\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle\\ &\quad-2\left\langle\beta_{1,l}\hat{\mathbf{m}}_{l-1}-\rho_{a,l}\hat{\mathbf{b}}_{l}-\nu_{l}\mathbf{1},\Delta\mathbf{u}^{(k,l)}\right\rangle\\ &\quad+\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}(\hat{\mathbf{m}}_{l}-\rho_{a,l}\hat{\mathbf{b}}_{l}-\nu_{l}\mathbf{1})\|^{2},\end{split} \tag{51}\] where step (a) follows (39), step (b) follows (40), and step (c) follows (36). By adding \(2\left\langle(1-\beta_{1,l})\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle-\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l+1)}\|^{2}\) to both sides of (51), and dividing both sides of (51) by \(2(1-\beta_{1,l})\), we have \[\begin{split}&\left\langle\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle=\frac{\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}-\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l+1)}\|^{2}}{2(1-\beta_{1,l})}\\ &-\frac{\left\langle\beta_{1,l}\hat{\mathbf{m}}_{l-1},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}+\frac{\left\langle\rho_{a,l}\hat{\mathbf{b}}_{l},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}\\ &+\frac{\nu_{l}\left\langle\mathbf{1},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}+\frac{\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}(\hat{\mathbf{m}}_{l}-\rho_{a,l}\hat{\mathbf{b}}_{l}-\nu_{l}\mathbf{1})\|^{2}}{2(1-\beta_{1,l})}.\end{split} \tag{52}\]
By using Young's inequality for products, i.e., \(\pm ab\leq\frac{a^{2}}{2}+\frac{b^{2}}{2}\), the second, third, and fourth terms on the right-hand side of (52) are upper bounded by \(-\frac{\left\langle\beta_{1,l}\hat{\mathbf{m}}_{l-1},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}\leq\frac{\beta_{1,l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l-1}\|^{2}}{2(1-\beta_{1})}+\frac{\beta_{1,l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}\), \(\frac{\left\langle\rho_{a,l}\hat{\mathbf{b}}_{l},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}\leq\frac{\rho_{a,l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{b}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{\rho_{a,l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}\), and \(\frac{\nu_{l}\left\langle\mathbf{1},\Delta\mathbf{u}^{(k,l)}\right\rangle}{1-\beta_{1,l}}\leq\frac{\nu_{l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\mathbf{1}\|^{2}}{2(1-\beta_{1})}+\frac{\nu_{l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}\), respectively. By utilizing the inequality between the arithmetic mean and the quadratic mean, the last term on the right-hand side of (52) is upper bounded by \(\frac{\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}(\hat{\mathbf{m}}_{l}-\rho_{a,l}\hat{\mathbf{b}}_{l}-\nu_{l}\mathbf{1})\|^{2}}{2(1-\beta_{1,l})}\leq\frac{3\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{3\rho_{a,l}^{2}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{b}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{3\nu_{l}^{2}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\mathbf{1}\|^{2}}{2(1-\beta_{1})}\). Then, the upper bound of (52) can be given by \[\begin{split}&\left\langle\mathbf{d}_{u}^{(k,l)},\Delta\mathbf{u}^{(k,l)}\right\rangle\leq\frac{\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}-\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l+1)}\|^{2}}{2(1-\beta_{1,l})}\\ &+\frac{\beta_{1,l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l-1}\|^{2}}{2(1-\beta_{1})}+\frac{\rho_{a,l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{b}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{\nu_{l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\mathbf{1}\|^{2}}{2(1-\beta_{1})}\\ &+\frac{\beta_{1,l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}+\frac{\rho_{a,l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}+\frac{\nu_{l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}}{2(1-\beta_{1})}\\ &+\frac{3\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{3\rho_{a,l}^{2}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{b}}_{l}\|^{2}}{2(1-\beta_{1})}+\frac{3\nu_{l}^{2}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\mathbf{1}\|^{2}}{2(1-\beta_{1})},\end{split} \tag{53}\] where, in the sequel, \(\hat{m}_{l,i}\) and \(d_{u,i}^{(k,l)}\) denote the \(i\)th entry of \(\hat{\mathbf{m}}_{l}\) and \(\mathbf{d}_{u}^{(k,l)}\), respectively.
Then, we have \[\begin{split}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l}\|^{2}&=\sum_{i=1}^{N}\frac{\hat{m}_{l,i}^{2}}{\phi_{l,i}}\leq\sum_{i=1}^{N}\frac{\hat{m}_{l,i}^{2}}{\sqrt{|\hat{v}_{l,i}|}/\alpha_{1,l}}\\ &=\sum_{i=1}^{N}\frac{\left(\sum_{p=1}^{l}(1-\beta_{1,p})\prod_{q=1}^{l-p}\beta_{1,l-q+1}d_{u,i}^{(k,p)}\right)^{2}}{\sqrt{|\hat{v}_{l,i}|}/\alpha_{1,l}}\\ &\stackrel{{(a)}}{{\leq}}\sum_{i=1}^{N}\frac{\alpha_{1,l}\eta_{1}^{2l}\left(\sum_{p=1}^{l}\beta_{1}^{l-p}\right)\left(\sum_{p=1}^{l}\beta_{1}^{l-p}(d_{u,i}^{(k,p)})^{2}\right)}{\sqrt{\sum_{p=1}^{l}(1-\beta_{2})\beta_{2}^{l-p}|d_{u,i}^{(k,p)}|^{2}}}\\ &\stackrel{{(b)}}{{\leq}}\frac{\alpha_{1,l}\eta_{1}^{2l}}{(1-\beta_{1})\sqrt{1-\beta_{2}}}\sum_{i=1}^{N}\left(\sum_{p=1}^{l}\left(\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{l-p}|d_{u,i}^{(k,p)}|\right),\end{split} \tag{56}\] where step (a) comes from the inequalities \((1-\beta_{1,p})\leq 1\) and \(\prod_{q=1}^{l-p}\beta_{1,l-q+1}\leq\beta_{1}^{l-p}\eta_{1}^{l}\), and the Jensen inequality, i.e., \(\left(\frac{\sum_{i}a_{i}b_{i}}{\sum_{i}a_{i}}\right)^{2}\leq\frac{\sum_{i}a_{i}b_{i}^{2}}{\sum_{i}a_{i}}\), and step (b) follows from the inequalities \(\sum_{p=1}^{l}\beta_{1}^{l-p}\leq\frac{1}{1-\beta_{1}}\) and \(\sum_{p=1}^{l}(1-\beta_{2})\beta_{2}^{l-p}|d_{u,i}^{(k,p)}|^{2}\geq(1-\beta_{2})\beta_{2}^{l-p}|d_{u,i}^{(k,p)}|^{2}\). By summing up (56) over the index \(l\), we have \[\begin{split}&\sum_{l=1}^{L}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l}\|^{2}\\ &\leq\sum_{l=1}^{L}\frac{\alpha_{1,l}\eta_{1}^{2l}}{(1-\beta_{1})\sqrt{1-\beta_{2}}}\sum_{i=1}^{N}\left(\sum_{p=1}^{l}\left(\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{l-p}|d_{u,i}^{(k,p)}|\right)\\ &\leq\sum_{l=1}^{L}\frac{\alpha_{1,l}\eta_{1}^{2l}}{(1-\beta_{1})\sqrt{1-\beta_{2}}}||\mathbf{d}_{u}^{(k,l)}||_{1}\left(\sum_{j=1}^{L}\left(\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{j-l}\right)\\ &\leq\frac{\alpha_{1}^{+}D_{u,1}}{(1-\beta_{1})(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})\sqrt{1-\beta_{2}}}\sum_{l=1}^{L}\frac{\eta_{1}^{2l}}{\sqrt{l}}\\ &\stackrel{{(a)}}{{\leq}}\frac{\alpha_{1}^{+}D_{u,1}}{(1-\beta_{1})(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})\sqrt{1-\beta_{2}}(1-\eta_{1}^{2})},\end{split} \tag{57}\] where we have utilized the property that \(\sum_{l=1}^{L}\frac{\eta_{1}^{2l}}{\sqrt{l}}\leq\sum_{l=1}^{L}\eta_{1}^{2l}\leq\frac{1}{1-\eta_{1}^{2}}\) in step (a).
Then, we have \[\sum_{l=1}^{L}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l}\|^{2}\leq\frac{\alpha_{1}^{+}D_{u,1}}{(1-\beta_{1})(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})\sqrt{1-\beta_{2}}(1-\eta_{1}^{2})}.\] Similarly, we can obtain \[\begin{split}\sum_{l=1}^{L}\beta_{1,l}\|\mathbf{\Phi}_{l}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l-1}\|^{2}&\leq\sum_{l=1}^{L}\beta_{1,l}\|\mathbf{\Phi}_{l-1}^{-\frac{1}{2}}\hat{\mathbf{m}}_{l-1}\|^{2}\\ &\leq\frac{\alpha_{1}^{+}\beta_{1}D_{u,1}}{(1-\beta_{1})(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})\sqrt{1-\beta_{2}}(1-\eta_{1}^{2})}.\end{split} \tag{58}\] The remaining terms on the right-hand side of (53) can be bounded in a similar manner. In particular, for the terms involving \(\nu_{l}\), we have \[\begin{split}\sum_{l=1}^{L}\sqrt{l}\nu_{l}&\leq\sum_{l=1}^{L}\sqrt{l}\left(\frac{D_{u,1}}{(1-\beta_{1})}\eta_{1}^{l}+\rho_{a}D_{b,1}\eta_{a}^{l}\right)\\ &\leq\frac{D_{u,1}}{(1-\beta_{1})(1-\eta_{1})^{2}}+\frac{\rho_{a}D_{b,1}}{(1-\eta_{a})^{2}},\end{split} \tag{68}\] \[\begin{split}\sum_{l=1}^{L}\eta_{a}^{l}\nu_{l}&\leq\sum_{l=1}^{L}\eta_{a}^{l}\left(\frac{D_{u,1}}{(1-\beta_{1})}\eta_{1}^{l}+\rho_{a}D_{b,1}\eta_{a}^{l}\right)\\ &\leq\frac{D_{u,1}}{(1-\beta_{1})(1-\eta_{1})(1-\eta_{a})}+\frac{\rho_{a}D_{b,1}}{(1-\eta_{a})^{2}}.\end{split} \tag{69}\] By substituting (68) and (69) into (67), we have \[\begin{split}&\sum_{l=1}^{L}\nu_{l}\|\mathbf{\Phi}_{l}^{\frac{1}{2}}\Delta\mathbf{u}^{(k,l)}\|^{2}\\ &\leq\left(\frac{D_{u,1}}{(1-\beta_{1})(1-\eta_{1})^{2}}+\frac{\rho_{a}D_{b,1}}{(1-\eta_{a})^{2}}\right)\frac{\sqrt{1-\beta_{2}}D_{u,1}D_{\Delta}}{\alpha_{1}(1-\sqrt{\beta_{2}})}\\ &+\left(\frac{D_{u,1}}{(1-\beta_{1})(1-\eta_{1})(1-\eta_{a})}+\frac{\rho_{a}D_{b,1}}{(1-\eta_{a})^{2}}\right)D_{\Delta}\rho_{a}.\end{split} \tag{70}\] By combining the upper bounds for the summations of all the terms on the right-hand side of (53), (42) can be proved.
2302.10104
Comprehensive Framework for Controlling Nonlinear Multi-Species Water Quality Dynamics
Tracing disinfectant (e.g., chlorine) and contaminant evolution in water networks requires the solution of 1-D advection-reaction (AR) partial differential equations (PDEs). With the absence of analytical solutions in many scenarios, numerical solutions require high-resolution time- and space-discretizations resulting in large model dimensions. This adds complexity to the water quality control problem. In addition, considering multi-species water quality dynamics rather than single-species dynamics produces a more accurate description of the reaction dynamics under abnormal hazardous conditions (e.g., contamination events). Yet, these dynamics introduce a nonlinear reaction formulation to the model. To that end, solving the nonlinear 1-D AR PDEs in real time is critical in achieving monitoring and control goals for networks of various scales, albeit with a high computational burden. In this work, we propose a novel comprehensive framework to overcome the large-dimensionality issue by introducing different approaches for applying model order reduction (MOR) algorithms to the nonlinear system, followed by applying a real-time water quality regulation algorithm that is based on an advanced model to maintain desirable disinfectant levels in water networks under multi-species dynamics. The performance of this framework is validated using rigorous numerical case studies under a wide range of scenarios, demonstrating the challenges associated with regulating water quality under such conditions.
Salma M. Elsherif, Ahmad F. Taha, Ahmed A. Abokifa, Lina Sela
2023-02-20T17:07:38Z
http://arxiv.org/abs/2302.10104v3
# Comprehensive Framework for Controlling Nonlinear Multi-Species Water Quality Dynamics ###### Abstract Tracing disinfectant (e.g., chlorine) and contaminant evolution in water networks requires the solution of 1-D advection-reaction (AR) partial differential equations (PDEs). With the absence of analytical solutions in many scenarios, numerical solutions require high-resolution time- and space-discretizations resulting in large model dimensions. This adds complexity to the water quality control problem. In addition, considering multi-species water quality dynamics rather than single-species dynamics produces a more accurate description of the reaction dynamics under abnormal hazardous conditions (e.g., contamination events). Yet, these dynamics introduce a nonlinear reaction formulation to the model. To that end, solving the nonlinear 1-D AR PDEs in real time is critical in achieving monitoring and control goals for networks of various scales, albeit with a high computational burden. In this work, we propose a novel comprehensive framework to overcome the large-dimensionality issue by introducing different approaches for applying model order reduction (MOR) algorithms to the nonlinear system, followed by applying a real-time water quality regulation algorithm that is based on an advanced model to maintain desirable disinfectant levels in water networks under multi-species dynamics. The performance of this framework is validated using rigorous numerical case studies under a wide range of scenarios, demonstrating the challenges associated with regulating water quality under such conditions. Multi-species water quality dynamics, water quality regulation and control, model predictive control, McCormick relaxation, linear/nonlinear model order reduction. ## I Introduction and Literature Review Water quality dynamics are widely modeled by the one-dimensional advection-reaction (1-D AR) partial differential equations (PDEs). The AR PDEs allow tracing disinfectant and chemical evolution throughout water distribution networks (WDNs). While analytical solutions to these PDEs are non-existent in most cases network-wide, numerical solutions require high-resolution time- and space-discretizations. This results in high-dimension models that add computational burden to the problem of regulating water quality in drinking water networks. That leads to physics-driven models that are intractable when considering constrained control and water quality (WQ) regulation algorithms. Moreover, in water quality simulations the most widely used decay and reaction model is the single-species model. In this model, disinfectant (i.e., chlorine) is assumed to decay at a constant rate that only accounts for purified water contamination levels. Yet, contamination sources vary between microbial and non-microbial components in the bulk flow or attached to the pipe walls, and contamination events [1] that intrude into the system. This drives the need for a more accurate representation of these scenarios, which can be achieved by the multi-species reaction dynamics. The multi-species dynamics enable the model to simulate chlorine evolution in the presence of another reactive component in the system. This representation doubles the number of variables to be traced network-component-wide while unfortunately adding complexity to the model by introducing nonlinear reaction dynamics.
To this end, model order reduction (MOR) is an essential step toward achieving a compact formulation of the multi-species water quality dynamics to be integrated into a model-based control framework. MOR techniques transform the _full-order model (FOM)_ to a _reduced-order model (ROM)_ in a way that preserves the structure, properties, and closed-form representation of the FOM while maintaining an acceptable, reliable level of accuracy and reducing the computational time. Eventually, the goal is to control chlorine injections dosed by rechlorination stations to maintain residual levels that meet water quality standards. That can be achieved by applying an effective control algorithm to the derived ROM. Our group has been interested in various dimensions of this research area, and a summary of our work and the prior literature is given next. ### _Literature Review_ Hereinafter, we survey the literature on the topics of model order reduction for dynamic systems, in general, and water systems, in particular, and water quality regulation and control, while highlighting the gaps and drawbacks motivating this paper's contributions. _MOR for Dynamic Systems:_ Several studies have proposed and implemented different model order reduction algorithms in various disciplines (e.g., electromagnetics, electro-mechanics, structural and fluid dynamics) where the large-dimensionality issue is faced [2, 3, 4, 5]. Most of these studies have applied either singular value decomposition (SVD) approaches [6, 7] or Krylov subspace methods [8, 9], while combining SVD and Krylov methods is investigated and implemented in [10]. In infrastructure networks, preserving the system's properties, including stability, controllability, and observability, is a major concern while applying MOR with the aim of applying post-reduction control. Nevertheless, Krylov methods do not preserve such properties, which limits their suitability for our study [3]. Several SVD-based model reduction methods have been proposed for linear systems, and more realizations and extensions have been investigated and integrated to tackle the reduction of nonlinear systems; a review of linear and nonlinear model order reduction can be found in [11, 12]. The balanced truncation (BT) method [2] is built on both the controllability and observability Gramians for stable, linear systems. Lall [13] extends BT to nonlinear systems, while the authors in [14, 15] build extensions for unstable systems. However, BT becomes computationally intractable for large-scale systems. Nevertheless, the widely used proper orthogonal decomposition (POD) in the fluid dynamics community [16] is considered tractable at the expense of accuracy compared to BT. Yet, in some cases where relatively lower accuracy is acceptable, POD may result in an unstable system even near stable equilibrium points, depending on the actual formulation of the full-order model. Therefore, balanced methods between BT and POD have been proposed to capture the advantages of both methods in one. For example, the authors in [17] have proposed a balanced method, but it fails to successfully reduce models when the number of outputs of the system is large. Conversely, the balanced POD (BPOD) [6] is tractable with an overall computation time similar to POD, but it computes adjoint snapshots to combine and balance controllability and observability similarly to BT, which are not considered in POD.
Furthermore, POD can be extended to reduce the order of nonlinear systems by approximately projecting the nonlinear term in the system onto a subspace of the dynamics [18, 19]. Therefore, the nonlinear term is evaluated separately and approximated at only a small set of interpolation points (a hyper-parameter) using a combination of projection and interpolation methods such as the discrete empirical interpolation method (DEIM) [18], the Gappy POD method [20, 21], and the Gauss-Newton with approximated tensors (GNAT) method [22]; refer to the review paper [23] for details. _MOR for Water Systems:_ Water systems model order reduction has been broadly investigated for network hydraulics over the past decades, with a limited number of studies looking into MOR for water quality dynamics. These studies adopt different approaches to reduce the hydraulic model dimension by applying methods varying between performing nodal Gaussian elimination [24], Gaussian elimination on the linearized form of the model with the nonlinearity of the system recovered as a post-reduction step [25], genetic algorithms [26], and system aggregation [27]. Perelman and Ostfeld [28] consider a coupled model that combines both the hydraulics and water quality dynamics of the network and apply system aggregation. Lately, two studies have applied different approaches to cover MOR for water quality dynamics. Elkhashap in [29] proposes reducing the order of the water quality model by formulating a bilinear, spatially-discretized but temporally-continuous representation of the dynamics. This formulation augments the input vectors in a way that preserves the system's stability. The induced error between the actual and reduced-order models is minimized by minimizing the \(\mathcal{H}_{2}\)-norm. In that study, water quality transport and reaction are simulated using the advection-diffusion-reaction partial differential equations, which include the diffusion term, in comparison to our work that neglects its effect. Nonetheless, studies [30, 31] state that diffusion is dominant in network branches with significantly low velocities. To that end, it is an acceptable assumption to neglect the diffusion effect in networks with limited dead-end branches, higher velocities, and changing demands. On the other hand, augmenting and transforming the model into a bilinear formulation results in a more complex one when considering the multi-species nonlinear water quality dynamics, which do not preserve the stability of the system. Secondly, Wang _et al._ [32] apply different SVD-based projection algorithms, including BT, POD, and BPOD, to reach a reduced-order water quality model, in addition to preserving the stability of the BPOD method. Results have shown that the BPOD method, which balances between BT and POD, is more usable while being computationally tractable and robust for zero and non-zero initial conditions. However, their model only includes single-species linear reaction dynamics, where chlorine is assumed to decay at a constant rate, resulting in a linear state-space formulation. Therefore, our work fills the gap in applying MOR to multi-species nonlinear dynamics. Moreover, in their model the explicit central Lax-Wendroff discretization scheme is used, while Upwind schemes give a more accurate physical description of the advection-reaction problem. In our study, we apply the _explicit_ and _implicit_ Upwind discretization schemes while highlighting the differences and the level of difficulty.
Notice that, in contrast to studies [33, 34, 35] where MOR is performed for compositional simulation, the authors in [32] state that it is considered a pre-step toward applying an efficient control algorithm, which also applies to our work in this paper. _Water Quality Control:_ The topic of controlling chlorine has been covered in several studies with various algorithms, objectives, and constraints [36, 37, 38]. Objectives vary between minimizing the cost of injecting chlorine into the system, maintaining minimal deviations from chlorine setpoint concentrations, minimizing the formation of excess disinfection by-products (DBPs), and minimizing computational time [39]. The problem formulation is either a single-objective optimization problem or a multi-objective one with more of the aforementioned objectives. However, such studies do not build a closed-form representation of all inputs, states, and outputs that updates every specified time-step over the simulation period and allows network-wide control. In contrast, studies [32, 40] apply model predictive control (MPC) to the full-order and reduced-order single-species models, with no clear explanation/extension for scenarios where multi-species dynamics take place. _Our Prior Work:_ We have been focusing on tackling water quality modeling and control in WDNs. First, the problem of modeling and controlling single-species water quality dynamics is thoroughly investigated in [40], followed by reducing this model's order and verifying the validity of controlling the reduced-order model in [32]. Moreover, as a first state-of-the-art attempt, study [41] has identified single-species water quality models using only input-output experimental data and, accordingly, data-driven system identification algorithms. Lastly, a survey study on how to accurately simulate multi-species water quality dynamics has been conducted in [42]. This study has built a closed-form, network- and control-theoretic representation of all system inputs, variables, and output measurements under such dynamics that gives a more realistic WQ formulation. However, controlling chlorine under multi-species dynamics, based on a control-theoretic explicit model, has, to the authors' knowledge, not been investigated--a gap that is filled in this paper. ### _Paper Contributions_ This paper's major objective is to investigate the implementation and complexity of regulating and controlling chlorine levels under multi-species water quality dynamics. The detailed paper contributions are:
* Position the framework in a generalized scalable form in the sense that simplifications are included to consider single-species water dynamics and differentiations are suggested to consider chlorine linear/nonlinear decay and reaction models that have been in the literature to simulate various events/scenarios. * Validate the performance of the framework using thorough numerical case studies to test accuracy, computational burden, and robustness to the rate at which the fictitious reactant consumes chlorine. Our proposed framework is illustrated in Fig. 1. As shown, different approaches can be followed to formulate a reduced-order model to be controlled for Multi-species water quality model. Each step taken and each path to be chosen are explained in the following sections of the paper. The paper's sections are organized as follows, Section II provides the formulation of the state-space representation of the multi-species water quality model (MS-WQM). This formulation is based on the transport and reaction model in pipes, mass balance for the other network components, and the multi-species dynamics expression. Section III provides full descriptions of the methods used in our framework to reach a compact reduced-order model. Section IV introduces the control problem and its implementation on the linear and nonlinear ROM. Section V showcases the framework performance on different networks under a wide range of scenarios. Based on the results obtained; conclusion, paper's limitations, and recommendations for future work are all given in Section VI. ## II State-space Multi-species Water Quality Model We model WDN by a directed graph \(\mathcal{G}=(\mathcal{N},\mathcal{L})\). The set \(\mathcal{N}\) defines the nodes and is partitioned as \(\mathcal{N}=\mathcal{J}\cup\mathcal{T}\cup\mathcal{R}\) where sets \(\mathcal{J}\), \(\mathcal{T}\), and \(\mathcal{R}\) are collections of junctions, tanks, and reservoirs. Let \(\mathcal{L}\subseteq\mathcal{N}\times\mathcal{N}\) be the set of links, and define the partition \(\mathcal{L}=\mathcal{P}\cup\mathcal{M}\cup\mathcal{V}\), where sets \(\mathcal{P}\), \(\mathcal{M}\), and \(\mathcal{V}\) represent the collection of pipes, pumps, and valves. Total number of states is \(n_{x}=n_{L}+n_{N}\), where \(n_{\mathrm{L}}\) and \(n_{\mathrm{N}}\) are numbers of links and nodes. The number of reservoirs, junctions, tanks, pumps, valves, and pipes are \(n_{\mathrm{R}},n_{\mathrm{J}},n_{\mathrm{TK}},n_{\mathrm{M}},n_{\mathrm{V}},\) and \(n_{\mathrm{P}}\). Each pipe \(i\) with length \(L_{i}\) is spatially discretized and split into \(s_{L_{i}}\) segments. Hence, number of links is expressed as \(n_{\mathrm{L}}=n_{\mathrm{M}}+n_{\mathrm{V}}+\sum_{i=1}^{n_{\mathrm{P}}}s_{L_ {i}}\) while \(n_{\mathrm{N}}=n_{\mathrm{R}}+n_{\mathrm{J}}+n_{\mathrm{TK}}\) is the number of nodes. In this paper, the state-space representation is formulated for multi-species dynamics with two chemicals: chlorine and a fictitious reactant. 
The system representation of the two species, which is able to capture chemical evolution, booster station injections, and sensor measurements, is expressed by a nonlinear difference equation (NDE) as follows: \[\underbrace{\begin{bmatrix}\mathbf{E}_{11}(t)&0\\ 0&\mathbf{E}_{22}(t)\end{bmatrix}}_{\mathbf{E}(t)}\underbrace{\begin{bmatrix}\mathbf{x}_{1}(t+\Delta t)\\ \mathbf{x}_{2}(t+\Delta t)\end{bmatrix}}_{\mathbf{x}(t+\Delta t)}=\underbrace{\begin{bmatrix}\mathbf{A}_{11}(t)&0\\ 0&\mathbf{A}_{22}(t)\end{bmatrix}}_{\mathbf{A}(t)}\underbrace{\begin{bmatrix}\mathbf{x}_{1}(t)\\ \mathbf{x}_{2}(t)\end{bmatrix}}_{\mathbf{x}(t)}+\underbrace{\begin{bmatrix}\mathbf{B}_{11}(t)&0\\ 0&\mathbf{B}_{22}(t)\end{bmatrix}}_{\mathbf{B}(t)}\underbrace{\begin{bmatrix}\mathbf{u}_{1}(t)\\ \mathbf{u}_{2}(t)\end{bmatrix}}_{\mathbf{u}(t)}+\mathbf{f}(\mathbf{x}_{1},\mathbf{x}_{2},t), \tag{1a}\] \[\underbrace{\begin{bmatrix}\mathbf{y}_{1}(t)\\ \mathbf{y}_{2}(t)\end{bmatrix}}_{\mathbf{y}(t)}=\underbrace{\begin{bmatrix}\mathbf{C}_{11}(t)&0\\ 0&\mathbf{C}_{22}(t)\end{bmatrix}}_{\mathbf{C}(t)}\underbrace{\begin{bmatrix}\mathbf{x}_{1}(t)\\ \mathbf{x}_{2}(t)\end{bmatrix}}_{\mathbf{x}(t)}+\underbrace{\begin{bmatrix}\mathbf{D}_{11}(t)&0\\ 0&\mathbf{D}_{22}(t)\end{bmatrix}}_{\mathbf{D}(t)}\underbrace{\begin{bmatrix}\mathbf{u}_{1}(t)\\ \mathbf{u}_{2}(t)\end{bmatrix}}_{\mathbf{u}(t)}, \tag{1b}\] where variable \(t\) represents a specific time in the simulation period \([0,T_{s}]\); \(\Delta t\) is the time-step or sampling time; vectors \(\mathbf{x}_{1}(t)\) and \(\mathbf{x}_{2}(t)\in\mathbb{R}^{n_{x}}\) depict the concentrations of chlorine and the other fictitious reactant (two-species model) in the entire network; vector \(\mathbf{u}_{1}(t)\in\mathbb{R}^{n_{u_{1}}}\) represents the dosages of injected chlorine; vector \(\mathbf{u}_{2}(t)\in\mathbb{R}^{n_{u_{2}}}\) accounts for planned or unplanned injections of the fictitious component; vector \(\mathbf{f}(\mathbf{x}_{1},\mathbf{x}_{2},t)\) encapsulates the nonlinear part of the equations representing the mutual nonlinear reaction between the two chemicals; vector \(\mathbf{y}_{1}(t)\in\mathbb{R}^{n_{y_{1}}}\) denotes the sensor measurements of chlorine concentrations at specific locations in the network, while \(\mathbf{y}_{2}(t)\in\mathbb{R}^{n_{y_{2}}}\) captures the fictitious reactant measurements by sensors in the network if they exist. The state-space matrices \(\{\mathbf{E},\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}\}_{\bullet}\) are all time-varying matrices that depend on the network topology and parameters, hydraulic parameters, decay rate coefficients for the disinfectant, and booster station and sensor locations. It is customary to assume that these matrices evolve at a slower pace than the states \(\mathbf{x}(t)\) and control inputs \(\mathbf{u}(t)\). On another note, matrices \(\mathbf{E}_{11},\mathbf{E}_{22}\) change every hydraulic time-step, allowing them to be represented at time \(t\), not \(t+\Delta t\), of the water quality simulation horizon. The concentration evolution throughout the network components is covered by the conservation of mass law and the transport, decay, and reaction models of the substances. A full description of how the models are derived for each type of component is provided in [42]. However, for the reader to be able to follow the developments of this paper, some material from [42] needs to be reproduced and altered.
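To illustrate how (1a) is marched in time, the sketch below advances the descriptor system one water quality time-step; it assumes, for illustration only, that the nonlinear term acts element-wise on the stacked species with the \(\Delta t\) factor folded in, and that \(\mathbf{E}(t)\) is invertible.

```python
import numpy as np

def wq_step(E, A, B, x, u, k_r, dt):
    """One step of (1a): solve E(t) x(t+dt) = A(t) x(t) + B(t) u(t) + f.
    The state stacks chlorine x1 on top of the fictitious reactant x2."""
    n = x.size // 2
    x1, x2 = x[:n], x[n:]
    mutual = -k_r * dt * x1 * x2                  # bilinear mutual-reaction term
    f = np.concatenate([mutual, mutual])          # both species are consumed
    return np.linalg.solve(E, A @ x + B @ u + f)  # E is non-trivial for implicit schemes
```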
We list a brief overview of the governing equations formulating our model and its state-space representation in the following sections. #### Ii-B1 Transport and Reaction in Pipes Conservation of mass during transport and reaction in pipes is simulated by the one-dimensional advection-reaction (1-D AR) partial differential equation, which for Pipe \(i\) is expressed as \[\partial_{t}c_{i}^{\mathrm{P}}=-v_{i}(t)\partial_{x}c_{i}^{\mathrm{P}}+R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(x,t)), \tag{2}\] where \(c_{i}^{\mathrm{P}}(x,t)\) is the concentration in the pipe at location \(x\) and time \(t\); \(v_{i}(t)\) is the mean flow velocity; and \(R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(x,t))\) is the multi-species reaction rate expression in pipes (more explanation is given in Section II-3). Eq. (2) is discretized over a fixed spatio-temporal grid, where a Pipe \(i\) with length \(L_{i}\) is split into a number of segments \(s_{i}=\left\lfloor\frac{L_{i}}{v_{i}(t)\Delta t}\right\rfloor\) of length \(\Delta x_{i}=\frac{L_{i}}{s_{i}}\). In the considered 1-D AR model, the two main processes are advection, where the concentration at a certain location and time is affected by upstream concentrations, and reaction, where chemicals decay and/or mutually react. That being said, _Upwind_ discretization schemes are more descriptive of the actual physical process considered among other schemes [43]. Applying the Eulerian finite-difference based Implicit Upwind scheme to the multi-species water quality dynamics representation adopted in this paper has shown reliable results that trace chemical concentrations within different networks of various scales, according to [42]. In this paper we consider both the _Explicit_ and _Implicit_ Upwind schemes to investigate their performance from a control-theoretic perspective (see Fig. 2). _Explicit Upwind Scheme:_ For segment \(s\) of Pipe \(i\), except for the first segment, the concentration is calculated as \[c_{i}^{\mathrm{P}}(s,t+\Delta t)=(1-\lambda_{i}(t))c_{i}^{\mathrm{P}}(s,t)+\lambda_{i}(t)c_{i}^{\mathrm{P}}(s-1,t)+R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(s,t))\Delta t, \tag{3}\] where \(\lambda_{i}(t)=\frac{v_{i}(t)\Delta t}{\Delta x_{i}}\) is the Courant number, and according to the Courant-Friedrichs-Lewy (CFL) condition, the Courant number (CN) is maintained in the range \(0<\lambda_{i}(t)\leq 1\) so that the scheme is stable. Moreover, the concentration in the first segment is expressed as in (4), assuming that the connecting upstream node is Junction \(j\): \[c_{i}^{\mathrm{P}}(1,t+\Delta t)=(1-\lambda_{i}(t))c_{i}^{\mathrm{P}}(1,t)+\lambda_{i}(t)c_{j}^{\mathrm{J}}(t)+R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(1,t))\Delta t. \tag{4}\] _Implicit Upwind Scheme:_ The difference is that the concentration at the upstream segment/node is taken at the current time-step being calculated. That is, Eq. (5a) calculates the concentration for segment \(s\) of Pipe \(i\). As well, the concentration of the first segment with Junction \(j\) as the upstream node is expressed in Eq. (5b): \[(1+\lambda_{i}(t))c_{i}^{\mathrm{P}}(s,t+\Delta t)-\lambda_{i}(t)c_{i}^{\mathrm{P}}(s-1,t+\Delta t)=c_{i}^{\mathrm{P}}(s,t)+R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(s,t))\Delta t, \tag{5a}\] \[(1+\lambda_{i}(t))c_{i}^{\mathrm{P}}(1,t+\Delta t)-\lambda_{i}(t)c_{j}^{\mathrm{J}}(t+\Delta t)=c_{i}^{\mathrm{P}}(1,t)+R_{\mathrm{MS}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(1,t))\Delta t. \tag{5b}\]
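As a quick illustration of the Explicit Upwind scheme (3)-(4) combined with the mutual reaction term, the sketch below updates all segments of a single pipe over one time-step; the array layout and names are assumptions made for this example.

```python
import numpy as np

def explicit_upwind_step(c, c_junction, c_tilde, v, dt, dx, k_r):
    """Explicit Upwind update for one pipe: c and c_tilde hold the
    chlorine and fictitious-reactant concentrations per segment, and
    c_junction is the upstream junction concentration, cf. (3)-(4)."""
    lam = v * dt / dx                    # Courant number lambda_i(t)
    assert 0.0 < lam <= 1.0, "CFL condition violated"
    react = -k_r * c * c_tilde * dt      # mutual reaction contribution, R * dt
    c_new = np.empty_like(c)
    c_new[0] = (1 - lam) * c[0] + lam * c_junction + react[0]  # eq. (4)
    c_new[1:] = (1 - lam) * c[1:] + lam * c[:-1] + react[1:]   # eq. (3)
    return c_new
```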
\tag{5b}\] #### Ii-B2 Mass Balance at Network Components For components other than pipes, conservation of mass is applied to formulate expressions for concentrations calculation. Fig. 2: Implicit and Explicit Upwind discretization schemes for Pipe \(i\) connecting Junctions 1 and 2. Each scheme calculates concentration \(c_{i}^{\mathrm{P}}(s,t+\Delta t)\) at segment \(s\) (colored in maroon) depending on concentrations at the segments/nodes included in its frame. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.) Mass Balance at ReservoirReservoirires are assumed to have constant concentrations. For each Reservoir \(i\) concentration is expressed as \(c_{i}^{\mathrm{R}}(t+\Delta t)=c_{i}^{\mathrm{R}}(t)\). Mass Balance at Pumps and ValvesThe model deals with pumps and valves as transmission links with concentration equals the concentration of the node upstream them. That being said, for Pump \(i\) or Valve \(j\) installed after Reservoir \(k\) (as an example), concentrations are expressed as \(c_{i}^{\mathrm{M}}(t+\Delta t)=c_{k}^{\mathrm{R}}(t+\Delta t),\) and \(c_{j}^{\mathrm{V}}(t+\Delta t)=c_{k}^{\mathrm{R}}(t+\Delta t).\) Mass Balance at JunctionsChemicals are assumed to have complete and instantaneous mixing in junctions with no storage time. Thus, chemical concentration at each Junction \(i\) is expressed as \[c_{i}^{\mathrm{J}}(t)=\frac{\sum_{j\in L_{\mathrm{in}}}q_{\mathrm{in}}^{j}(t) c_{i}^{\mathrm{j}}(t)+q_{i}^{\mathrm{B}_{\mathrm{J}}}(t)c_{i}^{\mathrm{B}_{ \mathrm{J}}}(t)}{q_{i}^{\mathrm{D}_{\mathrm{J}}}(t)+\sum_{k\in L_{\mathrm{out }}}q_{\mathrm{out}}^{k}(t)}, \tag{6}\] where \(j\) and \(k\) are the counters for total \(L_{\mathrm{in}}\) links flowing into the junction and \(L_{\mathrm{out}}\) links extracting flow from the junction; \(q_{\mathrm{in}}^{j}(t)\) and \(q_{\mathrm{out}}^{k}(t)\) are the inflows and outflows from these links connected to the junction; \(c_{\mathrm{in}}^{j}(t)\) is the concentration in the inflow solute; \(q_{i}^{\mathrm{B}_{\mathrm{J}}}(t)\) is the flow injected to the junction with concentration \(c_{i}^{\mathrm{D}_{\mathrm{J}}}(t)\) by booster station if located; and \(q_{i}^{\mathrm{D}_{\mathrm{J}}}(t)\) is demand. Mass Balance at TanksMass conservation in tanks assumes complete instantaneous mixing of all inflows, outflows, and stored water following the continuously stirred tank reactor (CSTR) model. \[\begin{split}& V_{i}^{\mathrm{TK}}(t+\Delta t)c_{i}^{\mathrm{ TK}}(t+\Delta t)=V_{i}^{\mathrm{TK}}(t)c_{i}^{\mathrm{TK}}(t)\\ &+\sum_{j\in L_{\mathrm{in}}}q_{\mathrm{in}}^{j}(t)c_{\mathrm{in }}^{j}(t)\Delta t+V_{i}^{\mathrm{Br_{\mathrm{TK}}}}(t+\Delta t)c_{i}^{\mathrm{ Br_{\mathrm{TK}}}}(t+\Delta t)\\ &-\sum_{k\in L_{\mathrm{out}}}q_{\mathrm{out}}^{k}(t)c_{i}^{ \mathrm{TK}}(t)\Delta t+R_{\mathrm{MS}}^{\mathrm{TK}}(c_{i}^{\mathrm{TK}}(t) )V_{i}^{\mathrm{TK}}(t)\Delta t,\end{split} \tag{7}\] where \(V_{i}^{\mathrm{Br_{\mathrm{TK}}}}(t+\Delta t)\) is the volume injected to the tank with concentration \(c_{i}^{\mathrm{Br_{\mathrm{TK}}}}(t+\Delta t)\) by booster station if located. \(R_{\mathrm{MS}}^{\mathrm{TK}}(c_{i}^{\mathrm{P}}(x,t))\) is the multi-species reaction rate in tanks expression (refer to Section II-3). 
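To make the pipe discretization concrete, the following minimal Python sketch implements one explicit Upwind update, Eqs. (3)-(4), for a single pipe. It is an illustration in our own (hypothetical) notation rather than the implementation of [42]; only the mutual-reaction part of \(R_{\mathrm{MS}}^{\mathrm{P}}\) (introduced in Section II-3) is included, and the first-order decay term is omitted for brevity.

```python
import numpy as np

def explicit_upwind_step(c_pipe, c_up, c2_pipe, v, dt, dx, k_r):
    """One explicit Upwind update, Eqs. (3)-(4), for a single pipe.

    c_pipe  : chlorine concentrations in the pipe segments at time t
    c_up    : concentration at the upstream junction at time t, Eq. (4)
    c2_pipe : fictitious-reactant concentrations (mutual reaction partner)
    """
    lam = v * dt / dx  # Courant number; the CFL condition requires 0 < lam <= 1
    assert 0.0 < lam <= 1.0, "CFL condition violated"
    react = -k_r * c_pipe * c2_pipe * dt  # mutual-reaction part of R_MS times dt
    c_new = np.empty_like(c_pipe)
    c_new[0] = (1 - lam) * c_pipe[0] + lam * c_up + react[0]            # Eq. (4)
    c_new[1:] = (1 - lam) * c_pipe[1:] + lam * c_pipe[:-1] + react[1:]  # Eq. (3)
    return c_new
```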
#### Ii-B3 Multi-species Reaction and Decay Model

Dividing the model into decay and mutual reaction dynamics allows it to consider a substance whose reaction rate differs considerably from the decay rate, and makes the model less sensitive to the other reactants' concentrations. The decay model is a first-order model that depends only on the chlorine concentration and a constant decay rate. Hence, the chlorine decay rate coefficients for Pipe \(i\) and Tank \(j\) are \(k_{i}^{\mathrm{P}}=k_{b}+\frac{2k_{w}k_{f}}{r_{\mathrm{P}_{i}}(k_{w}+k_{f})},\ k_{j}^{\mathrm{TK}}=k_{b}\), where \(k_{b}\) is the bulk reaction rate constant; \(k_{w}\) is the wall reaction rate constant; \(k_{f}\) is the mass transfer coefficient between the bulk flow and the pipe wall; and \(r_{\mathrm{P}_{i}}\) is the pipe radius. The mutual reaction model is expressed by second-order nonlinear ODEs, which are discretized using the Forward Euler method: \(c(t+\Delta t)-c(t)=-k_{r}\Delta t(c(t)\tilde{c}(t)),\ \tilde{c}(t+\Delta t)-\tilde{c}(t)=-k_{r}\Delta t(c(t)\tilde{c}(t))\), where \(c(t),\tilde{c}(t)\) are the concentrations of chlorine and the fictitious reactant, and \(k_{r}\) is the mutual reaction rate between them. Eventually, the reaction expressions for pipes and tanks are \[\begin{split}& R_{\mathrm{M}}^{\mathrm{P}}(c_{i}^{\mathrm{P}}(s,t))=-k_{r}c_{i}^{\mathrm{P}}(s,t)\tilde{c}_{i}^{\mathrm{P}}(s,t),\\ & R_{\mathrm{M}}^{\mathrm{P}}(\tilde{c}_{i}^{\mathrm{P}}(s,t))=-k_{r}c_{i}^{\mathrm{P}}(s,t)\tilde{c}_{i}^{\mathrm{P}}(s,t),\\ & R_{\mathrm{M}}^{\mathrm{TK}}(c_{j}^{\mathrm{TK}}(t))=-k_{r}c_{j}^{\mathrm{TK}}(t)\tilde{c}_{j}^{\mathrm{TK}}(t),\\ & R_{\mathrm{M}}^{\mathrm{TK}}(\tilde{c}_{j}^{\mathrm{TK}}(t))=-k_{r}c_{j}^{\mathrm{TK}}(t)\tilde{c}_{j}^{\mathrm{TK}}(t).\end{split} \tag{8}\] A full description of the state-space matrix construction for the Upwind discretization schemes, together with an example on a simple three-node network (consisting of a reservoir, a pump, a junction, a pipe, and a tank--Fig. 6), is included in [42] as a reference on how to formulate the representation for different network components. In the next section, we investigate different MOR algorithms for (1).

## III Model Order Reduction and Transformation of MS-WQM

The state-space representations formulated in the previous section are nonlinear difference equations (NDEs) (1) with a large number of variables resulting from the high-resolution spatio-temporal discretization. To reach the end goal of this paper, which is controlling chlorine levels for (1), we propose different methodologies to reduce the model order and showcase their limitations, accuracy, computational time, and robustness/sensitivity to initial conditions and fictitious reactant type. That being said, we give full descriptions of the methods covered in our framework. We start by linearizing (1), then explain model order reduction and transformation for the linearized and original nonlinear systems.

### _Model Linearization_

The mutual reaction is expressed as a nonlinear term that can be linearized using Taylor series approximations [44].
By linearizing around the operating points \(c_{o},\tilde{c}_{o}\), the nonlinear term \(R_{\mathrm{M}}(c(t),\tilde{c}(t))\) for both chemicals is expressed as: \[\begin{split} R_{\mathrm{M}}(c(t),\tilde{c}(t))&=-k_{r}(c_{o}\tilde{c}_{o}+c_{o}(\tilde{c}(t)-\tilde{c}_{o})+\tilde{c}_{o}(c(t)-c_{o})),\\ &=-k_{r}(c_{o}\tilde{c}_{o}+c_{o}\tilde{c}(t)-c_{o}\tilde{c}_{o}+\tilde{c}_{o}c(t)-\tilde{c}_{o}c_{o}),\\ &=-k_{r}(c_{o}\tilde{c}(t)+\tilde{c}_{o}c(t)-\tilde{c}_{o}c_{o}).\end{split} \tag{9}\] For each of the chemicals, the mutual reaction after linearization breaks down into a term that depends on its own concentration, a term that depends on the other chemical's concentration, and a constant. The general state-space representation (1) has a block-diagonal \(\mathbf{A}\) matrix with no dependency between the chemicals except through the \(\mathbf{f}\) function. That is, by applying the linearization to the model, the state-space representation is updated to the linear difference equations (LDEs): \[\underbrace{\begin{bmatrix}\mathbf{E}_{11}(t)&0\\ 0&\mathbf{E}_{22}(t)\end{bmatrix}}_{\mathbf{E}(t)}\underbrace{\begin{bmatrix}\mathbf{x}_{1}(t+\Delta t)\\ \mathbf{x}_{2}(t+\Delta t)\end{bmatrix}}_{\mathbf{x}(t+\Delta t)}=\underbrace{\begin{bmatrix}\bar{\mathbf{A}}_{11}(t)&\bar{\mathbf{A}}_{12}(t)\\ \bar{\mathbf{A}}_{21}(t)&\bar{\mathbf{A}}_{22}(t)\end{bmatrix}}_{\bar{\mathbf{A}}(t)}\underbrace{\begin{bmatrix}\mathbf{x}_{1}(t)\\ \mathbf{x}_{2}(t)\end{bmatrix}}_{\mathbf{x}(t)}+\underbrace{\begin{bmatrix}\mathbf{B}_{11}(t)&0\\ 0&\mathbf{B}_{22}(t)\end{bmatrix}}_{\mathbf{B}(t)}\underbrace{\begin{bmatrix}\mathbf{u}_{1}(t)\\ \mathbf{u}_{2}(t)\end{bmatrix}}_{\mathbf{u}(t)}+\mathbf{\Phi}, \tag{10}\] where \(\bar{\mathbf{A}}_{11}(t)\) and \(\bar{\mathbf{A}}_{22}(t)\) are the modified diagonal matrices; \(\bar{\mathbf{A}}_{12}(t)\) and \(\bar{\mathbf{A}}_{21}(t)\) are the matrices gathering the dependency between the two species' concentrations; and \(\mathbf{\Phi}\) is the vector containing the constants. Note that the changes in \(\bar{\mathbf{A}}_{11}(t)\) and \(\bar{\mathbf{A}}_{22}(t)\) from the original matrices affect only the submatrices/elements representing pipes and tanks (i.e., \(\mathbf{A}_{\rm P}^{\rm P}\) and \(\mathbf{A}_{\rm TK}^{\rm TK}\)).

### _Model Order Reduction and Transformation_

In our study, we investigate two SVD-based projection methods: POD and BPOD. The reason for not applying the BT method is that it has been proven computationally impractical for the linear water quality model [32]. Both POD and BPOD are applied to the linearized MS-WQM, while the POD method is extended to reduce the nonlinear model, where the nonlinearity is evaluated directly. Before explaining the detailed approach of the aforementioned methods, we start by explaining the general approach of SVD-based methods, in which a snapshot of the original space is taken. For the general nonlinear state-space representation (1), which can be concisely formulated as \[\begin{split}\mathbf{E}(t)\mathbf{x}(t+\Delta t)&=\mathbf{A}(t)\mathbf{x}(t)+\mathbf{B}(t)\mathbf{u}(t)+\mathbf{f}(\mathbf{x}(t)),\\ \mathbf{y}(t)&=\mathbf{C}(t)\mathbf{x}(t)+\mathbf{D}(t)\mathbf{u}(t),\end{split} \tag{11}\] the first step is to map the representation states \(\mathbf{x}\in\mathbb{R}^{n_{x}}\) to another state space \(\mathbf{w}\in\mathbb{R}^{n_{x}}\). This mapping aims to re-order the states according to their _influence_ on the preserved property.
Driven by the goal of applying a control algorithm to our model, we aim to capture the most controllable and observable snapshots of the original space. The transformation is performed by constructing a non-singular matrix \(\mathbf{V}\in\mathbb{R}^{n_{x}\times n_{x}}\) such that \(\mathbf{x}=\mathbf{V}\mathbf{w}\). That is, Eq. (11) is expressed in terms of \(\mathbf{w}\) as follows \[\begin{split}\mathbf{E}_{w}(t)\mathbf{w}(t+\Delta t)&=\mathbf{A}_{w}(t)\mathbf{w}(t)+\mathbf{B}_{w}(t)\mathbf{u}(t)+\mathbf{V}^{-1}\mathbf{f}(\mathbf{V}\mathbf{w}(t)),\\ \mathbf{y}_{w}(t)&=\mathbf{C}_{w}(t)\mathbf{w}(t)+\mathbf{D}(t)\mathbf{u}(t),\end{split} \tag{12}\] where \(\mathbf{E}_{w}=\mathbf{V}^{-1}\mathbf{E}\mathbf{V},\ \mathbf{A}_{w}=\mathbf{V}^{-1}\mathbf{A}\mathbf{V},\ \mathbf{B}_{w}=\mathbf{V}^{-1}\mathbf{B}\), and \(\mathbf{C}_{w}=\mathbf{C}\mathbf{V}\). Next, the reduced-order model is captured from the transformed mapping with a number of states \(n_{r}\ll n_{x}\), denoted by \(\mathbf{x}_{r}\in\mathbb{R}^{n_{r}}\). The state \(\mathbf{x}\) is approximated as \(\mathbf{V}_{r}\mathbf{x}_{r}\), where \(\mathbf{V}_{r}\) is the matrix comprised of the first \(n_{r}\) columns of \(\mathbf{V}\). Similarly, we define \(\mathbf{L}_{r}\) as the first \(n_{r}\) rows of \(\mathbf{V}^{-1}\). Finally, the reduced-order model is expressed as \[\begin{split}\mathbf{E}_{r}(t)\mathbf{x}_{r}(t+\Delta t)&=\mathbf{A}_{r}(t)\mathbf{x}_{r}(t)+\mathbf{B}_{r}(t)\mathbf{u}(t)+\mathbf{f}_{r}(\mathbf{x}_{r}(t)),\\ \overline{\mathbf{y}}_{r}(t)&=\mathbf{C}_{r}(t)\mathbf{x}_{r}(t)+\mathbf{D}(t)\mathbf{u}(t),\end{split} \tag{13}\] where \(\mathbf{E}_{r}=\mathbf{L}_{r}\mathbf{E}\mathbf{V}_{r},\ \mathbf{A}_{r}=\mathbf{L}_{r}\mathbf{A}\mathbf{V}_{r},\ \mathbf{B}_{r}=\mathbf{L}_{r}\mathbf{B}\), and \(\mathbf{C}_{r}=\mathbf{C}\mathbf{V}_{r}\). The choice of \(n_{r}\) can be made arbitrarily, as a fixed number, or so as to conserve a specified level of energy between the ROM and the FOM. The energy of a system is determined by the summation of its eigenvalues; hence, \(n_{r}\) can be chosen to keep a certain energy percentage of the FOM in the ROM [45]. However, we investigate choosing different values of \(n_{r}\) for each case study, where the energy preserved increases with larger \(n_{r}\). Additionally, the majority of MOR methods deal with original systems with zero initial conditions, which does not align with the nature of water quality dynamics. Study [32] dealt with that by treating the non-zero initial conditions network-wide as inputs to the system and setting \(\hat{\mathbf{x}}(t)=\mathbf{x}(t)-\mathbf{x}(0)\) in the original model. We follow the same approach, with further analysis for the nonlinear term of the mutual dynamics. The mutual reaction dynamics, as stated in Section II-3, take place in pipes and tanks. That is, the vector \(\mathbf{f}\) contains zeros except for the states of pipes' segments and tanks. We define \(\mathbf{x}_{\rm MS_{1}}(t):=\{c^{\rm TK}(t),\ c^{\rm P}(t)\}\) and \(\mathbf{x}_{\rm MS_{2}}(t):=\{\tilde{c}^{\rm TK}(t),\ \tilde{c}^{\rm P}(t)\}\). Accordingly, \[\mathbf{f}(\mathbf{x}_{\rm MS_{1}}(t),\mathbf{x}_{\rm MS_{2}}(t))=\mathbf{\alpha}\cdot\mathbf{x}_{\rm MS_{1}}(t)\cdot\mathbf{x}_{\rm MS_{2}}(t), \tag{14}\] where \(\mathbf{\alpha}:=\{\mathbf{\alpha}_{\rm TK},\ \mathbf{\alpha}_{\rm P}\}\); \(\alpha_{j}^{\rm TK}=-k_{r}\Delta t\frac{V_{j}^{\rm TK}(t)}{V_{j}^{\rm TK}(t+\Delta t)}\ \forall\ j=1,\ldots,n_{\rm TK}\); and \(\alpha_{l}^{\rm P}=-k_{r}\Delta t\ \forall\ l=1,\ldots,\sum_{i=1}^{n_{\rm P}}s_{L_{i}}\).
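For concreteness, the bilinear term (14) can be assembled as in the following minimal sketch; the function name, argument names, and the tanks-first stacking convention are our own illustrative assumptions.

```python
import numpy as np

def mutual_reaction_term(x_ms1, x_ms2, v_tk_t, v_tk_next, k_r, dt):
    """Assemble f of Eq. (14) for the tank and pipe-segment states.

    x_ms1, x_ms2 : stacked concentrations {tanks, pipe segments} of
                   chlorine and the fictitious reactant, respectively.
    v_tk_t, v_tk_next : tank volumes at times t and t + dt.
    """
    n_tk = len(v_tk_t)
    alpha = np.full_like(x_ms1, -k_r * dt)  # alpha_P entries (pipe segments)
    alpha[:n_tk] *= v_tk_t / v_tk_next      # alpha_TK entries (tanks)
    return alpha * x_ms1 * x_ms2            # elementwise products, Eq. (14)
```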
Henceforward, by setting \(\hat{\mathbf{x}}_{\rm MS_{1}}(t)=\mathbf{x}_{\rm MS_{1}}(t)-\mathbf{x}_{\rm MS_{1}}(0)\) and \(\hat{\mathbf{x}}_{\rm MS_{2}}(t)=\mathbf{x}_{\rm MS_{2}}(t)-\mathbf{x}_{\rm MS_{2}}(0)\) and substituting into (14), we get \[\begin{split}\mathbf{f}(\hat{\mathbf{x}}_{\rm MS_{1}}(t),\hat{\mathbf{x}}_{\rm MS_{2}}(t))&=\mathbf{\alpha}\cdot\hat{\mathbf{x}}_{\rm MS_{1}}(t)\cdot\hat{\mathbf{x}}_{\rm MS_{2}}(t)\\ &=\mathbf{\alpha}\cdot(\mathbf{x}_{\rm MS_{1}}(t)-\mathbf{x}_{\rm MS_{1}}(0))\cdot(\mathbf{x}_{\rm MS_{2}}(t)-\mathbf{x}_{\rm MS_{2}}(0))\\ &=\mathbf{\alpha}\cdot(\mathbf{x}_{\rm MS_{1}}(t)\cdot\mathbf{x}_{\rm MS_{2}}(t)-\mathbf{x}_{\rm MS_{2}}(0)\cdot\mathbf{x}_{\rm MS_{1}}(t)\\ &\quad\quad-\mathbf{x}_{\rm MS_{1}}(0)\cdot\mathbf{x}_{\rm MS_{2}}(t)+\mathbf{x}_{\rm MS_{1}}(0)\cdot\mathbf{x}_{\rm MS_{2}}(0)),\end{split} \tag{15}\] which shows that \(\mathbf{f}(\hat{\mathbf{x}}_{\rm MS_{1}}(t),\hat{\mathbf{x}}_{\rm MS_{2}}(t))\) can be utilized by updating \(\mathbf{A}(t)\) in the original model to absorb the linear cross terms \(-\mathbf{x}_{\rm MS_{2}}(0)\cdot\mathbf{x}_{\rm MS_{1}}(t)\) and \(-\mathbf{x}_{\rm MS_{1}}(0)\cdot\mathbf{x}_{\rm MS_{2}}(t)\) for pipes and tanks, while the constant term \(\mathbf{x}_{\rm MS_{1}}(0)\cdot\mathbf{x}_{\rm MS_{2}}(0)\) encapsulates the nonlinear term at the initial concentrations. Subsequently, the full-order model is formulated as \[\begin{split}\mathbf{E}(t)\hat{\mathbf{x}}(t+\Delta t)&=\hat{\mathbf{A}}(t)\hat{\mathbf{x}}(t)+\hat{\mathbf{B}}(t)\hat{\mathbf{u}}(t)+\mathbf{f}(\hat{\mathbf{x}}(t)),\\ \mathbf{y}(t)&=\mathbf{C}(t)\hat{\mathbf{x}}(t)+\hat{\mathbf{D}}(t)\hat{\mathbf{u}}(t),\end{split} \tag{16}\] where \(\hat{\mathbf{A}}(t)=\begin{bmatrix}\mathbf{A}_{11}(t)&\hat{\mathbf{A}}_{12}(t)\\ \hat{\mathbf{A}}_{21}(t)&\mathbf{A}_{22}(t)\end{bmatrix}\), \(\hat{\mathbf{B}}(t)=[\mathbf{B}(t)\quad\mathbf{A}(t)\mathbf{x}(0)]\), \(\hat{\mathbf{D}}(t)=[\mathbf{D}(t)\quad\mathbf{C}(t)\mathbf{x}(0)]\), and \(\hat{\mathbf{u}}(t)=[\mathbf{u}^{\top}(t)\quad\mathbf{1}^{\top}]^{\top}\).

Figure 3: (a) Linear and (b) nonlinear MOR methods configuration.

On the other hand, for the linearized full-order model in (10), the same approach as in [32] is followed, and the final model is formulated as \[\begin{split}\mathbf{E}(t)\hat{\mathbf{x}}(t+\Delta t)&=\hat{\mathbf{A}}(t)\hat{\mathbf{x}}(t)+\hat{\mathbf{B}}(t)\hat{\mathbf{u}}(t)+\mathbf{\Phi},\\ \mathbf{y}(t)&=\mathbf{C}(t)\hat{\mathbf{x}}(t)+\hat{\mathbf{D}}(t)\hat{\mathbf{u}}(t),\end{split} \tag{17}\] where \(\hat{\mathbf{A}}(t)=\bar{\mathbf{A}}(t)\), \(\hat{\mathbf{B}}(t)=[\mathbf{B}(t)\quad\bar{\mathbf{A}}(t)\mathbf{x}(0)]\), \(\hat{\mathbf{D}}(t)=[\mathbf{D}(t)\quad\mathbf{C}(t)\mathbf{x}(0)]\), and \(\hat{\mathbf{u}}(t)=[\mathbf{u}^{\top}(t)\quad\mathbf{1}^{\top}]^{\top}\). Lastly, we judge the performance of the MOR methods by calculating the root-mean-square error (RMSE) metric, \[\text{RMSE}=\sqrt{\frac{1}{N_{p}}\sum_{j=1}^{N_{p}}||\mathbf{y}(j)-\overline{\mathbf{y}}(j)||_{2}^{2}}. \tag{18}\] The error is calculated over a specific simulation period of \(N_{p}\) time-steps, during which we apply the same system inputs \(\mathbf{u}(j)\) to the two models. In the following sections, we give a full description of the utilized methods. We start by applying POD and BPOD to the linearized formulation of the system, followed by integrating and handling the nonlinearity in the original representation of the system (Eq. (1) for the case of zero initial conditions and Eq. (16) for the case of non-zero initial conditions). The basic and the balanced POD methods are considered data-driven SVD methods.
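Before detailing these methods, the RMSE comparison (18) between the FOM and ROM outputs can be sketched as follows; this is a minimal illustration, and the array names are our own.

```python
import numpy as np

def rmse(y_fom, y_rom):
    """RMSE of Eq. (18): y_fom and y_rom hold one output vector per row,
    i.e., shape (N_p, n_y), produced under the same inputs u(j)."""
    n_p = y_fom.shape[0]
    return np.sqrt(np.sum(np.linalg.norm(y_fom - y_rom, axis=1) ** 2) / n_p)
```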
The main idea is to build empirical Gramians based on snapshots of the original system. These empirical Gramians avoid solving complicated, and in many cases intractable, Lyapunov equations. The POD method relies on constructing the controllability Gramian, while BPOD constructs finite-horizon controllability and observability Gramians. Notably, the POD method favors highly controllable states over highly observable but less controllable ones, which BPOD averts by reflecting observability in the captured snapshot. It is important to highlight that in our system the concepts of _controllability_ and _observability_ for the two chemicals differ in what they reflect. While the input vector \(\mathbf{u}_{1}(t)\) depicts chlorine injections into the system by source or rechlorination stations, the vector \(\mathbf{u}_{2}(t)\) enables simulating the intrusion of the contaminant into the system [46]. Henceforward, controllability for the second chemical indicates which network components get exposed to/affected by the contamination event. On the other hand, water quality sensors are typically located to measure chlorine levels, and from here comes the abstract concept of the system being observable through water quality measurements. This is a main reason why monitoring chlorine is a solid proxy for the water quality state in a specific network. However, no sensors are placed for contaminants, given their wide variety, so their observability is only reflected in chlorine levels and is not quantifiable in the matrix \(\mathbf{C}_{22}\) (i.e., a zero matrix). That puts a limitation on applying the BPOD method for that chemical, as it would otherwise be overlooked. In Section III-B2 we propose a special approach to solve this issue. In addition, with no output measured for that chemical, the RMSE metric in Eq. (18) only measures the error for chlorine. In fact, the main purpose of this work is to control and monitor chlorine under contamination events. To that end, it is valid to focus on the output of measuring its concentrations, which accurately represent the real-time state. Nevertheless, for the sake of evaluating the performance of the applied MOR methods, we assume the existence of _"imaginary"_ sensors on some specific nodes for the fictitious reactant, purely to calculate the error. In the following subsections, we explain what snapshots each method captures and how to construct these Gramians correspondingly. We apply both the POD and BPOD methods to the linearized model and extend POD to the nonlinear model.

#### Iii-B1 Proper Orthogonal Decomposition (POD)

This method captures a snapshot matrix \(\mathbf{X}_{m}\) that is built over a specific number of steps \(m\) by concatenating the state vectors into \[\mathbf{X}_{m}=[\mathbf{x}(0)\ \ \mathbf{x}(1)\ \ldots\ \ \mathbf{x}(m-1)], \tag{19}\] where \(\mathbf{X}_{m}\in\mathbb{R}^{n_{x}\times m}\). The approximate \(m\)-step controllability Gramian \(\mathbf{W}_{C_{m}}\) is defined as \(\mathbf{X}_{m}\mathbf{X}_{m}^{\top}\in\mathbb{R}^{n_{x}\times n_{x}}\). Next, we apply the eigenvalue decomposition (ED) \(\mathbf{W}_{C_{m}}\mathbf{V}=\mathbf{V}\mathbf{\Lambda}\) and obtain \(\mathbf{V}\), whose columns are the corresponding eigenvectors. However, in many cases, applying the ED to an \(n_{x}\times n_{x}\) matrix with large \(n_{x}\) is taxing. This can be avoided in cases of \(m\ll n_{x}\) by constructing \(\mathbf{\widetilde{W}}_{C_{m}}=\mathbf{X}_{m}^{\top}\mathbf{X}_{m}\in\mathbb{R}^{m\times m}\).
Accordingly, performing the eigenvalue decomposition becomes much cheaper [47]. In this case, the ED is formulated as \(\mathbf{\widetilde{W}}_{C_{m}}\mathbf{Q}=\mathbf{Q}\mathbf{\Lambda}\), where \(\mathbf{\Lambda}\) is the diagonal matrix of eigenvalues and the matrix \(\mathbf{Q}\) is assembled with the eigenvectors as columns. The transformation matrix is then calculated as \(\mathbf{V}=\mathbf{X}_{m}\mathbf{Q}\mathbf{\Lambda}^{-\frac{1}{2}}\). For a detailed step-by-step depiction of the POD method, follow Procedure 1. This procedure is followed for both chemicals.

```
1 Construct snapshot \(\mathbf{X}_{m}\) as in (19)
2 if \(n_{x}\ll m\) then
3   Calculate \(\mathbf{W}_{C_{m}}=\mathbf{X}_{m}\mathbf{X}_{m}^{\top}\)
4   Obtain the transformation matrix \(\mathbf{V}\) by applying the eigenvalue decomposition \(\mathbf{W}_{C_{m}}\mathbf{V}=\mathbf{V}\mathbf{\Lambda}\)
5 else
6   Calculate \(\mathbf{\widetilde{W}}_{C_{m}}=\mathbf{X}_{m}^{\top}\mathbf{X}_{m}\)
7   Obtain the eigenvector and eigenvalue matrices of \(\mathbf{\widetilde{W}}_{C_{m}}\): \(\mathbf{Q}\) and \(\mathbf{\Lambda}\)
8   Calculate the transformation matrix as \(\mathbf{V}=\mathbf{X}_{m}\mathbf{Q}\mathbf{\Lambda}^{-\frac{1}{2}}\)
9 end if
10 Specify \(n_{r}\)
11 Define \(\mathbf{V}_{r}\) as the first \(n_{r}\) columns of \(\mathbf{V}\)
12 Define \(\mathbf{L}_{r}\) as the first \(n_{r}\) rows of \(\mathbf{V}^{-1}\)
13 Calculate \(\mathbf{E}_{r},\mathbf{A}_{r},\mathbf{B}_{r}\), and \(\mathbf{C}_{r}\)
14 if the FOM is nonlinear then
15   Follow Procedure 2
16 end if
```
**Procedure 1:** POD for the general MS-WQM

Mapping and Integrating the Nonlinearity: The idea behind reducing the linear term(s) and the nonlinear term(s) separately is to be able to capture the behavior of the latter while working in a subspace of the original system (i.e., \(\mathbb{R}^{n_{r}}\) instead of \(\mathbb{R}^{n_{x}}\)). In Eq. (13), following the projection of the whole system, the nonlinear term is expressed as \(\mathbf{f}_{r}=\mathbf{L}_{r}\mathbf{f}(\mathbf{V}_{r}\mathbf{x}_{r}(t))\). Yet, the computational complexity of the nonlinear term still depends on \(n_{x}\): \[\mathbf{f}_{r}=\underbrace{\mathbf{L}_{r}}_{n_{r}\times n_{x}}\underbrace{\mathbf{f}(\mathbf{V}_{r}\mathbf{x}_{r}(t))}_{n_{x}\times 1}.\] Henceforward, it is proposed to reduce the nonlinear term based on an approximation through a hyper-reduction approach. The approach is to measure not the full set of state-space variables but particular points, and to reconstruct the nonlinear term by interpolation around these points. In our study, we specify the number of these points to be \(n_{r}\): \[\mathbf{f}_{r}=\underbrace{\mathbf{L}_{r}\mathbf{U}_{f_{r}}}_{n_{r}\times n_{r}}\underbrace{\hat{\mathbf{f}}(t)}_{n_{r}\times 1}.\] The goal is to project \(\mathbf{f}(\mathbf{V}_{r}\mathbf{x}_{r}(t))\) onto \(\mathbf{U}_{f_{r}}\) so that \(\mathbf{f}(\mathbf{V}_{r}\mathbf{x}_{r}(t))\approx\mathbf{U}_{f_{r}}\hat{\mathbf{f}}(t)\) and \(\mathbf{L}_{r}\mathbf{U}_{f_{r}}\) can be pre-computed offline. This approach is called the _"Gappy method"_ of _Galerkin projection_, and the _Discrete Empirical Interpolation Method (DEIM)_ is used to reconstruct the nonlinear vector by interpolation. We adopt a _Greedy sampling algorithm_ to construct the measurement matrix that selects the entries used.
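Before detailing the greedy selection (Procedure 2 below), the method-of-snapshots branch of Procedure 1 and the projection (13) can be sketched numerically as follows. This is a minimal illustration with our own function names; since the snapshot basis \(\mathbf{V}\) is rectangular here, its pseudo-inverse stands in for \(\mathbf{V}^{-1}\) (for POD the columns of \(\mathbf{V}\) are orthonormal, so the pseudo-inverse reduces to \(\mathbf{V}^{\top}\)).

```python
import numpy as np

def pod_basis(X_m, n_r):
    """Method-of-snapshots branch of Procedure 1 (case m << n_x)."""
    W = X_m.T @ X_m                        # m-by-m Gramian, cheap when m << n_x
    lam, Q = np.linalg.eigh(W)             # ED: W Q = Q Lambda (ascending order)
    order = np.argsort(lam)[::-1]          # re-order modes by decreasing energy
    lam = np.clip(lam[order], 1e-12, None) # guard against tiny/negative eigenvalues
    Q = Q[:, order]
    V = X_m @ Q @ np.diag(lam ** -0.5)     # V = X_m Q Lambda^{-1/2}
    V_r = V[:, :n_r]                       # first n_r columns of V
    L_r = np.linalg.pinv(V)[:n_r, :]       # first n_r rows of the (pseudo-)inverse
    return V_r, L_r

def project_system(E, A, B, C, V_r, L_r):
    """Reduced matrices of Eq. (13)."""
    return L_r @ E @ V_r, L_r @ A @ V_r, L_r @ B, C @ V_r
```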
We start by stacking a numerical snapshot \(\mathbf{F}_{m}\) of the nonlinear term only, \[\mathbf{F}_{m}=[\mathbf{f}(\mathbf{x}(0))\ \ \mathbf{f}(\mathbf{x}(1))\ \ \ldots\ \ \mathbf{f}(\mathbf{x}(m-1))], \tag{20}\] followed by performing a _separate_ SVD on that snapshot, \(\mathbf{F}_{m}=\mathbf{U}_{f}\mathbf{\Sigma}_{F}\mathbf{Q}_{f}^{\top}\). The next step is to define a rank-\(n_{r}\) approximating basis \(\mathbf{U}_{f_{r}}\) as the first \(n_{r}\) columns of \(\mathbf{U}_{f}\). Next, we construct the measurement matrix \(\mathbf{K}\) by applying the Greedy sampling algorithm, as summarized in Procedure 2. As shown in Fig. 4, the Greedy sampling algorithm starts by choosing the index with the maximum value in the first mode \(\mathbf{u}_{1}\) and making it the first measurement location. In the second iteration and the succeeding ones, we compute the residual to evaluate how the current measurement subspace projects onto the next mode and decide on the next measurement point. The reason for choosing the measurement with the maximum residual is that the modes are no longer orthogonal in the support space; hence, we calculate the residuals and locate the index with the maximum residual.

```
1 Capture \(\mathbf{F}_{m}\) as in (20)
2 Perform the SVD \(\mathbf{F}_{m}=\mathbf{U}_{f}\mathbf{\Sigma}_{F}\mathbf{Q}_{f}^{\top}\)
3 Construct \(\mathbf{U}_{f_{r}}\) as the first \(n_{r}\) columns of \(\mathbf{U}_{f}\)
4 Start the Greedy sampling algorithm for selecting the indices (entries of \(\mathbf{f}\))
  Input: \(\mathbf{U}_{f_{r}}=[\mathbf{u}_{1}\ \ldots\ \mathbf{u}_{n_{r}}]\)
  Output: \(\mathbf{\mathcal{I}}=[i_{1}\ \ldots\ i_{n_{r}}]\) and \(\mathbf{K}=[\mathbf{e}_{i_{1}}\ \ldots\ \mathbf{e}_{i_{n_{r}}}]\)
5 \([s_{1},i_{1}]=\max\{|\mathbf{u}_{1}|\}\)
6 \(\mathbf{U}_{f_{r}}=[\mathbf{u}_{1}],\ \mathbf{K}=[\mathbf{e}_{i_{1}}]\)
7 for \(I=2:n_{r}\) do
8   solve \(\mathbf{K}^{\top}\mathbf{U}_{f_{r}}\mathbf{b}=\mathbf{K}^{\top}\mathbf{u}_{I}\) for \(\mathbf{b}\)
9   \(\mathbf{q}=\mathbf{u}_{I}-\mathbf{U}_{f_{r}}\mathbf{b}\)
10  \([s_{I},i_{I}]=\max\{|\mathbf{q}|\}\)
11  \(\mathbf{U}_{f_{r}}=[\mathbf{U}_{f_{r}},\ \mathbf{u}_{I}],\ \mathbf{K}=[\mathbf{K},\ \mathbf{e}_{i_{I}}]\)
12 end for
13 Conclude \(\hat{\mathbf{f}}(t)=(\mathbf{K}^{\top}\mathbf{U}_{f_{r}})^{-1}\mathbf{f}(\mathbf{K}^{\top}\mathbf{V}_{r}\mathbf{x}_{r}(t))\)
```
**Procedure 2:** Nonlinearity handling in MOR

#### Iii-B2 Balanced Proper Orthogonal Decomposition (BPOD)

The advance of the BPOD method is the reflection of both controllability and observability in ranking the states, unlike POD. This is attained by constructing two snapshots of the system: \(\widetilde{\mathbf{X}}_{m}\), which captures the impulse responses when applying an impulse signal as the system input (i.e., \(u_{i}(m)=\gamma(m)\)), and \(\mathbf{P}_{m}\), which is assembled from the states \(\mathbf{p}(t)\) obtained from the adjoint system with an impulse response in the measurements as the system's output. For the linearized model in (17), the adjoint system can be expressed as follows: \[\mathbf{p}(t+\Delta t)=\mathbf{\bar{A}}^{\top}(t)(\mathbf{E}^{-1}(t))^{\top}\mathbf{p}(t)+\mathbf{C}^{\top}(t)\mathbf{y}(t)+\mathbf{E}^{-1}(t)\mathbf{\Phi}. \tag{21}\] The next step is performing a singular value decomposition (SVD) of the block Hankel matrix \(\mathbf{H}_{m}=\mathbf{P}_{m}^{\top}\widetilde{\mathbf{X}}_{m}=\mathbf{U}\mathbf{\Sigma}\mathbf{Q}^{\top}\), then specifying \(n_{r}\) to collect the largest \(n_{r}\) singular values in \(\mathbf{\Sigma}\) and obtain the corresponding left and right singular vectors (i.e., \(\mathbf{U}_{r}\) and \(\mathbf{Q}_{r}\)). Accordingly, \(\mathbf{V}_{r}\) and \(\mathbf{L}_{r}\) are calculated as \[\mathbf{V}_{r}=\widetilde{\mathbf{X}}_{m}\mathbf{Q}_{r}\mathbf{\Sigma}_{r}^{-\frac{1}{2}},\ \ \mathbf{L}_{r}=\mathbf{\Sigma}_{r}^{-\frac{1}{2}}\mathbf{U}_{r}^{\top}\mathbf{P}_{m}^{\top} \tag{22}\] This approach is applicable for chlorine, with sensors placed to measure its levels. For the fictitious reactant representing the contaminant, the matrix \(\mathbf{C}_{22}\) is a zero matrix, representing non-sensed variables in our system. To solve this issue, we adopt the assumption that the contamination event is detected and the source location is determined. This is considered a valid assumption in water quality monitoring, where conventional WQ sensors are used to work backward in detecting, classifying, and quantifying contamination [48]. This is different from the _"imaginary"_ sensors mentioned above, which are used only when calculating the error to evaluate the performance of the applied methods.

Fig. 4: An illustrative example of applying the Greedy sampling algorithm to construct the measurement matrix \(\mathbf{K}\) for the case of \(n_{r}=5\).

Another advance of BPOD is the ability to stabilize it by choosing the length of the snapshots [32] to be large enough to approximate the actual infinite-horizon Gramians. We adopt an _a priori stabilization_ method to ensure that the snapshot captures the chemicals' evolution from the time they are injected into the system until they are observed by the furthest sensor. This is fulfilled by assembling the snapshots over a period exceeding \(\underline{m}=\max\left(\left\lceil\frac{T_{BS}}{\Delta t}\right\rceil\right)=\max\left(\left\lceil\sum\frac{L_{i}^{BS}}{v_{i}^{BS}\Delta t}\right\rceil\right)\), where \(L_{i}^{BS}\) and \(v_{i}^{BS}\) are the lengths and velocities of the pipes the chemical travels through from a booster station to the furthest sensor. With multiple booster stations and sensors within the simulation period, \(\underline{m}\) is taken as the length corresponding to the maximum travel time \(T_{BS}\). Accordingly, this method is affected by the actuators' and sensors' locations along the network. Lastly, Procedure 3 summarizes all the steps needed for a linear(ized) WQ model.

```
1 Obtain the snapshot length \(m=\underline{m}\)
2 Construct snapshots \(\widetilde{\mathbf{X}}_{m}\) and \(\mathbf{P}_{m}\)
3 Construct the block Hankel matrix \(\mathbf{H}_{m}=\mathbf{P}_{m}^{\top}\widetilde{\mathbf{X}}_{m}\)
4 Perform the SVD \(\mathbf{H}_{m}=\mathbf{U}\mathbf{\Sigma}\mathbf{Q}^{\top}\)
5 Specify \(n_{r}\)
6 Obtain \(\mathbf{U}_{r},\mathbf{\Sigma}_{r}\), and \(\mathbf{Q}_{r}\)
7 Calculate \(\mathbf{V}_{r}\) and \(\mathbf{L}_{r}\) via (22)
8 Calculate \(\mathbf{E}_{r},\mathbf{A}_{r},\mathbf{B}_{r}\), and \(\mathbf{C}_{r}\)
```
**Procedure 3:** BPOD for a linear(ized) WQ model

...the maximum allowed one stated by the EPA. In such a case, we specify \(x_{2}^{\max}\) to be equal to this detected initial concentration, to tighten the overestimators' envelope, while setting the minimum equal to zero. Eventually, the problem formulation explained for the linearized model can be adopted with these modifications.
First, a new variable vector \(\mathbf{z}(t)\) is introduced, and it replaces \(\mathbf{f}(\mathbf{x}_{1},\mathbf{x}_{2},t)\) in (1a). Additionally, the total number of constraints added to the optimization problem via (24) equals \(4(n_{\mathrm{TK}}+\sum_{i=1}^{n_{\mathrm{P}}}s_{L_{i}})\), as the nonlinear term is defined for pipes' segments and tanks and is the same for both chemicals at the same element (refer to Eq. (8)). To that end, the WQC problem described in (23) is modified as follows: \[\underset{\mathbf{x}(t),\mathbf{u}_{1}(t),\mathbf{z}(t)}{\text{minimize}}\quad\mathcal{J}(\mathbf{u}_{1}(t))=\mu\sum_{t=1}^{N_{p}}\mathbf{q}^{\mathrm{B}}(t)^{\top}\mathbf{u}_{1}(t) \tag{25a}\] \[\text{subject to}\quad\text{MS-WQM (1)},\quad\mathbf{u}_{1}^{\min}\leq\mathbf{u}_{1}(t)\leq\mathbf{u}_{1}^{\max},\quad\text{McCormick envelopes (26)}. \tag{25b}\] The next step is transforming (25) into a linear augmented formulation, based on which the final WQC-QP is built. First, by introducing \(\mathbf{z}(t)\) into (1a), the state-space representation is updated as \[\mathbf{x}(t+1)=\mathbf{A}(t)\mathbf{x}(t)+\mathbf{B}(t)\mathbf{u}(t)+\beta\mathbf{z}(t), \tag{27}\] where \(\beta=-k_{r}\). Then, we define the changes in the states and inputs as follows: \[\Delta\mathbf{x}(t+1)=\mathbf{x}(t+1)-\mathbf{x}(t),\ \ \Delta\mathbf{u}(t+1)=\mathbf{u}(t+1)-\mathbf{u}(t),\ \ \Delta\mathbf{z}(t+1)=\mathbf{z}(t+1)-\mathbf{z}(t).\] To concatenate these rates of change in (27), \(\Delta\mathbf{z}\) is appended to the vector of the system's decision inputs, to be optimally chosen within the envelopes defined by (26). Eventually, we reach the augmented state-space representation in (28): \[\underbrace{\begin{bmatrix}\Delta\mathbf{x}(t+1)\\ \mathbf{y}(t+1)\end{bmatrix}}_{\mathbf{x}_{a}(t+1)}=\underbrace{\begin{bmatrix}\mathbf{A}(t)&\mathbf{0}\\ \mathbf{C}(t)\mathbf{A}(t)&\mathbf{I}\end{bmatrix}}_{\mathbf{\Phi}_{a}}\underbrace{\begin{bmatrix}\Delta\mathbf{x}(t)\\ \mathbf{y}(t)\end{bmatrix}}_{\mathbf{x}_{a}(t)}+\underbrace{\begin{bmatrix}\mathbf{B}(t)&\beta\\ \mathbf{C}(t)\mathbf{B}(t)&\beta\mathbf{C}(t)\end{bmatrix}}_{\mathbf{\Gamma}_{a}}\underbrace{\begin{bmatrix}\Delta\mathbf{u}(t)\\ \Delta\mathbf{z}(t)\end{bmatrix}}_{\Delta\mathbf{u}_{a}(t)} \tag{28}\] This augmented representation can be abstractly rewritten as \(\mathbf{x}_{a}(t+1)=\mathbf{\Phi}_{a}\mathbf{x}_{a}(t)+\mathbf{\Gamma}_{a}\Delta\mathbf{u}_{a}(t)\). To avoid redundancy, integrating this equality into the WQC-MPC formulation follows the same approach as in [32], reaching the final QP [32, Eq. (38)]. On another note, the added constraints expressed in (26) are incorporated into the constraints on the optimization variables.

### _Generalized Comprehensive Water Quality Modeling and Control Framework_

In our study, we have covered model order reduction and control for multi-species water quality dynamics in which chlorine reacts with another source of contamination in the form of a bilinear expression--refer to Section II-3. As mentioned, this has also been covered for the single-species model in [32]. However, there are other formulations for single-species and multi-species chlorine bulk decay and reaction dynamics, as listed in [42]. We include a short list of these formulations in Tab. I; for more details and descriptions, refer to the aforementioned study.
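Returning to the relaxation used in (25): since the exact envelope constraints (24)/(26) are not reproduced here, the sketch below gives the four standard McCormick inequalities for a single bilinear entry \(z=x_{1}x_{2}\) under box bounds. It is our own illustration; as discussed above, tightening `x2_hi` to the detected intrusion concentration shrinks the envelope.

```python
def mccormick_rows(x1_lo, x1_hi, x2_lo, x2_hi):
    """Standard McCormick envelope for z = x1 * x2 with
    x1 in [x1_lo, x1_hi] and x2 in [x2_lo, x2_hi].

    Each returned callable g satisfies g(x1, x2, z) <= 0; four such
    constraints are added per pipe segment/tank, matching the count
    4*(n_TK + sum_i s_Li) stated above.
    """
    return [
        # underestimators: z >= ...
        lambda x1, x2, z: x1_lo * x2 + x2_lo * x1 - x1_lo * x2_lo - z,
        lambda x1, x2, z: x1_hi * x2 + x2_hi * x1 - x1_hi * x2_hi - z,
        # overestimators: z <= ...
        lambda x1, x2, z: z - x1_hi * x2 - x2_lo * x1 + x1_hi * x2_lo,
        lambda x1, x2, z: z - x1_lo * x2 - x2_hi * x1 + x1_lo * x2_hi,
    ]
```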
The following generalized framework, described in Algorithm 1, maps out the methods adopted in this study as applied to the different decay and reaction models. For the first-order, first-order with stable component, and parallel first-order models (M-1, M-2, and M-3), the dynamics are linear and accordingly follow the procedure of the linearized model presented in our study. The second-order with multiple components model (M-8) follows the same formulation as the second-order with fictitious component model (M-7) covered in this paper, except that the number of states is multiplied by the number of reactants in the system; that is, model order reduction becomes even more needed for the M-8 model. On the other hand, the parallel second-order model (M-4) is a special form of the second-order with fictitious component model. Lastly, the \(\mathrm{n}^{\mathrm{th}}\)-order models without and with a stable component are higher-order models that can either be reduced as nonlinear models or be transformed into a quadratic approximation with piecewise linear relaxation applied.

\begin{table}
\begin{tabular}{l|l|l|l|l}
\hline\hline
\(\mathbf{M}\) & Model & Model formulation & \#States & \(\mathrm{L/NL}^{*}\) \\
\hline
M-1 & First-order & \(\frac{dc}{dt}=-kc(t)\) & \(n_{x}\) & L \\
\hline
M-2 & First-order with stable component & \(\frac{dc}{dt}=-k(c(t)-c_{\mathrm{L}})\) & \(n_{x}\) & L \\
\hline
M-3 & Parallel first-order & \(\frac{dc_{1}}{dt}=-k_{1}c_{1}(t)\), \(\frac{dc_{2}}{dt}=-k_{2}c_{2}(t)\), \(c(t)=c_{1}(t)+c_{2}(t)\) & \(2n_{x}\) & L \\
\hline
M-4 & Parallel second-order & \(\frac{dc_{1}}{dt}=-k_{1}c_{1}(t)\tilde{c}(t)\), \(\frac{dc_{2}}{dt}=-k_{2}c_{2}(t)\tilde{c}(t)\) & \(2n_{x}\) & NL \\
\hline
M-5 & \(\mathrm{n}^{\mathrm{th}}\)-order & \(\frac{dc}{dt}=-kc^{n}(t)\) & \(n_{x}\) & NL \\
\hline
M-6 & \(\mathrm{n}^{\mathrm{th}}\)-order with stable component & \(\frac{dc}{dt}=-k(c(t)-c_{\mathrm{L}})c^{(n-1)}(t)\) & \(n_{x}\) & NL \\
\hline
M-7 & Second-order with fictitious component & \(\frac{dc}{dt}=-kc(t)\tilde{c}(t)\), \(\frac{d\tilde{c}}{dt}=-kc(t)\tilde{c}(t)\) & \(2n_{x}\) & NL \\
\hline
M-8 & Second-order with multiple components & \(\frac{dc}{dt}=-\sum_{i}k_{i}c(t)\tilde{c}_{i}(t)\), \(\frac{d\tilde{c}_{i}}{dt}=-k_{i}c(t)\tilde{c}_{i}(t)\) & \(In_{x}\) & NL \\
\hline\hline
\end{tabular}
* \({}^{*}\): L: Linear or NL: Nonlinear model expression.
\end{table}
Table I: Chlorine bulk decay and reaction model expressions

```
Input: WDN topology, components' characteristics, and hydraulic parameters
Output: Real-time water quality states \(\mathbf{x}(t)\) and control inputs \(\mathbf{u}(t)\) at time \(t\) of a simulation period \(T_{s}\)
1 Initialization
2 Define \(\Delta t\) and the number of segments \(s_{i}\) for each pipe, and accordingly \(n_{x}\)
3 Formulate the WQ state-space representation (1) as explained in Section II and according to the reaction dynamics in Tab. I
4 Procedure
5 if applying the M-1/M-2/M-3 reaction model then
6   Follow Procedure 3 to obtain the ROM
7   Apply constrained real-time WQ-MPC on (23)
8 else if applying the M-4/M-7/M-8 reaction model then
9   if following Procedure 1 then
10    Apply the McCormick relaxation via (24)
11    Apply constrained real-time WQ-MPC on (25)
12  else
13    Linearize and follow Procedure 3
14    Apply constrained real-time WQ-MPC on (23)
15  end if
16 else if applying M-5/M-6 then
17  Follow Procedure 1 to obtain the ROM
18  Transform into a quadratic approximation/apply piecewise linear relaxation
19  Apply constrained real-time WQ-MPC on (23)
20 end if
```
**Algorithm 1** Generalized water quality modeling and control framework

## V Case Studies

This section demonstrates the proposed framework for model order reduction and control of the MS-WQM. Particularly, we attempt to answer the following questions:

* _Q1:_ How close is the linearized model to the nonlinear MS-WQM? How can operating points be chosen effectively?
* _Q2:_ How effective are the proposed model order reduction procedures in terms of accuracy and computational time when applied to the MS-WQM?
* _Q3:_ How sensitive is the performance of the MOR and control algorithms to the discretization methods and the system's hydraulics?
* _Q4:_ How reliable and robust is model predictive control when applied to control chlorine levels under multi-species dynamics?

Numerical studies in this section are performed on three different networks: the three-node, Net1, and FFCL-1 networks [51]. As shown in Fig. 6, each of the networks has a different topology and scale. The three-node network is a self-designed network that helps provide simple illustrations of the different approaches throughout our framework implementation. Net1 includes different types of network components and has a looped layout. The FFCL-1 network is based on the Fairfield, CA, USA water distribution system, on which we test the scalability of our framework and its performance with scattered dead-ends. Fig. 7 illustrates and lists each of the networks' components. In addition to the components listed for each of the test networks in Fig. 6, each network has a different number of sensors and booster stations. The three-node network has one booster station at Junction J1 and one sensor at Tank TK1. Net1 has two stations at Junctions 1 and 6 and sensors at Junctions J4 and J9. Lastly, the controlled region of the FFCL-1 network has two sensors at Junctions J56 and J67 and one rechlorination station at J89. It is worth mentioning that for any WDN, the system dimension depends on the hydraulic and water quality simulation time-steps, which accordingly define the number of segments for each pipe (i.e., the pipes' state variables). Further, changing the velocities and flows from one scenario to another results in distinct chemical concentrations across the network components for each scenario. With that in mind, in some of our case studies we feature the effect of changing the hydraulics for the same network. In other case studies, we fix the hydraulic setting of the system to investigate/test the technique or approach under discussion.

### _Nonlinear vs. Linearized Models_

Studies [52, 53, 54] state that applying a linear model order reduction algorithm to a linearized system gives satisfactory performance when the linearized system is close to the original nonlinear one or operates within or near its linear regime.
In these studies, the linearization is performed around one operating point for the whole simulation horizon. We apply the same approach by linearizing around two operating points, (0,0) and (0.2,0.05) mg/L for chlorine and the fictitious reactant, respectively, at Tank TK1 of the three-node network.

Fig. 6: Case studies' layouts: (a) Three-node network, (b) Net1, and (c) FFCL-1 with the zone we control framed.

Fig. 7: Test networks' components count.

In this scenario, a constant demand is drawn from J1, sources of 2 mg/L chlorine and 0.5 mg/L of the fictitious reactant are provided at R1, and the initial conditions of the other network components are zero. As demonstrated in Fig. 8, linearization around the operating point (0,0) results in higher concentrations than the nonlinear model (based on NDE (1)) for both chemicals, due to the fact that it drops the nonlinear term and essentially neglects the mutual reaction. On the other hand, linearizing the model around one random operating point such as (0.2,0.05) mg/L results in relatively closer values for the chlorine concentrations, but not as close for the fictitious reactant. In addition, the choice of the operating point is then arbitrary rather than physically based. On top of that, unlike this scenario, in real-time water networks the hydraulics are not fixed and the demands are time-variant, resulting in chemical evolution patterns for which fixing one operating point for all elements is not efficient. That is, we next investigate taking different operating points for each network component along the simulation window, updated every specific number of time-steps. Note that for the previous scenario, and from this point forward in this section, we apply the linearization to the model constructed using the Implicit Upwind scheme.

The choice of the operating points we linearize around is critical. The narrower the recurrent window for choosing the operating points, the closer the results are to the original model. However, if we choose to update the operating points every water quality time-step, then the matrices \(\bar{\mathbf{A}}_{11}(t),\bar{\mathbf{A}}_{12}(t),\bar{\mathbf{A}}_{21}(t)\), and \(\bar{\mathbf{A}}_{22}(t)\) in Eq. (10) must be updated accordingly, instead of being updated every hydraulic time-step. The hydraulic time-step can acceptably be on an hourly scale while still reflecting the change in demand, whereas the water quality time-step ranges between minutes and seconds to allow a stable numerical simulation [55, 56]. Consequently, updating these matrices frequently adds more computational burden to the simulation, which negates the main reason for implementing linearization and model order reduction. On the contrary, widening the window to more than the hydraulic time-step, especially in cases with significant demand changes, gives an inaccurate approximation of the system. Over and above that, it is important for the update window to fall within the control algorithm's prediction and control horizons, to be able to adjust the controller input accordingly.

With the hydraulic setting of a patterned demand at J1 changing every 1 hr (Fig. 9c), the model is linearized around operating points that are taken every 1 hr for each of the network elements. The same sources of chemicals are provided at R1, with zero initial conditions for the other components. The results, shown in Fig. 9a and 9b for the chlorine concentrations at TK1 and P1, exhibit that updating the operating points every 1 hr gives values close to the original model, except for the first hour, during which the operating points are taken to be the initial concentrations at those elements. To mitigate this issue, the operating points are updated 1-10 minutes after the simulation start. The same approach is followed in scenarios where chemical dosages are increased locally at some node, for the elements downstream of this node.

### _MS-WQ Model Order Reduction Performance_

In this section, we assess and compare the performance of each of the proposed model order reduction procedures for multi-species water quality dynamics in terms of accuracy, compared to the original full-order model, and computational time. For each network, we apply POD and BPOD on the linearized model and POD on the nonlinear model. We refer to these procedures as LPOD, LBPOD, and NLPOD, respectively. We note that we record the computational time needed for assembling the snapshots, obtaining the transformation matrices, and calculating the RMSE between the original and reduced-order models for a specific simulation under the same conditions. All these simulations are performed using the Implicit Upwind discretization scheme.

First, we apply the three MOR methods on the three-node and Net1 networks under zero and non-zero initial conditions and static hydraulic profiles. The results shown in Figures 10 and 11 validate that all methods are able to reduce the model dimensions with relatively low RMSEs for different \(n_{r}\) values. These RMSEs get lower as the \(n_{r}\) values increase and are lower for the scenario of zero initial conditions than for the case of non-zero initial conditions.

Fig. 8: Nonlinear vs. linearized model results for (a) chlorine and (b) the fictitious reactant at TK1 of the three-node network.

Fig. 9: Chlorine concentrations at (a) P1 and (b) TK1 of the three-node network with (c) patterned demand at J1. Results are for the nonlinear and linearized models--linearization operating points are updated every 1 hr for all network components.

For the scenario with non-zero initial conditions, the initial chlorine concentrations are 0.5 mg/L network-wide; the initial fictitious reactant concentrations at TK1 in the three-node network and Tank 11 in Net1 are 0.05 mg/L. Fig. 11 shows the chlorine and fictitious reactant concentrations for both initial-condition scenarios at TK1 of the three-node network and Tank 11 of Net1, for the full-order model and the reduced-order models using all three MOR procedures. It is observed that the reduced-order models give almost identical results to the full-order one, for the step response at TK1 and for a regular node along the network for Tank 11, under the two scenarios of zero and non-zero initial conditions. On the contrary, in [32] the POD method gives high errors for the scenario of non-zero initial concentrations under single-species dynamics, as the input-output relationship is not correctly captured when the initial values are treated as inputs to the system. In our study, this effect is mitigated by building the offline snapshot with a higher impulse signal from the booster stations, which favors the actual locations of the booster stations. Meanwhile, it is worth mentioning that the MOR methods' performance depends heavily on the locations of the sensors and actuators and their effect on network-wide observability and controllability.
This leads to inaccurate or unstable results in some cases and scenarios. However, the allocation of these sensors and actuators for each network is out of this paper's scope, and we proceed assuming their locations are predetermined.

### _Model Order Reduction Sensitivity to System Hydraulics_

The construction of the transformation matrices \(V_{r}\) and \(L_{r}\) for both the POD and BPOD methods is sensitive to the snapshots (i.e., \(\mathbf{X}_{m}\) and \(\mathbf{P}_{m}\)) constructed offline. These snapshots need to be long enough and representative of the actual interaction between the states, inputs, and outputs. This in turn makes the methods sensitive to the hydraulic settings of the system, both while capturing these snapshots and while applying the desired model reduction. Dynamic hydraulic states in a network reflect the consumers' patterned consumption, which can be recorded for a specific network during a specific season [57]. After validating the reliability of the three MOR methods under zero and non-zero initial conditions, we investigate the case of dynamic hydraulic demands for a bigger network: the FFCL-1 network. Fig. 12 presents the evolution of chlorine and the fictitious reactant at J11, J56, J76, and J107 of the FFCL-1 network, simulated by the full-order model and the LPOD-based reduced-order model. Note that, instead of a crowded figure with four plots for each junction, we only present the results of LPOD to highlight the major observations. In this scenario, an input of 0.3 mg/L of the fictitious reactant is inserted at the start of the network (i.e., at the tank), depicting an early intrusion event. As demonstrated, the LPOD-based ROM is able to trace the concentrations of the chemicals at different junctions, including dead-ends and junctions connecting looped pipes.

Fig. 10: RMSEs for the three proposed MOR methods for the three-node (\(n_{x}=204\)) and Net1 (\(n_{x}=482\)) networks with (a) zero and (b) non-zero initial conditions for different \(n_{r}\) values.

Fig. 11: Chlorine and fictitious reactant concentration evolution at (a) TK1 of the three-node network (\(n_{x}=204\)) and (b) Tank 11 of Net1 (\(n_{x}=482\)) under zero and non-zero initial conditions, simulated by the full- and reduced-order models with \(n_{r}=30\) for both networks.

Fig. 12: Chlorine and fictitious reactant concentration evolution at J11, J56, J76, and J107 of the FFCL-1 network, simulated by the full- and reduced-order models. Full-order model results at each of the junctions are in solid lines, while the LPOD method results are in dashed lines. The number of states of the full-order model is \(n_{x}=10356\), reduced to \(n_{r}=200\) states.

Nonetheless, an oscillatory effect is detected in the fictitious reactant concentrations in the framed zone. This oscillation arises when the fictitious reactant is completely consumed by the chlorine at these junctions or in the pipes flowing into them (e.g., J76), yet the operating points around which the system is linearized force the fictitious reactant to take false concentrations. Therefore, this effect is eliminated by applying NLPOD and is reduced by updating the operating points more frequently. Lastly, the computational time recorded for each of the MOR method implementations on the three tested networks is illustrated in Fig. 13. For all networks, the NLPOD method requires more computational time as a result of handling the nonlinearity term separately and performing the greedy sampling algorithm.
However, the maximum increase in time is around 95 seconds compared to BPOD for the FFCL-1 network, which is an acceptable computational time for a network of \(n_{x}=10356\) states.

### _Implicit vs. Explicit Discretization Schemes under a Control-theoretic Perspective_

As stated in Section II-1, the 1-D AR equation can be discretized by implementing either the Explicit or the Implicit Upwind scheme. The explicit scheme must be performed under a satisfied CFL condition to ensure stability, which in many cases requires a small time-step and hence a higher system dimension. The implicit scheme is unconditionally stable but requires more complicated mathematical calculations that add to the computational work. Therefore, a pressing question needs to be answered: "_Which is better: Implicit or Explicit discretization schemes?_" This has proven to be a question without an easy answer. In our study, we reduce our system's dimensions while applying either of these methods. Nonetheless, while the transformation matrices are calculated offline, some system matrices are updated every hydraulic time-step. This adds more computational load through matrix multiplications, which is higher with the matrix inverse in the case of the implicit scheme. One more point to highlight: although the implicit scheme allows a bigger simulation time-step, a smaller one is more efficient, as it allows updating the control inputs more frequently. So our question can be reformulated as follows: "From a control-theoretic perspective, which is better: Implicit or Explicit discretization schemes?"

As model order reduction is a prior step to applying control to our model, we test both discretization methods' performance while applying the LPOD method to Net1. As demonstrated in Fig. 14(a), the RMSEs are lower for the implicit than for the explicit scheme. In addition, the change in the RMSEs when increasing \(n_{r}\) beyond 150 is insignificant, as the additionally retained states do not hold high energy compared to the previously selected ones. On the other hand, the errors for the explicit scheme do not go below 0.003 with increasing \(n_{r}\)--explained through the following example. Although the CFL condition is satisfied for the explicit scheme, it produces sharp fronts at points with a relatively significant change in the chemical concentrations, as shown in Fig. 14(b). Fig. 14(b) illustrates the chlorine concentrations at J4 for both the implicit and explicit schemes and the corresponding reduced models with \(n_{r}=120\) of a full model with \(n_{x}=482\). It is noticeable that the reduced model based on the explicit scheme is consequently affected and shows oscillatory behavior that damps out before reaching equilibrium. This performance is recorded under low Courant numbers within the network's pipes. To mitigate that, the water quality time-step must be reduced to reach a higher CN--near, but less than, 1. Such behavior is avoided when applying the unconditionally stable implicit scheme. To that end, the Implicit Upwind scheme gives more accurate results, which leads to a more robust control algorithm. The computational burden of this scheme can be lowered using sparse matrix multiplication. The computational time to perform the simulation shown in Fig. 14(a) is 32.9 and 43.4 seconds for the Explicit and Implicit Upwind schemes, respectively, for the same water quality time-step. However, the implicit scheme retains high accuracy under a higher WQ time-step and accordingly a lower computational run-time.
Therefore, this scheme gives more flexibility in choosing the time-step needed to retain real-time control windows while maintaining high accuracy.

Fig. 14: (a) RMSEs for the reduced-order Explicit and Implicit Upwind scheme-based models while applying LPOD on Net1 with \(n_{x}=482\) for different \(n_{r}\) values, (b) chlorine concentrations at J4 simulated by the full- and reduced-order models (with \(n_{r}=100\)) using both schemes.

Fig. 13: Computational time to implement the three MOR methods for the three tested networks.

### _Real-time Control Implementation of MOR-Based MPC for the MS-WQM_

The main objective of this paper, and of the prior investigation of the MOR methods, is to integrate them into a real-time control algorithm for chlorine concentrations using the booster stations distributed along the WDN under the MS-WQM. We apply the MPC algorithm on the linearized and nonlinear MS-WQ ROMs, as explained in Section IV. As both LPOD and BPOD can reduce the MS-WQM effectively, we apply the BPOD method for the linearized model. On the other hand, we apply NLPOD to the nonlinear model to obtain the ROM. For multi-species water quality control and regulation, when applying the McCormick relaxation, the envelopes rely on the limits for both chemicals. For the network components near the location of the second chemical's intrusion, these envelopes put tight boundaries on the value of \(z\) chosen by the control problem, as \(x_{2}\) is close to \(x_{2}^{\max}\). On the other hand, for farther components with lower concentrations of both chemicals, the relaxation allows higher and lower values of \(z\), which leads to choosing a value of \(z\) close to the underestimators, so that the control inputs are lower and the cost of chlorine injections is reduced. Additionally, as the mutual reaction coefficient \(k_{r}\) becomes larger, the effect of the relaxation on the chosen control input increases. That is, the proposed relaxed MPC may result in overlooking/underestimating the mutual reaction; therefore, we lower the upper bound for the fictitious reactant as a procedure integrated into the looped control algorithm and repeated each time-step. As explained in Section V-A, it is proposed to update the operating points around which the system is linearized at every significant change window (e.g., a hydraulic state change). Updating these operating points adds to the computational time by recalculating the matrices, yet it yields a more accurate representation. Therefore, we apply a threshold change test after applying every control input to decide whether to update these points. By adopting these approaches, we start by applying the MS-WQC MPC-based method on the three-node network under a static hydraulic profile, with the number of states reduced from \(n_{x}=204\) to \(n_{r}=30\). The water quality time-step is chosen as 5 seconds and the control horizon is 10 minutes. The fictitious reactant is discharged into the system at J1 (the same location as the booster station) at a concentration of 0.1 mg/L for the first hour of the simulation. Fig. 15 demonstrates the control actions and the corresponding control responses at J1 and P1 under the multi-species linearized ROM, the nonlinear ROM, and the single-species ROM that neglects the existence of the other chemical in the system, for the first 2 consecutive hours of simulation.
By adopting these approaches, we start by applying the MS-WQC MPC-based method on the three-node network under a static hydraulic profile. The full model of this network has \(n_{x}=204\) states while the ROM has \(n_{r}=30\). The water quality time-step is chosen to be 5 seconds and the control horizon is 10 minutes. The fictitious reactant is discharged into the system at J1 (the same location as the booster station) at a concentration of 0.1 mg/L for the first hour of the simulation. Fig. 15 demonstrates the control actions and the corresponding control response at J1 and P1 under the multi-species linearized ROM, the nonlinear ROM, and the single-species ROM that neglects the existence of the other chemical in the system for the first consecutive 2 hours of simulation. As shown, for all scenarios chlorine concentrations at J1 and P1 are zero at the start of the simulation. Hence, MPC starts by injecting a high chlorine dosage of 21284 mg/min for the case of multi-species dynamics and 20838 mg/min for the single-species model. The control input needed then drops to 19158 mg/min and 17596 mg/min for the multi-species and single-species dynamics, respectively. After the first hour of simulation, MPC results in the same control actions for both models as the intrusion event is contained. Note that the second substance's initial concentration in P1 is zero, which leads to the peak control actions at the start of the simulation being relatively close, as the second substance has not traveled thoroughly into P1. Comprehensively, this highlights the importance and effectiveness of the adopted MS-WQM and control framework. On top of that, the difference between the two models' results (i.e., chlorine concentration dynamics and optimal chlorine inputs) increases for components that are more reactive with chlorine and for higher initial intrusion concentrations, which may cause operational issues with limited chlorine availability and/or budget. Additionally, the linearized MS-WQC problem and the relaxed one produce the same performance, as illustrated in Fig. 15(a). We note that for the linearized model, the operating points are updated at the start of the simulation, when applying the peak control action, and by the end of the contamination event. For the relaxed MS-WQC problem, all elements are directly affected by the event, resulting in tight envelopes and approximating the mutual reaction near its actual value. However, the number of control variables for this procedure is higher for the first hour. To that end, the computational time needed for each of the two control procedures is case-oriented. For this case study, the linearized-model-based MPC method has a computational time of 78 seconds, while it is 93 seconds for the second method. A stripped-down version of the receding-horizon loop underlying both procedures is sketched below.
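The sketch below shows only the skeleton of that loop; the ROM matrices are random placeholders (the real ones come from LPOD/BPOD/NLPOD), and the full constrained MPC program of Section IV is replaced by a one-step least-squares tracking move clipped to the input bounds.

```python
import numpy as np

# Receding-horizon loop on a reduced model x_{k+1} = A x_k + B u_k, y_k = C x_k.
# A, B, C below are illustrative placeholders standing in for the ROM matrices.
rng = np.random.default_rng(1)
nr, nu = 30, 2                         # reduced states, booster inputs
A = 0.95*np.eye(nr) + 0.01*rng.standard_normal((nr, nr))
B = rng.standard_normal((nr, nu))
C = rng.standard_normal((nu, nr))      # one monitored output per booster

x, y_ref = np.zeros(nr), np.array([0.8, 0.8])   # chlorine setpoints [mg/L]
for k in range(120):                   # one water-quality step per pass
    # choose u minimizing ||C(Ax + Bu) - y_ref||^2, then clip to input bounds
    u, *_ = np.linalg.lstsq(C @ B, y_ref - C @ A @ x, rcond=None)
    u = np.clip(u, 0.0, 5.0)           # u_min <= u <= u_max
    x = A @ x + B @ u                  # apply the first move, then re-measure
print(np.round(C @ x, 3))              # tracks y_ref whenever bounds are inactive
```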
Fig. 16: (a) MPC control action at Junctions 1 and 6, (b) the corresponding chlorine concentrations at these junctions and at Junctions 5 and 8, under (c) the patterned demand at Junction 1.

Fig. 15: (a) Control action \(\mathbf{u_{1}}\) during 2 hrs of simulation on the three-node network by applying, SS-LMPC: linear single-species-based MPC, MS-LMPC: linearized multi-species-based MPC, and MS-RMPC: nonlinear multi-species-based relaxed MPC, (b) chlorine concentrations at J1 and P1 under another chemical intrusion at J1 for the first hour.

Next, we apply the proposed MS-WQC approach on Net1 under a dynamic hydraulic profile defined by the patterned demand at Junction 1 (Fig. 16c). The FOM has 482 states, which is reduced to 50 states. As both control procedures proved their ability to regulate chlorine concentrations network-wide, we showcase the results from the relaxed MPC procedure only, to point out representative case studies. In this case study, the water quality time-step is 5 seconds, the control horizon is 10 minutes, and the simulation period is 24 hours. The initial concentrations of all chemicals are zero. The fictitious reactant is set to intrude at Junction 6 with a concentration of 0.3 mg/L at mid-day. Additionally, chlorine concentrations are limited to 1.2 mg/L for cost reasons. In this case, we introduce two types of disturbance to the system: a sudden drop in chlorine concentration at Junction 6 to 0.15 mg/L at the 12th hour of the day and a sudden increase to 2 mg/L at the 18th hour. Fig. 16a shows the control action at Junctions 1 and 6, while Fig. 16b demonstrates the corresponding chlorine concentrations at these junctions and at Junctions 5 and 8. For Junction 1, the control action is higher and almost constant at \(1.9\times 10^{4}\) mg/min, as the junction is located at the very start of the network and all the downstream elements are affected by its input. On the other hand, the booster station at Junction 6 acts effectively on the disturbances and the changes at the downstream nodes. The results validate the performance of the control algorithm and how it behaves under these disturbances. The run-time recorded for applying the control algorithm in this case study is 278 seconds. Likewise, chlorine concentrations are regulated in the FFCL-1 network, as Fig. 17 exhibits. The total number of states of the original model is 10356, while the reduced model has 200 states. The water quality time-step, control horizon, and simulation period are the same as in the previous case study. Two different fictitious reactants are assumed to be detected, the first at J76 with an initial concentration of 0.3 mg/L and the second at J89 with 0.2 mg/L. The control actions illustrated in Fig. 17a are obtained under a hydraulic profile that results in changing flow directions. Yet, the control algorithm recovers effectively and maintains chlorine concentrations within the desired range. In short, the ROM-based control algorithms guarantee the bounds defined for the inputs and outputs while being tractable for larger networks.

## VI Conclusion, paper's limitations, and recommendations for future work

Relying on the results from the numerical case studies in Section V, we answer the research questions posed:

* _A1:_ The multi-species water quality model can be effectively linearized around operating points updated every specific moving window according to the hydraulic profile, instantaneous changes, initial conditions, and control actions. However, to achieve the desired accuracy this window is reduced, which increases the computational time.
* _A2:_ The presented MOR methods yield high accuracy in estimating output concentrations for both chlorine and the second substance in the system. The three MOR procedures, LPOD, BPOD, and NLPOD, are able to handle non-zero initial conditions by favoring the control actuators' inputs while building the offline snapshots. Additionally, the NLPOD method requires more computational time to handle and interpolate the nonlinearity in the system; yet, it is still computationally tractable, as are LPOD and BPOD.
* _A3:_ MPC's behavior depends on the underlying model and its accuracy. Accordingly, the Implicit Upwind scheme is preferred over the Explicit Upwind scheme because of its ability to provide a highly accurate simulation of the full- and reduced-order MS-WQM. Moreover, the numerical case studies show that the three MOR procedures are robust to dynamically changing hydraulic profiles.
* _A4:_ MPC shows robustness and high flexibility in regulating chlorine levels in WDNs under different scenarios of contamination events and hydraulic profiles by applying feedback control on the reduced-order model while maintaining affordable computational requirements. Both proposed control procedures, the linearized-model-based and the relaxed nonlinear-model-based, show reliable performance while applying adaptive approaches according to the case study considered. These approaches lead to different levels of complexity and computational burden for each procedure, which results in favoring one procedure over the other according to the case study.
Our study is not free of limitations. We highlight these limitations next, along with the authors' future work to be investigated. First, we apply our framework to networks with pre-distributed booster stations and water quality sensors. The locations of these actuators and sensors affect the performance of the MOR and the control of the system. Henceforth, more investigation of their placement from a control-theoretic perspective and for accurate model order reduction is recommended for future work. Second, in the process of linearizing our model, the choice of linearization points can be made through offline methods that pre-compute trajectories of the FOM and select linearization regions accordingly, e.g., the trajectory piecewise-linear (TPWL) method [58, 59], which is a direction for future inspection. Lastly, the proposed relaxation method (i.e., McCormick envelopes) can be tightened in a piecewise way, as the tighter the lower and upper bounds, the higher the quality of the relaxation. Accordingly, it can be adopted more effectively under different scenarios of multi-species dynamics in the system while resulting in a lower computational time compared to the linearized model.

Fig. 17: (a) Control action at J89 of the FFCL-1 network, (b) the corresponding chlorine concentrations at J56 and J67.
2304.02856
Optimization of probabilistic quantum search algorithm with a priori information
A quantum computer encodes information in quantum states and runs quantum algorithms to surpass the classical counterparts by exploiting quantum superposition and quantum correlation. Grover's quantum search algorithm is a typical quantum algorithm that proves the superiority of quantum computing over classical computing. It has a quadratic reduction in the query complexity of database search, and is known to be optimal when no a priori information about the elements of the database is provided. In this work, we consider a probabilistic Grover search algorithm allowing nonzero probability of failure for a database with a general a priori probability distribution of the elements, and minimize the number of oracle calls by optimizing the initial state of the quantum system and the reflection axis of the diffusion operator. The initial state and the reflection axis are allowed to not coincide, and thus the quantum search algorithm rotates the quantum system in a three-dimensional subspace spanned by the initial state, the reflection axis and the search target state in general. The number of oracle calls is minimized by a variational method, and formal results are obtained with the assumption of low failure probability. The results show that for a nonuniform a priori distribution of the database elements, the number of oracle calls can be significantly reduced given a small decrease in the success probability of the quantum search algorithm, leading to a lower average query complexity to find the solution of the search problem. The results are applied to a simple but nontrivial database model with two-value a priori probabilities to show the power of the optimized quantum search algorithm. The paper concludes with a discussion about the generalization to higher-order results that allows for a larger failure probability for the quantum search algorithm.
Yutong Huang, Shengshi Pang
2023-04-06T04:33:37Z
http://arxiv.org/abs/2304.02856v2
# Optimization of probabilistic quantum search algorithm with a priori information ###### Abstract A quantum computer encodes information in quantum states and runs quantum algorithms to surpass the classical counterparts by exploiting quantum superposition and quantum correlation. Grover's quantum search algorithm is a typical quantum algorithm that proves the superiority of quantum computing over classical computing. It has a quadratic reduction in the query complexity of database search, and is known to be optimal when no a priori information about the elements of the database is provided. In this work, we consider a probabilistic Grover's search algorithm allowing nonzero probability of failure for a database with general a priori probability distribution of the elements, and minimize the number of oracle calls by optimizing the initial state of the quantum system and the reflection axis of the diffusion operator. The initial state and the reflection axis are allowed not to coincide, and thus the quantum search algorithm rotates the quantum system in a three-dimensional subspace spanned by the initial state, the reflection axis and the search target state. The number of oracle calls is minimized by a variational method, and formal analytical results are obtained with the assumption of low failure probability. The results show that for a nonuniform a priori distribution of the database elements, the number of oracle calls can be significantly reduced given a small decrease in the success probability of the quantum search algorithm, leading to a lower average number of oracle calls to find the solution of the search problem. The result is applied to a simple but nontrivial database model which has \(N\) elements with nonuniform two-valued a priori probabilities to show the effect of the optimized search algorithm. The paper is concluded with a discussion about the generalization to higher-order results that allows a larger failure probability for the quantum search algorithm.

## I Introduction

Quantum computing has been expected to revolutionize the field of computing since it was proposed [1; 2]. It accelerates computing tasks by taking advantage of nonclassicalities of quantum systems such as quantum superposition and quantum correlation. With the development of quantum computing, various quantum algorithms have been proposed. The large number factorization algorithm proposed by Peter W. Shor [3; 4] has an exponential speedup compared to classical factorization algorithms, and the quantum search algorithm proposed by Lov K. Grover [5; 6] has a quadratic speedup in terms of the database size compared to classical database search algorithms. More quantum algorithms have been proposed in recent years, such as the variational quantum eigensolver algorithm [7] and the quantum approximate optimization algorithm [8] for noisy intermediate-scale quantum (NISQ) devices, the HHL algorithm for solving linear systems of equations for quantum machine learning [9], etc. For a classical database with \(N\) elements, if a search task has \(M\) solutions, the classical query complexity of finding a solution is usually of order \(O(N/M)\). In quantum computing, Grover's search algorithm finds a solution in a database by preparing the quantum system in an appropriate superposed state of the computational basis and driving the system to approach a target state with alternating oracle operations and specific diffusion operations in the Hilbert space.
After the evolution, a quantum projective measurement is performed on the system along the computational basis to obtain the target state. The query complexity required by the algorithm is \(O(\sqrt{N/M})\), which is a quadratic speedup compared to the classical algorithm. Because of this superiority, the quantum search algorithm has been found useful in various applications, such as quantum dynamic programming [10], the quantum random-walk search algorithm [11], the preparation of GHZ states using the algorithm [12], etc. While the quantum search algorithm has found wide applications in vast areas, the algorithm itself can still be extended and improved in various aspects. For example, a well-known improvement idea is the quantum partial search algorithm (QPSA) [13; 14; 15; 16; 17; 18; 19], which decomposes the database into multiple smaller blocks and searches for the block to which the target state belongs, instead of finding the exact location of the target state. Another idea is to change the Grover iterations of the quantum search algorithm. The core of Grover's quantum search algorithm is the amplitude amplification technique [20], which increases the weight of the solution in the superposed state of the quantum system by repetition of the Grover iteration consisting of the oracle operation and a diffusion operation. It has been proposed that the diffusion operation in the Grover iteration can be replaced by a two-dimensional phase rotation [21]. Extension of the two-dimensional phase rotation to a three-dimensional rotation was subsequently proposed, and a phase matching condition to realize the quantum search algorithm by phase rotation was found [22; 23; 24]. This phase-matching condition has been verified theoretically [21; 25], and realized by experiments on optical systems [26; 27] and ion trap systems [28], etc. An interesting zero failure rate quantum search algorithm based on this condition was subsequently proposed [29], and this algorithm has been applied in different directions, such as quantum pattern recognition [30], sliding mode control of quantum systems [31] and quantum image compression [32]. More ideas for enhancing the quantum search algorithm can be found in [33; 34; 35; 36; 37]. Variants of the quantum search algorithm have also been proposed, such as quantum searching with continuous variables [38; 39; 40], the fixed point quantum search algorithm [41; 42], the Hamiltonian search algorithm [40], etc. Moreover, the quantum search algorithm has been realized on photonic systems [43; 44], superconducting systems [45], NMR systems [46; 47; 48], trapped ion systems [49; 50], etc. The lower bounds for the number of oracle calls of these algorithms have been proven to be optimal [51; 52; 53]. A key ingredient in the optimality of the quadratic speedup of Grover's search algorithm is that all the elements of the database are equally likely to be the solution of the search problem. If the elements of the database are allowed to be unequally likely to be the search solution, the result will be quite different, as it can be expected that preparing the quantum system closer to those states with higher probabilities to be the search solution will be more beneficial to the search algorithm [54; 55; 56; 57; 58].
An intuitive example is that, if part of the database elements, e.g., \(K\) of the \(N\) elements, are known to have very low probabilities (or even zero probabilities) to be the search solution, one just needs to prepare the quantum system in a superposition of the remaining \(N-K\) high-probability elements and reflect the system about this state in the Grover iterations, just as in the original Grover's search algorithm; then the query complexity will be reduced to \(O(\sqrt{N-K})\) rather than \(O(\sqrt{N})\), with a low probability to fail the search task. This inspires us to consider the following question: if we know in advance the probabilities of the elements in the database to be the search solution, what is the optimal performance of the quantum search algorithm by exploiting the a priori probabilities of the database elements and allowing the algorithm to succeed probabilistically? The purpose of this work is to study the minimal query complexity of the quantum search algorithm by optimizing the initial state of the quantum system and the reflection axis of the diffusion operator, given the average success probability of the algorithm. The query complexity is quantified by the number of oracle calls, and the minimization of the query complexity is carried out by variation of the initial state of the quantum system and the reflection axis of the diffusion operator. The optimization equation is highly nontrivial and hard to solve in general. However, if the failure probability is low, one can expect that the optimal initial state and the optimal reflection axis should deviate only slightly from the uniformly superposed state of all database elements used in the original Grover's search algorithm. This leads to a differential approach to solving the optimization equation: by taking the differentiation of the optimization equation as well as the normalization conditions for the initial state and the reflection axis, one can establish a differential relation between the success probability of the algorithm, the number of oracle calls, the optimal initial state of the system and the optimal reflection axis of the diffusion operator. When the failure probability of the algorithm is low, this differential relation immediately yields the reduction of the number of oracle calls in terms of the failure probability of the search algorithm. In this work, we obtain a formal second-order differential relation between the failure probability of the algorithm and the reduction in the number of oracle calls, given a general a priori distribution of the database elements to be the search solution, and also the optimized initial state of the system and the reflection axis of the diffusion operator. An interesting property of the result is that the reduction percentage of the number of oracle calls is proportional to the square root of the failure probability of the search algorithm, which is always much larger than the failure probability itself when the latter is small, implying that the optimized probabilistic quantum search algorithm can decrease the average number of oracle calls when the success probability of the algorithm is taken into account. The formal results are then applied to a simple but nontrivial database model where all the elements have only two possible values for the a priori probabilities, and the reduction of the query complexity is analytically derived in terms of the failure probability of the search algorithm and illustrated in detail by numerical computation.
The paper is structured as follows. In Sec. II, we give a brief overview of Grover's quantum search algorithm. In Sec. III, the number of oracle calls is minimized by the method of Lagrange multipliers, given the a priori probabilities of the database elements and the success probability of the search algorithm. The failure probability of the algorithm is then assumed to be small, and the reduction of the number of oracle calls is obtained in terms of the failure probability by differentiating the optimization equations. Sec. IV is devoted to a simple database model with two-valued a priori probabilities of the elements. The paper is finally concluded in Sec. V by summarizing the results of this work.

## II Preliminaries

In this section, we briefly introduce the preliminaries of Grover's quantum search algorithm [5; 6] that are relevant to the current research. We focus on the case that the search problem has only one solution throughout this paper.

### Procedures of Grover's search algorithm

Suppose we have a database with \(N\) elements where the probabilities of the elements being the search solution are the same, and we use the computational basis of a quantum system to represent the elements of the database. The quantum system is initially prepared in a uniform superposed state, \[\left|\psi_{0}\right\rangle=\sum_{i=1}^{N}\frac{1}{\sqrt{N}}\left|i\right\rangle. \tag{1}\] The solution of the search problem is recognized by a quantum oracle. The quantum oracle can be regarded as a black box, the internal working mechanism of which is not critical to the search algorithm, but can perform a unitary transformation on the quantum system and mark up the solution of the search task by shifting the phase of the target state. In detail, the unitary transformation of the quantum oracle can be written as \[O=I-2|t\rangle\langle t|, \tag{2}\] where \(|t\rangle\) is the target state and \(I\) is the identity operator on the \(N\)-dimensional Hilbert space of the system. The effect of the oracle \(O\) when it acts on a quantum state is that \[O|t\rangle=-|t\rangle,\;O|i\rangle=|i\rangle,\;i\neq t. \tag{3}\] So, it flips the sign of the target state and leaves the basis states other than the target state unchanged. While the oracle can mark up the solution by changing the sign of the target state, it alone cannot lead the quantum system to approach the target state, as it does not change the amplitude distribution of different basis states in the superposed state of the quantum system. In order to increase the amplitude of the target state in the superposed state of the quantum system, the oracle operation needs to be followed by another unitary transformation, usually called the Grover diffusion operator, which reflects the state of the quantum system about the uniformly superposed state \(|\psi_{0}\rangle\), \[D=2|\psi_{0}\rangle\langle\psi_{0}|-I. \tag{4}\] The effect of the diffusion operator \(D\) is to invert the amplitudes of the basis states in the superposed state of the system about the mean of all amplitudes. The combination of the oracle \(O\) and the diffusion operator \(D\) is usually called the Grover operator or Grover iteration, denoted as \[G=DO. \tag{5}\] It turns out that the Grover operator \(G\) can boost the amplitude of the target state in the superposed state of the system. So if one repeats this procedure an appropriate number of times, the quantum system can finally approach the target state of the search problem with a high fidelity.
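For concreteness, the oracle, diffusion, and Grover operators of Eqs. (2), (4) and (5) can be assembled in a few lines of code. The following is a minimal numerical sketch; the database size and target index are arbitrary illustrative choices.

```python
import numpy as np

N = 64                                   # database size (illustrative)
t = 7                                    # index of the search target
psi0 = np.full(N, 1/np.sqrt(N))          # uniform initial state, Eq. (1)

O = np.eye(N); O[t, t] = -1.0            # oracle O = I - 2|t><t|, Eq. (2)
D = 2*np.outer(psi0, psi0) - np.eye(N)   # diffusion D = 2|psi0><psi0| - I, Eq. (4)
G = D @ O                                # Grover iteration, Eq. (5)

theta = np.arcsin(1/np.sqrt(N))
j_opt = round(np.pi/(4*theta) - 1/2)     # optimal iteration number, Eq. (10)

psi = psi0.copy()
for j in range(1, j_opt + 1):
    psi = G @ psi
    # the amplitude of |t> follows the rotation picture of Eq. (8)
    assert np.isclose(psi[t], np.sin((2*j + 1)*theta))
print(j_opt, psi[t]**2)                  # 6 iterations, success probability ~ 0.997
```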
Grover's search algorithm has an intuitive geometric interpretation. In order to see how Grover's algorithm works in the geometric picture, the initial state can be rewritten as \[\left|\psi_{0}\right\rangle= a_{t}\left|t\right\rangle+\sum_{i\neq t}^{N}a_{i}\left|i \right\rangle=\sin\theta\left|t\right\rangle+\cos\theta\left|t_{\perp}\right\rangle, \tag{6}\] where the state is decomposed into two components: one is the target state \(\left|t\right\rangle\) and the other is the uniformly superposed state in the remaining \((N-1)\)-dimensional subspace orthogonal to the target state \(\left|t\right\rangle\), and \[\sin\theta=\frac{1}{\sqrt{N}},\;\cos\theta=\sqrt{1-\frac{1}{N}}. \tag{7}\] It can be verified that after \(j\) repetitions of the Grover iteration, the initial state of the quantum system is transformed to \[\left|\psi_{j}\right\rangle= \sin\left(2j+1\right)\theta\left|t\right\rangle+\cos\left(2j+1 \right)\theta\left|t_{\perp}\right\rangle. \tag{8}\] So, it can be seen that the state of the quantum system always lies in the two-dimensional subspace spanned by \(\left|t\right\rangle\) and \(\left|t_{\perp}\right\rangle\) during the repetitions of the Grover iteration, and the effect of the algorithm is essentially to rotate the quantum system from the initial state \(\left|\psi_{0}\right\rangle\) towards the target state \(\left|t\right\rangle\). The geometric picture of Grover's algorithm is illustrated in Fig. 1.

### Query complexity of Grover's search algorithm

After the evolution, one measures the quantum system along the computational basis. If the system collapses to the target state, the search task is completed successfully. In order to obtain the solution of the search task with a high probability, the quantum system should be as close to the target state as possible at the end of the evolution. Ideally, one expects to have the probability of obtaining the target state \[P_{j}=\left|\langle t|\psi_{j}\rangle\right|^{2}=\sin^{2}\left(2j+1\right) \theta=1, \tag{9}\] so the optimal number of Grover iterations is \[j=\frac{\pi}{4\arcsin\frac{1}{\sqrt{N}}}-\frac{1}{2}. \tag{10}\] When the size of the database, \(N\), is large, \(j\) can be approximated by \[j\approx\frac{\pi}{4}\sqrt{N}. \tag{11}\] In reality, as the number of Grover iterations needs to be an integer, \(j\) can usually be chosen as \[j\approx\left\lceil\frac{\pi}{4}\sqrt{N}\right\rceil, \tag{12}\] where \(\lceil x\rceil\) is the ceiling function which outputs the minimum integer that is no smaller than \(x\).

## III Optimization method

When the elements of a database are equally likely to be the solution of the search problem, Grover's quantum search algorithm has been proven to be optimal in the query complexity. But if the elements have a nonuniform a priori probability distribution to be the search solution, Grover's algorithm can be further improved, as one may increase the weights of the basis states with higher probabilities in the initial state of the quantum system so that the system can approach the target state faster. In this section, we study the minimization of the query complexity of Grover's quantum search algorithm by optimizing the initial state of the system and the reflection axis of the diffusion operator, provided the average success probability of the algorithm to find the solution is given.

### Success probability of generalized Grover's search algorithm

Consider a database of \(N\) items, the a priori probabilities of which to be the search solution are known.
Denote the a priori probability of the \(k\)-th element to be the search solution as \(p_{k}\); the probabilities \(p_{k}\), \(k=1,\cdots,N\), are normalized, \[p_{1}+\cdots+p_{N}=1. \tag{13}\] In contrast to the uniformly superposed initial state in the standard Grover's quantum search algorithm, a nonuniformly superposed initial state of the quantum system may perform better when the a priori probabilities of the database elements are given, as one may increase the weights of the basis states with higher a priori probabilities to accelerate the search algorithm. So, we assume the initial state of the quantum system to be an arbitrary state in the current problem, i.e., \[\left|\psi\right\rangle=\sum_{i=1}^{N}a_{i}\left|i\right\rangle, \tag{14}\] where the \(a_{i}\)'s are arbitrary coefficients that satisfy the normalization condition, \[\left|a_{1}\right|^{2}+\cdots+\left|a_{N}\right|^{2}=1. \tag{15}\] Similarly, the reflection axis of the diffusion operator is not necessarily the uniformly superposed state, as one may choose the reflection axis to make the diffusion more beneficial to those basis states with higher a priori probabilities, so the reflection axis is also assumed to be an arbitrary state in the current problem, i.e., \[\left|\varphi\right\rangle=\sum_{i=1}^{N}b_{i}\left|i\right\rangle, \tag{16}\] where the \(b_{i}\)'s are arbitrary coefficients satisfying the normalization condition, \[\left|b_{1}\right|^{2}+\cdots+\left|b_{N}\right|^{2}=1. \tag{17}\] As the initial state of the system and the reflection axis of the diffusion operator do not necessarily coincide, the state is no longer rotating in a two-dimensional subspace during the Grover iterations as in the standard Grover's search algorithm. In contrast, the initial state of the quantum system can now be decomposed into two orthogonal components: one lies in the two-dimensional subspace spanned by the target state and the reflection axis of the diffusion operator, and the other is orthogonal to that two-dimensional subspace. The parallel component (that lies within the two-dimensional subspace) is still rotated in the two-dimensional subspace towards the target state by the Grover iterations, but the orthogonal component is just flipped about the two-dimensional subspace by each Grover iteration and always kept orthogonal to the two-dimensional subspace. So, we only need to consider the parallel component of the system state in computing the success probability of the algorithm in the following.

Figure 1: A sketch of the standard Grover’s quantum search algorithm. The initial state of the quantum system and the reflection axis of the diffusion operator are both the uniform superposed state \(\left|\psi_{0}\right\rangle\) of all database elements. The initial state \(\left|\psi_{0}\right\rangle\) and the target state \(\left|t\right\rangle\) of the search problem span a two-dimensional subspace. The Grover iteration consists of two steps: first perform the oracle operation which reflects the system about the state \(\left|t_{\perp}\right\rangle\) that is orthogonal to the target state \(\left|t\right\rangle\) in the two-dimensional subspace, then perform the diffusion operation which reflects the system about the state \(\left|\psi_{0}\right\rangle\). The total effect of the Grover iteration is to rotate the system in the two-dimensional subspace away from \(\left|t_{\perp}\right\rangle\) by twice the angle between \(\left|\psi_{0}\right\rangle\) and \(\left|t_{\perp}\right\rangle\).
The mechanism of how the system state is changed by the generalized Grover iterations is illustrated in Fig. 2. In order to obtain the parallel component of the system state, we need to first find the projector onto the two-dimensional subspace spanned by the target state \(\ket{t}\) and the reflection axis \(\ket{\varphi}\). The projection operator can be derived by the Gram-Schmidt orthogonalization of \(\ket{t}\) and \(\ket{\varphi}\), and the result turns out to be \[P_{t}=\frac{|t\rangle\langle t|+|\varphi\rangle\langle\varphi|-\langle t|\varphi\rangle|t\rangle\langle\varphi|-\langle t|\varphi\rangle|\varphi\rangle\langle t|}{1-|\langle t|\varphi\rangle|^{2}}, \tag{18}\] where the subscript \(t\) of the projection operator \(P_{t}\) denotes that the projection operator depends on the target state \(\ket{t}\). With this projection operator, the parallel component of the system initial state \(\ket{\psi}\) can be obtained as \[\ket{\psi_{t}^{\parallel}}=P_{t}\ket{\psi}=\sqrt{\bra{\psi}\!P_{t}\ket{\psi}}(\sin\phi_{0}^{t}\ket{t}+\cos\phi_{0}^{t}\ket{t^{\perp}}), \tag{19}\] where \(\ket{t^{\perp}}\) is the state orthogonal to the target state \(\ket{t}\) in the two-dimensional subspace and \(\phi_{0}^{t}=\arcsin\frac{\bra{t}\!P_{t}\ket{\psi}}{\sqrt{\bra{\psi}\!P_{t}\ket{\psi}}}\) is the initial angle between the parallel component and \(\ket{t^{\perp}}\). Therefore, if the target state of the search problem is \(\ket{t}\), the probability to find the system in the target state after \(j\) repetitions of the generalized Grover iteration is \[P_{\text{sus}}^{(t)}=\bra{\psi}\!P_{t}\ket{\psi}\sin^{2}\left(2j\beta_{t}+\phi_{0}^{t}\right), \tag{20}\] where \(\beta_{t}=\arcsin\langle t|\varphi\rangle\) is half the rotation angle per Grover iteration, and both the probability of projecting the system state onto the two-dimensional subspace and the success probability of the final measurement to find the target state are taken into account. If we also take the a priori probabilities of different database items into account, the final success probability of the generalized Grover's search algorithm turns out to be \[\bar{P}=\sum_{t}p_{t}P_{\text{sus}}^{(t)}=\sum_{t}\left[p_{t}\langle\psi|P_{t}|\psi\rangle\sin^{2}\left(2j\arcsin\langle t|\varphi\rangle+\arcsin\frac{\langle t|P_{t}|\psi\rangle}{\sqrt{\langle\psi|P_{t}|\psi\rangle}}\right)\right]. \tag{21}\] Eq. (21) will be critical to the optimization of Grover's search algorithm below.
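Eq. (20) can be checked numerically by comparing direct evolution under the generalized Grover iteration with the two-dimensional rotation picture of Fig. 2. The following is a minimal sketch, with arbitrary entrywise-positive real choices of \(|\psi\rangle\) and \(|\varphi\rangle\) so that the arcsine branches are the principal ones.

```python
import numpy as np

# Generalized iteration G = (2|phi><phi| - I)(I - 2|t><t|): the parallel
# component rotates by 2*beta_t per step, the orthogonal one only flips sign.
rng = np.random.default_rng(2)
N, t, j = 16, 3, 5
psi = rng.uniform(0.5, 1.5, N); psi /= np.linalg.norm(psi)
phi = rng.uniform(0.5, 1.5, N); phi /= np.linalg.norm(phi)

et = np.eye(N)[t]
G = (2*np.outer(phi, phi) - np.eye(N)) @ (np.eye(N) - 2*np.outer(et, et))
out = np.linalg.matrix_power(G, j) @ psi        # direct simulation

tp = phi - phi[t]*et; tp /= np.linalg.norm(tp)  # |t_perp> in span{|t>,|phi>}
p_par = psi[t]**2 + (tp @ psi)**2               # <psi|P_t|psi>, via Eq. (18)
beta, phi0 = np.arcsin(phi[t]), np.arcsin(psi[t]/np.sqrt(p_par))
assert np.isclose(out[t]**2, p_par*np.sin(2*j*beta + phi0)**2)   # Eq. (20)
```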
### Optimization equations

Now, we proceed to find the minimum number of oracle calls that can drive the quantum system to the target state. It can be verified that the standard Grover's search algorithm is always optimal, whatever the a priori probabilities of the database items are, provided the success probability of the search algorithm is required to be 1 (neglecting the integer nature of the number of oracle calls). Therefore, we allow a nonzero failure probability of the search algorithm in this work, and investigate how the reduction in the number of oracle calls can compensate for the loss in the success probability of the search algorithm. Suppose the success probability of the search algorithm is fixed as \(P_{0}\) and the number of oracle calls to realize this success probability is \(j\). By the Lagrange multiplier method, we can minimize the number of oracle calls \(j\) by letting the variation of the following function be zero, \[F=j^{2}+\mu\left(\bar{P}-P_{0}\right)+\nu_{1}\left(\bra{\psi}\!\ket{\psi}-1\right)+\nu_{2}\left(\bra{\varphi}\!\ket{\varphi}-1\right), \tag{22}\] where \(\mu\), \(\nu_{1}\) and \(\nu_{2}\) denote the Lagrange multipliers for the constraint conditions of the success probability, the normalization of the initial state and the normalization of the reflection axis of the diffusion operator, respectively.

Figure 2: A sketch of the improved quantum search algorithm. The initial state \(\ket{\psi}\) of the quantum system and the reflection axis \(\ket{\varphi}\) of the diffusion operator can be arbitrary and do not necessarily coincide. The reflection axis \(\ket{\varphi}\) and the target state \(\ket{t}\) of the search problem span a two-dimensional subspace, and the state of the system can be decomposed into two components, \(\ket{\psi_{t}^{\parallel}}\) parallel to this two-dimensional subspace and \(\ket{\psi_{t}^{\perp}}\) orthogonal to it. The Grover iteration consists of two steps: first perform the oracle operation which reflects the system about the state \(\ket{t_{\perp}}\) that is orthogonal to the target state \(\ket{t}\) in the two-dimensional subspace, then perform the diffusion operation which reflects the system about the reflection axis \(\ket{\varphi}\). The total effect of the Grover iteration is to rotate the parallel component \(\ket{\psi_{t}^{\parallel}}\) of the system state in the two-dimensional subspace, which is similar to the original Grover’s search algorithm, but with an additional flip of the orthogonal component \(\ket{\psi_{t}^{\perp}}\) about the two-dimensional subspace.

Note that as \(j\) must be positive, we minimize \(j^{2}\) instead of \(j\) in the above function (otherwise the variation may generate a negative \(j\) with minimized absolute value). The number of oracle calls, \(j\), also needs to be varied in the variation of \(F\). But the discreteness of \(j\) makes the variation of \(j\) difficult. To circumvent this issue, we assume the number of database elements \(N\) is large and rescale the number of oracle calls to \[\lambda=2j\arcsin\frac{1}{\sqrt{N}}, \tag{23}\] which is still discrete in principle but can vary approximately in a continuous way when \(N\) is sufficiently large. The optimized \(\lambda\) and the corresponding \(j\) will generally not be integers, but one just needs to take the ceiling function of \(j\) to make \(j\) an integer, which changes \(j\) by no more than 1, a negligible change when \(N\) is large; so we will just assume \(\lambda\) to be a continuous positive number in the following computation. The average success probability \(\bar{P}\) of the generalized Grover's search algorithm can be rewritten in terms of \(\lambda\) as \[\bar{P}=\sum_{t}\left[p_{t}\langle\psi|P_{t}|\psi\rangle\sin^{2}\left(\lambda\frac{\arcsin\langle t|\varphi\rangle}{\arcsin\frac{1}{\sqrt{N}}}+\arcsin\frac{\langle t|P_{t}|\psi\rangle}{\sqrt{\langle\psi|P_{t}|\psi\rangle}}\right)\right], \tag{24}\] and the \(j^{2}\) term in \(F\) (22) should accordingly be replaced by \(\lambda^{2}/(4\arcsin^{2}\frac{1}{\sqrt{N}})\). The variation of \(F\) includes the variation of the average success probability as well as of the other constraint conditions.
Since the average success probability is the main constraint condition in the variation of \(F\), we study the variation of the success probability \(\bar{P}\) first. By some computation, the variation of \(\bar{P}\) can be written as \[\delta\bar{P}=\langle\delta\psi|a_{\psi}\rangle+\langle\delta\varphi|b_{\varphi}\rangle+c_{\lambda}\delta\lambda. \tag{25}\] As the expressions for \(|a_{\psi}\rangle\), \(|b_{\varphi}\rangle\) and \(c_{\lambda}\) are quite lengthy, we leave the detail in Appendix A. Note that there should have been Hermitian conjugate terms of \(\langle\delta\psi|a_{\psi}\rangle\) and \(\langle\delta\varphi|b_{\varphi}\rangle\) in the variation of \(\bar{P}\) (25), but considering \(|\psi\rangle\) and \(|\varphi\rangle\) are both real states, the Hermitian conjugates of \(\langle\delta\psi|a_{\psi}\rangle\) and \(\langle\delta\varphi|b_{\varphi}\rangle\) coincide with themselves. So we have only \(\langle\delta\psi|a_{\psi}\rangle\) and \(\langle\delta\varphi|b_{\varphi}\rangle\) in Eq. (25). Taking the other two constraint conditions as well as the \(\lambda^{2}\) term into account, the full variation of \(F\) can be obtained as \[\delta F=\langle\delta\psi|(2\nu_{1}|\psi\rangle+\mu|a_{\psi}\rangle)+\langle\delta\varphi|(2\nu_{2}|\varphi\rangle+\mu|b_{\varphi}\rangle)+\left(\mu c_{\lambda}+\frac{\lambda}{2\arcsin^{2}\frac{1}{\sqrt{N}}}\right)\delta\lambda. \tag{26}\] When the number of oracle calls is minimized, the variation of \(F\) should be zero, so this immediately leads to the following optimization equations for the initial state \(|\psi\rangle\), the reflection axis \(|\varphi\rangle\) of the diffusion operator, and the rescaled number of oracle calls \(\lambda\), \[\begin{split}2\nu_{1}|\psi\rangle+\mu|a_{\psi}\rangle&=0,\\ 2\nu_{2}|\varphi\rangle+\mu|b_{\varphi}\rangle&=0,\\ \mu c_{\lambda}+\frac{\lambda}{2\arcsin^{2}\frac{1}{\sqrt{N}}}&=0.\end{split} \tag{27}\] Since \(\mu\) can be changed arbitrarily by changing \(\nu_{1}\) and \(\nu_{2}\) accordingly in the first two equations of (27), the third equation can then always be satisfied and does not need to be further considered in the following computation. By projecting the first two equations of (27) onto the states \(|\psi\rangle\) and \(|\varphi\rangle\) respectively, one can obtain the Lagrange multipliers \(\nu_{1}\) and \(\nu_{2}\), \[\nu_{1}=-\frac{\mu}{2}\langle\psi|a_{\psi}\rangle,\ \nu_{2}=-\frac{\mu}{2}\langle\varphi|b_{\varphi}\rangle. \tag{28}\] Therefore, the optimization equations for \(|a_{\psi}\rangle\) and \(|b_{\varphi}\rangle\) can be finally obtained as \[\begin{split}|a_{\psi}\rangle&=\langle\psi|a_{\psi}\rangle|\psi\rangle,\\ |b_{\varphi}\rangle&=\langle\varphi|b_{\varphi}\rangle|\varphi\rangle,\end{split} \tag{29}\] implying the proportionalities between \(|a_{\psi}\rangle\), \(|\psi\rangle\) and between \(|b_{\varphi}\rangle\), \(|\varphi\rangle\). Eq. (29) is the optimization condition derived from the Lagrange multiplier method for the initial state of the system, the reflection axis of the diffusion operator and the rescaled number of oracle calls. It will be the starting point of the following computation, from which we can derive the minimal number of oracle calls given the success probability of the search algorithm and the corresponding optimized initial state and reflection axis.

### Differential solution to optimization equations

Solving Eq. (29) is generally difficult, as the equation is nonlinear with respect to the initial state, the reflection axis and the rescaled number of oracle calls.
In order to simplify the problem, we assume the failure probability of the algorithm is low, i.e., the success probability \(\bar{P}\) is close to 1, so that the rescaled number of oracle calls as well as the initial state and the reflection axis have only slight deviations from those of the standard Grover's search algorithm. In this case, we just need to obtain the differential relation between the initial state, the reflection axis and the rescaled number of oracle calls to minimize the query complexity of the quantum search algorithm. In the following, we give a formal differential solution to the optimization problem based on this idea. In detail, one can take the differentiation of Eq. (29), which produces \[\begin{cases}|da_{\psi}\rangle=&\langle d\psi|a_{\psi}\rangle|\psi\rangle+\langle \psi|da_{\psi}\rangle|\psi\rangle+\langle\psi|a_{\psi}\rangle|d\psi\rangle,\\ |db_{\varphi}\rangle=&\langle d\varphi|b_{\varphi}\rangle|\varphi\rangle+ \langle\varphi|db_{\varphi}\rangle|\varphi\rangle+\langle\varphi|b_{\varphi} \rangle|d\varphi\rangle.\end{cases} \tag{30}\] Projecting the two lines of this equation onto \(|\psi\rangle\) and \(|\varphi\rangle\) respectively, one can obtain \(\langle d\psi|a_{\psi}\rangle=0\) and \(\langle d\varphi|b_{\varphi}\rangle=0\) by noting that \(\langle\psi|d\psi\rangle=0\), \(\langle\varphi|d\varphi\rangle=0\) as \(|\psi\rangle\) and \(|\varphi\rangle\) are real and normalized states. Hence, the above two differential equations can be simplified to \[\begin{split}(I-|\psi\rangle\langle\psi|)\,|da_{\psi}\rangle& =\langle\psi|a_{\psi}\rangle|d\psi\rangle,\\ (I-|\varphi\rangle\langle\varphi|)\,|db_{\varphi}\rangle& =\langle\varphi|b_{\varphi}\rangle|d\varphi\rangle,\end{split} \tag{31}\] where \(I\) denotes the \(N\times N\) identity matrix. The explicit expressions of \(|a_{\psi}\rangle\) and \(|b_{\varphi}\rangle\) can be derived from the variation of the average success probability \(\bar{P}\) as defined in Eq. (25) and are shown in detail in Appendix A, so their differentials \(|da_{\psi}\rangle\) and \(|db_{\varphi}\rangle\) can be obtained accordingly, which can be further expanded to the differentials of \(|\psi\rangle\), \(|\varphi\rangle\) and \(\lambda\), \[|da_{\psi}\rangle=A_{\psi\psi}|d\psi\rangle+A_{\psi\varphi}|d\varphi\rangle+d \lambda|a_{\psi\lambda}\rangle, \tag{32}\] where \(A_{\psi\psi}\) and \(A_{\psi\varphi}\) are both \(N\times N\) matrices and \(|a_{\psi\lambda}\rangle\) is an unnormalized \(N\times 1\) vector. Plugging Eq. 
(32) into the first line of (31), one can have \[\begin{split}(I-|\psi\rangle\langle\psi|)\,|da_{\psi}\rangle=&(I-|\psi\rangle\langle\psi|)\,A_{\psi\psi}|d\psi\rangle\\ &+(I-|\psi\rangle\langle\psi|)\,A_{\psi\varphi}|d\varphi\rangle\\ &+(I-|\psi\rangle\langle\psi|)\,d\lambda|a_{\psi\lambda}\rangle\\ =&\langle\psi|a_{\psi}\rangle|d\psi\rangle,\end{split} \tag{33}\] which can be rearranged to \[\widetilde{A_{\psi}}|d\psi\rangle+\widetilde{A_{\varphi}}|d\varphi\rangle=d\lambda|v_{a}\rangle, \tag{34}\] where \[\begin{split}\widetilde{A_{\psi}}=&\langle\psi|a_{\psi}\rangle I-(I-|\psi\rangle\langle\psi|)\,A_{\psi\psi},\\ \widetilde{A_{\varphi}}=&-(I-|\psi\rangle\langle\psi|)\,A_{\psi\varphi},\\ |v_{a}\rangle=&(I-|\psi\rangle\langle\psi|)\,|a_{\psi\lambda}\rangle.\end{split} \tag{35}\] Similarly, the differential \(|db_{\varphi}\rangle\) can also be expanded to the differentials of \(|\psi\rangle\), \(|\varphi\rangle\) and \(\lambda\), \[|db_{\varphi}\rangle=B_{\varphi\psi}|d\psi\rangle+B_{\varphi\varphi}|d\varphi\rangle+d\lambda|b_{\varphi\lambda}\rangle, \tag{36}\] where \(B_{\varphi\psi}\) and \(B_{\varphi\varphi}\) are \(N\times N\) matrices and \(|b_{\varphi\lambda}\rangle\) is an unnormalized \(N\times 1\) vector. Plugging Eq. (36) into the second line of (31) and rearranging the equation gives \[\widetilde{B_{\varphi}}|d\varphi\rangle+\widetilde{B_{\psi}}|d\psi\rangle=d\lambda|v_{b}\rangle, \tag{37}\] where \[\begin{split}\widetilde{B_{\psi}}=&-(I-|\varphi\rangle\langle\varphi|)\,B_{\varphi\psi},\\ \widetilde{B_{\varphi}}=&\langle\varphi|b_{\varphi}\rangle I-(I-|\varphi\rangle\langle\varphi|)\,B_{\varphi\varphi},\\ |v_{b}\rangle=&(I-|\varphi\rangle\langle\varphi|)\,|b_{\varphi\lambda}\rangle.\end{split} \tag{38}\] Now, the two optimization equations in Eq. (31) can be merged and written in a more compact way, \[M\begin{bmatrix}|d\psi\rangle\\ |d\varphi\rangle\end{bmatrix}=\begin{bmatrix}|v_{a}\rangle\\ |v_{b}\rangle\end{bmatrix}d\lambda, \tag{39}\] where \(M\) is a \(2N\times 2N\) matrix, \[M=\begin{bmatrix}\widetilde{A_{\psi}}&\widetilde{A_{\varphi}}\\ \widetilde{B_{\psi}}&\widetilde{B_{\varphi}}\end{bmatrix}. \tag{40}\] Therefore, \(|d\psi\rangle\) and \(|d\varphi\rangle\) are given by \[\begin{bmatrix}|d\psi\rangle\\ |d\varphi\rangle\end{bmatrix}=M^{-1}\begin{bmatrix}|v_{a}\rangle\\ |v_{b}\rangle\end{bmatrix}d\lambda. \tag{41}\] These are the formal differential relations between \(|\psi\rangle\), \(|\varphi\rangle\) and \(\lambda\). With this differential relation, \(|da_{\psi}\rangle\) (32) and \(|db_{\varphi}\rangle\) (36) can be written as \[\begin{bmatrix}|da_{\psi}\rangle\\ |db_{\varphi}\rangle\end{bmatrix}=\left(TM^{-1}\begin{bmatrix}|v_{a}\rangle\\ |v_{b}\rangle\end{bmatrix}+\begin{bmatrix}|a_{\psi\lambda}\rangle\\ |b_{\varphi\lambda}\rangle\end{bmatrix}\right)d\lambda, \tag{42}\] where \(T\) is a \(2N\times 2N\) matrix given by \[T=\begin{bmatrix}A_{\psi\psi}&A_{\psi\varphi}\\ B_{\varphi\psi}&B_{\varphi\varphi}\end{bmatrix}. \tag{43}\] Thus, the differential relations between \(|a_{\psi}\rangle\), \(|b_{\varphi}\rangle\) and \(\lambda\) are also obtained. As will be shown later, we will also need \(|d^{2}\psi\rangle\) and \(|d^{2}\varphi\rangle\) to compute the deviation of the success probability of the search algorithm from the original Grover's search algorithm, so we obtain a formal solution to \(|d^{2}\psi\rangle\) and \(|d^{2}\varphi\rangle\) below. By taking differentiation of \(|d\psi\rangle\) and \(|d\varphi\rangle\) in Eq.
(41) and noting that \[\frac{dM^{-1}}{d\lambda}=-M^{-1}\frac{dM}{d\lambda}M^{-1}, \tag{44}\] which can be derived from the differentiation of \(M^{-1}M=I\), one can have \[\begin{bmatrix}|d^{2}\psi\rangle\\ |d^{2}\varphi\rangle\end{bmatrix}=M^{-1}\left(-\frac{dM}{d\lambda}M^{-1}\begin{bmatrix}|v_{a}\rangle\\ |v_{b}\rangle\end{bmatrix}+\begin{bmatrix}|\frac{dv_{a}}{d\lambda}\rangle\\ |\frac{dv_{b}}{d\lambda}\rangle\end{bmatrix}\right)d\lambda^{2}. \tag{45}\] This is the formal solution to \(|d^{2}\psi\rangle\) and \(|d^{2}\varphi\rangle\). Note that \(M\), \(|v_{a}\rangle\), \(|v_{b}\rangle\) also depend on \(|\psi\rangle\) and \(|\varphi\rangle\), so Eq. (41) will need to be invoked in the computation of \(\frac{dM}{d\lambda}\), \(|\frac{dv_{a}}{d\lambda}\rangle\) and \(|\frac{dv_{b}}{d\lambda}\rangle\). Now, we can proceed to find the relation between the drop of the success probability of the quantum search algorithm and the decrease in the number of oracle calls. We expand the average success probability \(\bar{P}\) in the differentials of \(|\psi\rangle\), \(|\varphi\rangle\) and \(\lambda\) near the parameters of the original Grover search algorithm. As the original Grover search algorithm gives \(\bar{P}=1\), which is the maximal value of \(\bar{P}\), the expansion of the success probability \(\bar{P}\) to the first-order differentials of \(|\psi\rangle\), \(|\varphi\rangle\) and \(\lambda\) must vanish, so we need to consider the second-order differentials of \(\left|\psi\right\rangle\), \(\left|\varphi\right\rangle\) and \(\lambda\). The differentiation of \(\bar{P}\) is \[d\bar{P}=\langle d\psi|a_{\psi}\rangle+\langle d\varphi|b_{\varphi}\rangle+c_{\lambda}d\lambda, \tag{46}\] similar to the variation of \(\bar{P}\) (25). So, the expansion of \(\bar{P}\) to the second-order differentials of \(\left|\psi\right\rangle\), \(\left|\varphi\right\rangle\) and \(\lambda\) deviated from the standard Grover's search algorithm can be obtained as \[\begin{split}d\bar{P}\big{|}_{\text{std}}=&\frac{1}{2}\big{(}\langle d\psi|da_{\psi}\rangle+\langle d\varphi|db_{\varphi}\rangle+\langle d^{2}\psi|a_{\psi}\rangle\\ &+\langle d^{2}\varphi|b_{\varphi}\rangle+dc_{\lambda}d\lambda\big{)}\big{|}_{\text{std}}\,,\end{split} \tag{47}\] where the subscript "std" indicates the terms are evaluated with \(\left|\psi\right\rangle\), \(\left|\varphi\right\rangle\) and \(\lambda\) given by the standard Grover search algorithm, i.e., \[\left|\psi\right\rangle\big{|}_{\text{std}}=\left|\varphi\right\rangle\big{|}_{\text{std}}=\left|\psi_{0}\right\rangle,\ \lambda\big{|}_{\text{std}}=\frac{\pi}{2}-\arcsin\frac{1}{\sqrt{N}}, \tag{48}\] where \(\left|\psi_{0}\right\rangle\) is the uniformly superposed state given in (1). Then by invoking the original Grover's quantum search algorithm and using the results of \(\left|a_{\psi}\right\rangle\), \(\left|b_{\varphi}\right\rangle\), \(c_{\lambda}\) in Appendix A, it can be obtained that \[\left|a_{\psi}\right\rangle\big{|}_{\text{std}}=\left|b_{\varphi}\right\rangle\big{|}_{\text{std}}=2\left|\psi_{0}\right\rangle,\ c_{\lambda}\big{|}_{\text{std}}=0. \tag{49}\]
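The first identity in Eq. (49) admits a quick numerical sanity check: \(\bar{P}\) in Eq. (24) scales quadratically with the norm of \(|\psi\rangle\) and equals 1 at the parameters (48), so at that constrained maximum the unconstrained gradient is normal to the unit sphere and Euler's relation fixes its length to \(2\). A minimal finite-difference sketch (the nonuniform prior below is an arbitrary illustrative choice):

```python
import numpy as np

def pbar(psi, phi, lam, p, thN):
    # Eq. (24), for real entrywise-positive psi and phi (principal arcsines)
    N, total = len(p), 0.0
    for t in range(N):
        et = np.eye(N)[t]
        tp = phi - phi[t]*et; tp /= np.linalg.norm(tp)   # |t_perp>
        p_par = psi[t]**2 + (tp @ psi)**2                # <psi|P_t|psi>
        phi0 = np.arcsin(psi[t]/np.sqrt(p_par))
        total += p[t]*p_par*np.sin(lam*np.arcsin(phi[t])/thN + phi0)**2
    return total

N = 8; thN = np.arcsin(1/np.sqrt(N))
p = np.arange(1, N + 1, dtype=float); p /= p.sum()       # nonuniform prior
psi0, lam0 = np.full(N, 1/np.sqrt(N)), np.pi/2 - thN     # Eq. (48)

eps, g = 1e-6, np.zeros(N)
for i in range(N):                                       # central differences
    d = eps*np.eye(N)[i]
    g[i] = (pbar(psi0 + d, psi0, lam0, p, thN)
            - pbar(psi0 - d, psi0, lam0, p, thN))/(2*eps)
print(np.allclose(g, 2*psi0, atol=1e-4))                 # |a_psi>|_std = 2|psi_0>
```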
Taking the differentiation of \(\left|a_{\psi}\right\rangle\), \(\left|b_{\varphi}\right\rangle\), \(c_{\lambda}\) in Appendix A and evaluating the resulting differentials with \(\left|\psi\right\rangle\), \(\left|\varphi\right\rangle\) and \(\lambda\) from the standard Grover search algorithm, one can have \[\begin{split}\left|da_{\psi}\right\rangle\big{|}_{\text{std}}&=A_{\psi\psi}\big{|}_{\text{std}}\left|d\psi\right\rangle+A_{\psi\varphi}\big{|}_{\text{std}}\left|d\varphi\right\rangle+\left|a_{\psi\lambda}\right\rangle\big{|}_{\text{std}}d\lambda,\\ \left|db_{\varphi}\right\rangle\big{|}_{\text{std}}&=B_{\varphi\psi}\big{|}_{\text{std}}\left|d\psi\right\rangle+B_{\varphi\varphi}\big{|}_{\text{std}}\left|d\varphi\right\rangle+\left|b_{\varphi\lambda}\right\rangle\big{|}_{\text{std}}d\lambda,\end{split} \tag{50}\] where \[\begin{split}A_{\psi\psi}\big{|}_{\text{std}}=&0,\\ A_{\psi\varphi}\big{|}_{\text{std}}=&2I+\frac{\pi\sqrt{N}|\psi_{0}\rangle\langle\eta|-\pi N\sum_{t}p_{t}|t\rangle\langle t|}{(N-1)\arcsin\frac{1}{\sqrt{N}}},\\ \left|a_{\psi\lambda}\right\rangle\big{|}_{\text{std}}=&\frac{2|\psi_{0}\rangle-2\sqrt{N}|\eta\rangle}{\sqrt{N-1}},\end{split} \tag{51}\] and \[\begin{split}B_{\varphi\psi}\big{|}_{\text{std}}=&2I-\frac{\pi N\sum_{t}p_{t}|t\rangle\langle t|}{(N-1)\arcsin\frac{1}{\sqrt{N}}},\\ B_{\varphi\varphi}\big{|}_{\text{std}}=&\frac{\pi N\left(\pi-4\arccos\frac{1}{\sqrt{N}}\right)\sum_{t}p_{t}|t\rangle\langle t|}{2(N-1)\arcsin^{2}\frac{1}{\sqrt{N}}}+\frac{\pi\sqrt{N}|\psi_{0}\rangle\langle\eta|}{(N-1)\arcsin\frac{1}{\sqrt{N}}},\\ \left|b_{\varphi\lambda}\right\rangle\big{|}_{\text{std}}=&\frac{2|\psi_{0}\rangle+2\sqrt{N}|\eta\rangle}{\sqrt{N-1}}-\frac{\sqrt{N}\pi|\eta\rangle}{\sqrt{N-1}\,\arcsin\frac{1}{\sqrt{N}}}.\end{split} \tag{52}\] In the above equations, \(\left.\left\langle\psi|d\psi\right\rangle\right|_{\text{std}}=\left.\left\langle\varphi|d\psi\right\rangle\right|_{\text{std}}=\left.\left\langle\varphi|d\varphi\right\rangle\right|_{\text{std}}=\left.\left\langle\psi|d\varphi\right\rangle\right|_{\text{std}}=0\) has been used, and \[|\eta\rangle=\sum_{t}p_{t}|t\rangle, \tag{53}\] which is an unnormalized state determined by the a priori probability distribution \(p_{t}\) of the database elements. With \(A_{\psi\psi}\big{|}_{\text{std}}\), \(A_{\psi\varphi}\big{|}_{\text{std}}\), \(B_{\varphi\psi}\big{|}_{\text{std}}\), \(B_{\varphi\varphi}\big{|}_{\text{std}}\), one can obtain the matrix \(T\) for \(\left|da_{\psi}\right\rangle\) and \(\left|db_{\varphi}\right\rangle\) (42). The matrices \(\widetilde{A_{\psi}}\), \(\widetilde{A_{\varphi}}\), \(\widetilde{B_{\psi}}\), \(\widetilde{B_{\varphi}}\) can also be obtained accordingly by using the definitions (35) and (38) as \[\begin{split}\widetilde{A_{\psi}}\Big{|}_{\text{std}}=&2I,\\ \widetilde{A_{\varphi}}\Big{|}_{\text{std}}=&2I+\frac{\pi\sqrt{N}\sum_{t}p_{t}|t\rangle\langle t|-\pi\sqrt{N}|\psi_{0}\rangle\langle\eta|}{(N-1)\arcsin\frac{1}{\sqrt{N}}},\\ \widetilde{B_{\varphi}}\Big{|}_{\text{std}}=&2I+\frac{\pi\left(\pi-4\arcsin\frac{1}{\sqrt{N}}\right)}{2(N-1)\arcsin^{2}\frac{1}{\sqrt{N}}}\\ &\times\left(N\sum_{t}p_{t}|t\rangle\langle t|+\sqrt{N}|\psi_{0}\rangle\langle\eta|\right).\end{split} \tag{54}\] With \(\widetilde{A_{\psi}}\Big{|}_{\text{std}}\), \(\widetilde{A_{\varphi}}\Big{|}_{\text{std}}\), \(\widetilde{B_{\psi}}\Big{|}_{\text{std}}\) and \(\widetilde{B_{\varphi}}\Big{|}_{\text{std}}\), one can obtain the matrix \(M\) for \(|d\psi\rangle\) and \(|d\varphi\rangle\) (41).
In addition, the differential of \(c_{\lambda}\) can also be obtained with the parameters from the standard Grover's quantum search algorithm, \[\left.dc_{\lambda}\right|_{\text{std}}=-2\left(\frac{\sqrt{N}\arccos\frac{1}{\sqrt{N}}\,\langle\eta|d\varphi\rangle}{\sqrt{N-1}\,\arcsin\frac{1}{\sqrt{N}}}+\frac{\sqrt{N}\langle\eta|d\psi\rangle}{\sqrt{N-1}}+d\lambda\right), \tag{55}\] and the vectors \(\left|v_{a}\right\rangle\), \(\left|v_{b}\right\rangle\) in Eqs. (35) and (38) can be obtained as \[\left.\left|v_{a}\right\rangle\right|_{\text{std}}=\frac{2|\psi_{0}\rangle-2\sqrt{N}|\eta\rangle}{\sqrt{N-1}},\ \left.\left|v_{b}\right\rangle\right|_{\text{std}}=\frac{\arccos\frac{1}{\sqrt{N}}}{\arcsin\frac{1}{\sqrt{N}}}\left.\left|v_{a}\right\rangle\right|_{\text{std}}. \tag{56}\] Using the above matrices and plugging the differential relations (41) and (42) into Eq. (47), one can have \[d\bar{P}=\frac{1}{2}\,S\big{|}_{\text{std}}\,d\lambda^{2}, \tag{57}\] where \[\begin{split}S\big{|}_{\text{std}}=&\Big{(}-\langle V|M^{\dagger-1}\frac{dM^{\dagger}}{d\lambda}M^{\dagger-1}|\xi\rangle+\langle\frac{d}{d\lambda}V|M^{\dagger-1}|\xi\rangle\\ &+\langle V|M^{\dagger-1}TM^{-1}|V\rangle+\langle V|M^{\dagger-1}|\gamma\rangle+\frac{dc_{\lambda}}{d\lambda}\Big{)}\bigg{|}_{\text{std}},\end{split} \tag{58}\] and \[|V\rangle=\begin{bmatrix}|v_{a}\rangle\\ |v_{b}\rangle\end{bmatrix},\ |\xi\rangle=\begin{bmatrix}|a_{\psi}\rangle\\ |b_{\varphi}\rangle\end{bmatrix},\ |\gamma\rangle=\left|\partial_{\lambda}\xi\right\rangle=\begin{bmatrix}|a_{\psi\lambda}\rangle\\ |b_{\varphi\lambda}\rangle\end{bmatrix}. \tag{59}\] Eq. (57) immediately gives the decrease of the rescaled number of oracle calls in terms of the decrease in the success probability of the quantum search algorithm when the latter is small, \[d\lambda=-\sqrt{\frac{2d\bar{P}}{S\big{|}_{\text{std}}}}. \tag{60}\] The matrices and vectors in \(S\) (58) have been obtained above in Eqs. (51), (52), (54) and (56) with the parameters from the standard Grover's quantum search algorithm, except \(M^{-1}\), \(\frac{dM}{d\lambda}\), \(|dv_{a,b}\rangle\). \(M^{-1}\) does not have a general compact solution since \(M\) is a \(2N\times 2N\) matrix, and \(\frac{dM}{d\lambda}\), \(|dv_{a,b}\rangle\) rely on \(M^{-1}\) since they involve \(|d\psi\rangle\), \(|d\varphi\rangle\), which need \(M^{-1}\) to be obtained by (41). But as \(M\) is known (with the block submatrices given by (54)), \(M^{-1}\) can be obtained by numerical computation for any given \(N\), and thus \(\frac{dM}{d\lambda}\), \(|dv_{a,b}\rangle\) can also be obtained numerically. Therefore, we finally arrive at a formal solution to the optimal number of oracle calls given the decrease in the success probability of the quantum search algorithm \(\Delta P\), \[j_{\min}=\left\lceil\frac{\arccos\frac{1}{\sqrt{N}}-\sqrt{\frac{2\Delta P}{S|_{\rm std}}}}{2\arcsin\frac{1}{\sqrt{N}}}\right\rceil, \tag{61}\] where the rescaled number of oracle calls \(\lambda\) has been restored to the actual number of oracle calls \(j\). When \(N\) is large, \(j_{\min}\) is approximately \[j_{\min}\approx\left\lceil\sqrt{N}\left(\frac{\pi}{4}-\sqrt{\frac{\Delta P}{2S|_{\rm std}}}\right)\right\rceil. \tag{62}\] And the corresponding optimized initial state of the system and the reflection axis of the diffusion operator can be obtained from Eq.
(41), \[\begin{bmatrix}|\psi_{\rm opt}\rangle\\ |\varphi_{\rm opt}\rangle\end{bmatrix}=\begin{bmatrix}|\psi_{0}\rangle\\ |\psi_{0}\rangle\end{bmatrix}+\left.M^{-1}\right|_{\rm std}\left.\left|V\right\rangle\right|_{\rm std}\sqrt{\frac{2\Delta P}{S|_{\rm std}}}. \tag{63}\] _Remark._ Eq. (60) implies that if \(|S|\) is small, a large increase in the number of oracle calls buys only a small increase in the success probability. So, if a low failure probability of the quantum search algorithm is allowed, the query complexity of the algorithm can be significantly decreased compared to the standard Grover's quantum search algorithm. Certainly, a nonzero failure probability of the search algorithm will require more trials of the algorithm to find the search solution, which conversely increases the query complexity. However, as the failure probability is of order \(O(\Delta P)\) while the reduction in the number of oracle calls is of order \(O(\sqrt{\Delta P})\), the reduction in oracle calls far outweighs the extra oracle calls caused by the failures, so the optimization method obtained above still lowers the query complexity of the quantum search algorithm on average. ## IV Example: two-valued a priori probability distribution In the previous section, we derived the minimized number of oracle calls for the quantum search algorithm, the optimized initial state of the quantum system and the optimized reflection axis of the diffusion operator. In this section, we apply these results to a simple database model to show how a priori knowledge of the search solution can help reduce the query complexity of the quantum search algorithm. Consider a database of \(N\) elements. Suppose we know the a priori probabilities of the database elements to be the search solution, and the a priori probabilities are two-valued: \(K\) elements in the database have a priori probability \(p\) to be the search solution and the other \(N-K\) elements have a priori probability \[q=\frac{1-Kp}{N-K} \tag{64}\] to be the search solution. A low failure probability \(\Delta P\) of the quantum search algorithm is allowed. The method introduced in the previous section can be invoked to minimize the number of oracle calls needed to reach the given success probability by optimizing the initial state of the quantum system and the reflection axis of the diffusion operator. The mathematical details of the derivation are left in Appendix B. The factor \(S\) in the differential relation between the success probability and the rescaled number of oracle calls turns out to be \[S|_{\rm std}=\frac{2Np(Kp-1)}{-2Kp+K/N+Np}. \tag{65}\] Hence if the failure probability of the quantum search algorithm is \(\Delta P\), the decrease in the number of oracle calls is approximately \[\Delta j=\frac{-1}{2\operatorname{arccsc}\sqrt{N}}\sqrt{\frac{\Delta P}{S|_{\rm std}}}. \tag{66}\] To illustrate this result, the factor \(S\) is plotted for different values of \(p\) and \(K/N\) in Fig. 3. An interesting special case is when the a priori probabilities of the database elements are uniform, i.e., \[p=\frac{1}{N}, \tag{67}\] which is exactly the case considered by the standard Grover's search algorithm. It can be obtained from Eq. (65), and observed in Fig. 3, that in this case \[S=-2, \tag{68}\] which implies that the query complexity of the original Grover's search algorithm can also be decreased if a failure probability of the search algorithm is allowed.
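As a quick numerical check of Eqs. (65) and (66), the sketch below evaluates \(S|_{\rm std}\) for the two-valued model and the corresponding reduction in oracle calls; since \(S<0\) in this regime, \(|S|\) is taken inside the square root, and the inverse-trigonometric factor is again read as \(\operatorname{arccsc}\sqrt{N}=\arcsin(1/\sqrt{N})\):

```python
import numpy as np

def S_std(p, K, N):
    # Eq. (65): the factor S for the two-valued a priori distribution
    return 2 * N * p * (K * p - 1) / (-2 * K * p + K / N + N * p)

def delta_j(dP, p, K, N):
    # Eq. (66): approximate decrease in oracle calls for failure probability dP;
    # |S| is used here since S is negative in this regime
    theta = np.arcsin(1 / np.sqrt(N))   # arccsc(sqrt(N))
    return -np.sqrt(dP / abs(S_std(p, K, N))) / (2 * theta)

N = 10**4
print(S_std(1 / N, K=10, N=N))          # uniform prior: recovers S = -2, Eq. (68)
print(delta_j(1e-2, 1 / N, 10, N))      # ~ -3.5: oracle calls saved for dP = 1%
```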
The above result \(S=-2\) for the original Grover's search algorithm can also be obtained in a straightforward way, without invoking the method introduced in the previous section. As the success probability of the original Grover's algorithm after \(j\) Grover iterations is known to be \[P_{G}=\sin^{2}\left[(2j+1)\operatorname{arccsc}\sqrt{N}\right]=\sin^{2}\left(\lambda+\operatorname{arccsc}\sqrt{N}\right), \tag{69}\] where \(P_{G}\) denotes the success probability of the original Grover's search algorithm and \(\lambda\) is the rescaled number of oracle calls defined in (23), one can obtain \[\begin{split}\frac{dP_{G}}{d\lambda}=&\sin\left(2\lambda+2\operatorname{arccsc}\sqrt{N}\right),\\ \frac{d^{2}P_{G}}{d\lambda^{2}}=&2\cos\left(2\lambda+2\operatorname{arccsc}\sqrt{N}\right).\end{split} \tag{70}\] For the original Grover's search algorithm, \[P_{G}=1,\;\lambda=\frac{\pi}{2}-\operatorname{arccsc}\sqrt{N}, \tag{71}\] so one obtains \[\frac{dP_{G}}{d\lambda}=0, \tag{72}\] which reflects that \(P_{G}\) is at its maximal value \(1\), and \[\frac{d^{2}P_{G}}{d\lambda^{2}}=-2, \tag{73}\] in accordance with the result \(S=-2\) (68). This result for the original Grover's search algorithm seems quite natural, as a failure probability of the search algorithm certainly allows a reduction in the number of oracle calls. Note that the initial state of the system and the reflection axis of the diffusion operator are not optimized to obtain \(S=-2\) in this case: it can be verified that, for the original Grover's search algorithm, the uniformly superposed state (1) is still the optimal choice for the initial state and the reflection axis even when a failure probability of the search algorithm is allowed. But this is not the case when the a priori probabilities of the database elements are not uniform. For nonuniform a priori probabilities, one needs to change the initial state of the system and the reflection axis of the diffusion operator to minimize the query complexity of the quantum search algorithm, and this is the goal of the optimization method proposed in the previous section. In fact, by optimizing the initial state and the reflection axis, the quantum search algorithm gains more efficiency for database elements with nonuniform a priori probabilities than with uniform ones. This can be observed from Fig. 3, as the value \(S=-2\) of the original Grover's search algorithm has the largest magnitude over all possible values of \(S\) for the various a priori probabilities \(p\). As indicated by Eq. (66), a smaller \(|S|\) results in a larger decrease in the number of oracle calls, so Fig. 3 and Eq. (66) imply that all nonuniform a priori probability distributions of the database elements yield a larger boost to the efficiency of the quantum search algorithm than the uniform a priori probability distribution of the original Grover's algorithm. This shows the advantage of a nonuniform a priori distribution of database elements in optimizing the performance of the quantum search algorithm, in accordance with the intuition that one may exploit a nonuniform a priori probability distribution of database elements to adjust the initial state of the system and the reflection axis of the diffusion operator in favor of the states with higher a priori probabilities, so that the quantum search algorithm has lower query complexity.
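The second-derivative computation in Eqs. (69)-(73) is easy to verify numerically; the following sketch differentiates Eq. (69) at the optimal \(\lambda\) of Eq. (71), with the Grover angle written as \(\arcsin(1/\sqrt{N})\):

```python
import numpy as np

N = 10**4
theta = np.arcsin(1 / np.sqrt(N))            # arccsc(sqrt(N)), the Grover angle
lam_opt = np.pi / 2 - theta                  # Eq. (71)
P_G = lambda lam: np.sin(lam + theta) ** 2   # Eq. (69)

h = 1e-5                                     # central finite differences
d1 = (P_G(lam_opt + h) - P_G(lam_opt - h)) / (2 * h)
d2 = (P_G(lam_opt + h) - 2 * P_G(lam_opt) + P_G(lam_opt - h)) / h**2
print(round(d1, 6), round(d2, 6))            # ~0 and ~-2, matching Eqs. (72)-(73)
```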
As remarked in the previous section, though the optimization of the quantum search algorithm relies on a decrease of the success probability, which may require more trials of the algorithm to find the solution, such an optimization approach can still increase the efficiency of the search algorithm on average: the decrease of the success probability is of order \(O(\Delta P)\) while the decrease of the number of oracle calls is of order \(O(\sqrt{\Delta P})\), and the latter is much larger when \(\Delta P\) is small. The above result for the original quantum search algorithm with uniform a priori probabilities can serve as an intuitive example of this point. It can be seen from Eq. (69) that when the quantum search algorithm is close to completion, i.e., \((2j+1)\arcsin\frac{1}{\sqrt{N}}\) is close to \(\frac{\pi}{2}\), increasing the number of oracle calls \(j\) can hardly increase the success probability, so if a negligible portion of the success probability is dropped, a significant portion of the oracle iterations may be reduced! This is why the query complexity of the original Grover's search algorithm can be improved by the optimization method.

Figure 3: A plot of the factor \(S\) with respect to the a priori probability \(p\) for the database model with two-valued a priori probabilities. The number of database elements is \(N=10^{4}\), and the number of elements with a priori probability \(p\) is \(K\). The factor \(S\) is plotted for different \(K\). Note that the range of \(p\) varies with \(K\), as \(Kp\) cannot exceed the total a priori probability \(1\). It can be seen that all the lines attain the minimum value \(-2\) at the point \(p=10^{-4}\), as this is the case of uniform a priori probabilities corresponding to the original Grover's quantum search algorithm, which is already optimal and cannot be optimized further. All the other points, with \(p\neq 10^{-4}\), represent nonuniform a priori probability distributions, which can be optimized by the approach presented in this paper; the smaller \(|S|\) is, the more the number of Grover iterations can be reduced.

## V Conclusions In this work, we study the quantum search algorithm with general a priori probabilities for the elements of the database to be the solution of the search problem, and optimize the initial state of the quantum system and the reflection axis of the diffusion operator to minimize the number of oracle calls at the expense of a failure probability of the quantum search algorithm, using a variational approach. The number of database elements is assumed to be large so that the rescaled number of Grover iterations, i.e., the ratio between the number of Grover iterations and the square root of the database size, is approximately continuous and the variational approach can work. We obtain the optimization equations for the initial state and the reflection axis, but they are difficult to solve due to their nonlinearity. We therefore assume the failure probability of the quantum search algorithm to be low, so that the optimized initial state and reflection axis deviate only slightly from those of the standard Grover's search algorithm; a differentiation approach can then be exploited to obtain the relation between the initial state of the quantum system, the reflection axis of the diffusion operator, the rescaled number of Grover iterations and the success probability of the search algorithm.
We obtain a formal solution for the rescaled number of oracle calls in terms of the decrease in the success probability, and apply it to a simple database model with two-valued a priori probabilities to exemplify the effect of the optimization. The results show that the optimization can indeed lower the query complexity of the quantum search algorithm, even for the original Grover's search algorithm with uniform a priori probabilities for the database elements. This is not surprising, as the success probability of the search algorithm is lowered; but the optimization leads to a larger reduction in the query complexity when the a priori probability distribution is nonuniform, which indicates the advantage of nonuniform a priori probability distributions in improving the efficiency of the quantum search algorithm. We remark that the current result is effective for a small failure probability, as we expand the success probability only to the lowest-order nonzero differential of the rescaled number of oracle calls. If one wants results effective for larger failure probabilities, higher-order differentials of the rescaled number of oracle calls need to be included. The way to expand the success probability to higher-order differentials of the rescaled number of oracle calls is similar to the differentiation approach presented in this work. The main difference is that the differential relation (57) will include higher orders of the differential of the rescaled number of oracle calls, so one will need to solve a polynomial equation to obtain the decrease of the rescaled number of oracle calls in terms of the decrease of the success probability, which increases the complexity of this optimization problem. We hope this work can provide a new perspective on the quantum search problem, and stimulate further research on improving the efficiency of the quantum search algorithm with a priori information. ###### Acknowledgements. The authors acknowledge helpful discussions with Jingyi Fan and Junyan Li. This work is supported by the National Natural Science Foundation of China (Grant No. 12075323).
2310.18749
Minimal Clifford Shadow Estimation by Mutually Unbiased Bases
Predicting properties of large-scale quantum systems is crucial for the development of quantum science and technology. Shadow estimation is an efficient method for this task based on randomized measurements, where many-qubit random Clifford circuits are used for estimating global properties like quantum fidelity. Here we introduce the minimal Clifford measurement (MCM) to reduce the number of possible random circuits to the minimum, while keeping the effective post-processing channel in shadow estimation. In particular, we show that MCM requires $2^n+1$ distinct Clifford circuits, and it can be realized by Mutually Unbiased Bases (MUB), with $n$ as the total qubit number. By applying the Z-Tableau formalism, this ensemble of circuits can be synthesized to the $\mathrm{-S-CZ-H-}$ structure, which can be composed by $2n-1$ fixed circuit modules, and the total circuit depth is at most $n+1$. Compared to the original Clifford measurements, our MCM significantly reduces the circuit complexity and the compilation costs. In addition, we find the sampling advantage of MCM on estimating off-diagonal operators, and extend this observation to the biased-MCM scheme to enhance the sampling improvement further.
Qingyue Zhang, Qing Liu, You Zhou
2023-10-28T16:22:04Z
http://arxiv.org/abs/2310.18749v2
# Minimal Clifford Shadow Estimation by Mutually Unbiased Bases ###### Abstract Predicting properties of large-scale quantum systems is crucial for the development of quantum science and technology. Shadow estimation is an efficient method for this task based on randomized measurements, where many-qubit random Clifford circuits are used for estimating global properties like quantum fidelity. Here we introduce the minimal Clifford measurement (MCM) to reduce the number of possible random circuits to the minimum, while keeping the effective post-processing channel in shadow estimation. In particular, we show that MCM requires \(2^{n}+1\) distinct Clifford circuits, and it can be realized by Mutually Unbiased Bases (MUB), with \(n\) as the total qubit number. By applying the Z-Tableau formalism, this ensemble of circuits can be synthesized to the \(-\mathrm{S-CZ-H-}\) structure, which can be composed of \(2n-1\)_fixed_ circuit modules, and the total circuit depth is at most \(n+1\). Compared to the original Clifford measurements, our MCM significantly reduces the circuit complexity and the compilation costs. In addition, we find the sampling advantage of MCM for estimating off-diagonal operators, and extend this observation to the biased-MCM scheme to enhance the sampling improvement further. ## I Introduction Learning quantum systems is of both fundamental and practical interest for quantum physics and quantum information processing [1; 2]. With the increase of the qubit number in various platforms, from quantum networks to quantum simulators and computers [3; 4], it is crucial to develop efficient tools to benchmark them [5; 6], from characterizing quantum noise [7] to measuring interesting properties, such as entanglement entropy [8] and the out-of-time-ordered correlator [9]. Shadow tomography [10] is a recently proposed framework to predict many properties of quantum objects, like quantum states and channels, with randomized measurements [11]. Compared to traditional quantum tomography, which aims to reconstruct quantum objects [12; 13], the key point of shadow estimation is to build a few classical snapshots of the original quantum object [14], which are utilized to estimate many observables in a multiplexed way. The performance of shadow estimation depends on the applied random unitary evolution on the unknown state and the observables to be predicted [14; 15; 16]. There are two primary random unitary ensembles [14]. Pauli measurements, enabled by independent single-qubit random unitary rotations, are efficient for predicting local observables with a constant non-trivial support. On the other hand, Clifford measurements, with random Clifford evolution on all \(n\) qubits, are efficient for global low-rank observables, such as the fidelity to some entangled state. The Pauli measurement and its variants are more feasible for experiments [17; 18; 19; 20; 21], and there are many realized applications [22; 23; 24]. In contrast, the development of the Clifford measurement is relatively slow. This is mainly due to the complex structure of the Clifford group, with an astronomical number of elements [25; 26]. In addition, it generally needs an \(O(n)\)-depth quantum circuit with considerable sampling and compiling efforts [27], which further hinders applications on near-term quantum platforms.
There has been positive progress, for example, applying random local quantum gates sequentially to approximate full Clifford measurements [15; 28] or using emergent designs with the help of a large number of ancilla qubits [29; 30]. However, the performance is only guaranteed in the average case [28], and the classical post-processing is generally approximate and empirical [15].

Figure 1: The outline of MCM shadow estimation.

To accelerate the application of Clifford measurements, in this work we propose the Minimal Clifford measurement (MCM) framework for shadow estimation, which avoids exhausting the full Clifford group. More specifically, we reduce the number of chosen Clifford circuits to its minimum, while the post-processing retains the same simple form as the original one. In particular, we prove that such a minimal set must contain \(2^{n}+1\) elements, which can be realized by Mutually Unbiased Bases (MUB) (Sec. III). We further give a general routine to synthesize these Clifford circuits with the Tableau formalism; the final circuit structure is of the \(-\mathrm{S-CZ-H}-\) form with depth at most \(n+1\) over the all-to-all architecture (Sec. IV). Even though the circuit depth still scales with \(n\), the S and CZ gates are diagonal in the computational basis and may be realized simultaneously [31], potentially with the help of an Ising Hamiltonian [32]. In addition, the \(-\mathrm{S-CZ-}\) parts can be decomposed into \(2n-1\)_fixed_ modules with explicit sub-circuit structure, which are feasible to benchmark experimentally [33]. We give a thorough performance analysis of the MCM shadow, analytically and numerically. In particular, we find that our approach shows advantages for estimating off-diagonal observables, and we relate the variance of the estimation to the coherence of the observable (Sec. V). Furthermore, we develop the biased-MCM protocol as an analog of biased-Pauli measurements [17; 18] in the Clifford scenario and demonstrate its advantages, which may lead to applications in fidelity estimation (Sec. VI). ## II Preliminaries for shadow estimation and Clifford measurements In this section, we give a brief review [14] of the paradigm of shadow estimation and the Clifford group [34]. For the quantum experiment, an unknown \(n\)-qubit quantum state \(\rho\in\mathcal{H}_{d}\) with \(d=2^{n}\) is evolved under a unitary \(U\), randomly selected from some ensemble \(\mathcal{E}\), to \(\rho\mapsto U\rho U^{\dagger}\). After that, it is measured in the computational basis to get the result \(\mathbf{b}\in\{0,1\}^{n}\). According to Born's rule, the corresponding probability is \(\Pr(\mathbf{b}|U)=\left\langle\mathbf{b}|U\rho U^{\dagger}|\mathbf{b}\right\rangle\). Note that both \(U\) and \(\mathbf{b}\) are random variables, so the whole process is a random one. For the classical post-processing, one 'prepares' \(U^{\dagger}\left|\mathbf{b}\right\rangle\left\langle\mathbf{b}\right|U\) on the classical computer, and the effective process can be written as a quantum channel.
For a fixed \(U\), that is, by taking the expectation first over \(\mathbf{b}\), we denote the effective channel as \[\begin{split}\mathcal{M}(\rho|U):&=\mathbb{E}_{\mathbf{b}}\ U^{\dagger}\left|\mathbf{b}\right\rangle\left\langle\mathbf{b}\right|U\\ &=\sum_{\mathbf{b}}\mathrm{tr}\Big{[}\rho U^{\dagger}\left|\mathbf{b}\right\rangle\left\langle\mathbf{b}\right|U\Big{]}U^{\dagger}\left|\mathbf{b}\right\rangle\left\langle\mathbf{b}\right|U,\end{split} \tag{1}\] where the conditional probability \(\Pr(\mathbf{b}|U)\) appears in the trace form for later convenience. The whole channel is then obtained by further taking the expectation over \(U\), \[\mathcal{M}_{\mathcal{E}}(\rho):=\mathbb{E}_{U\in\mathcal{E}}\ \mathcal{M}(\rho|U). \tag{2}\] If the measurement is tomographically (over-)complete, i.e., one takes sufficiently many distinct \(U\) for the evolution, the full information of \(\rho\) is preserved. In other words, the channel \(\mathcal{M}_{\mathcal{E}}\) can be inverted mathematically, and one can construct the classical snapshot as \[\hat{\rho}=\mathcal{M}_{\mathcal{E}}^{-1}\ (U^{\dagger}\left|\mathbf{b}\right\rangle\left\langle\mathbf{b}\right|U). \tag{3}\] It is straightforward to check that \(\mathbb{E}\hat{\rho}=\rho\) by Eqs. (1) and (2), and \(\hat{O}:=\mathrm{tr}[\hat{\rho}O]\) is an unbiased estimator of \(\tilde{O}:=\mathrm{tr}[\rho O]\). For applications, one can construct a shadow set containing a few of these independent snapshots \(\{\hat{\rho}_{i}\}\) to predict many properties \(\{O_{j}\}\). There are two prominent unitary ensembles, the \(n\)-qubit random Clifford ensemble \(\mathcal{E}_{\mathrm{Cl}}\) and the tensor product of random single-qubit Clifford gates, \(\mathcal{E}_{\mathrm{Pauli}}\). The Pauli measurement by the ensemble \(\mathcal{E}_{\mathrm{Pauli}}\) can be realized efficiently in experiments, but it works poorly for estimating global observables, such as the fidelity to some many-qubit entangled state. The Clifford measurement \(\mathcal{E}_{\mathrm{Cl}}\) is good at this task, with the effective channel and its inverse being \[\begin{split}\mathcal{M}_{Cl}(A)&=(2^{n}+1)^{-1}[A+\mathrm{tr}(A)\mathbb{I}],\\ \mathcal{M}_{Cl}^{-1}(A)&=(2^{n}+1)A-\mathrm{tr}(A)\mathbb{I}.\end{split} \tag{4}\]
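As a minimal sanity check of Eqs. (1)-(4), the following single-qubit sketch (\(n=1\), \(d=2\)) averages the snapshot \(\hat{\rho}=3\,U^{\dagger}|\mathbf{b}\rangle\langle\mathbf{b}|U-\mathbb{I}\) exactly over the six stabilizer states, i.e., the eigenbases of \(X\), \(Y\) and \(Z\), and recovers \(\rho\); this small ensemble already forms a projective 2-design, which is the property exploited below:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# the six projectors U^dag |b><b| U: eigenstates of X, Y and Z
projectors = [(I2 + s * P) / 2 for P in (X, Y, Z) for s in (+1, -1)]

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # arbitrary test state

# exact expectation: each basis sampled with probability 1/3, each outcome b
# with Pr(b|U) = tr(rho Phi); the snapshot of Eqs. (3)-(4) is 3 Phi - I
avg = sum(np.trace(rho @ Phi) * (3 * Phi - I2) for Phi in projectors) / 3
print(np.allclose(avg, rho))   # True: E[rho_hat] = rho, i.e. the estimator is unbiased
```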
Every coin has two sides. A random Clifford unitary is challenging to realize on current quantum platforms, as it requires quite a few two-qubit gates. Current methods uniformly sample a Clifford and compile it to an \(O(n)\)-depth quantum circuit with time complexity \(O(n^{2})\) [35; 36]. Note that the synthesis of Clifford circuits is intricately tied to the qubit connectivity of the platform. Over the Linear-Nearest-Neighbour architecture, the upper bound on the two-qubit circuit depth is \(7n-4\), while for all-to-all architectures it is \(1.5n+O(\log^{2}(n))\) [37]. In the next two sections, we aim to simplify the circuit construction of the Clifford measurement to its minimal form with the help of MUB for the all-to-all architecture. At the end of this section, we recast the essentials of Clifford unitaries and the Pauli group for later use. The Pauli group for an \(n\)-qubit quantum system is \(\mathbb{P}^{n}=\{\pm 1,\pm\sqrt{-1}\}\times\{\mathbb{I}_{2},X,Y,Z\}^{\otimes n}\), with \(\mathbb{I}_{2}\), \(X,Y,Z\) the identity and Pauli operators for a single qubit. For convenience, we denote the quotient group without the phases as \(\mathbf{P}^{n}=\mathbb{P}^{n}\big{/}\{\pm 1,\pm\sqrt{-1}\}\), and the set of non-identity Pauli operators as \(\mathbf{P}^{n}_{s}=\mathbf{P}^{n}\setminus\mathbb{I}\). A Clifford unitary normalizes \(\mathbb{P}^{n}\), i.e., \(U^{\dagger}PU\in\mathbb{P}^{n},\forall P\in\mathbb{P}^{n}\), and a specific Clifford unitary is determined by its action \(U^{\dagger}X_{i}U\) and \(U^{\dagger}Z_{i}U\) for \(i\in[n]\), as the \(X_{i}\) and \(Z_{i}\) generate all Pauli operators. Hereafter we use \(Z_{i}\) (\(X_{i}\)) for short to denote the \(n\)-qubit operator \(Z_{i}\otimes\mathbb{I}_{[n]\setminus\{i\}}\) (\(X_{i}\otimes\mathbb{I}_{[n]\setminus\{i\}}\)) with only the \(i\)-th qubit being non-identity. ## III Minimal Clifford measurement from mutually unbiased bases To simplify the full Clifford measurement of the original shadow estimation, we pose the following task, which minimizes the size of the Clifford ensemble while preserving the effective quantum channel \(\mathcal{M}_{Cl}\). **Task 1**. _Suppose \(\mathcal{E}\) is a subset of the full Clifford group \(\mathcal{E}_{\mathrm{Cl}}\); the task reads_ \[\begin{split}&\textbf{min:}\ |\mathcal{E}|\\ &\textbf{s.t.:}\ \forall\rho,\frac{1}{|\mathcal{E}|}\sum_{U\in\mathcal{E}}\mathcal{M}(\rho|U)=\mathcal{M}_{Cl}(\rho),\end{split} \tag{5}\] _where \(\mathcal{M}_{Cl}(\rho)\) is the effective quantum channel shown in Eq. (4)._ First of all, we show a lower bound on \(|\mathcal{E}|\) by introducing a lemma that accounts for the effect of an individual Clifford unitary. After the Clifford rotation \(U\), the final measurement is conducted in the computational basis, that is, the \(Z\) basis. As a result, we only need to consider the action of \(U\) on the \(Z\) basis. Define the generators in the Heisenberg picture as \[g_{i}=U^{\dagger}Z_{i}U,\ i\in[n]. \tag{6}\] In other words, different Clifford \(U\) with the same \(\langle g_{i}\rangle\) (ignoring the phase) lead to the same measurement setting in the shadow estimation, as shown explicitly in the following Lemma 1. **Lemma 1**. _Denote the operators generated by \(\langle g_{i}\rangle_{i=0}^{n-1}\) given in Eq. (6) as \(S_{\mathbf{m}}=\prod_{i=0}^{n-1}g_{i}^{m_{i}}\) with \(\mathbf{m}\) an \(n\)-bit vector; the quantum channel \(\mathcal{M}(\rho|U)\) defined in Eq. (1) satisfies_ \[\mathcal{M}\left(\rho|U\right)=2^{-n}\sum_{\mathbf{m}\in\{0,1\}^{n}}\mathrm{tr}(\rho S_{\mathbf{m}})S_{\mathbf{m}}. \tag{7}\] The proof is left in Appendix A.1; it transforms the summation over the computational basis \(\mathbf{b}\) into one over the operators \(S_{\mathbf{m}}\). In fact, the \(S_{\mathbf{m}}\) are the stabilizers of the state \(\ket{\Psi_{0}}=U^{\dagger}\ket{\mathbf{0}}\), with \(S_{\mathbf{m}}\ket{\Psi_{0}}=\ket{\Psi_{0}},\forall\mathbf{m}\). \(g_{i}\) and \(S_{\mathbf{m}}\) are Hermitian operators by definition, so a possible phase does not affect the measurement setting in Eq. (7). Hereafter, we ignore their global phases, that is, \(g_{i},S_{\mathbf{m}}\in\mathbf{P}^{n}\). **Proposition 1**. _The cardinality of the unitary ensemble \(\mathcal{E}\) in Task 1 is lower bounded by \(|\mathcal{E}|\geq 2^{n}+1\)._ The proof given in Appendix A.2 is based on the fact that an individual Clifford unitary acquires the information of \(2^{n}-1\) non-identity Pauli operators, as shown in Lemma 1, while there are \(4^{n}-1\) of them in total.
In this case, the Clifford elements in \(\mathcal{E}\) must not share any non-identity \(S_{\mathbf{m}}\), and one needs \(|\mathcal{E}|=(4^{n}-1)/(2^{n}-1)=2^{n}+1\) of them to cover all the Paulis for tomographic completeness. **Definition 1**. _Suppose a subset of the Clifford group, denoted by \(\mathcal{E}_{\text{min}}\), reaches the lower bound in Proposition 1; we call the corresponding measurement the minimal Clifford measurement (MCM)._ Next, by introducing the Mutually Unbiased Bases (MUB) [38; 39; 40], we find a typical set \(\mathcal{E}_{min}=\mathcal{E}_{\text{MUB}}\) that saturates the lower bound. Here the Clifford unitaries in \(\mathcal{E}_{\text{MUB}}\) are specified by their stabilizer generators \(\langle g_{i}\rangle\). **Proposition 2**. _The MUB in Ref. [38] is an MCM, and the generators of all \(2^{n}+1\) different Clifford unitaries are written in terms of the stabilizer generators as follows. The \(0\)-th element of \(\mathcal{E}_{\text{MUB}}\) is \(\langle Z_{i}\rangle_{i=0}^{n-1}\), i.e., the \(Z\)-basis measurement; and the other \(2^{n}\) elements, labeled by \(v=0,1,...,2^{n}-1\), are_ \[\Big{\langle}g_{i}=\sqrt{-1}^{\alpha_{v,i,i}}X_{i}\bigotimes_{j=0}^{n-1}Z_{j}^{\alpha_{v,j,i}}\Big{\rangle}, \tag{8}\] _with \(\alpha_{v,j,i}=[(v\odot 2^{i})M_{n}^{(0)}]_{j}\) a binary value. Here \(\odot\) denotes multiplication in the Galois field \(GF(2^{n})\), \((\cdot)\) transforms a number into an \(n\)-bit string, and \(M_{n}^{(0)}\) is an \(n\times n\) binary matrix._ The specific definition of \(M_{n}^{(0)}\) is in Appendix B.1, and the detailed proof that \(\mathcal{E}_{\text{MUB}}\) is an MCM is left in Appendix B.2. In fact, Eq. (8) offers another way to represent the MUB previously introduced in Ref. [40]. Some remarks about quantum designs are in order. Note that the measurement channels in Eq. (1), Eq. (2) and Eq. (7) are second-order functions of the quantum state, so any (projective) 2-design ensemble yields the same channel as the full Clifford measurement in Eq. (4). It was shown that any covariant Clifford 2-design of an \(n\)-qubit system reaches its minimum as an MUB [41]. As a result, MUB leads to the same channel as the full Clifford ensemble. In some sense, our approach manifests this result from the perspective of the action of an individual Clifford element via Lemma 1. ## IV Efficient circuit synthesis of MCM In this section, we focus on the quantum circuit synthesis of MCM, especially for the unitary ensemble \(\mathcal{E}_{\mathrm{MUB}}\) given in Proposition 2. The synthesis follows the framework for general Clifford unitaries via the Tableau formalism underlying the Gottesman-Knill theorem. Interestingly, since \(|\mathcal{E}_{\mathrm{MUB}}|\ll|\mathcal{E}_{\mathrm{Cl}}|\), there is room for further simplification of the quantum circuits. We introduce the simplified Z-Tableau here and leave more details to Appendix B.3. As shown in Sec. III, the essential information is the stabilizer generators \(\langle g_{i}\rangle\) in Eq. (6) without the phase factor. In this way, we write \[\tilde{g}_{i}=\bigotimes_{j=0}^{n-1}X_{j}^{\gamma_{ij}}Z_{j}^{\delta_{ij}},\ i\in[n], \tag{9}\] where \(\gamma_{ij}\) and \(\delta_{ij}\) are the elements of two \(n\times n\) binary matrices \(C\) and \(D\), and \(T=[C,D]\) is called the Z-Tableau. The action \(V\tilde{g}_{i}V^{\dagger}\) of a Clifford gate \(V\) on \(\tilde{g}_{i}\) is thus recorded as a transformation of the matrix \(T\).
As a result, our task is to find \(V\) that takes all \(\tilde{g}_{i}\) back to the original \(Z_{i}\), i.e., takes \(T\) to \(T_{0}=[\mathbb{O},\mathbb{I}]\), with \(C=\mathbb{O}\) the null matrix and \(D=\mathbb{I}\) the identity matrix. We choose the basic generating Clifford gates as the single-qubit Hadamard gate H, the phase gate S, and the two-qubit controlled-Z gate CZ, with the corresponding update rules of the Tableau \(T\) listed as follows. For all \(i\), * \(\mathrm{H}(a)\): exchange \(\gamma_{i,a}\) and \(\delta_{i,a}\); * \(\mathrm{S}(a)\): \(\delta_{i,a}:=\gamma_{i,a}+\delta_{i,a}\); * \(\mathrm{CZ}(a,b)\): \(\delta_{i,a}:=\delta_{i,a}+\gamma_{i,b},\ \delta_{i,b}:=\delta_{i,b}+\gamma_{i,a}\), acting on qubit \(a\) and on the qubit pair \((a,b)\), respectively. For the unitary ensemble \(\mathcal{E}_{\mathrm{MUB}}\) given in Proposition 2, the Tableau of the \(0\)-th element is already \([\mathbb{O},\mathbb{I}]\), so no synthesis is needed; the Tableaus of the remaining \(2^{n}\) elements in Eq. (8) are all of the form \([\mathbb{I},D_{v}]\), with \(v\) denoting the element label. In particular, the \(i\)-th row of \(D_{v}\) is the vector \((v\odot 2^{i})M_{n}^{(0)}\). The following result shows an important property of the matrix \(D_{v}\). A matrix \(D\) is said to be a Hankel matrix if \(D_{i,j}=D_{i^{\prime},j^{\prime}}\) whenever \(i+j=i^{\prime}+j^{\prime}\). **Proposition 3**. _The matrix \(D_{v}\) from the Tableau \([\mathbb{I},D_{v}]\) of each element of \(\mathcal{E}_{\mathrm{MUB}}\) given in Eq. (8) is a Hankel matrix, for all \(v=0,1,\cdots,2^{n}-1\)._ The proof is in Appendix B.4. Note that a Hankel matrix is also a symmetric matrix, with the further constraint that it is completely determined by its \(2n-1\) anti-diagonals. As a result, one can take the anti-diagonals as a basis for Hankel matrices. We define \(\{\mathbb{H}_{k}\}_{k=0}^{2n-2}\) with \([\mathbb{H}_{k}]_{i,j}=\delta_{i+j,k}\), which has \(1\)'s only on the \(k\)-th anti-diagonal. Thus our target matrix is \(D_{v}=\sum_{k=0}^{2n-2}\beta_{k}^{v}\mathbb{H}_{k}\), and the \((2n-1)\)-bit vector \(\vec{\beta}^{v}\) characterizes the element from Eq. (8). Our synthesis strategy is as follows. For each \(\mathbb{H}_{k}\) with \(\beta_{k}=1\) in the summation, one uses a circuit module \(M_{k}\) satisfying \([\mathbb{I},\,\mathbb{H}]\to[\mathbb{I},\mathbb{H}-\mathbb{H}_{k}]\) to transform the Tableau from the initial \([\mathbb{I},\mathbb{H}]\) to \([\mathbb{I},\mathbb{O}]\). After that, applying the Hadamard gates \(\mathrm{H}^{\otimes n}\) transforms the Tableau \([\mathbb{I},\mathbb{O}]\to[\mathbb{O},\mathbb{I}]\) and completes the synthesis. This process is repeated for every \([\mathbb{I},D_{v}]\), and the whole synthesis procedure is listed in Algorithm 1.

```
Input: the qubit number n, and the parameter vector of the Hankel matrix {β_k}, k = 0, ..., 2n-2.
Output: the Clifford circuit C, initialized as C = ∅.
 1: for k = 0 to 2n-2 do
 2:   if β_k = 1 then
 3:     if k is even then
 4:       C ← C ∪ S(k/2)
 5:     end if
 6:     Initialize p, q = 0.
 7:     if k < n-1 then
 8:       p = 0, q = k
 9:     else
10:       p = k-n+1, q = n-1
11:     end if
12:     while p < q do
13:       C ← C ∪ CZ(p, q)
14:       p = p+1, q = q-1
15:     end while
16:   end if
17: end for
18: for i = 0 to n-1 do
19:   C ← C ∪ H(i)
20: end for
```
**Algorithm 1** Circuit synthesis for Z-Tableau \([\mathbb{I},\,\mathbb{H}]\)
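For concreteness, the following is a direct transcription of Algorithm 1 into Python; the gate-tuple encoding of the circuit \(\mathbf{C}\) is our own convention:

```python
def synthesize(n, beta):
    """Algorithm 1: gate list clearing the Z-Tableau [I, H], where the Hankel
    matrix H = sum_k beta[k] * H_k is encoded by its 2n-1 anti-diagonals."""
    circuit = []
    for k in range(2 * n - 1):
        if beta[k] == 1:
            if k % 2 == 0:
                circuit.append(("S", k // 2))      # clears the diagonal entry (k/2, k/2)
            p, q = (0, k) if k < n - 1 else (k - n + 1, n - 1)
            while p < q:
                circuit.append(("CZ", p, q))       # clears the pair of entries (p, q), (q, p)
                p, q = p + 1, q - 1
    circuit += [("H", i) for i in range(n)]        # maps [I, O] to [O, I]
    return circuit

# e.g. n = 3 and a Hankel matrix with 1's on anti-diagonals k = 0 and k = 2
print(synthesize(3, [1, 0, 1, 0, 0]))
# [('S', 0), ('S', 1), ('CZ', 0, 2), ('H', 0), ('H', 1), ('H', 2)]
```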
Fig. 2 shows the general circuit construction of our approach. The module \(M_{k}\) may contain S and CZ gates on the first \(k+1\) qubits when \(k<n\). \(M_{k}\) and \(M_{2n-2-k}\) are identical to each other but act on the qubits in reverse order. There are in total \(2n-1\) modules, and the circuit depth over all modules can be parallelized to \(n\) by combining \(M_{k}\) and \(M_{n+k}\) into the same layer. As the S and CZ gates commute, the whole circuit can also be synthesized in the three-stage form \(-\mathrm{S}-\mathrm{CZ}-\mathrm{H}-\). We provide Fig. 3 as a visual explanation of how to eliminate the Z-Tableau and synthesize the corresponding quantum circuit, as listed in Algorithm 1. Moreover, we give an example of the MUB for \(n=3\), from Eq. (8) to the synthesis of the quantum circuits, in Appendix B.5. ## V Performance analysis and off-diagonal advantage With the synthesized quantum circuits for the MUB at hand, one can proceed with the MCM shadow estimation via Eqs. (1), (2), and (3), using the same post-processing scheme as for the full Clifford measurements in Eq. (4). In this section, we analyse the performance of the introduced MCM shadow in depth by investigating the variance for estimating some observable \(O\). In shadow estimation, this variance satisfies \[\operatorname{Var}_{\mathcal{E}}(\hat{O})=\operatorname{Var}_{\mathcal{E}}(\hat{O}_{0})\leq\max_{\rho}\mathbb{E}\operatorname{tr}(O_{0}\hat{\rho})^{2}=\|O_{0}\|_{\text{s},\mathcal{E}}^{2}. \tag{10}\] Here, \(O_{0}=O-\operatorname{tr}(O)\mathbb{I}/d\) is the traceless part of \(O\). Note that the variance is further bounded by the square of the shadow norm \(\|O_{0}\|_{\text{s},\mathcal{E}}\) [14]. The shadow norm is a function of the observable \(O\) and the unitary ensemble \(\mathcal{E}\), obtained by maximizing over all possible input states \(\rho\). Another variance quantification is the locally-scrambled shadow norm \(\|O\|_{\text{ls},\mathcal{E}}\leq\|O\|_{\text{s},\mathcal{E}}\) [15; 28; 44], which considers the _average_ over \(\rho\) from a 1-design ensemble, or equivalently, the input state \(\rho=\mathbb{I}/2^{n}\) being the maximally mixed state. **Theorem 1**. _For MCM shadow estimation with the unitary ensemble \(\mathcal{E}_{\mathrm{MUB}}\) determined by some MUB, the corresponding shadow norm and locally-scrambled shadow norm are upper bounded by_ \[\begin{split}\|O_{0}\|_{\text{s},\mathcal{E}_{\mathrm{MUB}}}^{2}&\leq(2^{n}+1)\operatorname{tr}\bigl{(}O_{0}^{2}\bigr{)}, \tag{11}\\ \|O_{0}\|_{\text{ls},\mathcal{E}_{\mathrm{MUB}}}^{2}&\leq\frac{2^{n}+1}{2^{n}}\operatorname{tr}\bigl{(}O_{0}^{2}\bigr{)}.\end{split}\] Proof.: The proof is mainly based on the 2-design property [41] of MUB. Here, we denote the MUB state as \(\ket{\Phi_{U,\mathbf{b}}}=U^{\dagger}\ket{\mathbf{b}}\), and the density matrix as \(\Phi=\ket{\Phi}\bra{\Phi}\). By utilizing the inverse channel in Eq. (4), and recalling Eq.
(10), the variance can be upper bounded by \[\begin{split}\mathbb{E}\operatorname{tr}(O_{0}\hat{\rho})^{2}&=\mathbb{E}\operatorname{tr}\bigl{[}O_{0}\ \mathcal{M}^{-1}(\Phi_{U,\mathbf{b}})\bigr{]}^{2} \tag{12}\\ &=(2^{n}+1)^{2}\ \mathbb{E}\operatorname{tr}[O_{0}\Phi_{U,\mathbf{b}}]^{2}\\ &=(2^{n}+1)\sum_{U\in\mathcal{E}_{MUB},\mathbf{b}}\operatorname{tr}[O_{0}\Phi_{U,\mathbf{b}}]^{2}\operatorname{tr}(\rho\Phi_{U,\mathbf{b}})\\ &\leq(2^{n}+1)\operatorname{tr}\Biggl{[}O_{0}^{\otimes 2}\sum_{U\in\mathcal{E}_{MUB},\mathbf{b}}\Phi_{U,\mathbf{b}}^{\otimes 2}\Biggr{]}\\ &=(2^{n}+1)\operatorname{tr}\bigl{[}O_{0}^{\otimes 2}\ (\mathbb{S}+\mathbb{I}^{\otimes 2})\bigr{]}=(2^{n}+1)\operatorname{tr}\bigl{(}O_{0}^{2}\bigr{)}.\end{split}\] Here, \(\mathbb{S}\) and \(\mathbb{I}^{\otimes 2}\) are the swap and identity operators on the 2-copy space. The inequality is due to the fidelity \(\operatorname{tr}(\rho\Phi_{U,\mathbf{b}})\leq 1\), and the final equality holds because \(\sum\Phi_{U,\mathbf{b}}^{\otimes 2}\) is a projective 2-design, so the summation is proportional to the projection onto the symmetric subspace \(\Pi_{sym}=2^{-n}(2^{n}+1)^{-1}(\mathbb{S}+\mathbb{I}^{\otimes 2})\) on \(\mathcal{H}_{d}^{\otimes 2}\).

Figure 3: Illustration of the circuit synthesis in Algorithm 1 given an input Z-Tableau \([C,D]\) when \(n=3\). The D-matrix of the input Z-Tableau is a Hankel matrix. To eliminate the \(k\)-th anti-diagonal of the D-matrix, we make use of the \(k\)-th module, called \(M_{k}\). Finally a full H-layer is utilized to transform the Z-Tableau \([\mathbb{I},\mathbb{O}]\rightarrow[\mathbb{O},\mathbb{I}]\).

For the locally-scrambled shadow norm, one just takes \(\rho=\mathbb{I}/2^{n}\) in Eq. (12), which contributes a \(2^{-n}\) to the final result. Some remarks on the comparison to the original full Clifford shadow follow. In the worst case, where the original full-Clifford shadow norm is \(\left\|O_{0}\right\|_{\text{s},\mathcal{E}_{\mathrm{Cl}}}^{2}\sim O(1)\operatorname{tr}(O_{0}^{2})\) while our MCM shadow can be exponentially worse, the average behaviours of the two protocols still coincide by the 2-design property, \(\left\|O_{0}\right\|_{\text{ls},\mathcal{E}_{\mathrm{MUB}}}^{2}=\left\|O_{0}\right\|_{\text{ls},\mathcal{E}_{\mathrm{Cl}}}^{2}\). On the other hand, the bound on \(\left\|O_{0}\right\|_{\text{s},\mathcal{E}_{\mathrm{MUB}}}^{2}\) is tight in some sense: the exponential scaling with the qubit number \(n\) is not an artifact of the derivation, but intrinsic to MCM. Let us denote \(\left\langle O_{0}\right\rangle_{U,\mathbf{b}}:=\operatorname{tr}[O_{0}\Phi_{U,\mathbf{b}}]\) and \(\left\langle\rho\right\rangle_{U,\mathbf{b}}:=\operatorname{tr}(\rho\Phi_{U,\mathbf{b}})\) for short; the third line of Eq. (12) can then be written as \[\operatorname{Var}_{\mathcal{E}_{\text{MUB}}}(\hat{O})\leq(2^{n}+1)\sum_{U,\mathbf{b}}\left\langle O_{0}\right\rangle_{U,\mathbf{b}}^{2}\langle\rho\rangle_{U,\mathbf{b}}, \tag{13}\] and we have the following result. **Observation 1**. _Suppose there is some \(\Phi_{U,\mathbf{b}}\) such that both \(\left\langle O_{0}\right\rangle_{U,\mathbf{b}}=\Theta(1)\) and \(\left\langle\rho\right\rangle_{U,\mathbf{b}}=\Theta(1)\); then the variance of the MUB-based shadow scales as \(\Theta(2^{n})\)._ The observation is direct to see, as in this case one of the terms in the summation in Eq. (13) satisfies \(\left\langle O_{0}\right\rangle_{U,\mathbf{b}}^{2}\langle\rho\rangle_{U,\mathbf{b}}=\Theta(1)\).
That is, if \(O_{0}\) and \(\rho\) both share constant overlap with some basis state \(\Phi_{U,\mathbf{b}}\), the final result should be exponential in \(n\). Indeed, it is not hard to find examples supporting this observation. Let us consider the fidelity estimation task, which is the main application of the original shadow estimation. In this task, \(O=\Psi\) is some pure (stabilizer) state, and thus \(\operatorname{tr}\bigl{(}O_{0}^{2}\bigr{)}\) is a constant. Here, in the numerical simulation we take the observable and the input state \(O=\rho=\left|\operatorname{GHZ}\right\rangle\left\langle\operatorname{GHZ}\right|\) as an example. In this case, there exists \(\Phi_{\mathbb{I},\mathbf{0}}\) such that \(\left\langle O_{0}\right\rangle_{\mathbb{I},\mathbf{0}},\left\langle\rho\right\rangle_{\mathbb{I},\mathbf{0}}=\Theta(1)\). That is, the state \(\Phi_{\mathbb{I},\mathbf{0}}\) is a diagonal state corresponding to the 0-th element of the MUB, the \(Z\) basis with \(U=\mathbb{I}\). As a result, the variance should be \(\Theta(2^{n})\), which is also manifested by the numerics in Fig. 4(a). Observation 1 motivates us to relieve this worst-case exponential-scaling variance of MCM by considering the diagonal and off-diagonal parts of the observable \(O\) separately, under a chosen basis determined by one \(U\in\mathcal{E}_{\text{MUB}}\) out of all \((2^{n}+1)\) elements. We demonstrate the variance for estimating only the off-diagonal part as follows. **Theorem 2**. _For a given basis from one element of the MUB, i.e., \(\{\left|\Phi_{U,\mathbf{b}}\right\rangle\}\) with a fixed \(U\in\mathcal{E}_{\text{MUB}}\), consider the off-diagonal part of the observable \(O\), \(O_{F}=\sum_{\mathbf{b}\neq\mathbf{b}^{\prime}}O_{\mathbf{b},\mathbf{b}^{\prime}}\left|\Phi_{U,\mathbf{b}}\right\rangle\left\langle\Phi_{U,\mathbf{b}^{\prime}}\right|\). The variance of estimating \(O_{F}\) with the MCM shadow is upper bounded by_ \[\operatorname{Var}_{\mathcal{E}_{\text{MUB}}}\left(\widehat{O}_{F}\right)\leq\frac{2^{n}+1}{2^{n}}[C_{l_{1}}(O)]^{2}, \tag{14}\] _where \(C_{l_{1}}(O)=\sum_{\mathbf{b}\neq\mathbf{b}^{\prime}}\left|O_{\mathbf{b},\mathbf{b}^{\prime}}\right|\) denotes the \(l_{1}\)-norm of quantum coherence [45]._ The proof is left in Appendix C.2. As such, the variance is at most polynomial if \(C_{l_{1}}(O)=O(\text{poly}(n))\). In practice, one can estimate the diagonal part of \(O\) by directly selecting the measurement basis determined by some preferred \(U\), which is easy to conduct. For the challenging off-diagonal part [16; 46], one can apply the MCM-based shadow to estimate it. Finally, by combining the diagonal and off-diagonal results, the whole estimation procedure is completed. In the example of estimating the fidelity of a GHZ state, the chosen basis is the \(Z\) basis, i.e., \(U=\mathbb{I}\), and \(O_{F}=\frac{1}{2}(\left|1\rangle\langle 0\right|^{\otimes n}+\left|0\rangle\langle 1\right|^{\otimes n})\) with \(C_{l_{1}}(O)=1\). One can measure \(O_{F}\) with the MCM shadow and the diagonal part with the \(Z\)-basis measurement, respectively. The final variance does not scale with \(n\), as further manifested by Fig. 4(b). The off-diagonal advantage shown in Theorem 2 and the GHZ showcase rely on choosing a proper basis from all the MUBs as the 'diagonal' basis, and then measuring the diagonal and off-diagonal parts separately with standard and shadow measurements, respectively.
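A few lines of numpy make this GHZ showcase explicit: the sketch below performs the \(Z\)-basis split of \(O\) and evaluates the coherence bound of Eq. (14) for a small, illustrative \(n\):

```python
import numpy as np

n = 4
d = 2 ** n
ghz = np.zeros(d); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
O = np.outer(ghz, ghz)                 # O = |GHZ><GHZ|

O_diag = np.diag(np.diag(O))           # diagonal part: measured directly in the Z basis
O_F = O - O_diag                       # off-diagonal part: estimated with MCM shadows
C_l1 = np.abs(O_F).sum()               # l1-coherence, equal to 1 for the GHZ projector
print(C_l1, (2 ** n + 1) / 2 ** n * C_l1 ** 2)   # the bound of Eq. (14) stays O(1) in n
```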
The 'proper' basis is chosen such that the constant overlap \(\left\langle O_{0}\right\rangle_{U,\mathbf{b}}=\Theta(1)\) in Observation 1 is avoided in the shadow estimation. In practice, however, one may want to measure a complex observable spanning quite a few different MUBs. This situation makes the selection of the 'proper' basis frustrating, and motivates us to consider the biased-MCM in the next section.

Figure 4: The statistical variance of shadow estimation with total number of snapshots \(N=10000\) using the Pauli measurement (blue), the Clifford measurement (green) and the Minimal Clifford measurement (red), respectively. In both (a) and (b) the state is the standard GHZ state \(\left|\operatorname{GHZ}\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle^{\otimes n}+\left|1\right\rangle^{\otimes n})\). In (a) \(O=\left|\operatorname{GHZ}\right\rangle\left\langle\operatorname{GHZ}\right|\), and in (b) \(O_{F}=[(\left|0\right\rangle\left\langle 1\right|)^{\otimes n}+(\left|1\right\rangle\langle 0|)^{\otimes n}]/2\), which is the off-diagonal term of the GHZ state. The dotted lines show the fitting curves with the corresponding slopes.

## VI Biased-MCM shadow estimation In this section, we further propose the biased-MCM scheme as an extension and enhancement of the off-diagonal advantage discussed in the previous section. In the off-diagonal advantage, a reference basis is chosen deterministically, while the other bases are sampled uniformly at random. Here, by leveraging prior knowledge about the observable to be predicted, we extend this uneven treatment of the bases to allow sampling the MUB elements with varying probabilities. Intuitively, one can increase the probability of choosing a basis if it reveals more information about the underlying observable. Note that biased schemes exist for Pauli measurements [17; 18], but none for Clifford measurements, on account of the large cardinality of the Clifford group; our MCM approach makes a biased scheme possible for Clifford measurements. Compared to the (uniform) MCM shadow, two modifications are made as follows. * Probability of sampling unitary \(U\in\mathcal{E}_{\text{MUB}}\): \(\frac{1}{2^{n}+1}\to P_{U}\). * The formula of post-processing: \[\widehat{O_{0}}=\frac{\operatorname{tr}\bigl{(}O_{0}U^{\dagger}|\mathbf{b}\rangle\langle\mathbf{b}|U\bigr{)}}{P_{U}}, \tag{15}\] and the estimator of \(O\) is defined as \(\hat{O}=\widehat{O_{0}}+2^{-n}\operatorname{tr}(O)\). We demonstrate the unbiasedness of \(\hat{O}\) in Appendix D.1. In general, an arbitrary observable can be decomposed into the MUBs as [47; 48] \[O=-\operatorname{tr}(O)\mathbb{I}+\sum_{U\in\mathcal{E}_{\text{MUB}}}\sum_{\mathbf{b}\in\{0,1\}^{n}}\alpha_{U,\mathbf{b}}\Phi_{U,\mathbf{b}}, \tag{16}\] where \(\alpha_{U,\mathbf{b}}=\operatorname{tr}(O\Phi_{U,\mathbf{b}})\) with \(\Phi_{U,\mathbf{b}}=U^{\dagger}|\mathbf{b}\rangle\langle\mathbf{b}|U\). Based on this canonical decomposition, we analytically find the 'optimal' probability distribution for the biased-MCM. Here optimal means that we find a solution optimizing the upper bound on the variance of the estimation. In particular, we choose \[P_{U}=\frac{B_{U}}{\sum_{U^{\prime}\in\mathcal{E}_{\text{MUB}}}B_{U^{\prime}}}, \tag{17}\] with \[\begin{split}B_{U}:&=\max_{\mathbf{b}\in\{0,1\}^{n}}|\operatorname{tr}(\Phi_{U,\mathbf{b}}O_{0})|\\ &=\max_{\mathbf{b}\in\{0,1\}^{n}}|\alpha_{U,\mathbf{b}}-2^{-n}\operatorname{tr}(O)|.\end{split} \tag{18}\] **Theorem 3**. _For an observable \(O\) whose decomposition on the MUBs is given by Eq.
(16), if one applies the biased-MCM shadow estimation with the probability \(P_{U}\) given in Eq. (17), the variance of the estimation is upper bounded by_ \[\mathrm{Var}_{\text{biased-}\mathcal{E}_{\text{MUB}}}(\hat{O})\leq\left(\sum_{U\in\mathcal{E}_{\text{MUB}}}B_{U}\right)^{2}. \tag{19}\] The proof is in Appendix D.2. Moreover, Fig. 5(a) showcases the significant sampling advantage of the biased-MCM for the fidelity estimation of GHZ-like states. In particular, when \(\rho=O=|\text{GHZ}\rangle\langle\text{GHZ}|\) at \(\theta=0.5\pi\), the variance vanishes. We provide a more general result in the following Proposition 4, by applying the properties of the important case where the observable \(O\) is a stabilizer state, given in Lemma 2.

Figure 5: The statistical variance of shadow estimation with total number of snapshots \(N=1000\) using the Clifford measurement (green), MCM (red) and biased-MCM (orange) in three different scenarios. In (a), the quantum state \(\rho\) is parameterized as the phased-GHZ state \(|\text{GHZ}(\theta)\rangle=\cos(\theta/2)|0\rangle^{\otimes 6}+\sin(\theta/2)|1\rangle^{\otimes 6}\), with \(\theta\in[0.1\pi,0.9\pi]\), and the observable \(O\) is set to be a 6-qubit GHZ state. In (b), we estimate the fidelity of a set of Haar-random states \(\rho_{i}=\Phi_{i}\) for \(i\in[1,100]\) with some Clifford stabilizer \(O=\Phi_{C}\). The variance is the average over these 100 cases. In (c) and (d), we estimate the fidelity of \(\rho_{i}=\Phi_{i}\) with \(O_{i}=\Phi_{i}\) and \(O_{i}=\Phi_{1}\), respectively.

**Proposition 4**. _Suppose the observable and the unknown quantum state are both \(O=\rho=V^{\dagger}|0\rangle\langle 0|V\) for some Clifford unitary \(V\); then one has_ \[\mathrm{Var}_{biased-\mathcal{E}_{\mathrm{MUB}}}(\hat{O})=0, \tag{20}\] _by applying the biased-MCM with the optimized probability given in Eq. (17)._ The proof is left in Appendix D.4. **Lemma 2**. _Suppose the observable is \(O=V^{\dagger}|\mathbf{0}\rangle\langle\mathbf{0}|V\), where \(V\) is an arbitrary Clifford; then the elements of the canonical MUB decomposition \(\alpha_{U,\mathbf{b}}=|\langle\mathbf{b}|UV^{\dagger}|\mathbf{0}\rangle|^{2}\) satisfy_ \[\sum_{U\in\mathcal{E}_{\mathrm{MUB}}}\max_{\mathbf{b}}\alpha_{U,\mathbf{b}}=2. \tag{21}\] _Furthermore, suppose the Z-Tableau of \(UV^{\dagger}\) is \(T_{UV^{\dagger}}=[C,D]\); the distribution of \(\alpha_{U,\mathbf{b}}\) over \(\mathbf{b}\in\{0,1\}^{n}\) is as follows: \(2^{r_{U}}\) of them take the value \(2^{-r_{U}}\) and the remaining \(2^{n}-2^{r_{U}}\) are 0. Here \(r_{U}=\mathrm{rank}_{\mathbb{F}_{2}}(C)\) over the binary field._ This lemma characterizes the projections of stabilizer states onto the MUBs, which is of independent interest. The proof is left in Appendix D.3. In addition, we numerically demonstrate the variance of estimating observables with the different estimation protocols, as shown in Fig. 5. Specifically, in Fig. 5(b), we randomly generate 100 quantum states \(\mathcal{E}=\{|\Phi_{i}\rangle\}_{i=1}^{100}\) according to the Haar measure, along with one Clifford stabilizer state \(|\Phi_{C}\rangle\). We then estimate the fidelity \(\mathrm{tr}(\Phi_{i}\Phi_{C})=|\langle\Phi_{i}|\Phi_{C}\rangle|^{2}\) with \(O=\Phi_{C}\) via the different shadow protocols. The result, averaged over these 100 experiments, highlights a constant-factor advantage of the biased-MCM over the other protocols.
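The single-qubit case already illustrates Lemma 2 and Proposition 4; the sketch below computes \(\alpha_{U,\mathbf{b}}\), the weights \(B_{U}\) of Eq. (18) and the probabilities \(P_{U}\) of Eq. (17) for the stabilizer observable \(O=|0\rangle\langle 0|\) over the three single-qubit MUBs, finding \(\sum_{U}\max_{\mathbf{b}}\alpha_{U,\mathbf{b}}=2\) and all sampling weight on the \(Z\) basis (hence zero variance when \(\rho=O\)):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
bases = {"Z": np.diag([1.0 + 0j, -1.0]),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]])}
O = np.diag([1.0 + 0j, 0.0])                 # stabilizer observable |0><0|

B, alpha_max = {}, {}
for name, P in bases.items():
    alphas = [np.trace(O @ (I2 + s * P) / 2).real for s in (+1, -1)]
    alpha_max[name] = max(alphas)
    B[name] = max(abs(a - np.trace(O).real / 2) for a in alphas)   # Eq. (18)

print(sum(alpha_max.values()))               # 2.0, as stated in Lemma 2
total = sum(B.values())
print({k: round(v / total, 3) for k, v in B.items()})   # P_U: {'Z': 1.0, 'X': 0.0, 'Y': 0.0}
```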
Moreover, by replacing the Clifford stabilizer with Haar-random observables \(O=\Phi_{i}\) and \(O=\Phi_{1}\) in Fig. 5(c) and (d) respectively, we find that the advantage of the biased-MCM protocol vanishes. ## VII Conclusion and outlook In this work, we propose the MCM shadow framework to simplify the original Clifford measurement to the greatest extent. By applying the canonical MUB and the Tableau formalism, we give an explicit and efficient method to synthesize the random unitary ensemble, where the costs of both circuit synthesis and experimental realization are significantly reduced. For example, the quantum circuit can be realized in platforms with all-to-all architectures, such as Rydberg atoms in optical tweezers [49]. The performance analysis shows the limits and advantages of the MCM shadow, especially for estimating off-diagonal observables. The biased-MCM shadow protocol further adjusts the sampling probability of the random Clifford circuit based on the target observable, which can further enhance the performance of the framework. From the framework presented here, there are a few intriguing directions to explore in the future. First, it is very interesting to apply the MCM shadow to detect multipartite entanglement [50; 51], as genuine entanglement [52; 53] and more detailed structures [54; 55; 56] can be revealed by the fidelities to some target entangled states. Second, the quantum circuit structure is mainly determined by the Z-Tableau according to the selection of MUBs. It is thus well worth studying how different circuit architectures [57] and ensemble definitions [58; 59] would enhance the performance of the whole shadow protocol. Third, in the biased-MCM, we need the projections of the target observables onto all the MUBs. Consequently, it is important to develop additional adaptive or real-time methods to ease this, and to find applications where the decomposition on the MUBs has polynomially many terms, like the Pauli-operator summation of a general Hamiltonian. Finally, it is possible to extend our MCM to higher-dimensional systems, where the existence of a full set of MUB is unknown [40]. _Acknowledgements--_ We acknowledge useful discussions with Zhou You. This work is supported by National Natural Science Foundation of China (NSFC) Grant No. 12205048, the start-up funding of Fudan University and the Innovation Program for Quantum Science and Technology 2021ZD0302000.
2304.13783
Fine Tuning with Abnormal Examples
Given the prevalence of crowd-sourced labor in creating Natural Language Processing datasets, these datasets have become increasingly large. For instance, the SQUAD dataset currently sits at over 80,000 records. However, because the English language is rather repetitive in structure, the distribution of word frequencies in the SQUAD dataset's contexts is relatively unchanged. By measuring each sentence's distance from the covariate distribution of frequencies of all sentences in the dataset, we identify 10,500 examples that create a more uniform distribution for training. Fine-tuning ELECTRA [4] on this subset of examples reaches better performance than a model trained on all 87,000 examples. Herein we introduce a methodology for systematically pruning datasets for fine-tuning, reaching better out-of-sample performance.
Will Rieger
2023-04-26T18:59:48Z
http://arxiv.org/abs/2304.13783v1
# Fine Tuning With Abnormal Examples ###### Abstract Given the prevalence of crowd-sourced labor in creating Natural Language Processing datasets, these datasets have become increasingly large. For instance, the SQUAD dataset currently sits at over 80,000 records. However, because the English language is rather repetitive in structure, the distribution of word frequencies in the SQUAD dataset's contexts is relatively unchanged. By measuring each sentence's distance from the covariate distribution of frequencies of all sentences in the dataset, we identify 10,500 examples that create a more uniform distribution for training. Fine-tuning ELECTRA [4] on this subset of examples reaches better performance than a model trained on all 87,000 examples. Herein we introduce a methodology for systematically pruning datasets for fine-tuning, reaching better out-of-sample performance. ## I Introduction Whether it is because of extensive crowd-sourced labor, better computing, or enhanced storage, there is a plethora of data available to researchers training Natural Language Processing models for question answering tasks. However, because most of these question answering datasets (specifically SQUAD) rely on encyclopedic contexts, the distribution of word choice is similar amongst most of the examples. Most question answering tasks begin by predicting the starting or ending location of the answer within the context. Intuitively, it is reasonable to believe that this value can be reached from a relative location in the context and some dataset artifacts using a set of non-linear models. While, obviously, there has been great success in recent years using these kinds of models for this task, relative performance has increased only asymptotically toward human performance. Our hypothesis is that this ceiling has been created by underlying distributions in the data. No matter how large a training set becomes, it is not sufficiently robust until there is a sufficient number of examples that are mutually abnormal. If new examples added to the training set follow a similar lexical distribution, artifacts will persist and the resulting models will not be sufficiently robust. Work has been done to similarly understand how models treat various examples through Dataset Cartography [3]. Swayamdipta et al. primarily focus on performance and variability during training. By simultaneously measuring confidence and variability, they sort examples into three categories: easy-to-learn, ambiguous, and hard-to-learn. The methodology presented herein produces a similar grouping of examples. Our methodology produces three categories of examples: low abnormality, mutually abnormal, and high abnormality. We further go on to show that training solely on a subset of the most representative of those samples is sufficient for reasonably robust training. ## II Measuring Abnormality Measuring this abnormality can be done through the Mahalanobis Distance [1]. Developed in 1927, the Mahalanobis Distance was used to analyze distinct sub-groups in the features of human skulls. By measuring the relative distance from a multivariate distribution, a scalar value can be assigned to each sample, and samples can be grouped accordingly. In practice, this score can be thought of as a sample's relative abnormality with respect to the distribution across these dimensions.
\[d_{t}=(x_{t}-\mu)\Sigma^{-1}(x_{t}-\mu)^{\prime} \tag{1}\] For each example at step \(t\), the Mahalanobis distance is calculated by de-meaning the features of the example and multiplying them with the inverse of the covariance matrix of the features of the entire dataset (Equation 1). For each of the 87,000 examples in the SQUAD dataset, the features of each context are defined as the density of each individual word's occurrence in the overall dataset. Additionally, to properly compute the covariance matrix, each context is back-padded with 0's to ensure each example is the same length. Using each example's abnormality, we were able to examine the relative density of mutual abnormality between each example in the training set and the distribution of all examples' features (Figure 1). The leptokurtic distribution of abnormalities supports our hypothesis that most examples have similar abnormalities to one another in their relative word distributions. The fat tails of low abnormality (left tail) and high abnormality (right tail) define sets of increasingly abnormal examples that better represent examples mutually abnormal from one another. Fig. 1: Visualization of 1-gram individual word abnormality. ## III Building the New Dataset In order to build a training set, we sample 10,500 examples in total, 3,500 from each of the three categories defined. The high abnormality set is built by taking the indices of the 3,500 highest abnormality scores. The low abnormality set is built by taking the indices of the 3,500 lowest abnormality scores. The mutually abnormal set is built by sampling the indices of the 3,500 examples closest to the mean of the distribution. To better exemplify the differences between these three categories of examples, it is important to understand the categories of the contexts from each of the sampled distributions. For repeatability, the lowest abnormality score's index was example #21706. The highest abnormality score's index was example #12171. And the most mutually abnormal example has index #9879. Anecdotally, the lowest example has the title "Brain", the highest example has the title "Space_Race", and the mutually abnormal example has the title "Institute_of_technology". ## IV Training Following the aggregation of the 10,500 training examples, the ELECTRA-based model was trained for 3 epochs on the set. Each training loop took roughly 12 hours to complete. The complete training of ELECTRA on the full dataset took about 10 hours on a Google Colab Pro GPU. So there is a noticeable speed increase from decreasing the size of the training set from 87,000 to 10,500 examples. ## V Results Our initial ELECTRA model reached an F1 score on SQUAD of 70.24, trained over 10 hours. The modified dataset reached an F1 score of 80.15, trained over 12 hours. We believe this notable increase in performance is due to better sampling the tails of the density distribution. Further, we were able to naturally train the model with adversarial examples by systematically choosing examples from the tails of the 1-gram distribution. ## VI Further Analysis Upon further review of our results, we noticed an interesting (certainly unintended) artifact in our 1-gram examples. There exists a linear relationship between the length of the context (in characters) and its abnormality score (Figure 2). While there is still substantial variability in score, we believe that this unintended factor degraded our overall performance.
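For concreteness, the following minimal sketch restates the abnormality scoring of Sec. II (Equation 1) and the three-bucket sampling of Sec. III in NumPy. The function names, the small ridge term that keeps the covariance matrix invertible for sparse zero-padded features, and the toy interface are our own assumptions, not the paper's implementation.

```python
import numpy as np

def abnormality_scores(features: np.ndarray) -> np.ndarray:
    """Squared Mahalanobis distance of each example from the dataset (Eq. 1).

    features: (n_examples, n_features) matrix of back-padded word-density
    vectors, as described in Sec. II.
    """
    mu = features.mean(axis=0)
    # The ridge term (an assumption, not from the paper) keeps the covariance
    # matrix invertible despite the sparsity introduced by zero padding.
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    centered = features - mu
    return np.einsum("ij,jk,ik->i", centered, np.linalg.inv(cov), centered)

def sample_buckets(scores: np.ndarray, k: int = 3500):
    """Indices of the k lowest, k most central (closest to the mean), and k
    highest scores, mirroring the three categories of Sec. III."""
    order = np.argsort(scores)
    low, high = order[:k], order[-k:]
    central = np.argsort(np.abs(scores - scores.mean()))[:k]
    return low, central, high
```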
Investigating this phenomenon further, we find that increasing the n-gram length to 3 yields a less linearly correlated relationship between context length and abnormality score. We reached the conclusion that individual word distributions contain higher skew and are therefore more impacted by the matrix sparseness introduced by padding the shorter contexts. By increasing the length of the n-gram, the examples became more uniformly distributed and therefore less impacted by the sparseness introduced by padding. During this review, we identified, but were unable to test, a way of using this observation in future research. Ideally, within 250-character-long segments of context, one would sample from the high, low, and mean of the resulting distributions to generate a training set. This would continue to support the initial hypothesis of dataset concentricity in the multiple dimensions of the lexical distribution. Additionally, this would yield a new sample distribution significantly less correlated with the dataset factor of length. In future work we plan to enhance the sampling methodology by sectioning abnormality within different baskets of length to remove the perceived correlation between length and abnormality. ## VII Conclusion We set out to investigate the relationship between lexical distribution and dataset crowding due to repetitiveness in the English language, especially as it relates to encyclopedic datasets like SQUAD. We believe we made meaningful strides in developing a new methodology for defining the mutual abnormality of examples in a dataset relative to its overall distribution. Future research is ongoing in creating subsets of data that meet the performance of training on the whole training dataset. The hope is that, by exploring the avenues discussed in the section on Further Analysis, these empirical artifacts (or features identified) may yield similarly or more robust performance in shorter fine-tuning time.
2308.15027
Improving Neural Ranking Models with Traditional IR Methods
Neural ranking methods based on large transformer models have recently gained significant attention in the information retrieval community, and have been adopted by major commercial solutions. Nevertheless, they are computationally expensive to create, and require a great deal of labeled data for specialized corpora. In this paper, we explore a low resource alternative which is a bag-of-embedding model for document retrieval and find that it is competitive with large transformer models fine tuned on information retrieval tasks. Our results show that a simple combination of TF-IDF, a traditional keyword matching method, with a shallow embedding model provides a low cost path to compete well with the performance of complex neural ranking models on 3 datasets. Furthermore, adding TF-IDF measures improves the performance of large-scale fine tuned models on these tasks.
Anik Saha, Oktie Hassanzadeh, Alex Gittens, Jian Ni, Kavitha Srinivas, Bulent Yener
2023-08-29T05:18:47Z
http://arxiv.org/abs/2308.15027v1
# Improving Neural Ranking Models with Traditional IR Methods ###### Abstract Neural ranking methods based on large transformer models have recently gained significant attention in the information retrieval community, and have been adopted by major commercial solutions. Nevertheless, they are computationally expensive to create, and require a great deal of labeled data for specialized corpora. In this paper, we explore a low resource alternative which is a bag-of-embedding model for document retrieval and find that it is competitive with large transformer models fine tuned on information retrieval tasks. Our results show that a simple combination of TF-IDF, a traditional keyword matching method, with a shallow embedding model provides a low cost path to compete well with the performance of complex neural ranking models on 3 datasets. Furthermore, adding TF-IDF measures improves the performance of large-scale fine tuned models on these tasks. ## 1 Introduction Traditional information retrieval methods such as TF-IDF and BM25 work very well for keyword-based queries, but they are not as effective for natural language queries containing full sentences (Gupta and Bendersky, 2015). These models are based on the idea of exact match, where tokens in the query have to appear in a document for it to be considered relevant, and the relevance of documents is compared by the frequency of the matched tokens and their importance. In recent years, a plethora of neural models (Guo et al., 2019) have been applied to ranking tasks. These models generally represent a body of text, like a query or an article, with a low-dimensional vector in the embedding space and measure similarity based on cosine distance. Since neural models are trained to capture the meaning of a sentence or paragraph, they put more "attention" on the key words relevant to the semantics. These models, however, do not work well on rare words because embeddings for low-frequency words are not well tuned. We devise a novel retrieval method that combines a highly efficient neural retrieval model for conceptual retrieval with keyword matching methods. Our method has the advantage of being weakly supervised, i.e., not requiring extensive training data or click-through data, and can scale to millions of documents. Our experiments over three large data sets with sentence-based queries show the effectiveness of our approach compared with neural or traditional keyword-based matching methods alone. We also contrast this model with transformer models fine tuned for information retrieval, to show that even in that case, performance improves with the addition of traditional IR metrics. ## 2 Related Work Many neural ranking models have been proposed for IR tasks (Guo et al., 2019), but none to our knowledge combine keyword-based techniques with neural techniques directly for relevance scoring. DSSM uses a fully connected network for learning a semantic representation of the query and document, and uses that to rank search results. The C-DSSM model (Shen et al., 2014) replaces the fully connected layer in DSSM with a convolutional layer; the motivation is to extract contextual features relevant to IR. Palangi et al. (2016) use an LSTM as an encoder in a similar fashion.
The DSSM, C-DSSM and LSTM-RNN models use click-through data from a search engine to create the training set, where the query is a normal web query and the document is the title of the clicked web page, so their focus is on the case where users have explicitly formulated queries relevant to the document. Another set of models has been developed for retrieving semantically similar questions in online QA forums. Note that the task here is matching questions that tend to be relatively short and approximately the same length, as opposed to matching a conceptual query against a long document. The BOW-CNN model [14] learns embeddings for matching similar questions in the AskUbuntu and Quora questions datasets. They train bag-of-words weights similar to TF-IDF and a neural feature extractor for learning representations of questions. The recurrent convolutional network of Lei et al. (2016) learns to weigh the words in a question title or body by using neural gates like LSTM for convolutional features. They also pretrained an encoder-decoder model on the questions and their bodies for learning better representations from the smaller training set. Gillick et al. (2018) converted this question retrieval task to a ranking task by creating a set of similar questions from pairs. Weak supervision has been used to generate training samples from huge datasets for building neural models. Dehghani et al. (2017) use a keyword-based ranker like BM25 as a first-stage ranker to get a smaller collection (e.g., 1000 documents), and a neural model is then used to rank them, thus limiting the results to keyword-based matches. Zamani et al. (2018) use weak supervision with BM25 to build training data for a neural model, but their approach then relies only on the neural model. Recent work by MacAvaney et al. (2019) uses a content-based approach for creating training data. They use headlines and headings that are more similar to queries formulated by users for the retrieval. In summary, we know of no work that directly tries to combine the advantages of keyword-based retrieval methods with neural methods in relevance scoring. ## 3 Model ### Bag-of-Embedding We train a dual encoder model for matching queries with articles. There are two identical encoders for queries and articles. The encoder module returns an embedding for a sequence of tokens by averaging their word embeddings, and the similarity score for a query-article pair is a sigmoid of their cosine similarity: \[score(q,a)=\sigma(\langle v_{q},v_{a}\rangle) \tag{1}\] where \(v_{q}\) and \(v_{a}\) are normalized embeddings of query \(q\) and article \(a\), respectively, and \(\sigma\) is the sigmoid function. The model is trained with a margin loss function that maximizes the difference in score between a positive pair and a negative pair, \[L=\max(0,\delta-s_{p}+s_{n}) \tag{2}\] where \(\delta\) is the margin, \(s_{p}\) is the score of a positive query-article pair and \(s_{n}\) is the score of a negative pair. The positive and negative examples are created from a batch of positive examples. Our dataset contains \(1\) relevant article for each query, so we treat all other articles in a batch as negative examples. For a query, we select the article with the lowest similarity score in the batch to form the negative pair in Eqn. 2. ### Combination with TF-IDF We use a classic TF-IDF matching approach that transforms a sequence of tokens into a vector of the size of the vocabulary.
Each element in this vector is the product of term frequency (\(tf\)) and inverse document frequency (\(idf\)), TF-IDF\((t,d)=tf(t,d)\cdot idf(t)\), where \(tf(t,d)=1+\log(f_{t,d})\), \(f_{t,d}\) is the raw count of term \(t\) in document \(d\), and \(idf(t)=\log\left(\frac{1+n}{1+df(t)}\right)+1\), where \(n\) is the number of documents in the collection and \(df(t)\) is the number of documents containing the term \(t\). Cosine similarity between the query and article vectors is used to rank them, so the ranking score is \(s_{\text{TF-IDF}}(q,a)=\langle v_{q},v_{a}\rangle\), where \(v_{q}\) and \(v_{a}\) are normalized TF-IDF vectors for query \(q\) and article \(a\), respectively. We combine the embedding and TF-IDF models by aggregating their scores for a pair of query and article, \[s(q,a)=s_{\text{TF-IDF}}(q,a)+s_{\text{embed}}(q,a) \tag{3}\] where \(s_{\text{embed}}(q,a)\) is the dot product of the normalized embedding vectors of \(q\) and \(a\). ## 4 Experiments ### Datasets We conducted experiments on 3 datasets to evaluate the ability of a model to semantically match a query and an article: 1) the Signal Media News dataset [1] containing \(1M\) articles, 2) a Wikipedia corpus containing \(6M\) articles and 3) the Google Natural Questions dataset with \(300K\) question and article pairs. We formed query-article pairs from the news and Wikipedia datasets by selecting the first sentence of the article as the query and treating the rest as the article. In the Natural Questions dataset, we used the question as the query and the Wikipedia article containing the answer as the article. The news and Wikipedia datasets were shuffled and split into train, validation and test sets. For Natural Questions, we use the validation set as the test set in our experiments since the test set is not public. Table 1 shows the statistics. Table 2 lists the average number of words in the queries and articles for these datasets. ### Training During preprocessing, we performed standard word-based tokenization. Since we are interested in verbose queries, we did not include sentences shorter than 5 words as queries in the news and Wikipedia datasets. Due to limited GPU memory, the articles were truncated to the first 1000 tokens during training. We trained the bag-of-embeddings model with the Adam [13] optimizer using a learning rate of 0.001. The model was trained on the Wikipedia dataset for 20 epochs. On both the news and Natural Questions datasets, the model was trained for 50 epochs. Based on validation set performance, we set the embedding dimension to 768, the batch size to 1000 and the margin parameter \(\delta\) to 0.5. The code for experiments on the Signal Media News data set is available at [https://github.com/aniksh/dual_encoder](https://github.com/aniksh/dual_encoder). ### Baselines #### TF-IDF. We used both unigrams and bigrams for building the features in TF-IDF after stopword removal. The _scikit-learn_ implementation is used for this method. #### BM25. Okapi BM25 [12] is a strong baseline for information retrieval tasks. We tuned the parameters \(k_{1}\) and \(b\) of this model using 100 queries from the validation set. We performed grid search with \(k_{1}\) in the range \([0.5:0.5:5]\) and \(b\) in the range \([0.3:0.1:0.9]\). #### Dirichlet Language Model. A language model with Dirichlet smoothing has been shown to be a strong baseline in verbose retrieval [10]. We tuned the smoothing parameter \(\mu\) over the range {100, 200, 300, 400, 500, 1000, 1500, 2000, 2500, 3000}.
#### Sentence-Transformers. We use the pretrained model from the sentence-transformers library [13] for information retrieval, msmarco-distilbert-base-v2 1. This ranking model is based on BERT [14] and was fine tuned on the MSMARCO passage ranking [1] dataset. Footnote 1: [https://www.sbert.net/docs/pretrained_models.html](https://www.sbert.net/docs/pretrained_models.html) ### Results We report the retrieval accuracy on the test set for the 3 datasets using mean reciprocal rank (MRR) and mean precision at top-k (k = 1, 3, 10) results. These are standard metrics for evaluating ranking models. \[MRR=\frac{1}{|Q|}\sum_{q\in Q}\frac{1}{r_{q}}\] where \(r_{q}\) is the rank of the first relevant article for query \(q\). Precision@k measures the existence of the relevant article in the top k predicted results from the model, so precision@1 is equivalent to the accuracy of the ranking model. The models are used to rank all articles in the test set for all queries. In the News dataset, the TF-IDF + embedding model performs close to the BERT-based ranker (Table 3). MRR for the simple embedding model is 0.15 lower, but it has a much higher precision at 3 and 10. The bag-of-embedding model lags behind the BERT ranker on the Wikipedia dataset (Table 4). Similar to the News dataset, we see that the bag-of-embedding model performs better in the precision@3 and precision@10 metrics. So, this model is not as accurate as the BERT-based model but it provides more correct results in the top 3 or the top 10 search results. In the Natural Questions dataset, the combination of TF-IDF and bag-of-embedding model outperforms the BERT ranking model on all metrics. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Train Size & Dev Size & Test Size \\ \hline News & 906.523K & 10K & 10K \\ Wikipedia & 5.123M & 10K & 10K \\ Natural Questions & 297.373K & 10K & 7.83K \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics. \begin{table} \begin{tabular}{l c c} \hline \hline Dataset & Query Length & Article Length \\ \hline News & 10 & 457 \\ Wikipedia & 25 & 447 \\ Natural Questions & 9 & 5643 \\ \hline \hline \end{tabular} \end{table} Table 2: Average query and article length (in words). ### Discussion It is evident from the results on these 3 datasets that the shallow embedding model performs better than the huge BERT-based retrieval model when the article length is high. The average article length is close to 450 words for both the News and the Wikipedia datasets, while it is more than 5000 for the Natural Questions dataset. The complex BERT model was fine tuned on the MSMARCO passage ranking dataset, where the average article length is about 55 words. BERT uses the WordPiece tokenizer and has a maximum sequence length of 512 tokens, so for long articles there is a limitation to the quality of representation obtained by BERT. Our embedding model was trained with a maximum sequence length of 1000 words, so it can achieve better performance for matching queries to articles on the Natural Questions dataset. #### Effect of TF-IDF. Adding TF-IDF scores to the relevance score from any embedding model improves its performance on all 3 datasets. We show an example from the News dataset in Figure 1. The BERT-ranker focused on the word _spirit_ to match with the top ranking article, but the name _Tom Wood_ is an easy match for the actual article. Since TF-IDF puts more weight on rare words for matching, adding the TF-IDF score to the prediction from the BERT-ranker pushed this article to the top result. This explains how TF-IDF can improve the performance of neural models at a negligible computational cost.
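As a concrete illustration of the score combination in Eq. 3, the sketch below ranks candidate articles by summing the two cosine scores. scikit-learn's TfidfVectorizer with sublinear tf and its default smoothed idf matches the tf/idf definitions in Sec. 3.2, while the bag-of-embedding vectors are assumed to be precomputed by the trained dual encoder; the helper name and interface are ours, and fitting the vectorizer on the candidate set alone (rather than the full collection) is a further simplification.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_articles(query, articles, query_emb, article_embs):
    """Rank articles by s(q, a) = s_TF-IDF(q, a) + s_embed(q, a) (Eq. 3)."""
    # s_TF-IDF: cosine similarity of L2-normalized TF-IDF vectors; unigrams
    # and bigrams with stopword removal follow the TF-IDF baseline setup.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english",
                                 sublinear_tf=True)
    tfidf = vectorizer.fit_transform([query] + list(articles))
    s_tfidf = (tfidf[0] @ tfidf[1:].T).toarray().ravel()

    # s_embed: dot product of L2-normalized mean-pooled word embeddings,
    # assumed precomputed by the trained dual encoder.
    q = query_emb / np.linalg.norm(query_emb)
    a = article_embs / np.linalg.norm(article_embs, axis=1, keepdims=True)
    s_embed = a @ q

    return np.argsort(-(s_tfidf + s_embed))  # article indices, best first
```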
## 5 Conclusion In this paper, we showed that a simple superposition of relevance scores from TF-IDF and neural ranking models can provide a significant boost in retrieval performance. We presented a scalable and efficient neural ranking model using a bag of word embeddings, and showed its effectiveness through experiments on three datasets with different characteristics. We plan to use this training framework to build more efficient neural ranking models from language models that can seamlessly work on longer articles. A major challenge in the evaluation of neural retrieval models is the lack of publicly available models and open-source implementations of the solutions. Hence, we intend to make our source code as well as our data sets publicly available to ensure the reproducibility of our results. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{MRR} & Mean & Mean & Mean \\ & & P@1 & P@3 & P@10 \\ \hline TF-IDF & 66.62 & 58.96 & 72.43 & 79.88 \\ BM25 & 67.75 & 63.97 & 69.93 & 74.23 \\ LM-Dirichlet & 50.68 & 46.81 & 52.67 & 57.5 \\ \hline BERT-ranker & 77.99 & 74.55 & 79.76 & 84.07 \\ TF-IDF + BERT-ranker & **80.45** & **77.4** & 81.93 & 85.71 \\ \hline TF-IDF + BOE & 79.33 & 71.99 & **84.59** & **92.1** \\ \hline \hline \end{tabular} \end{table} Table 4: Retrieval performance on the Wikipedia dataset. Figure 1: Example query from the News dataset and result from the BERT-based ranking model \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{MRR} & Mean & Mean & Mean \\ & & P@1 & P@3 & P@10 \\ \hline TF-IDF & 66.21 & 58.34 & 71.23 & 80.32 \\ BM25 & 67.63 & 61.86 & 71.2 & 77.81 \\ LM-Dirichlet & 55.6 & 50.49 & 58.64 & 64.87 \\ \hline BERT-ranker & 76.59 & 71.58 & 80.13 & 84.96 \\ TF-IDF + BERT-ranker & **82.96** & **79.15** & 85.76 & 89.05 \\ \hline TF-IDF + BOE & 82.81 & 77.75 & **87.22** & **91.13** \\ \hline \hline \end{tabular} \end{table} Table 3: Retrieval performance on the News dataset. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{MRR} & Mean & Mean & Mean \\ & & P@1 & P@3 & P@10 \\ \hline TF-IDF & 70.07 & 59.65 & 77.34 & 88.59 \\ BM25 & 71.48 & 61.09 & 79.30 & 89.17 \\ LM-Dirichlet & 64.42 & 54.05 & 71.78 & 82.61 \\ \hline BERT-ranker & 73.84 & 68.66 & 84.94 & 92.4 \\ TF-IDF + BERT-ranker & 78.69 & 68.68 & 87.31 & 94.62 \\ \hline TF-IDF + BOE & **79.66** & **69.68** & **88.14** & **95.40** \\ \hline \hline \end{tabular} \end{table} Table 5: Retrieval performance on the Natural Questions dataset.
2310.01218
Making LLaMA SEE and Draw with SEED Tokenizer
The great success of Large Language Models (LLMs) has expanded the potential of multimodality, contributing to the gradual evolution of General Artificial Intelligence (AGI). A true AGI agent should not only possess the capability to perform predefined multi-tasks but also exhibit emergent abilities in an open-world context. However, despite the considerable advancements made by recent multimodal LLMs, they still fall short in effectively unifying comprehension and generation tasks, let alone open-world emergent abilities. We contend that the key to overcoming the present impasse lies in enabling text and images to be represented and processed interchangeably within a unified autoregressive Transformer. To this end, we introduce SEED, an elaborate image tokenizer that empowers LLMs with the ability to SEE and Draw at the same time. We identify two crucial design principles: (1) Image tokens should be independent of 2D physical patch positions and instead be produced with a 1D causal dependency, exhibiting intrinsic interdependence that aligns with the left-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens should capture high-level semantics consistent with the degree of semantic abstraction in words, and be optimized for both discriminativeness and reconstruction during the tokenizer training phase. With SEED tokens, LLM is able to perform scalable multimodal autoregression under its original training recipe, i.e., next-word prediction. SEED-LLaMA is therefore produced by large-scale pretraining and instruction tuning on the interleaved textual and visual data, demonstrating impressive performance on a broad range of multimodal comprehension and generation tasks. More importantly, SEED-LLaMA has exhibited compositional emergent abilities such as multi-turn in-context multimodal generation, acting like your AI assistant.
Yuying Ge, Sijie Zhao, Ziyun Zeng, Yixiao Ge, Chen Li, Xintao Wang, Ying Shan
2023-10-02T14:03:02Z
http://arxiv.org/abs/2310.01218v1
# Making LLaMA SEE and Draw with SEED Tokenizer ###### Abstract The great success of Large Language Models (LLMs) has expanded the potential of multimodality, contributing to the gradual evolution of General Artificial Intelligence (AGI). A true AGI agent should not only possess the capability to perform predefined multi-tasks but also exhibit emergent abilities in an open-world context. However, despite the considerable advancements made by recent multimodal LLMs, they still fall short in effectively unifying comprehension and generation tasks, let alone open-world emergent abilities. We contend that the key to overcoming the present impasse lies in enabling text and images to be represented and processed interchangeably within a unified autoregressive Transformer. To this end, we introduce **SEED**, an elaborate image tokenizer that empowers LLMs with the ability to **SEE** and **D**raw at the same time. We identify two crucial design principles: (1) Image tokens should be independent of 2D physical patch positions and instead be produced with a _1D causal dependency_, exhibiting intrinsic interdependence that aligns with the left-to-right autoregressive prediction mechanism in LLMs. (2) Image tokens should capture _high-level semantics_ consistent with the degree of semantic abstraction in words, and be optimized for both discriminativeness and reconstruction during the tokenizer training phase. With SEED tokens, LLM is able to perform scalable multimodal autoregression under its original training recipe, i.e., next-word prediction. SEED-LLaMA1 is therefore produced by large-scale pretraining and instruction tuning on the interleaved textual and visual data, demonstrating impressive performance on a broad range of multimodal comprehension and generation tasks. More importantly, SEED-LLaMA has exhibited compositional emergent abilities such as multi-turn in-context multimodal generation, acting like your AI assistant. Footnote 1: This work is a follow-up of SEED [1], where we update the visual tokenizer and present SEED-LLaMA. Figure 1: The introduced SEED-LLaMA, a multimodal AI assistant, demonstrates **emergent ability** in the multi-turn in-context image and text generation given multimodal instructions. ## 1 Introduction In recent years, Large Language Models [2; 3; 4] (LLMs) pre-trained on massive text corpus with straightforward training objectives such as next-word prediction have exhibited remarkable abilities to understand, reason, and generate texts across a variety of open-ended tasks. Recent studies further exploit the strong generality of LLMs to improve visual understanding or generation tasks, collectively referred to as Multimodal LLM (MLLM). While these studies have contributed to technological advancements, MLLMs have yet to achieve the remarkable success of LLMs in terms of emergent capabilities. We have made a bold assumption that the premise for the emergence of multimodal capabilities is that text and images can be represented and processed **interchangeably** in a unified autoregressive Transformer. We posit that a proper visual tokenizer is the key as it can facilitate the follow-up multimodal training by (i) easing the semantic alignment between visual and word tokens, and (ii) enabling LLM's original training recipe (i.e., next-word prediction) for multimodal data without specific adaptation for visual tokens. Representing images as a sequence of discrete IDs is naturally compatible with the autoregressive training objective of LLMs. 
But unfortunately, works [5; 6] that utilize discretized visual tokens for multimodal tasks have receded from prominence, as such models generally rely on super-scale training to converge, leading to substantial training costs. Moreover, our previous work [1] empirically found that the dominant tokenizer VQ-VAE [7] in existing works captures too low-level information for LLMs to effectively perform multimodal comprehension tasks. Existing image tokenizers fail to meet the requirements of unifying the generation of images and texts and facilitating multimodal training. To this end, we introduce **SEED**, a VQ-based image tokenizer that produces discrete visual codes with 1D causal dependency and necessary high-level semantics for both visual comprehension and generation tasks, as shown in Fig. 2 (a). The off-the-shelf LLMs can be readily equipped with SEED by treating discrete visual tokens as new words and updating the vocabulary. We would like to emphasize the design principles of SEED. (1) _Why causal-dependent tokens?_ Existing visual tokens (_e.g._, from VQ-VAE or CLIP-ViT [8]) are generated using 2D context, which is incompatible with the unidirectional attention in dominant LLMs and counterintuitive for text-to-image tasks requiring raster order prediction. Thus, we convert 2D raster-ordered embeddings into a sequence of semantic codes with 1D causal dependency. (2) _Why high-level semantics?_ Since visual and textual tokens in LLMs are expected to be interoperable--sharing weights and training objectives--they should encompass the same degree of semantics to prevent misalignment, i.e., the high-level semantics inherently present in words. Specifically, the SEED tokenizer is composed of a ViT encoder [9], Causal Q-Former, VQ Codebook [7], multi-layer perceptron (MLP), and a UNet decoder [10]. The ViT encoder and UNet decoder are directly derived from the pre-trained BLIP-2 [11] and unCLIP-SD model [12; 13], respectively. (1) _Tokenize:_ Causal Q-Former converts 2D raster-ordered features produced by the ViT encoder into a sequence of causal semantic embeddings, which are further discretized by the VQ Codebook. Figure 2: (a) SEED is a discrete image tokenizer, producing quantized visual codes with 1D causal dependency and high-level semantics. (b) With SEED tokenizer, LLM is able to perform scalable multimodal autoregression on interleaved visual and textual data with next-word-prediction objective. (2) _De-Tokenize:_ The discrete visual codes are decoded into generation embedding via MLP. The generation embedding is aligned with the latent space of unCLIP-SD so that realistic images with consistent semantics can be generated using the off-the-shelf SD-UNet. We further present **SEED-LLaMA** by equipping the pre-trained LLM [2] with SEED tokenizer. SEED-LLaMA is pretrained on multimodal data, including image-text pairs, video-text pairs, and interleaved image-text data, toward the training objective of next-word prediction as shown in Fig. 2 (b). Such an easy-to-implement and unified proxy task facilitates scalable multimodal pretraining. We further apply multimodal instruction tuning to align SEED-LLaMA with human instructions through supervised fine-tuning. Our model demonstrates extensive emergent abilities such as multi-turn in-context image and text generation given multimodal instructions as shown in Fig. 1. We also benchmark on a broad range of tasks including image captioning, image/video question answering, and text-to-image generation, receiving competitive performance. 
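The _Tokenize_ step above amounts to a nearest-neighbor lookup of each causal embedding in the learned codebook. The following is a minimal PyTorch sketch of that operation, assuming 32 causal embeddings per image and the 8192-entry codebook reported later in the paper; the tensor shapes and names are our own.

```python
import torch

def vq_tokenize(causal_emb: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Discretize causal embeddings into visual codes by nearest-neighbor lookup.

    causal_emb: (batch, 32, dim) causal embeddings from the Causal Q-Former.
    codebook:   (8192, dim) learned code vectors.
    Returns:    (batch, 32) integer visual codes, preserving the 1D causal order.
    """
    # Pairwise Euclidean distances between every embedding and every code.
    codes = codebook.unsqueeze(0).expand(causal_emb.size(0), -1, -1)
    dists = torch.cdist(causal_emb, codes)
    return dists.argmin(dim=-1)
```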
In summary, our contributions are three-fold. (1) We introduce SEED, an advanced image tokenizer, designed based on the insight that visual tokens compatible with LLMs should capture high-level semantics while being generated with 1D causal dependency. The tailored SEED improves the scalability of subsequent multimodal training. (2) We present SEED-LLaMA, composed of a pre-trained LLM and the SEED tokenizer, through large-scale multimodal pretraining and instruction tuning under the next-word-prediction training objective. It successfully unifies multimodal comprehension and generation tasks in one framework. (3) SEED-LLaMA shows competitive results on existing multimodal tasks (e.g., text-to-image, image-to-text) and further demonstrates emergent abilities in multi-turn in-context multimodal understanding, reasoning, and generation. ## 2 Related Work **MLLMs for Comprehension and Generation.** With the impressive success of Large Language Models [2; 3; 4] (LLMs), recent studies work on Multimodal LLMs (MLLMs) to improve visual **comprehension** by utilizing the strong generality of LLMs. Previous works [14; 11; 15; 16; 17; 18; 19; 20] align visual features of a pre-trained image encoder with LLMs on image-text datasets. However, these works commonly use the prediction of the next _text token_ as the objective, and thus can only output text. To empower LLMs with the image **generation** ability, CogView [6] pre-trains a visual tokenizer by reconstructing image pixels, and fine-tunes GPT [3] with the objective of next-token prediction. GILL [21] learns a mapping between the embeddings of an LLM and a frozen text-to-image generation model. Both works aim to generate images with LLMs, without being explicitly designed for unifying multimodal comprehension and generation. Our concurrent works [22; 23] both perform multimodal autoregression including the generation of images and texts. CM3Leon [23] utilizes discrete visual codes from an image tokenizer [24] pre-trained on image pixel reconstruction and performs image-to-text and text-to-image autoregression. However, it yields suboptimal performance in visual comprehension tasks (e.g., CIDEr 61.6 vs. ours 126.9 on COCO image captioning) because the image tokenizer captures too low-level information. Emu [22] employs continuous visual representations and is pre-trained on interleaved multimodal sequences through classifying the next text token or **regressing** the next visual embedding. For image generation, Emu further fine-tunes an SD model to accommodate the output representations from the LLM. By contrast, we pre-train a discrete image tokenizer, whose visual codes can be decoded to realistic images using the off-the-shelf SD model, and perform multimodal autoregression with a unified next-word-prediction objective, which facilitates scalable multimodal training. **Visual Tokenizer.** A visual tokenizer aims to represent images as a sequence of discrete tokens. Previous works [7; 5; 25; 26] train a Vector Quantized Variational AutoEncoder (VQ-VAE) by reconstructing image pixels, which captures only low-level details such as color, texture and edges. Beit v2 [27] trains a visual tokenizer by reconstructing high-level features from a teacher model, but its visual codes from 2D features of a vision transformer [9] are incompatible with the unidirectional attention in dominant LLMs for image generation. By contrast, we present the SEED tokenizer, which produces discrete visual codes with 1D causal dependency and high-level semantics.
## 3 Method ### SEED Tokenizer As shown in Fig. 3, the SEED tokenizer is composed of a ViT encoder [9], Causal Q-Former, VQ Codebook [7], multi-layer perceptron (MLP), and a UNet decoder [10]. The ViT encoder and UNet decoder are directly derived from the pre-trained BLIP-2 [11] and unCLIP [13] Stable Diffusion (unCLIP-SD) [12], respectively. We first train a Causal Q-Former to convert 2D raster-ordered features (16\(\times\)16 tokens) produced by the ViT encoder into a sequence of causal embeddings (32 tokens). We then train a visual codebook to discretize the causal embeddings into quantized visual codes (32 tokens) with causal dependency. We employ an MLP to decode the visual codes into a generation embedding (1 token), which is aligned with the latent space of the pre-trained unCLIP-SD conditioned on image embeddings. Our previous work [1] aligns generation embeddings with the text embeddings of SD [12], and we analyze the difference in Sec. 4.3. We pre-train the SEED tokenizer on CC3M [28], Unsplash [29], LAION-COCO [30] and MS-COCO [31]. #### 3.1.1 Training Stage I: Causal Q-Former As shown in Fig. 3, a set number of learnable query embeddings (32 tokens) and features of a pre-trained ViT encoder [8] are fed into the Causal Q-Former to encode a fixed number of causal embeddings (32 tokens) of the input image. Specifically, the query embeddings can interact only with previous queries through self-attention layers with a causal mask, and interact with frozen image features through cross-attention layers. We adopt contrastive learning to optimize the Causal Q-Former, fine-tuned from the BLIP-2 Q-Former on image-text pairs, using a contrastive loss to maximize the similarity between the **final** causal embedding and the text features of the corresponding caption. #### 3.1.2 Training Stage II: Visual Tokenize and De-tokenize As shown in Fig. 3, we train a VQ codebook to discretize the causal embeddings (32 tokens) into quantized visual codes (32 tokens). Specifically, a quantizer looks up the nearest neighbor in the codebook for each causal embedding and obtains the corresponding code. We employ a decoder, which is a multi-layer Transformer [9], to reconstruct the continuous causal embeddings from the discrete codes. During training, we maximize the cosine similarity between the output of the decoder and the causal embeddings. We further employ an MLP to reconstruct the image embedding (1 token) of a frozen unCLIP-SD from the discrete codes. During training, we minimize the MSE loss between the generation embedding and the image embedding of unCLIP-SD. During inference, the generation embedding is fed into the off-the-shelf SD-UNet to decode realistic images. Figure 3: Overview of **SEED** tokenizer, which produces discrete visual codes with causal dependency and high-level semantics. The generation embedding from visual codes can be decoded to realistic images with the frozen unCLIP [13] SD, which is conditioned on image embedding. ### SEED-LLaMA #### 3.2.1 Training Stage I: Multimodal Pretraining As shown in Fig. 4, SEED-LLaMA adopts a unified next-word-prediction training objective on interleaved visual and textual data. Specifically, visual inputs are first discretized into a sequence of causal codes by the SEED tokenizer. Then the interleaved visual codes and text tokens are fed into the pretrained LLM for performing multimodal autoregression, where the visual codes are treated as new words and the vocabulary of the LLM is updated accordingly.
We maximize the likelihood in a unified autoregressive manner as follows: \[L(\mathcal{U})=\sum_{i}\log P\left(u_{i}\mid u_{i-k},\dots,u_{i-1};\Theta\right) \tag{1}\] where \(u_{i}\) represents a visual code or text token, and \(\Theta\) denotes the parameters of the transformer. We initialize SEED-LLaMA from a pre-trained LLM, and add 8192 visual codes to the vocabulary. The embedding layer and decoder head layer in the transformer are expanded, and the parameters of the added visual codes are randomly initialized. For efficiency, we first train SEED-LLaMA using LoRA [32] tuning and jointly optimize the parameters of the embedding layer and decoder head layer due to the added visual codes. We then merge the parameters of LoRA onto the LLM backbone and fine-tune all parameters except for the embedding layer. We freeze the embedding layer since we observe that fine-tuning it together with other parameters can lead to unstable training loss, which is also reported in BLOOM [33] and GLM-130B [34]. We preprocess the images and videos into discrete tokens beforehand to conserve computational resources. We perform pretraining using two versions of LLM, Vicuna-7B and Llama2-chat-13B, with 64 A100-40G GPUs, and yield SEED-LLaMA-8B (144 hours) and SEED-LLaMA-14B (216 hours), respectively. See Appendix. B for details. #### 3.2.2 Training Stage II: Multimodal Instruction Tuning We perform multimodal instruction tuning on SEED-LLaMA to align it with human instructions through supervised finetuning on public datasets. The details of the datasets can be found in Appendix. C. We fine-tune a LoRA module on the pre-trained SEED-LLaMA with the template below, \[\text{USER:}\quad\text{<Instruction>}\quad\text{ASSISTANT:}\quad\text{<Answer>} \tag{2}\] Only the content of <Answer> is counted in the loss. The overall instruction tuning phase takes 16 hours for SEED-LLaMA-8B and 27 hours for SEED-LLaMA-14B with 32 A100-80G GPUs. Figure 4: Overview of the multimodal autoregressive pretraining on interleaved visual and textual data for **SEED-LLaMA**. Visual inputs are pre-processed into discrete tokens to conserve computational resources. Given the multimodal discrete sequence, a unified next-word-prediction objective is employed. During inference, visual codes are decoded into a realistic image by SEED De-Tokenization. ## 4 Experiment ### SEED Tokenizer **Evaluation of Causal Embeddings.** We evaluate the performance of the Causal Q-Former on image-text retrieval using COCO [35] and Flickr30K [36]. The performance is measured by \(Recall@K\) (R@K). Note that we adopt the dual-stream paradigm for inference and remove the image-text-matching (ITM) re-rank module in BLIP-2 for a fair comparison. As shown in Tab. 1, our Causal Q-Former achieves better results than BLIP-2 in terms of the aggregated metric \(Recall@mean\), demonstrating that output query embeddings with causal dependency do not degrade performance compared to the output embeddings with bi-directional attention in BLIP-2. **Evaluation of Causal Codes.** We evaluate causal codes on image-text retrieval, where the embeddings reconstructed from causal codes are used for retrieval. As shown in Tab. 1, discrete codes exhibit competitive performance compared to BLIP-2, which demonstrates that the discrete codes from the SEED tokenizer capture high-level semantics suitable for visual comprehension. We further evaluate image reconstruction on the COCO and Flickr30K datasets.
SEED first discretizes input images into causal codes (32 tokens) and obtains a generation embedding (1 token), which is fed into the unCLIP-SD-UNet for reconstruction. We follow GILL [21] and compute the CLIP similarity score as the metric to evaluate semantic consistency. As shown in Tab. 2, compared with the upper bound unCLIP-SD, SEED only slightly drops performance. We visualize the reconstructed images of the SEED tokenizer in Fig. 5. By obtaining the generation embedding from the causal visual codes, realistic images can be generated using the frozen SD-UNet that maintain semantics consistent with the inputs. _The above evaluation and visualization demonstrate the versatility of SEED visual tokens for both comprehension and generation tasks._ ### SEED-LLaMA #### 4.2.1 Quantitative Evaluation **Multimodal Comprehension.** We evaluate SEED-LLaMA on a wide range of multimodal comprehension tasks including image captioning and image/video question answering. Details of these benchmarks and evaluation metrics are provided in Appendix. D. As shown in Tab. 3, our SEED-LLaMA achieves competitive performance in both the image and video understanding tasks compared with MLLMs that use continuous visual representations. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{7}{c|}{Flickr30K (1K test set)} & \multicolumn{7}{c}{COCO (5K test set)} \\ \cline{2-15} & \multicolumn{3}{c}{Image \(\rightarrow\) Text} & \multicolumn{3}{c}{Text \(\rightarrow\) Image} & & \multicolumn{3}{c}{Image \(\rightarrow\) Text} & \multicolumn{3}{c}{Text \(\rightarrow\) Image} & \\ \cline{2-15} & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@m & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@m \\ \hline BLIP-2 [11] & 81.9 & 98.4 & 99.7 & **82.4** & **96.5** & **98.4** & 92.9 & 65.3 & 89.9 & 95.3 & **59.1** & 82.7 & **89.4** & 80.3 \\ SEED (causal embedding) & **91.0** & **99.5** & **100.0** & 79.3 & 94.8 & 97.1 & **93.6** & **74.2** & **93.1** & **96.7** & 59.0 & **82.8** & 89.2 & **82.5** \\ SEED (causal code) & 85.4 & 98.3 & 99.6 & 73.7 & 92.3 & 95.7 & 90.8 & 66.9 & 89.3 & 94.4 & 53.2 & 78.8 & 86.6 & 78.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of Image-Text Retrieval. Causal codes are quantized causal embeddings. \begin{table} \begin{tabular}{l c c} \hline \hline Model & COCO & Flickr30K \\ \hline _Image-to-image_ & \\ unCLIP [13] SD & **79.30** & **79.55** \\ SEED\({}^{\text{text}}\)[1] & 68.23 & 65.22 \\ SEED & 77.35 & 76.52 \\ \hline _Text-to-image_ & \\ GILL [37] & 67.45 & 65.16 \\ Emu [22] & 66.46 & 64.82 \\ SEED-LLaMA & 69.07 & 65.54 \\ SEED-LLaMA-I & **70.68** & **66.55** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of Image Generation. Figure 5: Reconstruction images of SEED tokenizer (_i.e._, original image \(\rightarrow\) SEED tokenize \(\rightarrow\) causal visual codes \(\rightarrow\) SEED de-tokenize \(\rightarrow\) reconstructed image). Figure 6: Qualitative examples of multi-turn in-context image and text generation by SEED-LLaMA given multimodal instructions.
\begin{table} \begin{tabular}{l|c|c|c c c c c|c c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Size} & Image & \multicolumn{5}{c|}{Image-Text Tasks} & \multicolumn{3}{c}{Video-Text Tasks} \\ & & Gen & COCO & VQAv2 & OKVQA & VizWiz & SEED & MSVDQA & MSRVTTQA & NExT-QA \\ \hline Flamingo [19] & 9B & \(\times\) & 79.4 & 51.8 & 44.7 & 28.8 & 42.7 & 30.2 & 13.7 & 23.0 \\ BLIP-2 [38] & 4.1B & \(\times\) & **144.5** & 63.0 & 40.7 & 29.8 & 49.7 & 33.7 & 16.2 & - \\ InstructBLIP [11] & 8.1B & \(\times\) & - & - & - & 34.5 & **58.8** & 41.8 & 22.1 & - \\ Kosmos-1 [39] & 1.6B & \(\times\) & 84.7 & 51.0 & - & 29.2 & - & - & - & - \\ Kosmos-2 [40] & 1.6B & \(\times\) & - & 45.6 & - & - & 54.4 & - & - & - \\ MetaLLM [41] & 1.7B & \(\times\) & 82.2 & 41.1 & 11.4 & - & - & - & - & - \\ IDEFICS [42] & 80B & \(\times\) & 91.8 & 60.0 & 45.2 & 36.0 & - & - & - & - \\ IDEFICS-I [42] & 80B & \(\times\) & 117.2 & 37.4 & 36.9 & 26.2 & 53.2 & - & - & - \\ CM3leon [23] & 7B & \(\times\) & 61.6 & 47.6 & 23.8 & 37.6 & - & - & - & - \\ Emu [22] & 14B & \(\checkmark\) & 112.4 & 52.0 & 38.2 & 34.2 & 47.3 & 18.8 & 8.3 & 19.6 \\ Emu-I [22] & 14B & \(\times\) & 117.7 & 40.0 & 34.7 & 35.4 & 58.0 & 32.4 & 14.0 & 6.8 \\ \hline **SEED-LLaMA** & 8B & \(\checkmark\) & 123.6 & 44.2 & **29.2** & 21.5 & 42.2 & 11.5 & 5.0 & 14.3 \\ **SEED-LLaMA-I** & 8B & \(\checkmark\) & 124.5 & **66.2** & **45.9** & **55.1** & 51.5 & 40.9 & 30.8 & **24.9** \\ **SEED-LLaMA** & 14B & \(\checkmark\) & 125.0 & 48.1 & 27.1 & 23.3 & 46.0 & 13.9 & 3.7 & 11.3 \\ **SEED-LLaMA-I** & 14B & \(\checkmark\) & 126.9 & 63.4 & 43.2 & 49.4 & 53.7 & **45.2** & **35.3** & 24.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison for multimodal comprehension. “Image Gen” denotes whether the model can generate images besides texts, and “-I” denotes the instruction tuned model. The best results are **bold** and the second best are underlined. The results demonstrate that our SEED tokenizer can generate discrete visual codes with high-level semantics, which facilitates visual comprehension. We can observe that pretraining from an LLM with a larger model size improves performance on SEED-Bench, and instruction tuning further contributes to enhanced results. Note that, as pointed out by recent work [43, 44], the previous VQA benchmarks listed in Tab. 3 are not tailored for evaluating MLLMs with open-form output, since they require an exact match between the model prediction and the target word or phrase. Qualitative examples of multimodal comprehension are provided in Appendix. E. **Text-to-image Generation.** We evaluate text-to-image generation on MS-COCO [31] and Flickr30K [36] and compute the pair-wise CLIP similarity score as the evaluation metric, following GILL [37]. As shown in Tab. 2, images generated by our SEED-LLaMA from textual descriptions show higher similarity with the ground-truth images. The results demonstrate that SEED-LLaMA generates images that are highly correlated with text prompts via a frozen SD-UNet. We show qualitative examples of text-to-image generation in Appendix. E. #### 4.2.2 Emergent Ability **Multi-turn In-context Multimodal Generation.** As shown in Fig. 1 and Fig.
6, given multimodal instructions including images and open-form texts from a user, our SEED-LLaMA can respond with a synthesized image (_e.g._, a dog in front of the Golden Gate Bridge), sequentially generated images (_e.g._, a cartoon cat in different scenes), an instruction-followed image (_e.g._, a closer look-up of a cherry blossom), and various forms of texts via creation and real-world knowledge (_e.g._, a story, a poem and flower identification). The results illustrate the impressive capability of SEED-LLaMA in reasoning and generating long-context multimodal content. Figure 7: Qualitative examples of compositional image generation by SEED-LLaMA. **Compositional Image Generation.** As shown in Fig. 7, our SEED-LLaMA can realize a variety of zero-shot compositional image generation tasks, as below: * Stylized Image Generation. SEED-LLaMA can take a text prompt and a style reference image as inputs and produce an output image that adheres to both the style and text prompt. * Image Blending. SEED-LLaMA can take two images as inputs and generate an image that blends the visual components of the input images. * Multimodal Composition. SEED-LLaMA can take an image prompt and a text prompt as inputs and generate a composite image that combines the multimodal inputs. * In-context Generation. SEED-LLaMA can take images, their textual references, and text prompts as inputs and generate context-related images. ### Ablation Study **Generation Embedding.** The generation embedding of SEED is aligned with the image embedding of unCLIP-SD, and can be decoded to realistic images with the unCLIP-SD-UNet. In our previous work [1], we train a visual tokenizer SEED\({}^{\text{text}}\) by aligning the generation embeddings with the text embeddings (77 tokens) of SD [12] conditioned on texts. As shown in Tab. 2, the similarity between the reconstructed images of SEED\({}^{\text{text}}\) and the original images drops heavily. The semantic representations of texts cannot fully preserve the rich visual information of images. A visual comparison of the reconstructed images between SEED\({}^{\text{text}}\) and SEED is provided in Appendix. A. **Causal Visual Codes vs. Bilateral Visual Codes.** We train a Causal Q-Former to convert 2D features produced by the ViT encoder into a sequence of causal semantic embeddings, which are further discretized as causal visual codes. To verify whether the causal visual codes are necessary for compatibility with LLMs, we train a visual tokenizer SEED\({}^{\text{Bi}}\), which produces bilateral visual codes from a pre-trained Q-Former with bilateral self-attention. We then pre-train SEED\({}^{\text{Bi}}\)-LLM\({}^{*}\) and SEED-LLM\({}^{*}\) on image-text pairs and evaluate text-to-image generation on the COCO test set. Given 5000 captions of COCO, SEED\({}^{\text{Bi}}\)-LLM\({}^{*}\) only generates 2134 images successfully while SEED-LLM\({}^{*}\) generates 4997 images (failure cases occur when the model predicts a number of visual tokens not equal to 32). The results demonstrate that the non-causal codes lead to highly unstable model performance since they contradict the left-to-right autoregressive mechanism of LLMs. **SEED-LLaMA Pretraining.** We first train SEED-LLaMA using LoRA tuning, and then merge the parameters of LoRA with the original LLM and fine-tune all parameters except for the embedding layer.
To explore whether full fine-tuning helps, we evaluate the performance of the model before and after full fine-tuning on image captioning and text-to-image generation, with CIDEr and the CLIP similarity score as evaluation metrics. Tab. 4 shows that fully fine-tuning the LoRA-tuned model enhances the model's capability for both image comprehension and generation. ## 5 Conclusion We present SEED, a discrete image tokenizer, designed based on the premise that visual tokens compatible with LLMs should capture high-level semantics while being generated with 1D causal dependency. SEED enables LLMs to be trained with multimodal data following the original recipe of text (_i.e._, next-word prediction), which is mature and scalable. We further present SEED-LLaMA via multimodal pretraining and instruction tuning on the interleaved visual and textual data with the SEED tokenizer. SEED-LLaMA not only exhibits remarkable performance across multimodal comprehension and image generation tasks, but also demonstrates extensive compositional emergent abilities. We hope that SEED will draw increased attention to visual tokenizers. A more rational visual tokenizer could substantially reduce the complexity of multimodal LLM training. \begin{table} \begin{tabular}{c c c} \hline \hline Pretraining & Captioning & Generation \\ \hline LoRA & 124.5 & 68.87 \\ LoRA + Fully & **125.0** & **69.07** \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of image captioning and text-to-image generation on COCO test set.
2303.07989
A CNN Based Framework for Unistroke Numeral Recognition in Air-Writing
Air-writing refers to virtually writing linguistic characters through hand gestures in three-dimensional space with six degrees of freedom. This paper proposes a generic video camera-aided convolutional neural network (CNN) based air-writing framework. Gestures are performed using a marker of fixed color in front of a generic video camera, followed by color-based segmentation to identify the marker and track the trajectory of the marker tip. A pre-trained CNN is then used to classify the gesture. The recognition accuracy is further improved using transfer learning with the newly acquired data. The performance of the system varies significantly with the illumination condition due to the color-based segmentation. In a less fluctuating illumination condition, the system is able to recognize isolated unistroke numerals of multiple languages. The proposed framework has achieved 97.7%, 95.4% and 93.7% recognition rates in person-independent evaluations on English, Bengali and Devanagari numerals, respectively.
Prasun Roy, Subhankar Ghosh, Umapada Pal
2023-03-14T15:44:45Z
http://arxiv.org/abs/2303.07989v1
# A CNN Based Framework for Unistroke Numeral Recognition in Air-Writing ###### Abstract Air-writing refers to virtually writing linguistic characters through hand gestures in three-dimensional space with six degrees of freedom. This paper proposes a generic video camera-aided convolutional neural network (CNN) based air-writing framework. Gestures are performed using a marker of fixed color in front of a generic video camera, followed by color-based segmentation to identify the marker and track the trajectory of the marker tip. A pre-trained CNN is then used to classify the gesture. The recognition accuracy is further improved using transfer learning with the newly acquired data. The performance of the system varies significantly with the illumination condition due to the color-based segmentation. In a less fluctuating illumination condition, the system is able to recognize isolated unistroke numerals of multiple languages. The proposed framework has achieved 97.7%, 95.4% and 93.7% recognition rates in person-independent evaluations on English, Bengali and Devanagari numerals, respectively. Air-writing, human-computer interaction, gesture recognition, handwritten character recognition, convolutional neural networks. ## I Introduction Air-writing systems render a form of gestural human-computer interaction. Such systems are especially useful for building advanced user interfaces that do not require traditional mechanisms of linguistic input, such as pen-up-pen-down motion, hardware input devices or virtual keyboards. On the contrary, these advanced systems provide an interface for writing through hand gestures in three-dimensional space with six degrees of freedom. The input scheme in such systems fundamentally differs from a generic pen-up-pen-down input mechanism because, in the case of the former, there is no robust way of defining _start_ (pen-down) and _stop_ (pen-up) states while writing. Unlike conventional writing, air-writing systems lack actual anchoring and reference positions on the writing plane. Gestures are guided by considering imaginary axes in three-dimensional space. Consequently, these facts contribute to the increased variability of writing patterns for such systems, thereby accounting for the non-trivial nature of the problem. The possibility of air-writing systems has emerged with the rapid development of depth sensors, such as Kinect [1] and LEAP Motion [2], in recent years. Depth sensors and computer vision techniques are used to track fingertips, followed by recognition of the performed gestures using a trained model. However, these sensors are not widely available in common devices, which restricts these systems from being easily accessible. While depth sensors are not widely available, generic cameras are embedded in many commonly used devices. Therefore a generic video camera-aided air-writing system can be incredibly beneficial. However, unlike depth sensors, a generic camera does not provide information regarding scene depth or bone joints, making it more challenging to process and achieve reliable recognition accuracy. Common approaches for building air-writing systems involve various sensors, contributing to accurate motion tracking but limiting mass adoption for cost-effective general-purpose usage. In this work, attempts have been made to build a generic video camera-based air-writing system without any additional sensors. The remainder of this paper is organized as follows. In Sec. II, some of the significant previous works are discussed. In Sec.
III, the proposed work is presented. Sec. IV describes the experimental results. Sec. V concludes the paper with a summary of the work and potential directions for future work. ## II Related Work Most existing works on air-writing rely on depth sensors such as Kinect [1] and LEAP Motion [2], or wearable gesture and motion control hardware such as Myo [3]. While these approaches offer highly accurate motion tracking that results in a better recognition rate, they limit cost-efficient general adoption due to their essential dependency on special-purpose external hardware. Fig. 1: Overview of the proposed framework. **Top-left:** Original video frame. **Top-right:** Original video frame with approximate marker trajectory overlay. **Bottom-left:** Segmentation mask. **Bottom-right:** Segmented marker and approximate marker trajectory. Chen _et al._[4, 5] have used LEAP Motion for tracking and a Hidden Markov Model (HMM) for recognition, achieving a 0.8% error rate for word-based recognition and a 1.9% error rate for letter-based recognition. Kristensson _et al._[6] have proposed a bimanual markerless interface for depth sensors using a probabilistic scale and translation invariant algorithm that achieves 92.7% accuracy for one-handed and 96.2% accuracy for two-handed gestures. Dash _et al._[7] have utilized a Myo armband sensor along with a novel Fusion model architecture combining one convolutional neural network (CNN) and two gated recurrent units (GRUs). The Fusion model outperforms other widely used models such as HMM, SVM and KNN with an accuracy of 91.7% in a person-independent evaluation and 96.7% in a person-dependent evaluation. Schick _et al._[8] have introduced a sensor-independent markerless framework using multiple cameras for 3D hand tracking, followed by recognition with HMM. This method achieves an 86.15% recognition rate for characters and a 97.54% recognition rate for isolated word recognition on a small vocabulary. All previously proposed systems involve special-purpose sensors or a multi-camera setup, which restricts their mainstream adoption. This paper proposes a single camera-based air-writing framework that can be seamlessly integrated into many common devices with a built-in video camera, such as smartphones and laptops. ## III Proposed Work ### _Marker segmentation_ Due to the high variability of human skin tone, it is significantly challenging to segment hands from the background by a color-based segmentation technique. The proposed technique uses a marker object of fixed uniform color to mitigate the potential difficulties in hand and skin segmentation. Due to its uniform color distribution, the marker can be segmented from the background using a threshold. Assuming \(f(x,y)\) and \(g(x,y)\) to be pixel values at position \((x,y)\) of the initially captured video frame and segmented frame, respectively, and \(I_{m}\) being the threshold for segmentation, which is essentially the uniformly distributed pixel value of the marker object, \[g(x,y)=\begin{cases}1,&\text{if }f(x,y)=I_{m}\\ 0,&\text{otherwise}\end{cases} \tag{1}\] ### _Marker tip identification_ The resulting segmented binary image \(g(x,y)\) contains the marker object and some noise. If the marker has a uniform distribution of color and this color is sufficiently distinctive from the background, the contour with the largest area in the segmented image can be labeled as the marker. The marker tip is estimated as the topmost point on the contour boundary, i.e., the point with the lowest \(y\)-coordinate value.
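To make the segmentation and tip-identification steps concrete, the following is a minimal OpenCV sketch (not code from the paper). Since the exact-equality test in Eq. (1) is too brittle for real footage, the sketch replaces it with a small HSV range around the marker color; the bounds `lower` and `upper` are hypothetical placeholders that must be chosen for the actual marker.

```python
import cv2
import numpy as np

def find_marker_tip(frame, lower, upper):
    """Segment a fixed-color marker in a BGR frame and return its tip.

    lower, upper: hypothetical HSV bounds approximating the marker value
    I_m of Eq. (1); a narrow range stands in for the exact equality test.
    Returns (x, y) of the tip, or None if no marker is found.
    """
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)            # binary image g(x, y)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    marker = max(contours, key=cv2.contourArea)      # largest blob = marker
    # Tip = topmost boundary point, i.e. the lowest y value in image coords.
    x, y = marker[marker[:, :, 1].argmin()][0]
    return int(x), int(y)
```

A morphological opening (`cv2.morphologyEx`) applied to `mask` before the contour search is one simple way to suppress the noise mentioned above.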
### _Trajectory approximation_ Unlike conventional _pen-up-pen-down_ motion-based writing with distinct breakpoints, the air-writing scheme is continuous. This fact contributes to higher complexity for segmenting air-written text into individual characters. This work proposes a velocity-based virtual _pen-up-pen-down_ motion estimation to address this difficulty. The effect of different rendering speeds of different cameras is normalized with the number of rendered frames per second (\(N_{FPS}\)) estimated as \[N_{FPS}=\frac{1}{t_{update}} \tag{2}\] where \(t_{update}\) is the time required to process the last video frame. Assuming \(\Delta_{x}\) and \(\Delta_{y}\) as changes in the position of the marker tip along \(x\) and \(y\) direction, respectively, between two consecutive frames, the normalized instantaneous velocity of the marker tip at time instance \(t\) is estimated as \[dx =\frac{1}{N_{FPS}}\sum_{t}^{t+N_{FPS}}\Delta_{x} \tag{3}\] \[dy =\frac{1}{N_{FPS}}\sum_{t}^{t+N_{FPS}}\Delta_{y} \tag{4}\] The _start_ and _stop_ of a continuous trajectory during air-writing are determined by comparing \(dx\) and \(dy\) with a velocity threshold \(v_{T}\) estimated experimentally. When both \(dx\) and \(dy\) are below \(v_{T}\), the marker is assumed to be **static** (_pen-up_ state). Otherwise, the marker is assumed to be in **motion** (_pen-down_ state). The trajectory of the marker tip is approximated as a piece-wise linear curve by considering straight line segments between marker tip positions in every two consecutive video frames when the marker is in the dynamic state. The motion modeling scheme is shown in Fig. 2. Fig. 2: Velocity-based motion modelling scheme. ### _Character recognition_ The approximate trajectory of the marker tip is the projection of an air-written character from the three-dimensional space onto a two-dimensional image plane. A pre-trained convolutional neural network (CNN) is employed to predict the written character from this projected image. At the time of this study, no standard dataset is available for air-written numerals. Therefore the CNN model is initially trained on the handwritten digits of the standard MNIST dataset [9]. Later the pre-trained model is fine-tuned on a smaller dataset of newly acquired air-written characters. Fig. 3 shows the architecture of the proposed CNN. The model includes a feature extraction block followed by a classification block. The feature extraction block takes a \(56\times 56\) grayscale image as input, and it consists of two convolution layers, each followed by a pooling layer. The first convolution layer uses 32 filters and a \(5\times 5\) kernel. The second convolution layer uses 16 filters and a \(3\times 3\) kernel. Both convolution layers use rectified linear units (ReLU) as activation functions. The classification block consists of three fully connected layers with 128, 64 and \(n\) computing units (neurons), respectively, where \(n\) is the number of output classes corresponding to the character set under consideration. The first two fully connected layers use ReLU as the activation function, while the final layer uses a normalized exponential function (softmax) for the same purpose. The possibility of overfitting is addressed using 20% dropout between the two blocks; a minimal sketch of this architecture is given below.
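The architecture described above maps directly onto a few lines of Keras. This is a hedged reading of Fig. 3: the paper specifies the filters, kernels, units, dropout and activations, while the \(2\times 2\) pooling windows, optimizer and loss below are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_classes):
    """CNN of Fig. 3: a two-stage conv/pool feature extractor, 20% dropout
    between the blocks, and a three-layer fully connected classifier."""
    return models.Sequential([
        # 56x56 grayscale input
        layers.Conv2D(32, (5, 5), activation='relu', input_shape=(56, 56, 1)),
        layers.MaxPooling2D((2, 2)),                  # pool size: our assumption
        layers.Conv2D(16, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dropout(0.2),                          # 20% dropout between blocks
        layers.Dense(128, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])

# Pre-train on handwritten digits, then fine-tune on air-written samples:
model = build_model(10)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_handwritten, y_handwritten, ...); model.fit(x_air, y_air, ...)
```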
## IV Results & Discussion ### _Data Acquisition_ To the best of our knowledge, no standard dataset of air-written characters is currently available for benchmarking the proposed method. For this reason, a dataset of air-written numerals is compiled to aid further experimentation. Data is recorded using a marker of fixed uniform color and a generic video camera. After marker segmentation and marker tip identification, followed by trajectory approximation, an image of the locus of the marker tip is obtained. Each resulting image instance is then numerically labeled and appended to the dataset. Three different datasets for English, Bengali and Devanagari air-written numerals are separately prepared. Each dataset contains 10000 air-written numerals collected from 20 individuals, each writing each numeral 50 times. For pre-training, three standard datasets of handwritten numerals, one for each language, are prepared. These datasets consist of 70000 English handwritten numerals from the MNIST dataset [9], and 14650 Bengali and 22546 Devanagari handwritten numerals from [10]. The air-writing dataset of each language is divided into two disjoint sets - a training set (TS-A) with 6000 instances and a test set (EVAL) with 4000 instances. The other training set (TS-B) includes 70000 English, 14650 Bengali and 22546 Devanagari handwritten numerals. Dataset distributions of English, Bengali and Devanagari numerals are shown in Tables I, II and III, respectively. ### _Results_ Table IV shows the quantitative comparison of test accuracy on EVAL for different combinations of the training sets TS-A and TS-B for each language. In every case, the best performance is obtained by pre-training on handwritten samples, followed by fine-tuning with air-written samples. The proposed framework is not directly comparable with most of the previous works because of the difference in evaluation methods and types of datasets. However, under comparable evaluation conditions, the proposed framework achieved a 6% better recognition rate than a recently proposed method [7]. Our proposed system achieved 97.7% accuracy for air-written English numerals, whereas Dash _et al._[7] obtained 91.7% accuracy using a person-independent evaluation protocol. ### _Error analysis_ Fig. 4 shows the confusion matrices obtained during evaluation on EVAL with each combination of TS-A and TS-B for English, Bengali and Devanagari numerals. Fig. 4: Confusion matrices for English, Bengali and Devanagari numerals. For each language – **Top-left:** Training on TS-A and testing on EVAL. **Top-right:** Training on TS-B and testing on EVAL. **Bottom-left:** Training on TS-A + TS-B combined and testing on EVAL. **Bottom-right:** Training on TS-B followed by fine-tuning on TS-A and testing on EVAL. The recognition rate is significantly improved by pre-training the CNN with a larger dataset of handwritten numerals, followed by fine-tuning the model with a much smaller dataset of air-written numerals. This can be achieved due to geometric similarities between handwritten and air-written numerals. Upon pre-training on a larger dataset of handwritten numerals, the convolution layers are initially trained to extract features. Afterward, the model is fine-tuned on a much smaller dataset of air-written numerals to achieve a robust recognition performance through domain adaptation. The proposed technique has achieved a reasonably robust recognition performance on numerals of multiple languages in real-time tests. However, there are certain occasions when the model fails to predict the air-written characters correctly, potentially due to malformed characters during trajectory tracing caused by unintended movements (shakes) of the camera or user.
Table V shows a few examples of misclassified numerals recorded during real-time testing, along with the actual class (intended by the user), the predicted class (inferred by the model) and the corresponding confidence scores. ## V Conclusion This paper proposes a robust framework for multi-language unistroke air-written numeral recognition. To avoid the difficulties of human skin segmentation, a marker of uniform color is used. Recognition is performed by a CNN, pre-trained on a large dataset of handwritten numerals, followed by domain adaptation through fine-tuning on a small dataset of air-written numerals. In experiments, the proposed framework has achieved 97.7%, 95.4% and 93.7% recognition rates over English, Bengali and Devanagari numerals, respectively. The primary advantage of the method is its cost-efficient and easily adoptable approach, which is entirely independent of any depth or motion sensors, such as Kinect, LEAP Motion and the Myo armband. The framework can be seamlessly integrated into any common device having a generic video camera. Adapting the framework to work directly on hands, without requiring a fixed marker, is a promising direction for improving the method's flexibility in general usage.
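To tie the preceding pieces together, here is a minimal end-to-end capture-loop sketch (not from the paper) implementing the velocity-based pen-state logic of Eqs. (2)-(4). The names `find_marker_tip`, `build_model`, `LOWER_HSV`, `UPPER_HSV` and `rasterize` (which would render a stroke's locus onto a \(56\times 56\) image) are hypothetical helpers from or alongside the earlier sketches, and `V_T` must be tuned experimentally as described in Sec. III.

```python
import time
import cv2
from collections import deque

V_T = 2.0                   # velocity threshold v_T (pixels/frame), tuned by hand
deltas = deque(maxlen=60)   # recent per-frame tip displacements
stroke, prev_tip = [], None
model = build_model(10)     # hypothetical: pre-trained, then fine-tuned CNN
cap = cv2.VideoCapture(0)

while cap.isOpened():
    t0 = time.time()
    ok, frame = cap.read()
    if not ok:
        break
    tip = find_marker_tip(frame, LOWER_HSV, UPPER_HSV)   # earlier sketch
    if tip is not None and prev_tip is not None:
        deltas.append((abs(tip[0] - prev_tip[0]), abs(tip[1] - prev_tip[1])))
        n_fps = max(int(1.0 / max(time.time() - t0, 1e-3)), 1)   # Eq. (2)
        recent = list(deltas)[-n_fps:]
        dx = sum(d[0] for d in recent) / len(recent)             # Eq. (3)
        dy = sum(d[1] for d in recent) / len(recent)             # Eq. (4)
        if dx < V_T and dy < V_T:        # pen-up: classify the finished stroke
            if stroke:
                img = rasterize(stroke)  # hypothetical 56x56 rendering of locus
                digit = model.predict(img[None, :, :, None]).argmax()
                stroke = []
        else:                            # pen-down: keep tracing the tip
            stroke.append(tip)
    prev_tip = tip
cap.release()
```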
2305.03861
A Sharp Inequality for Trace-Free Matrices with Applications to Hypersurfaces
We derive a sharp inequality relating the second and fourth elementary symmetric functions of the eigenvalues of a trace-free matrix and give two applications. First, we give a new proof of the classification of conformally flat hypersurfaces in spaceforms. Second, we construct a functional which characterizes rotational hypersurfaces and catenoids.
Jeffrey S. Case, Aaron J. Tyrrell
2023-05-05T22:12:25Z
http://arxiv.org/abs/2305.03861v2
# A sharp inequality for trace-free matrices with applications to hypersurfaces ###### Abstract. We derive a sharp inequality relating the second and fourth elementary symmetric functions of the eigenvalues of a trace-free matrix and give two applications. First, we give a new proof of the classification of conformally flat hypersurfaces in spaceforms. Second, we construct a functional which characterizes rotational hypersurfaces and catenoids. Key words and phrases: rotation hypersurface; catenoid; rigidity 2020 Mathematics Subject Classification: Primary 53C42; Secondary 26D05, 53A07, 53C24 ## 1. Introduction Let \(i\colon N^{n}\to(M^{n+1},g)\), \(n\geq 4\), be an immersed hypersurface in a locally conformally flat Riemannian manifold. This note is motivated by two interesting rigidity results for \(N\) in terms of its principal curvatures. First, Nishikawa and Maeda showed [5] that \((N^{n},i^{*}g)\) is locally conformally flat if and only if at each point it has a principal curvature of multiplicity at least \(n-1\). Second, do Carmo and Dajczer showed [1] that, under the stronger assumption that \((M^{n+1},g)\) is complete, simply connected, and of constant sectional curvature \(c\in\mathbb{R}\), the hypersurface \(N\) is contained in a so-called rotation hypersurface [1, Definition 2.2] if at each point it has a principal curvature of multiplicity exactly \(n-1\). Furthermore, if \(N\) is minimal, then it is contained in a catenoid if and only if at each point it has a principal curvature of multiplicity at least \(n-1\). The purpose of this short note is to point out that the condition on the principal curvatures can be recast in terms of a sharp inequality relating the squared-norm \(|\mathring{A}^{2}|^{2}\) of the square of the trace-free part of the second fundamental form \(\mathring{A}\) and the square of the squared-norm \(|\mathring{A}|^{2}\) of \(\mathring{A}\). Our main result is a purely algebraic statement about trace-free linear maps on finite-dimensional vector spaces: **Theorem 1.1**.: _Let \(V\) be an \(n\)-dimensional inner product space, \(n\geq 4\), and let \(\mathring{A}\in\operatorname{End}(V)\) be trace-free. Then_ \[|\mathring{A}^{2}|^{2}\leq\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A}|^{4}\] _with equality if and only if \(\mathring{A}\) has an eigenspace of dimension at least \(n-1\)._ The main idea of the proof of Theorem 1.1 is that the Newton inequality relating the first, second, and third elementary symmetric functions of the eigenvalues of a matrix \(A\) realizes equality if and only if \(A\) is proportional to the identity or has rank \(1\). Applying this to \(\mathring{A}+\lambda I\) for \(I\) the identity map yields the sharp inequality \[\left(\operatorname{tr}\mathring{A}^{3}\right)^{2}\leq\frac{(n-2)^{2}}{n(n-1)}|\mathring{A}|^{6}\] for any trace-free \(\mathring{A}\in\operatorname{End}(V^{n})\); this is sometimes called the \(\lambda\)-method [4]. Repeating this argument for the Newton inequality relating the second, third, and fourth elementary symmetric functions yields Theorem 1.1. We conclude with a number of applications of Theorem 1.1 to the rigidity of locally conformally flat hypersurfaces, of rotation hypersurfaces, and of catenoids. Our first application is a new proof of the characterization [5] of locally conformally flat hypersurfaces in locally conformally flat manifolds in terms of their principal curvatures.
**Corollary 1.2**.: _Let \(i\colon N^{n}\to(M^{n+1},g)\), \(n\geq 4\), be an immersed hypersurface in a locally conformally flat manifold. The induced metric \(i^{*}g\) on \(N\) is locally conformally flat if and only if for each \(p\in N\), the shape operator \(A_{p}\colon T_{p}N\to T_{p}N\) has an eigenspace of dimension at least \(n-1\)._ Our second application is a rigidity result for catenoids in simply connected spaceforms amongst all minimal hypersurfaces. **Corollary 1.3**.: _Let \(i\colon N^{n}\to M^{n+1}(c)\), \(n\geq 4\), be a minimal hypersurface in the complete, simply connected Riemannian \((n+1)\)-manifold of constant sectional curvature \(c\in\mathbb{R}\). Then the shape operator \(A\colon TN\to TN\) satisfies_ \[|A^{2}|^{2}\leq\frac{n^{2}-3n+3}{n(n-1)}|A|^{4}. \tag{1.1}\] _Moreover, equality holds if and only if \(i(N)\) is contained in a catenoid._ The above results motivate the introduction of two energy functionals for hypersurfaces of a Riemannian manifold. First, given an immersion \(i\colon N^{n}\to(M^{n+1},g)\), we define the _rotational energy_ of \(N\) by \[E_{rot}[N]:=\int_{N}\left(\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A}|^{4}-|\mathring{A}^{2}|^{2}\right)\,dA,\] assuming it is finite. Our terminology is motivated by the characterization [1] of rotation hypersurfaces in simply connected spaceforms. **Corollary 1.4**.: _Let \(i\colon N^{n}\to M^{n+1}(c)\), \(n\geq 4\), be a nowhere umbilic immersed hypersurface in the complete, simply connected, Riemannian \((n+1)\)-manifold of constant sectional curvature \(c\in\mathbb{R}\). Then \(E_{rot}[N]\geq 0\) with equality if and only if \(N\) is contained in a rotation hypersurface._ This result is sharper for minimal immersions. **Corollary 1.5**.: _Let \(i\colon N^{n}\to M^{n+1}(c)\), \(n\geq 4\), be a minimal immersed hypersurface in the complete, simply connected, Riemannian \((n+1)\)-manifold of constant sectional curvature \(c\in\mathbb{R}\). Then \(E_{rot}[N]\geq 0\) with equality if and only if \(N\) is contained in a catenoid._ Second, given an immersion \(i\colon N^{n}\to(M^{n+1},g)\), we define the _conformal rotational energy_ of \(N\) by \[E_{rot}^{conf}[N]:=\int_{N}|\mathring{A}|^{n-4}\left(\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A}|^{4}-|\mathring{A}^{2}|^{2}\right)\,dA,\] assuming it is finite. Note that \(E_{rot}^{conf}=E_{rot}\) for four-dimensional hypersurfaces. The main point of this definition is that \(E_{rot}^{conf}\) is conformally invariant, and hence gives a conformally invariant characterization of immersed submanifolds for which the shape operator has an eigenspace of dimension at least \(n-1\). **Corollary 1.6**.: _Let \(i\colon N^{n}\to(M^{n+1},g)\), \(n\geq 4\), be an immersed hypersurface in a Riemannian manifold. Then \(E^{conf}_{rot}[N]\geq 0\) with equality if and only if for each \(p\in N\), the shape operator \(A_{p}\colon T_{p}N\to T_{p}N\) has an eigenspace of dimension at least \(n-1\)._ This note is organized as follows. In Section 2 we give an elementary proof of Theorem 1.1. In Section 3 we prove our geometric applications of Theorem 1.1. ## 2. Elementary symmetric functions Let \(A\in\operatorname{End}(V^{n})\) be a linear map on an \(n\)-dimensional vector space. Denote by \[\sigma_{k}(A):=\sum_{i_{1}<\cdots<i_{k}}\lambda_{i_{1}}\cdots\lambda_{i_{k}} \tag{2.1}\] the \(k\)-th elementary symmetric function of the eigenvalues \(\{\lambda_{1},\ldots,\lambda_{n}\}\) of \(A\), with the convention \(\sigma_{0}(A)=1\). Note that \(\sigma_{1}(A)=\operatorname{tr}A\).
Define \(p_{k}(A)\), \(k\leq n\), by \[\sigma_{k}(A)=\binom{n}{k}p_{k}(A);\] this renormalization is such that \(p_{k}(\lambda I)=\lambda^{k}\), where \(I\) is the identity map. The main tool in the proof of Theorem 1.1 is the sharp version of Newton's inequalities: **Lemma 2.1** ([6, Section 2]).: _Let \(A\in\operatorname{End}(V^{n})\) and let \(k\leq n-1\) be a positive integer. Then_ \[p_{k}^{2}(A)\geq p_{k-1}(A)p_{k+1}(A)\] _with equality if and only if \(A\) is proportional to the identity or \(\dim\ker A\geq n-k+1\)._ Applying the \(\lambda\)-method to Lemma 2.1 with \(k=2\) and \(k=3\) yields useful inequalities, the second of which is recorded in Theorem 1.1. We record the resulting inequalities separately due to their different dependence on \(n=\dim V\). Our first inequality relates \(p_{2}(\mathring{A})\) and \(p_{3}(\mathring{A})\) for a trace-free linear map \(\mathring{A}\) on a vector space of dimension at least \(3\). **Proposition 2.2**.: _Let \(\mathring{A}\in\operatorname{End}(V^{n})\), \(n\geq 3\), be such that \(p_{1}(\mathring{A})=0\). Then_ \[p_{3}^{2}(\mathring{A})+4p_{2}^{3}(\mathring{A})\leq 0.\] _Moreover, equality holds if and only if \(\mathring{A}\) has an eigenspace of dimension at least \(n-1\)._ Proof.: It is well-known, and follows easily from Equation (2.1), that \[p_{k}(A+\lambda I)=\sum_{j=0}^{k}\binom{k}{j}\lambda^{j}p_{k-j}(A) \tag{2.2}\] for all \(A\in\operatorname{End}(V^{n})\) and all \(\lambda\in\mathbb{R}\). Now let \(\mathring{A}\in\operatorname{End}(V^{n})\) be such that \(p_{1}(\mathring{A})=0\). Lemma 2.1 implies that \(p_{2}(\mathring{A})\leq 0\) with equality if and only if \(\mathring{A}=0\). Since the conclusion is trivially true if \(\mathring{A}=0\), we may assume that \(p_{2}(\mathring{A})<0\). For notational simplicity, denote \(p_{k}:=p_{k}(\mathring{A})\). Let \(\lambda\in\mathbb{R}\). Applying Lemma 2.1 to \(\mathring{A}+\lambda I\) and simplifying via Equation (2.2) yields \[0 \leq(p_{2}+\lambda^{2})^{2}-\lambda(p_{3}+3\lambda p_{2}+\lambda^{3})\] \[=p_{2}^{2}-\lambda p_{3}-\lambda^{2}p_{2}, \tag{2.3}\] with equality if and only if \(\mathring{A}\) has an eigenspace of dimension at least \(n-1\). In fact, since \(p_{1}=0\) and \(p_{2}<0\), the dimension must equal \(n-1\) in the case of equality in Inequality (2.3). Multiplying Inequality (2.3) by \(4p_{2}<0\) yields \[0 \geq 4p_{2}^{3}-4\lambda p_{2}p_{3}-4\lambda^{2}p_{2}^{2}\] \[=-(2\lambda p_{2}+p_{3})^{2}+4p_{2}^{3}+p_{3}^{2}.\] Therefore \(p_{3}^{2}+4p_{2}^{3}\leq 0\) with equality if and only if \(\mathring{A}\) has an eigenspace of dimension \(n-1\). Our second inequality relates \(p_{2}(\mathring{A})\) and \(p_{4}(\mathring{A})\) for a trace-free linear map \(\mathring{A}\) on a vector space of dimension at least \(4\). **Proposition 2.3**.: _Let \(\mathring{A}\in\operatorname{End}(V^{n})\), \(n\geq 4\), be such that \(p_{1}(\mathring{A})=0\). Then_ \[p_{4}(\mathring{A})+3p_{2}^{2}(\mathring{A})\geq 0.\] _Moreover, equality holds if and only if \(\mathring{A}\) has an eigenspace of dimension at least \(n-1\)._ Proof.: Let \(\mathring{A}\in\operatorname{End}(V^{n})\) be such that \(p_{1}(\mathring{A})=0\). For notational simplicity, denote \(p_{k}:=p_{k}(\mathring{A})\). Lemma 2.1 implies that \(p_{2}(\mathring{A})\leq 0\) with equality if and only if \(\mathring{A}=0\). Since the conclusion is trivially true if \(\mathring{A}=0\), we may assume that \(p_{2}<0\). 
Now, applying Lemma 2.1 and Equation (2.2) to \(\mathring{A}+\lambda I\) yields \[0 \leq(p_{3}+3\lambda p_{2}+\lambda^{3})^{2}-(p_{2}+\lambda^{2})(p_ {4}+4\lambda p_{3}+6\lambda^{2}p_{2}+\lambda^{4})\] \[=p_{3}^{2}-p_{2}p_{4}+2\lambda p_{2}p_{3}+\lambda^{2}(3p_{2}^{2}- p_{4})-2\lambda^{3}p_{3}-\lambda^{4}p_{2}\] for all \(\lambda\in\mathbb{R}\). Multiplying by \(16p_{2}^{3}\) and denoting \(s:=2\lambda p_{2}+p_{3}\) yields \[0 \geq 16p_{2}^{3}p_{3}^{2}-16p_{2}^{4}p_{4}+32\lambda p_{2}^{4}p_{ 3}+16\lambda^{2}p_{2}^{2}(3p_{2}^{3}-p_{2}p_{4})-32\lambda^{3}p_{2}^{3}p_{3}-16 \lambda^{4}p_{2}^{4}\] \[=(3p_{3}^{2}-4p_{2}p_{4})(p_{3}^{2}+4p_{2}^{3})+8sp_{3}(p_{2}p_{4 }-p_{2}^{3}-p_{3}^{2})+s^{2}(12p_{2}^{3}-4p_{2}p_{4}+6p_{3}^{2})-s^{4}.\] Setting \(s=0\) yields \[(3p_{3}^{2}-4p_{2}p_{4})(p_{3}^{2}+4p_{2}^{3})\leq 0. \tag{2.4}\] Proposition 2.2 implies that \(p_{3}^{2}+4p_{2}^{3}\leq 0\) with equality if and only if, up to a multiplicative constant and a choice of basis, \[\mathring{A}=\begin{pmatrix}1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&0\\ 0&0&\cdots&0&1-n\end{pmatrix}.\] If \(p_{3}^{2}+4p_{2}^{3}=0\), then a straightforward computation yields \(p_{k}=1-k\) for all \(k\in\mathbb{N}\), and hence \(p_{4}+3p_{2}^{2}=0\). If instead \(p_{3}^{2}+4p_{2}^{3}<0\), then Inequality (2.4) implies that \(3p_{3}^{2}-4p_{2}p_{4}\geq 0\). Therefore \[0\leq 3p_{3}^{2}-4p_{2}p_{4}<-12p_{2}^{3}-4p_{2}p_{4}=-4p_{2}(p_{4}+3p_{2}^{2}).\] Since \(p_{2}<0\), we conclude that \(p_{4}+3p_{2}^{2}>0\). Expressing the nonnegative quantity \(p_{4}+3p_{2}^{2}\) in terms of \(|A|^{4}\) and \(|A^{2}|^{2}\) yields Theorem 1.1. Proof of Theorem 1.1.: Let \(\mathring{A}\in\operatorname{End}(V^{n})\) be such that \(p_{1}(\mathring{A})=0\). Direct computation gives \[\sigma_{2}(\mathring{A}) =-\frac{1}{2}|\mathring{A}|^{2},\] \[\sigma_{4}(\mathring{A}) =\frac{1}{8}|\mathring{A}|^{4}-\frac{1}{4}|\mathring{A}^{2}|^{2}.\] Therefore \[0\leq\binom{n}{4}\left(p_{4}(\mathring{A})+3p_{2}^{2}(\mathring{A})\right)=- \frac{1}{4}\left(|\mathring{A}^{2}|^{2}-\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A} |^{4}\right).\] The conclusion readily follows from Proposition 2.3. ## 3. Geometric applications We conclude by proving our geometric applications of Theorem 1.1. Proof of Corollary 1.2.: Let \(i\colon N^{n}\to(M^{n+1},g)\) be a Riemannian immersion into a locally conformally flat manifold. Denote by \(\overline{W}\) the Weyl tensor of \(i^{*}g\). Denote by \[F:=\frac{1}{n-2}\left(\mathring{A}^{2}-Gi^{*}g\right)\] the Fialkow tensor \(F\in\Gamma(S^{2}T^{*}N)\), where \(G:=\operatorname{tr}_{i^{*}g}F=\frac{1}{2(n-1)}|\mathring{A}|^{2}\) is its trace (cf. [2, Equation (13.9)]). The Gauss-Codazzi equations (cf. [2, Equation (22.13)]) imply that \[\overline{W}=\frac{1}{2}\mathring{A}\wedge\mathring{A}+F\wedge g,\] where \(S\wedge T\) denotes the Kulkarni-Nomizu product \[(S\wedge T)_{abcd}:=S_{ac}T_{bd}+S_{bd}T_{ac}-S_{ad}T_{bc}-S_{bc}T_{ad}.\] Direct computation gives (cf. [3]) \[|\mathring{A}\wedge\mathring{A}|^{2} =8|\mathring{A}|^{4}-8|\mathring{A}^{2}|^{2},\] \[\langle\mathring{A}\wedge\mathring{A},F\wedge g\rangle =-8\langle\mathring{A}^{2},F\rangle,\] \[|F\wedge g|^{2} =4\langle\mathring{A}^{2},F\rangle,\] \[\langle\mathring{A}^{2},F\rangle =\frac{1}{n-2}|\mathring{A}^{2}|^{2}-\frac{1}{2(n-1)(n-2)}| \mathring{A}|^{4}.\] Therefore \[|\overline{W}|^{2}=\frac{2(n^{2}-3n+3)}{(n-1)(n-2)}|\mathring{A}|^{4}-\frac{2n }{n-2}|\mathring{A}^{2}|^{2}.\] The conclusion readily follows from Theorem 1.1. 
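As a quick numerical sanity check of Theorem 1.1 (not part of the paper's argument), one can test the inequality on a random trace-free symmetric matrix and the equality case on the family \(\operatorname{diag}(1,\ldots,1,1-n)\):

```python
import numpy as np

def sides(A):
    """For a trace-free symmetric A, return (|A^2|^2, bound) with
    bound = (n^2 - 3n + 3)/(n(n-1)) * |A|^4, as in Theorem 1.1."""
    n = A.shape[0]
    A2 = A @ A
    lhs = np.sum(A2 * A2)                 # |A^2|^2 (squared Frobenius norm)
    rhs = (n**2 - 3*n + 3) / (n * (n - 1)) * np.sum(A * A)**2
    return lhs, rhs

n, rng = 6, np.random.default_rng(0)

# Generic trace-free symmetric matrix: strict inequality is expected.
B = rng.standard_normal((n, n))
B = (B + B.T) / 2
B -= (np.trace(B) / n) * np.eye(n)
print(sides(B))        # lhs strictly below rhs

# Eigenspace of dimension n-1: equality is expected.
E = np.diag([1.0] * (n - 1) + [1.0 - n])
print(sides(E))        # lhs equals rhs up to rounding
```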
Proof of Corollary 1.3.: Let \(i\colon N^{n}\to M^{n+1}(c)\) be a minimal hypersurface. By minimality, the shape operator is trace-free; i.e. \(A=\mathring{A}\). Theorem 1.1 implies that \(A\) satisfies Inequality (1.1) with equality if and only if \(A\) has an eigenvalue of multiplicity at least \(n-1\). The final conclusion follows from a result [1, Corollary 4.4] of do Carmo and Dajczer. Proof of Corollary 1.4.: Let \(i\colon N^{n}\to M^{n+1}(c)\) be a nowhere umbilic immersed hypersurface. Theorem 1.1 implies that \(E_{rot}[N]\geq 0\) with equality if and only if at each point \(p\in N\) the shape operator \(A_{p}\) has an eigenspace of dimension exactly \(n-1\). The final conclusion follows from a result [1, Theorem 4.2] of do Carmo and Dajczer. Proof of Corollary 1.5.: Let \(i\colon N^{n}\to M^{n+1}(c)\) be a minimal immersed hypersurface. Corollary 1.3 implies that \(E_{rot}[N]\geq 0\) with equality if and only if \(i(N)\) is contained in a catenoid. Proof of Corollary 1.6.: Let \(i\colon N^{n}\to(M^{n+1},g)\) be an immersed hypersurface. Theorem 1.1 implies that \(E_{rot}^{conf}[N]\geq 0\) with equality if and only if \[|\mathring{A}|^{n-4}\left(\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A}|^{4}-|\mathring{A}^{2}|^{2}\right)=0.\] It follows that for each \(p\in N\), either \(\mathring{A}_{p}=0\) or \[|\mathring{A}_{p}^{2}|^{2}=\frac{n^{2}-3n+3}{n(n-1)}|\mathring{A}_{p}|^{4}.\] In the former case, \(\mathring{A}_{p}\) has an eigenspace of dimension \(n\). In the latter case, Theorem 1.1 implies that \(\mathring{A}_{p}\) has an eigenspace of dimension at least \(n-1\). ## Acknowledgements We thank Eudes Leite de Lima for pointing out a mistake in an early version of this paper. JSC was partially supported by the Simons Foundation (Grant #524601).
2310.18991
The Petz (lite) recovery map for scrambling channel
We study properties of the Petz recovery map in chaotic systems, such as the Hayden-Preskill setup for evaporating black holes and the SYK model. Since these systems exhibit the phenomenon called scrambling, we expect that the expression of the recovery channel $\mathcal{R}$ gets simplified, given by just the adjoint $\mathcal{N}^{\dagger}$ of the original channel $\mathcal{N}$ which defines the time evolution of the states in the code subspace embedded into the physical Hilbert space. We check this phenomenon in two examples. The first one is the Hayden-Preskill setup described by Haar random unitaries. We compute the relative entropy $S(\mathcal{R}\left[\mathcal{N}[\rho]\right] ||\rho)$ and show that it vanishes when the decoupling is achieved. We further show that the simplified recovery map is equivalent to the protocol proposed by Yoshida and Kitaev. The second example is the SYK model where the two-dimensional code subspace is defined by an insertion of a fermionic operator, and the system is evolved by the SYK Hamiltonian. We check the recovery phenomenon by relating some matrix elements of an output density matrix $\langle T|\mathcal{R}[\mathcal{N}[\rho]]|T' \rangle$ to Rényi-two modular flowed correlators, and show that they coincide with the elements for the input density matrix with small error after twice the scrambling time.
Yasuaki Nakayama, Akihiro Miyata, Tomonori Ugajin
2023-10-29T12:21:35Z
http://arxiv.org/abs/2310.18991v1
# The Petz (lite) recovery map for scrambling channel ###### Abstract We study properties of the Petz recovery map in chaotic systems, such as the Hayden-Preskill setup for evaporating black holes and the SYK model. Since these systems exhibit the phenomenon called scrambling, we expect that the expression of the recovery channel \(\mathcal{R}\) gets simplified, given by just the adjoint \(\mathcal{N}^{\dagger}\) of the original channel \(\mathcal{N}\) which defines the time evolution of the states in the code subspace embedded into the physical Hilbert space. We check this phenomenon in two examples. The first one is the Hayden-Preskill setup described by Haar random unitaries. We compute the relative entropy \(S(\mathcal{R}\left[\mathcal{N}[\rho]\right]||\rho)\) and show that it vanishes when the decoupling is achieved. We further show that the simplified recovery map is equivalent to the protocol proposed by Yoshida and Kitaev. The second example is the SYK model where the two-dimensional code subspace is defined by an insertion of a fermionic operator, and the system is evolved by the SYK Hamiltonian. We check the recovery phenomenon by relating some matrix elements of an output density matrix \(\bra{T}\mathcal{R}[\mathcal{N}[\rho]]\ket{T^{\prime}}\) to Rényi-two modular flowed correlators, and show that they coincide with the elements for the input density matrix with small error after twice the scrambling time. ## 1 Introduction Advances in our understanding of the relationship between quantum information theory and holographic principles have revealed the connection between the structure of spacetime and quantum entanglement. In particular, the island formula [1; 2; 3; 4; 5] for the entropy of Hawking radiation implies that the island region in the interior of an old black hole is reconstructed from the information of Hawking radiation. However, the precise way to recover the black hole interior region from Hawking radiation still remains to be understood. It has been realized that for this purpose it is convenient to regard the black hole interior as a code subspace embedded in the Hilbert space of Hawking radiation as a quantum error correcting code [6; 7; 8]. For instance, the decoupling theorem by Hayden and Preskill [6] implies that the black hole interior region is protected against the erasure of black hole degrees of freedom, which assures the recovery. Once we regard an evaporating black hole as a quantum error correcting (QEC) code, then the general argument of QEC [9] tells us that the recovery is achieved by applying the Petz recovery map [10; 11]. In this paper, we study properties of the Petz recovery map in chaotic systems, such as the Hayden-Preskill (HP) setup for evaporating black holes and the SYK model. Since these systems exhibit the phenomenon called scrambling, we expect that the recovery channel \(\mathcal{R}\) gets simplified, given by just the adjoint \(\mathcal{N}^{\dagger}\) of the original channel \(\mathcal{N}\) which defines the embedding of the black hole interior into the Hawking radiation. Therefore schematically we have \[\mathcal{R}\sim a\;\mathcal{N}^{\dagger}, \tag{1}\] where \(a\) is some numerical factor depending on the dimensions of the Hilbert spaces of the black hole and the Hawking radiation. We will see this phenomenon in two examples. The first one is the Hayden-Preskill setup where the dynamics of an evaporating black hole and Hawking radiation is described by Haar random unitaries.
We do this by computing the relative entropy \(S(\mathcal{R}\left[\mathcal{N}[\rho]\right]||\rho)\) and show that it vanishes when the decoupling is achieved. We further show that the simplified recovery map is equivalent to the Yoshida-Kitaev protocol1. The second example is one of the SYK model versions of the Hayden-Preskill setup, discussed in [14]2. In this setup, code information is expressed as excitations, and the system is evolved by the SYK Hamiltonian. We check the recovery phenomenon by relating some elements of an output density matrix \(\bra{T}\mathcal{R}[\mathcal{N}[\rho]]\ket{T^{\prime}}\) to Rényi-two modular flowed correlators, and show that they give the input density matrix \(\bra{T}\rho\ket{T^{\prime}}\) with small error after twice the scrambling time. However, there are still remaining matrix elements which we need to check, and it is difficult to evaluate them directly. In the upcoming paper [16], we will give their direct evaluations. In this paper, we do not evaluate them directly, but instead make indirect estimates for them based on our obtained results. Footnote 1: This equivalence has not been directly shown, but such an equivalence is suggested by Yoshida in [12; 13]. Footnote 2: In [15], the authors discuss another Hayden-Preskill setup in the SYK model, and the setup is different from our setup. Our paper is organized as follows. In section 2, we start by introducing a quantum channel induced by the Hayden-Preskill setup, and explain how we write down the simplified recovery map in the original Hayden-Preskill setup, which is applicable to the SYK case. We also explain a convenient notation to treat quantum channels induced by the Hayden-Preskill setup; in this notation, one can easily visualize the gravitational interpretation. In section 3, using this convenient notation, we compute some relative entropies to check sufficiency, which guarantees that we can use the simplified map as a recovery map. Also, we show that the Yoshida-Kitaev protocol can be written as the recovery map. In section 4, we explain one of the Hayden-Preskill setups using the SYK model, and introduce a corresponding quantum channel. After that, we give the simplified recovery map, and show that some matrix elements of the output can be written as "Rényi-two modular flowed correlators". By evaluating these "Rényi-two modular flowed correlators" analytically, we show that the corresponding matrix elements of the output of the simplified recovery map give the desired results. In section 5, based on the results computed in the previous section, we estimate the remaining matrix elements of the output; their detailed evaluation will be reported in the upcoming paper [16]. In section 6, we conclude this paper with a discussion of our results and future directions. In appendix A, we give another derivation of the simplified recovery map using a Kraus representation. In appendix B, we show the relation that holds for an EPR state, which is used in section 3. In appendix C, conventions used in section 4 are listed. In appendix D, we show that, in the SYK version of the Hayden-Preskill setup, some recovery results can be written as "Rényi-two modular flowed correlators". ## 2 Recovery map for the Hayden-Preskill channel The Hayden-Preskill setup is a tractable toy model for studying information flow in evaporating black holes. The setup consists of a black hole \(A\) that has been emitting Hawking radiation \(B\).
We are particularly interested in the system after the Page time where the black hole has emitted more than half of its original entropy 3, therefore approximately forming a maximally entangled state \(|\text{EPR}\rangle_{AB}\). Suppose Alice throws a quantum state \(\rho_{T}\) (often called a diary) into this old black hole. Then as the black hole further evaporates \(A\to C+D\) by emitting late Hawking radiation \(D\), information thrown into the black hole will eventually appear in total Hawking radiation \(DB\). Here we denoted by \(C\) the remaining black hole after emitting the late radiation \(D\), see the left panel of figure 1. The analysis of Hayden and Preskill [6] showed that the diary appears in Hawking radiation almost immediately, namely after the scrambling time. Footnote 3: We follow the notation of Yoshida-Kitaev [17]. To see this, it is useful to introduce an additional system called reference \(R\) and form a maximally entangled state \(|\text{EPR}\rangle_{RT}\) with the diary \(T\). Then in this setup the initial condition of the process is \(|\text{EPR}\rangle_{RT}\otimes|\text{EPR}\rangle_{AB}\). Owing to its chaotic dynamics, information of the diary thrown into the black hole gets scrambled and spread over the entire degrees of freedom. The resulting state is given by \[|\Psi_{HP}\rangle=(I_{R}\otimes U_{T,A\to C,D}\otimes I_{B})\,|\text{EPR}\rangle_{R,T}\otimes|\text{EPR}\rangle_{A,B}\,, \tag{1}\] where \(U_{T,A\to C,D}\) is a random unitary matrix from \(A,T\) to \(C,D\), which models the chaotic dynamics of the black hole. By finding the Hilbert space with which \(R\) is mostly entangled, one can find where the information of the original diary is located in the final time slice. See again the left panel of figure 1. The surprising result of HP is summarized in the following inequality. \[\overline{\left\|\rho_{RC}-\rho_{R}\otimes\rho_{C}\right\|_{1}^{2}}\leq\left(\frac{d_{T}}{d_{D}}\right)^{2}\,, \tag{2}\] where \(\left\|A\right\|_{1}=\operatorname{tr}\sqrt{A^{\dagger}A}\), \(\rho_{RC},\rho_{R},\rho_{C}\) are the reduced density matrices of (1) on the indicated subsystems, and on the left-hand side we take the average over random unitaries. This inequality (2) implies that if one collects a sufficient number of late Hawking quanta so that \(d_{D}\gg d_{T}\), the system of the remaining black hole and the reference becomes uncorrelated, \(\rho_{RC}=\rho_{R}\otimes\rho_{C}\), and the information of the diary has to be encoded in Hawking radiation \(DB\). This result is also natural from the viewpoint of the framework of quantum error correction4. A quantum error correcting code is a scheme to protect quantum states (logical states) in code subspace \(H_{\text{code}}\) against various errors. Such an error is mathematically modeled by a CPTP map called quantum channel \(\mathcal{N}\). The basic idea of quantum error correction is protecting these quantum states in code subspace \(H_{\text{code}}\) by embedding it into a larger Hilbert space, often called the physical Hilbert space \(H_{phys}\). In the HP protocol, the Hilbert space of the diary \(H_{T}\) corresponds to \(H_{\text{code}}\) in QEC, and \(H_{\text{phys}}\) is \(H_{DB}\).
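As an illustration of the decoupling inequality (2), the following minimal NumPy sketch (not part of the original analysis) builds the state (1) for a single Haar random unitary, sampled with `scipy.stats.unitary_group`, and compares a one-sample value of \(\|\rho_{RC}-\rho_{R}\otimes\rho_{C}\|_{1}\) against \(d_{T}/d_{D}\). The dimension choices are arbitrary, subject to \(d_{T}d_{A}=d_{C}d_{D}\):

```python
import numpy as np
from scipy.stats import unitary_group

def decoupling_lhs(dT, dA, dD, seed=0):
    """One-sample estimate of ||rho_RC - rho_R (x) rho_C||_1 for state (1)."""
    dB, dC = dA, (dT * dA) // dD            # U maps T (x) A -> C (x) D
    U = unitary_group.rvs(dT * dA, random_state=seed)
    # |EPR>_{RT} (x) |EPR>_{AB}, index order (R, T, A, B)
    psi = np.einsum('rt,ab->rtab', np.eye(dT), np.eye(dA)) / np.sqrt(dT * dA)
    # act with U on the combined (T, A) slot, then split it as (C, D)
    psi = np.einsum('ij,rjb->rib', U, psi.reshape(dT, dT * dA, dB))
    psi = psi.reshape(dT, dC, dD, dB)
    rho_RC = np.einsum('rcdb,sedb->rcse', psi, psi.conj()).reshape(dT*dC, dT*dC)
    rho_R = np.einsum('rcdb,scdb->rs', psi, psi.conj())
    rho_C = np.einsum('rcdb,redb->ce', psi, psi.conj())
    diff = rho_RC - np.kron(rho_R, rho_C)
    return np.abs(np.linalg.eigvalsh(diff)).sum()   # trace norm (Hermitian)

for dD in (2, 4, 8):
    # single-sample LHS of (2) (before squaring and averaging) vs d_T/d_D
    print(dD, decoupling_lhs(dT=2, dA=8, dD=dD), 2 / dD)
```

As \(d_{D}\) grows past \(d_{T}\), the printed trace norm drops well below the bound, in line with (2).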
The quantum channel \(\mathcal{N}:T\to DB\) is obtained by tracing out the remaining black hole and the reference system degrees of freedom \(C\) and \(R\) from \(\left|\Psi_{HP}\right\rangle\) in (1), with the reference state \(\left|\text{EPR}\right\rangle_{R,T}\) replaced by \(\sqrt{d_{T}\rho_{T}}\left|\text{EPR}\right\rangle_{R,T}\): \[\begin{split}\mathcal{N}_{T\to D,B}\left[\rho_{T}\right]&=\mathrm{tr}_{C}\left[(U_{T,A\to C,D}\otimes I_{B})(\rho_{T}\otimes\left|\mathrm{EPR}\right\rangle_{A,B}\langle\mathrm{EPR}|)(U_{T,A\to C,D}^{\dagger}\otimes I_{B})\right]\\ &=\frac{1}{d_{B}}\sum_{\tilde{D},\tilde{D}^{\prime}=1}^{d_{D}}\sum_{\tilde{B},\tilde{B}^{\prime}=1}^{d_{B}}\left|\tilde{D}\right\rangle_{D}\left\langle\tilde{D}^{\prime}\right|\otimes\left|\tilde{B}\right\rangle_{B}\left\langle\tilde{B}^{\prime}\right|\sum_{C=1}^{d_{C}}\sum_{\tilde{T},\tilde{T}^{\prime}=1}^{d_{T}}U_{C,\tilde{D};\tilde{T},\tilde{B}}\left(\rho_{T}\right)_{\tilde{T}\tilde{T}^{\prime}}U_{C,\tilde{D}^{\prime};\tilde{T}^{\prime},\tilde{B}^{\prime}}^{\dagger}.\end{split} \tag{3}\] Figure 1: **Left**: Hayden-Preskill setup, corresponding to state (1). **Right**: Its decoder (\(\rho_{T}\) is an input state). We call this quantum channel the HP channel. Then a general theorem of QEC5 tells us that the decoupling condition is equivalent to the existence of a recovery map \(\mathcal{R}:DB\to T\) which satisfies Footnote 5: See e.g., [18, 19] for the theorem. \[\mathcal{R}\left[\mathcal{N}[\rho_{T}]\right]=\rho_{T}\quad\forall\rho_{T}\in H_{T}. \tag{4}\] This again implies the information of the diary is recoverable from Hawking radiation \(DB\). See the right panel of figure 1. Moreover, the concrete expression of the recovery map is known [9], and is called the Petz recovery map \[\mathcal{R}_{\sigma,\mathcal{N}}^{\mathrm{Petz}}[\tau]=\sigma^{\frac{1}{2}}\mathcal{N}^{\dagger}[(\mathcal{N}[\sigma])^{-\frac{1}{2}}\tau(\mathcal{N}[\sigma])^{-\frac{1}{2}}]\sigma^{\frac{1}{2}}, \tag{5}\] where \(\sigma\) is an arbitrary full-rank density matrix on the code subspace \(H_{\mathrm{code}}\). The \((\mathcal{N}[\sigma])^{-\frac{1}{2}}\) factor of the Petz recovery map is difficult to compute in general. One way of doing this is, as in [4], first making the replacement \((\mathcal{N}[\sigma])^{-\frac{1}{2}}\to(\mathcal{N}[\sigma])^{n}\), where \(n\) is a positive integer, computing it for all \(n\), and then taking the analytic continuation \(n\to-\frac{1}{2}\). The \((\mathcal{N}[\sigma])^{-\frac{1}{2}}\) part also prevents us from having an operational meaning of the map. However, in systems exhibiting quantum chaos, we expect that the recovery map gets simplified, because \(\mathcal{N}[\sigma]\) has a flat spectrum, and therefore the approximation \(\mathcal{R}\sim\mathcal{N}^{\dagger}\) appears to be possible6 Footnote 6: In appendix A, we give another equivalent argument supporting our expectation of this simplification in terms of the Kraus representation of the HP channel. If this is the case, then \(\rho\sim\mathcal{N}^{\dagger}\left[\mathcal{N}[\rho]\right]\) for an arbitrary density matrix \(\rho\) in the code subspace, and therefore the relative entropy \(S(\rho||\mathcal{N}^{\dagger}\left[\mathcal{N}[\rho]\right])\) between them vanishes.
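To make the structure of (5) concrete, here is a small numerical sketch (not from the paper) of the Petz map for a channel specified by Kraus operators, with the inverse square root taken on the support of \(\mathcal{N}[\sigma]\). The toy channel below is a random isometric embedding, for which the recovery is exact; it is only meant to illustrate the map, not the HP channel itself:

```python
import numpy as np

def inv_sqrt(M, tol=1e-12):
    """Moore-Penrose inverse square root of a PSD matrix, on its support."""
    w, V = np.linalg.eigh(M)
    w_is = np.array([1.0 / np.sqrt(x) if x > tol else 0.0 for x in w])
    return (V * w_is) @ V.conj().T

def petz(kraus, sigma, tau):
    """Petz map (5): s^{1/2} N^dag[(N[s])^{-1/2} tau (N[s])^{-1/2}] s^{1/2}."""
    channel = lambda r: sum(K @ r @ K.conj().T for K in kraus)
    adjoint = lambda X: sum(K.conj().T @ X @ K for K in kraus)
    w, V = np.linalg.eigh(sigma)
    s_half = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    m = inv_sqrt(channel(sigma))
    return s_half @ adjoint(m @ tau @ m) @ s_half

# Toy example: a random isometry C^2 -> C^8 as a single-Kraus channel.
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
Viso, _ = np.linalg.qr(G)                       # 8x2 isometry: V^dag V = I_2
rho = np.array([[0.7, 0.2], [0.2, 0.3]])        # an input state on H_code
sigma = np.eye(2) / 2                           # full-rank reference state
out = petz([Viso], sigma, Viso @ rho @ Viso.conj().T)
print(np.allclose(out, rho))                    # True: exact recovery here
```

Note that when \(\mathcal{N}[\sigma]\) has a flat spectrum and \(\sigma\) is maximally mixed, the factor `m` above is proportional to a projector onto the support, and `petz` visibly collapses to a multiple of the adjoint channel, which is exactly the simplification anticipated in the text.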
For the HP channel, the adjoint HP channel \(\mathcal{N}^{\dagger}\) is given by \[\begin{split}\mathcal{N}^{\dagger}_{D,B\to T}[\mathcal{O}_{DB}]&=\mathrm{tr}_{A,B}\left[\left|\mathrm{EPR}\right\rangle_{A,B}\langle\mathrm{EPR}|\left(U_{T,A\to C,D}^{\dagger}\,\mathcal{O}_{DB}\,U_{T,A\to C,D}\right)\right]\\ &=_{A,B}\langle\mathrm{EPR}|\left(U_{T,A\to C,D}^{\dagger}\otimes I_{B}\right)\left(\mathcal{O}_{DB}\otimes I_{C}\right)\left(U_{T,A\to C,D}\otimes I_{B}\right)\left|\mathrm{EPR}\right\rangle_{A,B}.\end{split} \tag{6}\] Here, the adjoint channel is defined by the relation7 Footnote 7: More generally, for a quantum channel \(\mathcal{N}\), its adjoint channel is defined by the similar relation \[\mathrm{tr}_{D,B}\left[\mathcal{N}_{T\to D,B}\left[\rho_{T}\right]\,\mathcal{O}_{DB}\right]=\mathrm{tr}_{T}\left[\rho_{T}\,\mathcal{N}^{\dagger}_{D,B\to T}[\mathcal{O}_{DB}]\right]. \tag{7}\] For later convenience, we introduce a correctly normalized recovery map \[\mathcal{R}^{\text{Lite}}_{D,B\to T}[\mathcal{O}_{DB}]\coloneqq\frac{1}{N}\cdot\frac{d_{B}d_{D}}{d_{T}}\mathcal{N}^{\dagger}_{D,B\to T}\left[\mathcal{O}_{DB}\right], \tag{9}\] and define it as the _Petz-lite8_. Here \(N\) is the normalization constant Footnote 8: The terminology “Petz-lite” is introduced in [4], and we also use this terminology in this paper. \[N=\left(\frac{d_{D}}{d_{T}}\right)^{2}+1, \tag{10}\] determined by the condition \(\overline{\text{tr}_{T}\left[\mathcal{R}^{\text{Lite}}_{D,B\to T}\left[\mathcal{N}_{T\to D,B}[\sigma_{T}]\right]\right]}=1\), where \(\sigma_{T}\) is some reference state in \(T\). In the Haar random case, the choice of the reference state \(\sigma_{T}\) is not important as long as it is normalized. With this \(N\), the Petz-lite can be expressed as \[\mathcal{R}^{\text{Lite}}_{D,B\to T}[\mathcal{O}_{DB}] =\frac{1}{\left(\frac{d_{D}}{d_{T}}\right)^{2}+1}\cdot\frac{d_{B}d_{D}}{d_{T}}\mathcal{N}^{\dagger}_{D,B\to T}\left[\mathcal{O}_{DB}\right] \tag{11}\] \[=\frac{1}{1+\left(\frac{d_{T}}{d_{D}}\right)^{2}}\cdot d_{C}\,\mathcal{N}^{\dagger}_{D,B\to T}\left[\mathcal{O}_{DB}\right]\] where in the second line we used the relation \(d_{B}d_{T}=d_{C}d_{D}\) due to the unitarity of the Haar random unitary. For the parameter region \(d_{T}/d_{D}\ll 1\), the normalization is just given by \(d_{C}\), which coincides with the expression obtained from another argument; we give this argument in appendix A. ### West-coast notation and replica-wormhole-like objects In the following, we are interested in the typical properties of the recovery map \(\mathcal{R}\) for the HP channel \(\mathcal{N}\). To investigate these properties, we will consider replicated quantities, such as \(\text{tr}(\mathcal{N}[\rho_{T}])^{n}\), involving a product of Haar random unitaries and its average. Since such averaging involves Wick type contractions between various pairs of Haar random unitaries in the product, it is convenient to introduce a graphical notation that makes manifest which pairs of unitaries are contracted. Therefore, here we introduce a notation similar to the one employed in [4] for modeling the black hole microstates and their statistical properties, and call this West-coast notation. To begin with, let us define the following black hole microstate on \(C\), involving a Haar random unitary \[\left|\psi_{i}^{T}\right\rangle_{C}:=\sqrt{d_{C}\,d_{D}}\sum_{C=1}^{d_{C}}\left|C\right\rangle U_{C,T;i}\quad.
\tag{12}\] Here \(\{|C\rangle\}\) is the set of basis states on the Hilbert space \(H_{C}\) and the index \(i\) collectively denotes the indices for both late radiation \(D\) and early radiation \(B\), \(i:(D,B)\) or more concretely \(|i\rangle=|D\rangle\otimes|B\rangle\), so the label \(i\) runs from \(1\) to \(d_{D}d_{B}\equiv k\). In the following, we use this type of state \(\left|\psi_{i}^{T}\right\rangle_{C}\) to write quantities of our interest, instead of random unitary matrices \(U_{C,D;T,B}\). Under this notation, we can write \[\left\langle\psi_{i}^{T}\middle|\psi_{j}^{T^{\prime}}\right\rangle=d_{C}\,d_{D}\sum_{C=1}^{d_{C}}U_{i;C,T}^{\dagger}\,U_{C,T^{\prime};j} \tag{13}\] and therefore the HP channel (3) is given by \[\mathcal{N}_{T\to D,B}[\rho_{T}]=\frac{1}{kd_{C}}\sum_{i,j=1}^{k}|i\rangle\langle j|\cdot\sum_{\tilde{T},\tilde{T}^{\prime}=1}^{d_{T}}\left\langle\psi_{j}^{\tilde{T}^{\prime}}\middle|\psi_{i}^{\tilde{T}}\right\rangle\,(\rho_{T})_{\tilde{T}\tilde{T}^{\prime}}\quad. \tag{14}\] In this notation, we call the subscript index \(i\) the Hawking radiation index, and the superscript \(T\) the code index. The west-coast model treats each of these microstates \(|\psi_{i}\rangle\) as a single-sided AdS black hole with an insertion of an "end of the world brane" (or EoW brane in short), labeled by the index \(i\), behind the horizon. This state has a Hartle Hawking type preparation, in terms of an Euclidean path integral with the EoW brane which starts from the Euclidean conformal boundary. In this model the overlap between two such states \(\langle\psi_{i}|\psi_{j}\rangle\) is computed by an Euclidean gravitational path integral on a region of Euclidean disc enclosed by the part of the asymptotic boundary (an interval) and the brane in the bulk. With this gravitational path integral picture in mind, here we explain the fact that there is a simple diagrammatic prescription to compute a product of such overlaps \(\overline{\prod_{m=1}^{n}\langle\psi_{i_{m}}^{a_{m}}|\psi_{j_{m}}^{b_{m}}\rangle}\)9 without directly applying the formulae for the Haar random averages, which becomes quite involved when the number of unitary matrices appearing increases. Footnote 9: In the west-coast paper, this quantity is just called the product of overlaps and denoted without the bar, i.e., \(\overline{\prod_{m=1}^{n}\langle\psi_{i_{m}}^{a_{m}}|\psi_{j_{m}}^{b_{m}}\rangle}_{\mid_{ours}}=\prod_{m=1}^{n}\langle\psi_{i_{m}}^{a_{m}}|\psi_{j_{m}}^{b_{m}}\rangle_{WC}\). We will use the convention with the bar to keep in mind we do average over random unitaries in the computation. Then the prescription is the following: 1. For each overlap in the product \(\langle\psi_{i_{m}}^{a_{m}}|\psi_{j_{m}}^{b_{m}}\rangle\) draw an interval with two end points, and associate the labels \((i_{m},a_{m})\) to one end and \((j_{m},b_{m})\) to the other. (In the west-coast model this interval with indices at the end points provides the boundary condition to the gravitational path integral for the product of the overlaps.) 2. The \(n\) intervals prepared in this way have \(2n\) endpoints in total. We pick up two of these end points and connect them by a line, which we call the end of the world brane. We repeat this until every endpoint is connected to another by a brane. There are many different ways to do this. One possibility is that the end point of the \(m\)-th interval is always connected to the other end point of the same interval.
Or the other possibility is that the end point of the \(m\)-th interval is always connected to an end point on the next, \((m+1)\)-th, interval. 3. Each diagram \(D\) constructed in this way contains \(n\) end of the world branes. We then associate each brane in the diagram with a Kronecker delta factor. If the brane is connecting two endpoints with the labels \((i_{l},a_{l})\) and \((j_{m},b_{m})\), then this factor is given by \(\delta_{i_{l}j_{m}}\delta_{a_{l}b_{m}}\). We compute this for all branes in the diagram and then multiply these factors. Let us denote this factor for the diagram by \(I_{D}\). 4. Since each diagram can be regarded as a disjoint union of two-dimensional surfaces, we can associate an Euler number \(\chi_{D}\) to the diagram. We then pick up the factor \((d_{C})^{\chi_{D}}\) which corresponds to the gravitational path integral part in the west-coast model. We then sum the total factor \(I_{D}(d_{C})^{\chi_{D}}\) over all possible diagrams \(D\). 5. The average of the overlaps is equal to the sum of these factors over all possible diagrams; \[\overline{\prod_{m=1}^{n}\langle\psi_{i_{m}}^{a_{m}}|\psi_{j_{m}}^{b_{m}}\rangle}=\sum_{D\in\text{All diagrams}}I_{D}\;(d_{C})^{\chi_{D}}. \tag{15}\] Let us provide a few examples. First, consider the single overlap \(\overline{\left\langle\psi_{i}^{T}\middle|\psi_{j}^{T^{\prime}}\right\rangle}\), which we can easily evaluate: \[\begin{split}\overline{\left\langle\psi_{i}^{T}\middle|\psi_{j}^{T^{\prime}}\right\rangle}&=d_{C}\,d_{D}\,\sum_{C=1}^{d_{C}}\overline{U_{i;C,T}^{\dagger}\,U_{C,T^{\prime};j}}\\ &=d_{C}\;\underbrace{\delta_{D_{i}D_{j}}\,\delta_{B_{i}B_{j}}}_{\delta_{ij}}\;\delta_{TT^{\prime}}\\ &=d_{C}\,\delta_{ij}\,\delta_{TT^{\prime}},\end{split} \tag{16}\] where in the second line we used the general result for two Haar random unitaries \[\overline{U_{a,b}U_{c,d}^{\dagger}}=\frac{1}{d}\,\delta_{ad}\delta_{bc}\qquad(a,b,c,d=1,\cdots,d). \tag{17}\] This result is easily reproduced from the west-coast prescription. Next, let us evaluate the Haar average of the combination of the overlaps for later convenience, \[\left\langle\psi_{i}^{T_{1}}\middle|\psi_{j}^{T_{1}^{\prime}}\right\rangle\cdot\left\langle\psi_{j}^{T_{2}^{\prime}}\middle|\psi_{i}^{T_{2}}\right\rangle. \tag{18}\] Clearly, by setting \(T_{1}=T_{2}=T\) and \(T_{1}^{\prime}=T_{2}^{\prime}=T^{\prime}\), the above combination reduces to the variance of the overlap \(\left|\left<\psi_{i}^{T}\middle|\psi_{j}^{T^{\prime}}\right>\right|^{2}\). We can evaluate the above quantity by the diagrammatic prescription mentioned above (see figure 2), \[\overline{\left<\psi_{i}^{T_{1}}\middle|\psi_{j}^{T_{1}^{\prime}}\right>\cdot\left<\psi_{j}^{T_{2}^{\prime}}\middle|\psi_{i}^{T_{2}}\right>}\approx\left(d_{C}\right)^{2}\delta_{ij}\delta_{T_{1}T_{1}^{\prime}}\cdot\delta_{ji}\delta_{T_{2}^{\prime}T_{2}}+d_{C}\,\delta_{ii}\delta_{T_{1}T_{2}}\cdot\delta_{jj}\delta_{T_{2}^{\prime}T_{1}^{\prime}}.
\tag{19}\] This coincides with the result obtained by using the Weingarten formula, \[\overline{U_{a_{1},b_{1}}U_{c_{1},d_{1}}^{\dagger}\cdot U_{a_{2},b_{2}}U_{c_{2},d_{2}}^{\dagger}} =\frac{1}{d^{2}-1}\left(\delta_{a_{1}d_{1}}\delta_{b_{1}c_{1}}\cdot\delta_{a_{2}d_{2}}\delta_{b_{2}c_{2}}+\delta_{a_{1}d_{2}}\delta_{b_{1}c_{2}}\cdot\delta_{a_{2}d_{1}}\delta_{b_{2}c_{1}}\right)\] \[\qquad-\frac{1}{d\left(d^{2}-1\right)}\left(\delta_{a_{1}d_{1}}\delta_{a_{2}d_{2}}\delta_{b_{1}c_{2}}\delta_{b_{2}c_{1}}+\delta_{a_{1}d_{2}}\delta_{a_{2}d_{1}}\delta_{b_{1}c_{1}}\delta_{b_{2}c_{2}}\right)\qquad(a_{i},b_{i},c_{i},d_{i}=1,\cdots,d), \tag{20}\] whose subleading terms are dropped in the approximation (19). ## 3 Relative entropy and the recovery map The relative entropy \(S(\rho||\sigma)=\operatorname{tr}\left[\rho\log\rho\right]-\operatorname{tr}\left[\rho\log\sigma\right]\) is monotone under the action of a quantum channel, \[S(\rho||\sigma)\geq S(\mathcal{N}[\rho]||\mathcal{N}[\sigma]) \tag{3.1}\] for any CPTP map \(\mathcal{N}\). By repeating this we have \[S(\rho||\sigma)\geq S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])\geq S(\mathcal{R}\left[\mathcal{N}[\rho]\right]||\mathcal{R}\left[\mathcal{N}[\sigma]\right]), \tag{3.2}\] therefore, if the recovery map exists, \(\mathcal{R}\circ\mathcal{N}=1_{\text{code}}\), then \(S(\rho||\sigma)=S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])\) for any density matrices on the code subspace. This condition is known as sufficiency, and it was shown that if \(\mathcal{N}\) satisfies this condition, the recovery map is given by (2.5). Here we would like to check that the HP channel (2.3) does satisfy sufficiency, by directly computing the relative entropy \(S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])\) in the presence of the quantum channel \(\mathcal{N}\)10. Footnote 10: See [21, 22] for related discussions on original Petz map cases. Since our interest is in the typical result under the Haar random average, we consider the Haar-averaged relative entropy, \(\overline{S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])}\). To evaluate the relative entropy, we use the replica trick \[\overline{S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])}=\lim_{n\to 1}\frac{1}{n-1}\left(\overline{\log\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}-\overline{\log\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]}\right). \tag{3.3}\] Generally, since it is difficult to evaluate the Haar average of a logarithmic function, instead of this expression we consider \[\overline{S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])}\approx\lim_{n\to 1}\frac{1}{n-1}\left(\log\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}-\log\overline{\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]}\right). \tag{3.4}\] It is known that in the large Hilbert space dimension limit, this quantity is almost equal to the original one. For a moment let us focus on the first term of (3.4).
Using the west-coast notation (2.14), the trace \(\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]\) can be written in terms of overlaps, \[\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]=\frac{1}{\left(k\,d_{C}\right)^{n}}\sum_{\boldsymbol{i}=1}^{k}\sum_{\boldsymbol{T},\boldsymbol{\tilde{T}}=1}^{d_{T}}\;\prod_{m=0}^{n-1}\left(\langle\psi_{i_{m}}^{\tilde{T}_{m}}|\psi_{i_{m+1}}^{T_{m}}\rangle\;\rho_{T_{m}\,\tilde{T}_{m}}\right) \tag{3.5}\] where the bold fonts \(\boldsymbol{i},\boldsymbol{T}\) in the summation symbol mean the sum with respect to the corresponding set of indices; \(\sum_{\boldsymbol{i}=1}^{k}=\sum_{i_{1}=1}^{k}\cdots\sum_{i_{n}=1}^{k}\). In computing the Rényi entropy (3.5) we need to evaluate the product of overlaps \(\prod_{m=0}^{n-1}\langle\psi_{i_{m}}^{\tilde{T}_{m}}|\psi_{i_{m+1}}^{T_{m}}\rangle\) with \(|\psi_{i_{n}}^{T}\rangle\equiv|\psi_{i_{0}}^{T}\rangle\) and its Haar random average. We do this using the diagrammatic technique introduced in the previous section. Among all possible diagrams, we are particularly interested in the ones dominating the sum, both in the early time regime (\(d_{D}\ll d_{T}\)) and at late times (\(d_{D}\gg d_{T}\)). We now argue, by explicit calculations, that the fully disconnected diagram (the left panel of Figure 3), in which the starting point and end point of every EoW brane lie on the same interval, dominates in the early time regime, while the fully connected diagram (the right panel of Figure 3), in which the indices form a single loop, dominates the late time regime. The calculation here is very similar to the ones in [4, 23]. First, let us evaluate the contribution of the fully disconnected diagram. The contribution of this diagram is evaluated as \[\left(\overline{\prod_{m=0}^{n-1}\langle\psi_{i_{m}}^{\tilde{T}_{m}}|\psi_{i_{m+1}}^{T_{m}}\rangle}\right)_{\text{discon}}=d_{C}^{n}\,\prod_{m=0}^{n-1}\left(\delta_{i_{m}i_{m+1}}\delta_{\tilde{T}_{m}T_{m}}\right). \tag{3.6}\] Its contribution to the Rényi entropy is then \[\left.\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}\right|_{\text{fully discon}}=\frac{1}{\left(k\,d_{C}\right)^{n}}\cdot k\,\left(d_{C}\right)^{n}\,\sum_{\boldsymbol{T}=1}^{d_{T}}\rho_{T_{1}T_{1}}\,\rho_{T_{2}T_{2}}\,\cdots\rho_{T_{n}T_{n}}=\frac{1}{\left(k\right)^{n-1}}\left(\operatorname{tr}\left[\rho\right]\right)^{n}. \tag{3.7}\] Similarly, the value of the fully connected diagram is given by \[\left(\overline{\prod_{m=0}^{n-1}\langle\psi_{i_{m}}^{\tilde{T}_{m}}|\psi_{i_{m+1}}^{T_{m}}\rangle}\right)_{\text{fully conn}}=d_{C}\prod_{m=0}^{n-1}\left(\delta_{\tilde{T}_{m+1}T_{m}}\right)\,\Rightarrow\left.\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}\right|_{\text{fully conn}}=\frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho^{n}\right]. \tag{3.8}\] Combining these two results, \(\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}\) is given by \[\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}=\frac{1}{\left(k\right)^{n-1}}\left(\operatorname{tr}\left[\rho\right]\right)^{n}+\frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho^{n}\right]+\cdots, \tag{3.9}\] where \(\cdots\) means contributions coming from partially connected saddles.
Since there are upper and lower bounds on \(\operatorname{tr}\left[\rho^{n}\right]\), that is, \(1/(d_{T})^{n-1}\leq\operatorname{tr}\left[\rho^{n}\right]\leq 1\), we can see that \[\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]} =\frac{1}{\left(k\right)^{n-1}}\left(\operatorname{tr}\left[\rho\right]\right)^{n}+\frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho^{n}\right]+\cdots, \tag{3.10}\] \[\approx\begin{cases}\frac{1}{\left(k\right)^{n-1}}&k\ll d_{C}\Leftrightarrow d_{T}\ll\left(\frac{d_{T}}{d_{D}}\right)^{2}\\ \frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho^{n}\right]&d_{C}\,d_{T}\ll k\Leftrightarrow\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1\end{cases}.\] Figure 3: **Left**: The dominant diagram for (3.5) when \(d_{D}\ll d_{T}\) (disconnected diagram). **Right**: The connected diagram dominating the sum at \(d_{T}\ll d_{D}\). Thus, when the necessary condition for decoupling, \(d_{T}/d_{D}\ll 1\), holds, the dominant contribution is given by the fully connected saddle. We have to carefully evaluate the precise range of \(k\) for which the value of the connected saddle exceeds that of the disconnected saddle. This threshold depends on the density matrix \(\rho\) on the code subspace, and is maximized when \(\rho\) is the maximally mixed state \(\rho=I_{T}/d_{T}\). Therefore, once \(k>d_{C}d_{T}\), the connected saddle becomes the dominant one for all density matrices in \(H_{\rm code}\). Next, let us evaluate the second term of (3.4). This computation is completely parallel to the above computation. In terms of the overlaps, it is given by \[\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]=\frac{1}{\left(k\,d_{C}\right)^{n}}\sum_{\boldsymbol{i}=1}^{k}\sum_{\boldsymbol{T},\tilde{\boldsymbol{T}}=1}^{d_{T}}\left(\prod_{m=0}^{n-1}\langle\psi_{i_{m}}^{\tilde{T}_{m}}|\psi_{i_{m+1}}^{T_{m}}\rangle\right)\;\rho_{T_{0}\,\tilde{T}_{0}}\left(\prod_{m=1}^{n-1}\sigma_{T_{m}\,\tilde{T}_{m}}\right). \tag{3.11}\] The contributions of the fully disconnected diagram and of the connected diagram to the second term of (3.4) can be evaluated, again by substituting the results (3.6) and (3.8): \[\overline{\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]}\big|_{\rm discon}=\frac{1}{\left(k\right)^{n-1}}\,\operatorname{tr}\left[\rho\right]\,\left(\operatorname{tr}\left[\sigma\right]\right)^{n-1},\quad\overline{\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]}\big|_{\rm conn}=\frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho\,\sigma^{n-1}\right]. \tag{3.12}\] Thus, using these results, we obtain \[\overline{\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]} =\frac{1}{\left(k\right)^{n-1}}\operatorname{tr}\left[\rho\right]\left(\operatorname{tr}\left[\sigma\right]\right)^{n-1}+\frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho\sigma^{n-1}\right]+\cdots, \tag{3.13}\] \[\approx\begin{cases}\frac{1}{\left(k\right)^{n-1}}&k\ll d_{C}\Leftrightarrow d_{T}\ll\left(\frac{d_{T}}{d_{D}}\right)^{2}\\ \frac{1}{\left(d_{C}\right)^{n-1}}\operatorname{tr}\left[\rho\sigma^{n-1}\right]&k\gg d_{C}d_{T}\Leftrightarrow\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1\end{cases},\] where \(\cdots\) again means contributions coming from partially connected saddles; in the second approximate equality we assumed that \(1/(d_{T})^{n-1}\lesssim\operatorname{tr}\left[\rho\sigma^{n-1}\right]\leq 1\) in order to obtain the conditions11.
Footnote 11: If the support of the density matrix \(\rho\) is not contained in that of \(\sigma\), then \(\operatorname{tr}\left[\rho\sigma^{n-1}\right]=0\), implying the divergent relative entropy \(S(\rho||\sigma)=\infty\). In that case, we would need another treatment, so we do not consider such a case in this paper. Now that we have evaluated the two terms that appear in the relative entropy, we can obtain the resulting relative entropy \[\overline{S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])} \approx\lim_{n\to 1}\frac{1}{n-1}\left(\log\overline{\operatorname{tr}\left[\mathcal{N}[\rho]^{n}\right]}-\log\overline{\operatorname{tr}\left[\mathcal{N}[\rho]\mathcal{N}[\sigma]^{n-1}\right]}\right), \tag{3.14}\] \[\approx\begin{cases}0&k\ll d_{C}\Leftrightarrow d_{T}\ll\left(\frac{d_{T}}{d_{D}}\right)^{2}\\ \lim_{n\to 1}\frac{1}{n-1}\left(\log\operatorname{tr}\left[\rho^{n}\right]-\log\operatorname{tr}\left[\rho\sigma^{n-1}\right]\right)&k\gg d_{C}d_{T}\Leftrightarrow\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1\end{cases}\] \[=\begin{cases}0&k\ll d_{C}\Leftrightarrow d_{T}\ll\left(\frac{d_{T}}{d_{D}}\right)^{2}\\ S(\rho||\sigma)&k\gg d_{C}d_{T}\Leftrightarrow\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1.\end{cases}\] Thus we can conclude that, when the condition \(d_{T}/d_{D}\ll 1\) is satisfied, the relative entropy obeys the relation \[\overline{S(\mathcal{N}[\rho]||\mathcal{N}[\sigma])}\approx S(\rho||\sigma). \tag{3.15}\] This result implies that the condition of sufficiency holds for the Hayden-Preskill channel when \(\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1\). ### Check the recovery map We argued that in chaotic systems the Petz recovery map (2.5) gets simplified and reduces to the so-called Petz-lite map \(\mathcal{R}^{\text{Lite}}\) defined in (2.11). In this section, we show this by checking \[\overline{S(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]||\rho_{T})}=0,\quad\text{when}\,\,\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1, \tag{3.16}\] for any density matrix \(\rho_{T}\) on the code subspace. This means that at sufficiently late times one can recover \(\rho_{T}\) from the state of the Hawking radiation \(\mathcal{N}[\rho_{T}]\) by applying the recovery map \(\mathcal{R}^{\text{Lite}}\). One can show this by computing the relative entropy by a replica trick similar to (3.3), \[\overline{S(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]||\rho_{T})}=\lim_{n\to 1}\frac{1}{n-1}\left(\log\overline{\operatorname{tr}(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho]\right])^{n}}-\log\overline{\operatorname{tr}(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho]\right]\rho^{n-1})}\right). \tag{3.17}\] In terms of Haar random unitaries, \(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]\) is given by \[\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right] =\frac{1}{N}\cdot\frac{d_{B}d_{D}}{d_{T}}\cdot\mathcal{N}_{D,B\to T}^{\dagger}\left[\mathcal{N}_{T\to D,B}\left[\rho_{T}\right]\right]. \tag{3.18}\] Therefore the first term in (3.17) is given by \[\text{tr}(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho]\right])^{n}=\frac{1}{(Nkd_{C}^{2}d_{T})^{n}}\sum_{\boldsymbol{T},\boldsymbol{T}^{\prime}=1}^{d_{T}}\sum_{\tilde{\boldsymbol{T}},\tilde{\boldsymbol{T}}^{\prime}=1}^{d_{T}}\sum_{\boldsymbol{i},\boldsymbol{j}=1}^{k}\ \prod_{m=1}^{n}\ \left(\left\langle\psi_{i_{m}}^{T_{m}}\middle|\psi_{j_{m}}^{T_{m+1}}\right\rangle\left\langle\psi_{j_{m}}^{\tilde{T}_{m}}\middle|\psi_{i_{m}}^{\tilde{T}_{m}^{\prime}}\right\rangle\rho_{\tilde{T}_{m}\tilde{T}_{m}^{\prime}}\right).
\tag{3.19}\] We compute this by following the procedure explained in section 2.1, namely by preparing an interval for each overlap, connecting the endpoints of the intervals by EoW branes, and then evaluating each diagram generated in this way. As shown in the figure, the \(m\)-th replica consists of two intervals with Hawking radiation indices \(i_{m},j_{m}\). Therefore it is clear that the dominant diagram when \(k=d_{D}d_{B}\) is sufficiently large is the one connecting the endpoint with the index \(i_{m}\) in the first interval to the endpoint with the same index in the second interval of the same replica (the right panel of Figure 4). Similarly, we connect the endpoints with \(j_{m}\) within this replica. This is because, if there is an EoW brane connecting endpoints with distinct Hawking indices (say \(i,j\)), then the value of the diagram is significantly reduced in the large \(k\) limit because of the Kronecker delta factor \(\delta_{ij}\) coming from the brane. This means that in the dominant saddle two different replicas are not connected by any EoW brane, because the branes start and end on the same replica. This means that the Renyi entropy is a self-averaging quantity, \[\overline{\text{tr}(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho]\right])^{n}}=\text{tr}\left(\overline{\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho]\right]}\right)^{n}. \tag{3.20}\] A similar statement holds for the second term of (3.17); therefore we conclude that the relative entropy of our interest is also self-averaging, \[\overline{S(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]\ ||\rho_{T})}=S(\overline{\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}\ ||\rho_{T}), \tag{3.21}\] when \(k\) is sufficiently large. This implies that in the relative entropy one can replace \(\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]\) with its average \(\overline{\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}\). The average of the density matrix is given by \[\overline{\mathcal{R}^{\text{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}=\frac{1}{1+\left(\frac{d_{T}}{d_{D}}\right)^{2}}\left(\rho+\left(\frac{d_{T}}{d_{D}}\right)^{2}\cdot\frac{I_{T}}{d_{T}}\right). \tag{3.22}\] Figure 4: Diagrams for the product of overlaps appearing in the calculation of (3.19). **Left**: disconnected diagram. **Right**: The connected diagram. A more precise way to argue this is the following. Let us compute \[\overline{\operatorname{tr}\left[\left(\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]-\overline{\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}\right)^{2}\right]}=\overline{\operatorname{tr}\left[\left(\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]\right)^{2}\right]}-\operatorname{tr}\left[\left(\overline{\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}\right)^{2}\right]. \tag{3.23}\] Then, the right hand side of the above equation is given by \[\frac{(k\,d_{C})^{4}}{(Nk(d_{C})^{2}d_{T})^{2}}\Bigg{\{}\frac{1}{k\,d_{C}}\Bigg{[}\frac{1}{k^{2}}\left(2+d_{T}\operatorname{tr}\left[\rho^{2}\right]+(d_{T})^{2}\right)+\cdots\Bigg{]}\Bigg{\}}, \tag{3.24}\] which becomes small when \(k\gg d_{C}d_{T}\).
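Since (3.22) gives the Haar-averaged output of the composed channel in closed form, the statement that the relative entropy vanishes at large \(k\) can be made concrete with a few lines of numerics: evaluate \(S(\overline{\mathcal{R}^{\text{Lite}}[\mathcal{N}[\rho_{T}]]}\,||\,\rho_{T})\) directly and watch it go to zero as \((d_{T}/d_{D})^{2}\to 0\). A minimal sketch (our own illustration, assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)

def rel_entropy(rho, sigma):
    # S(rho||sigma) = tr[rho(log rho - log sigma)], natural logarithm
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def averaged_petz_lite(rho, dT, dD):
    # Haar-averaged output (3.22): a convex mixture of rho and the maximally mixed state
    eps = (dT / dD) ** 2
    return (rho + eps * np.eye(dT) / dT) / (1 + eps)

# a random full-rank density matrix on the code subspace
dT = 4
A = rng.standard_normal((dT, dT)) + 1j * rng.standard_normal((dT, dT))
rho = A @ A.conj().T
rho /= np.real(np.trace(rho))

for dD in (4, 16, 64, 256):
    print(dD, rel_entropy(averaged_petz_lite(rho, dT, dD), rho))  # -> 0 as dD grows, cf. (3.25)
```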
By plugging this expression, we have \[\overline{S(\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]||\rho_{T})} \approx S(\overline{\mathcal{R}^{\operatorname{Lite}}\left[\mathcal{N}[\rho_{T}]\right]}\,||\rho_{T}) \tag{3.25}\] \[=\begin{cases}S\left(\rho\,\middle\|\,\frac{I_{T}}{d_{T}}\right)&k\ll d_{C}\Leftrightarrow d_{T}\ll\left(\frac{d_{T}}{d_{D}}\right)^{2}\\ 0&k\gg d_{C}d_{T}\Leftrightarrow\left(\frac{d_{T}}{d_{D}}\right)^{2}\ll 1.\end{cases}\] Thus, for early times \(k\ll d_{C}\), the relative entropy is non-vanishing unless \(\rho=I_{T}/d_{T}\), but for late times \(d_{C}\,d_{T}\ll k\), the relative entropy vanishes. This result implies that when \(k\gg d_{C}d_{T}\), \(\mathcal{R}^{\operatorname{Lite}}\) indeed works as a recovery map. ### Relation to the Yoshida-Kitaev protocol So far we have shown that when \(k\gg d_{C}d_{T}\), the Petz-lite \(\mathcal{R}^{\operatorname{Lite}}\sim\mathcal{N}^{\dagger}\) indeed works as a recovery map. However, we have not discussed the physical interpretation of the Petz-lite. So, in this subsection, we explain the interpretation by showing the equivalence between the Petz-lite and the well-known Yoshida-Kitaev (YK) protocol. The relation between the Yoshida-Kitaev protocol and the Petz map was suggested by Yoshida [12; 13]. In [17], Yoshida and Kitaev proposed an interesting recovery protocol for the object thrown into the black hole \(T\) from the late and early radiation \(DB\). A brief summary of their protocol is as follows: 1. In addition to the original Hayden-Preskill setup, introduce a copy of the diary and the reference, denoted by \(R^{\prime}T^{\prime}\). We choose the state on \(R^{\prime}T^{\prime}\) to be an EPR state. Bob can manipulate the Hawking radiation \(DB\) and \(R^{\prime}T^{\prime}\). Before applying the decoding protocol the state of the total system is \[\left|\Psi_{HP}\right\rangle\otimes\left|\text{EPR}\right\rangle_{R^{\prime}T^{\prime}}\] (3.26) where \(|\Psi_{HP}\rangle\) is the state on \(RCDB\) given by (2.1). 2. We then use the early Hawking radiation \(B\) and the copy of the diary \(T^{\prime}\) to simulate the black hole dynamics by applying \(U^{*}\), the complex conjugate of the \(U\) used for the time evolution of the original system. After the simulation, the total system consists of \(RCDD^{\prime}C^{\prime}R^{\prime}\), and the state is \[|\Psi_{YK}\rangle_{RCDD^{\prime}C^{\prime}R^{\prime}}=\left(I_{RC}\otimes I_{D}\otimes U^{*}_{B,T^{\prime}\to C^{\prime},D^{\prime}}\otimes I_{R^{\prime}}\right)|\Psi_{HP}\rangle\otimes|\text{EPR}\rangle_{R^{\prime}T^{\prime}}\] (3.27) 3. Post-select onto the EPR pair on \(DD^{\prime}\). If the projection succeeds, the state on \(RR^{\prime}\) is the EPR state with high fidelity, meaning the success of the information recovery. The quantum circuit for the protocol is shown in the left panel of figure 5.
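The protocol is simple enough to simulate directly for small dimensions. The following sketch (our own illustration, assuming numpy; the index conventions chosen for \(U\) are a choice) builds \(|\Psi_{HP}\rangle\) with a Haar-random \(U\), applies \(U^{*}\), post-selects \(DD^{\prime}\) onto EPR, and reports the success probability together with the EPR fidelity on \(RR^{\prime}\):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases for exact Haar measure

def yoshida_kitaev(dT, dA, dC, dD):
    # one run of the YK protocol with a Haar-random black hole unitary U: (T,A) -> (C,D)
    assert dT * dA == dC * dD
    U = haar_unitary(dT * dA).reshape(dC, dD, dT, dA)          # U[c,d,t,a]
    # |Psi_HP>: EPR(R,T) x EPR(A,B), then U on (T,A); amplitudes Psi[r,c,d,b]
    Psi = U.transpose(2, 0, 1, 3) / np.sqrt(dT * dA)
    # Bob applies U* on (T',B) with T' EPR-paired to R'; contract b, set t' = r':
    Xi = np.einsum('xyzb,rcdb->rcdxyz', np.conj(U), Psi) / np.sqrt(dT)  # Xi[r,c,d,c',d',r']
    # post-select (D,D') onto EPR
    chi = np.einsum('rcdxdz->rcxz', Xi) / np.sqrt(dD)                   # chi[r,c,c',r']
    p = float(np.real(np.einsum('rcxz,rcxz->', chi, np.conj(chi))))     # success probability
    # fidelity of the post-selected RR' state with EPR (trace out C, C')
    F = float(np.real(np.einsum('rcxr,scxs->', chi, np.conj(chi)))) / (dT * p)
    return p, F

for dD in (2, 4, 8, 16):
    p, F = yoshida_kitaev(dT=2, dA=32, dC=64 // dD, dD=dD)
    print(dD, round(p, 4), round(F, 4))
```

The success probability comes out close to \(1/(d_{T})^{2}+1/(d_{D})^{2}\), anticipating the normalization factor \(\Delta\) in (3.29) below, and the fidelity approaches \(1\) once \(d_{D}\gg d_{T}\).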
Combining these steps, the quantum channel \(\mathcal{R}^{\text{YK}}_{D,B\to R^{\prime}}\) for the Yoshida-Kitaev (YK) recovery map is given by \[\mathcal{R}^{\text{YK}}_{D,B\to R^{\prime}}\left[\mathcal{O}_{DB}\right]=\frac{1}{\Delta}\operatorname{tr}_{C^{\prime}}\Big{[}\left.{}_{D,D^{\prime}}\langle\text{EPR}|\,U^{*}_{B,T^{\prime}\to C^{\prime},D^{\prime}}\left(\mathcal{O}_{DB}\otimes|\text{EPR}\rangle_{T^{\prime},R^{\prime}}\langle\text{EPR}|\right)U^{T}_{B,T^{\prime}\to C^{\prime},D^{\prime}}\left|\text{EPR}\right\rangle_{D,D^{\prime}}\Big{]}, \tag{3.28}\] where \(\Delta\) is a normalization factor given by \[\Delta=\overline{\big{|}_{D,D^{\prime}}\left\langle\text{EPR}|\Psi_{YK}\right\rangle\big{|}^{2}}\approx\frac{1}{(d_{T})^{2}}+\frac{1}{(d_{D})^{2}}. \tag{3.29}\] For the above YK recovery map, we show the equivalence between the YK recovery map \(\mathcal{R}^{\text{YK}}_{D,B\to R^{\prime}}\) and the Petz-lite (2.11), \(\mathcal{R}^{\text{Lite}}_{D,B\to T}\), up to the isomorphism \(V_{T\to R^{\prime}}\) between systems \(T\) and \(R^{\prime}\): \[\mathcal{R}^{\text{YK}}_{D,B\to R^{\prime}}\left[\mathcal{O}_{DB}\right]=V_{T\to R^{\prime}}\,\mathcal{R}^{\text{Lite}}_{D,B\to T}\left[\mathcal{O}_{DB}\right]\,V^{\dagger}_{T\to R^{\prime}}, \tag{3.30}\] where \(V_{T\to R^{\prime}}\) is explicitly given by \[V_{T\to R^{\prime}}\coloneqq d_{T}\,{}_{T,T^{\prime}}\langle\text{EPR}|\text{EPR}\rangle_{T^{\prime},R^{\prime}}=\sum_{\widehat{T}=1}^{d_{T}}\left|\widehat{T}\right\rangle_{R^{\prime}\,T}\!\!\left\langle\widehat{T}\right|. \tag{3.31}\] Figure 5: **Left**: Yoshida-Kitaev decoding protocol. **Right**: the operator transpose providing the key equivalence (3.30). The argument for the equivalence is summarized in the right panel of figure 5. We start with the YK recovery map (3.28). First, we rewrite the trace over subsystem \(C^{\prime}\) in the YK recovery map as \[\mathrm{tr}_{C^{\prime}}\left[\mathcal{O}\right]=d_{C\;C,C^{\prime}}\langle\mathrm{EPR}|\left(I_{C}\otimes\mathcal{O}\right)|\mathrm{EPR}\rangle_{C,C^{\prime}}\,, \tag{3.32}\] and introduce two EPR states \(|\mathrm{EPR}\rangle_{D,D^{\prime}}\) and \(|\mathrm{EPR}\rangle_{C,C^{\prime}}\).
Next, by using (3.32) and the relation (see appendix B for the derivation) \[U_{C^{\prime},D^{\prime}\to B,T^{\prime}}^{T}\left|\mathrm{EPR}\right\rangle_{C,C^{\prime}}\otimes\left|\mathrm{EPR}\right\rangle_{D,D^{\prime}}=U_{A,T\to C,D}\left|\mathrm{EPR}\right\rangle_{A,B}\otimes\left|\mathrm{EPR}\right\rangle_{T,T^{\prime}}, \tag{3.33}\] the YK recovery map (3.28) can be rewritten as \[\begin{split}\mathcal{R}_{D,B\to R^{\prime}}^{\text{YK}}\left[\mathcal{O}_{DB}\right]&=\frac{d_{C}}{\Delta}\left(\,{}_{A,B}\langle\mathrm{EPR}|\otimes\,{}_{T,T^{\prime}}\langle\mathrm{EPR}|\right)\left[U_{A,T\to C,D}^{\dagger}\left(\mathcal{O}_{DB}\otimes|\mathrm{EPR}\rangle_{T^{\prime},R^{\prime}}\langle\mathrm{EPR}|\right)U_{A,T\to C,D}\right]\left(\left|\mathrm{EPR}\right\rangle_{A,B}\otimes\left|\mathrm{EPR}\right\rangle_{T,T^{\prime}}\right)\\ &=\frac{d_{C}}{\Delta}\left(\,{}_{T,T^{\prime}}\langle\mathrm{EPR}|\mathrm{EPR}\rangle_{T^{\prime},R^{\prime}}\right)\times\,{}_{A,B}\langle\mathrm{EPR}|\left[U_{T,A\to C,D}^{\dagger}\,\mathcal{O}_{DB}\,U_{T,A\to C,D}\right]|\mathrm{EPR}\rangle_{A,B}\times\left(\,{}_{T^{\prime},R^{\prime}}\langle\mathrm{EPR}|\mathrm{EPR}\rangle_{T,T^{\prime}}\right)\\ &=\frac{d_{C}}{(d_{T})^{2}\,\Delta}\,V_{T\to R^{\prime}}\,\mathcal{N}_{D,B\to T}^{\dagger}\left[\mathcal{O}_{D,B}\right]\,V_{T\to R^{\prime}}^{\dagger},\end{split} \tag{3.34}\] where, in the final line, we used the definition of the isomorphism (3.31) and the adjoint HP channel (2.6). Additionally, the above overall constant \(\frac{d_{C}}{(d_{T})^{2}\,\Delta}\) coincides with that of the Petz-lite (2.11), since \[\frac{d_{C}}{(d_{T})^{2}\,\Delta}=\frac{d_{C}}{1+\left(\frac{d_{T}}{d_{D}}\right)^{2}}, \tag{3.35}\] where we used the definition of \(\Delta\), (3.29). Therefore the above expression implies the desired relation (3.30). ## 4 Recovery map for the Hayden-Preskill channel in SYK So far we have given evidence that the Petz-lite works as a recovery map under Haar random unitary dynamics, which is highly chaotic. In this section, we argue that this continues to hold for a more realistic but tractable model of chaotic dynamics: the Sachdev-Ye-Kitaev (SYK) model [24; 25; 26]. In this paper, we briefly explain the relevant calculations, leaving the details to the upcoming paper [16]. ### 4.1 Setup of SYK Hayden-Preskill protocol In this section, we explain the setup to study the Hayden-Preskill-like protocol (what we call the SYK HP channel) in the SYK model. This was first introduced in [14; 27]. The SYK model is a theory of \(N\) Majorana fermions \(\psi_{i}\), and its Hamiltonian is given by \[H=(i)^{q/2}\sum_{1\leq i_{1}<i_{2}<\cdots<i_{q}\leq N}j_{i_{1}i_{2}\cdots i_{q}}\psi_{i_{1}}\psi_{i_{2}}\cdots\psi_{i_{q}}, \tag{4.1}\] where \(q\in 2\mathbb{N}\,(q>2)\), and \(j_{i_{1}i_{2}\cdots i_{q}}\) is a random coupling drawn from a Gaussian distribution with zero mean and variance \(\left\langle j_{i_{1}i_{2}\cdots i_{q}}^{2}\right\rangle=J^{2}(q-1)!/N^{q-1}\). Following [14], we consider two copies of the Hilbert space of the SYK model, say the left SYK system \(L\) and the right one \(R\). Hereafter we denote the Majorana fermions on the left system by \(\psi_{i,L}\), and by \(\psi_{i,R}\) for the right. For notational simplicity, we use the convention \[\{\psi_{i},\psi_{j}\}=2\delta_{i,j} \tag{4.2}\] for the anticommutation relation of fermions on the same side.
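For small \(N\), the model (4.1) with the convention (4.2) can be built explicitly, for example via a Jordan-Wigner representation of the Majorana fermions on \(N/2\) qubits. A minimal sketch (our own illustration, assuming numpy):

```python
import math
import numpy as np
from functools import reduce
from itertools import combinations

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(N):
    # Jordan-Wigner Majoranas on N/2 qubits, normalized so that {psi_i, psi_j} = 2 delta_ij, eq. (4.2)
    n = N // 2
    ops = []
    for k in range(n):
        for P in (X, Y):
            factors = [Z] * k + [P] + [I2] * (n - k - 1)
            ops.append(reduce(np.kron, factors))
    return ops

def syk_hamiltonian(N, q, J, rng):
    # H = i^{q/2} sum_{i1<...<iq} j psi_{i1}...psi_{iq}, with <j^2> = J^2 (q-1)!/N^{q-1}, eq. (4.1)
    psi = majoranas(N)
    dim = 2 ** (N // 2)
    H = np.zeros((dim, dim), dtype=complex)
    sigma = np.sqrt(J**2 * math.factorial(q - 1) / N ** (q - 1))
    for idx in combinations(range(N), q):
        H += rng.normal(0.0, sigma) * reduce(np.matmul, [psi[i] for i in idx])
    return (1j) ** (q // 2) * H

H = syk_hamiltonian(N=8, q=4, J=1.0, rng=np.random.default_rng(0))
print(np.linalg.norm(H - H.conj().T))  # Hermitian up to round-off
```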
In this setup, the right SYK system corresponds to the early radiation degrees of freedom of the original Hayden-Preskill setup, and the left SYK system corresponds to the rest: the union of the diary system and the initial black hole before the action of the random unitary, or equivalently the remaining black hole plus the late radiation degrees of freedom after the unitary evolution. In particular, the left system \(L\) is divided into two subsystems, say \(\tilde{L}\) and \(K\); the former corresponds to the remaining black hole and the latter to the late radiation part of the original HP setup. On the union of the above SYK systems \(L\) and \(R\), we consider the following thermo-field double (TFD) state: \[\left|\text{TFD}\right\rangle_{L,R}=Z^{-1/2}(\beta)\,e^{-\beta(H_{L}+H_{R})/4}\left|0\right\rangle_{L,R}, \tag{4.3}\] where \(Z(\beta)\) is a normalization factor of the state, and \(\left|0\right\rangle_{L,R}\) is given by [28] \[\left[\,\psi_{j,L}(0)+i\psi_{j,R}(0)\,\right]\left|0\right\rangle_{L,R}=0\quad\text{for }\forall j. \tag{4.4}\] Note that the thermo-field double state (4.3) satisfies the relation \(\left(H_{L}-H_{R}\right)\left|\text{TFD}\right\rangle=0\). This TFD state corresponds to an entangled state between the initial black hole and the early radiation. The code subspace (a diary system) of our interest is two dimensional; let us denote the two basis vectors by \(\left|0\right\rangle\) and \(\left|1\right\rangle\). This code subspace is embedded into the physical Hilbert space \(LR\) by an isometry. The image of the code subspace is spanned by the TFD state \(\left|\text{TFD}\right\rangle_{L,R}\) and the excited state \(\psi_{i,L}(0)\left|\text{TFD}\right\rangle_{L,R}\). Here we assume that the Majorana fermion \(\psi_{i,L}(0)\) acting on the TFD state lives in the subsystem \(\tilde{L}\), \(i\in\tilde{L}\). More explicitly, by the isometry, the states in the code subspace \(\left|T\right\rangle\) (\(T=0,1\)) are mapped to \[\left(V_{T,L\to L}\otimes I_{R}\right)\left(\left|T\right\rangle_{T}\otimes\left|\text{TFD}\right\rangle_{L,R}\right)\coloneqq\begin{cases}\left|\text{TFD}\right\rangle_{L,R}&\text{ for }T=0\\ \dfrac{1}{(Z_{\delta})^{\frac{1}{2}}}\psi_{i,L}(i\delta)\left|\text{TFD}\right\rangle_{L,R}&\text{ for }T=1,\end{cases} \tag{4.5}\] where \(\psi_{i,L}(i\delta)\) is the regulated Majorana fermion operator \[\psi_{i,L}(i\delta)=e^{-\delta H_{L}}\psi_{i,L}(0)e^{\delta H_{L}}, \tag{4.6}\] and \(\delta\) is an infinitesimal cutoff parameter introduced to normalize the state with the operator insertion even in the conformal limit, where the SYK model has an effective description in terms of the reparametrization modes [29]. \(Z_{\delta}\) is its normalization factor given by the two point function \[Z_{\delta} =\frac{1}{N-K}\sum_{i=1}^{N-K}\frac{1}{Z(\beta)}\operatorname{tr}\left[e^{-\beta H_{L}}\psi_{i,L}(-i\delta)\psi_{i,L}(i\delta)\right] \tag{4.7}\] \[=\frac{1}{N-K}\sum_{i=1}^{N-K}\frac{1}{Z(\beta)}\operatorname{tr}\left[e^{-\beta H_{L}}e^{2\delta H_{L}}\psi_{i,L}(0)e^{-2\delta H_{L}}\psi_{i,L}(0)\right]=G_{\beta}(2\delta).\] This normalization factor is not that of a specific Majorana fermion "\(i\)", but is averaged over the region \(\tilde{L}\) with \(N-K\) sites. We expect that the difference between the two appears only in subleading terms with respect to \(K/N\), because of typicality. Therefore, we use this normalization factor (4.7) for later convenience.
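Numerically, the TFD state (4.3) is conveniently encoded as a matrix of amplitudes \(M_{ab}\), with \(|\text{TFD}\rangle=\sum_{a,b}M_{ab}|a\rangle_{L}|b\rangle_{R}\); the relation \((H_{L}-H_{R})|\text{TFD}\rangle=0\) then becomes \([H,M]=0\). A minimal sketch (our own illustration, assuming numpy; the conjugate-basis convention for the right factor is a choice):

```python
import numpy as np

def tfd(H, beta):
    # |TFD> = Z^{-1/2} sum_n e^{-beta E_n/2} |n>_L |n*>_R, written as an amplitude matrix M
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * (E - E.min()) / 2)   # shift the spectrum for numerical stability
    w /= np.linalg.norm(w)
    return (V * w) @ V.conj().T             # M = V diag(w) V^dagger

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
H = (A + A.conj().T) / 2                    # stand-in for H_L; any Hermitian H works
M = tfd(H, beta=4.0)
print(np.linalg.norm(H @ M - M @ H))        # ~ 1e-14: (H_L - H_R)|TFD> = 0 with H_R = conj(H)
rhoL = M @ M.conj().T                       # reduced density matrix on L
print(1.0 / np.real(np.trace(rhoL @ rhoL))) # inverse purity, the effective dimension used below
```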
Using the above embedding, we can holographically prepare an initial entangled state between the early radiation and an initial black hole containing a diary in the SYK model. For this system, we consider a unitary time evolution of the left system \(L\) by the SYK Hamiltonian \(H_{L}\), \[U_{L}(t)=\exp\left(itH_{L}\right). \tag{4.8}\] By this time evolution, the information in the diary gets scrambled and uniformly distributed over the left SYK system after the scrambling time. The resulting state is \[\left|\Psi_{\text{SYK HP}}\right\rangle=\left(I_{\text{Ref}}\otimes U_{L}(t)\otimes I_{R}\right)\left(I_{\text{Ref}}\otimes V_{T,L\to L}\otimes I_{R}\right)\left(\left|\text{EPR}\right\rangle_{\text{Ref},T}\otimes\left|\text{TFD}\right\rangle_{L,R}\right), \tag{4.9}\] which corresponds to the state (2.1). In figure 6, we give the circuit diagram corresponding to the state (4.9). We are interested in recovering the diary information from the early and late radiation \(R\) and \(K\) by using the Petz-lite for the SYK HP protocol. As in (2.3), the SYK HP channel \(\mathcal{N}_{T\to K,R}^{\text{SYK}}\) representing the error is obtained by tracing out the remaining black hole part \(\tilde{L}\) in the final state (4.9), \[\mathcal{N}_{T\to K,R}^{\text{SYK}}[\rho_{T}]\coloneqq\operatorname{tr}_{\tilde{L}}\left[U_{L}V_{T,L\to L}\left(\rho_{T}\otimes\left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}\right|\right)V_{T,L\to L}^{\dagger}U_{L}^{\dagger}\right]. \tag{4.10}\] This channel maps a density matrix on the diary \(T\) to one on the late and early radiation system \(K,R\). Also, the adjoint \(\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\) of the SYK HP channel is given by \[\begin{split}\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}[\mathcal{O}_{KR}]&:=\text{tr}_{L,R}\left[\left|\text{TFD}\right>_{L,R}\langle\text{TFD}|\left(V_{L\to T,L}^{\dagger}U_{L}^{\dagger}\,\mathcal{O}_{KR}\,U_{L}V_{L\to T,L}\right)\right]\\ &=\,_{L,R}\left\langle\text{TFD}|\Big{(}V_{L\to T,L}^{\dagger}U_{L}^{\dagger}\,\mathcal{O}_{KR}\,U_{L}V_{L\to T,L}\Big{)}|\text{TFD}\right>_{L,R}.\end{split} \tag{4.11}\] The above quantum channels are analogous to the original HP channel and its adjoint for the Haar random unitary. We note, however, the difference that the SYK HP channel and its adjoint include the embedding map \(V\), which induces (fermionic) excitations. ### 4.2 Some matrix elements of the Petz-lite and Renyi-two correlators Now that we have prepared the SYK HP channel and its adjoint, we can construct the Petz-lite map for this channel. As in the Petz-lite for the Haar random case (2.9), we consider the Petz-lite for the SYK case, \[\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{O}_{KR}\right]=\frac{1}{N_{\text{SYK}}}\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}[\mathcal{O}_{KR}], \tag{4.12}\] where \(N_{\text{SYK}}\) is the normalization factor, determined by the condition \[\text{tr}_{T}\left[\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\sigma_{T}]\right]\right]=1. \tag{4.13}\] Here \(\sigma_{T}\) is some reference state in \(T\) for the normalization. We take it to be \(\sigma_{T}=\left|0\right>_{T}\!\left<0\right|\). For this choice, the normalization factor is given by \[N_{\text{SYK}}=\sum_{T=0,1}\,\,\langle T|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\left|0\right>_{T}\!\left<0\right|]\right]\!\left|T\right>. \tag{4.14}\] Figure 6: Circuit diagram corresponding to the state (4.9).
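All of the ingredients of the SYK HP channel (4.10) can be assembled numerically for small \(N\), reusing `syk_hamiltonian` and `tfd` from the sketches above. A minimal sketch (our own illustration, assuming numpy and scipy; taking the \(K\) late-radiation qubits to be the last qubits in the Jordan-Wigner ordering is an assumption of this toy implementation):

```python
import numpy as np
from scipy.linalg import expm

def syk_hp_channel(H, psi_op, rho_T, t, beta, delta, K_qubits):
    # N^SYK_{T -> K,R}[rho_T], eq. (4.10): embed via (4.5), evolve with U_L(t), trace out Ltilde
    M = tfd(H, beta)                                      # TFD amplitude matrix, from the sketch above
    U = expm(1j * t * H)                                  # U_L(t), eq. (4.8)
    psi_d = expm(-delta * H) @ psi_op @ expm(delta * H)   # psi_{i,L}(i delta), eq. (4.6)
    phi = [U @ M, U @ (psi_d @ M) / np.linalg.norm(psi_d @ M)]
    dim = H.shape[0]
    dK = 2 ** K_qubits
    dLt = dim // dK
    rho_KR = np.zeros((dK * dim, dK * dim), dtype=complex)
    for a in (0, 1):
        for b in (0, 1):
            A = phi[a].reshape(dLt, dK, dim)              # split L into (Ltilde, K); R index has size dim
            B = phi[b].reshape(dLt, dK, dim)
            rho_KR += rho_T[a, b] * np.einsum('lkr,lms->krms', A, np.conj(B)).reshape(dK * dim, dK * dim)
    return rho_KR

# usage: psi_op = majoranas(8)[0] lives on the first qubit, i.e. inside Ltilde for K_qubits = 1
```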
We note that due to this normalization, the Petz-lite (4.12) for the SYK HP protocol has a similar overall constant to the Petz-lite for the original HP protocol (2.11). To see the similarity, we first rewrite the Petz-lite (4.12) with the normalization factor (4.14) as follows: \[\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{O}_{KR}\right]=\frac{\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}}{1+\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\,\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle}\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}[\mathcal{O}_{KR}], \tag{4.15}\] where \(\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\) is an effective dimension of subsystem \(\tilde{L}\) defined by the purity \(\text{tr}_{\tilde{L}}\left[\left(\rho_{\tilde{L}}\right)^{2}\right]\) of the TFD state with respect to the subsystem12, Footnote 12: We note that in our setting, subsystem \(\tilde{L}\) is smaller than the complement system \(KR\). \[\langle 0|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\rvert 0\rangle=\text{tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]=\text{tr}_{\tilde{L}}\left[\left(\rho_{\tilde{L}}\right)^{2}\right]\eqqcolon\frac{1}{\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}}. \tag{4.16}\] The effective dimension is analogous to the dimension of the remaining black hole in the original HP setup. Indeed, in the infinite temperature limit \(\beta\to 0\), the effective dimension almost reduces to the actual dimension of subsystem \(\tilde{L}\), \(d_{\tilde{L}}=2^{N-K}\). However, in general the effective dimension is smaller than the actual dimension due to the property of the purity and thermal effects: \[1\,\leq\,\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\,\leq\,d_{\tilde{L}}, \tag{4.17}\] where the effective dimension becomes close to \(1\) as \(\beta\to\infty\) and to \(d_{\tilde{L}}\) as \(\beta\to 0\). With this effective dimension, we can compare the Petz-lite (4.15) for the SYK model to that for the original one (2.11) in the HP setup, \[\mathcal{R}^{\text{Lite,HP}}_{D,B\to T}[\mathcal{O}_{DB}]=\frac{d_{C}}{1+\left(\frac{d_{T}}{d_{D}}\right)^{2}}\,\mathcal{N}^{\dagger}_{D,B\to T}\left[\mathcal{O}_{DB}\right].\] The similarities between the quantities in the HP and the SYK are summarized in the following identifications; \[\begin{split} d_{C}&\longleftrightarrow\quad\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta},\\ \left(\frac{d_{T}}{d_{D}}\right)^{2}&\longleftrightarrow\quad\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle\,.\end{split} \tag{4.18}\] Also, we have the unitarity constraint on the dimensions of the Hilbert spaces, \(d_{T}\,d_{B}=d_{C}\,d_{D}\). By using this relation, we can rewrite the ratio of dimensions as \[\left(\frac{d_{T}}{d_{D}}\right)^{2}=\frac{d_{C}\,d_{T}}{d_{B}\,d_{D}}, \tag{4.19}\] from which we have the following identification \[\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]|1\rangle\quad\longleftrightarrow\quad\frac{1}{d_{C}}\cdot\left(\frac{d_{T}}{d_{D}}\right)^{2}=\frac{d_{T}}{d_{B}\,d_{D}}=\frac{d_{T}}{k}.
\tag{4.20}\] This ratio is useful for understanding the physics here: if we have a sufficiently large amount of Hawking radiation compared with the diary, \(d_{T}\ll d_{B}\,d_{D}=k\), the ratio becomes almost \(0\). As we will see shortly, the quantity on the left-hand side also becomes almost \(0\) around and after a critical time. With this discussion of the normalization factor in mind, we consider a matrix element of \(\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\rho_{T}]\right]\) for a general density matrix \(\rho_{T}\) in the Hilbert space of the diary, \[\left\langle T\middle|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\rho_{T}]\right]\middle|T^{\prime}\right\rangle. \tag{4.21}\] To check whether the Petz-lite works as the recovery map, it is sufficient to see whether the following relation holds (approximately) or not: \[\left\langle T\middle|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\rho_{T}]\right]\middle|T^{\prime}\right\rangle\stackrel{{?}}{{\approx}}\left\langle T\middle|\rho_{T}\middle|T^{\prime}\right\rangle\quad\text{for }\forall\rho_{T}. \tag{4.22}\] Checking the above relation is equivalent to focusing on the matrix elements \[\left\langle T\middle|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[\middle|\tilde{T}\right\rangle_{T}\!\left\langle\tilde{T}^{\prime}\middle|\right]\right]\middle|T^{\prime}\right\rangle\stackrel{{?}}{{\approx}}\left\langle T\middle|\tilde{T}\right\rangle\left\langle\tilde{T}^{\prime}\middle|T^{\prime}\right\rangle,\qquad\forall T,T^{\prime},\tilde{T},\tilde{T}^{\prime} \tag{4.23}\] Generally, we have 16 components of the above matrix, but half of them, those involving an odd number of Majorana fermions, vanish trivially due to the fermionic parity of the SYK model. In other words, matrix elements which satisfy \((T+T^{\prime}+\tilde{T}+\tilde{T}^{\prime})\equiv 1\mod 2\) vanish. Now, we focus on three of the non-zero matrix elements, and briefly explain how we can evaluate them13. Footnote 13: The details of the calculation will be discussed in the upcoming paper [16]. First, we consider the \(T,T^{\prime},\tilde{T},\tilde{T}^{\prime}=0\) case. If (4.23) holds, then since its right-hand side is \(1\), the following identity should hold: \[1\stackrel{{?}}{{\approx}}\left\langle 0|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[|0\rangle_{T}\langle 0|\right]\right]|0\right\rangle=\left(1+\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]|1\rangle\right)^{-1}. \tag{4.24}\] The second one is the \(T,T^{\prime}=1\), \(\tilde{T},\tilde{T}^{\prime}=0\) case, where the matrix element is expected to become \(0\).
In this case, we can see that this matrix element involves the same combination as above, \[0\stackrel{{?}}{{\approx}}\left\langle 1|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[|0\rangle_{T}\langle 0|\right]\right]|1\right\rangle=\frac{\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\right\rangle}{1+\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle}. \tag{4.25}\] The final one is the \(T,\tilde{T}=0\), \(T^{\prime},\tilde{T}^{\prime}=1\) case, where the matrix element (4.23), which is expected to be \(1\), becomes \[1\stackrel{{?}}{{\approx}}\langle 0|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[|0\rangle_{T}\langle 1|\right]\right]|1\rangle=\frac{\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 1|]\right]\right|1\rangle}{1+\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle}. \tag{4.26}\] The rest of the matrix elements, \[\langle 0|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[|1\rangle_{T}\langle 0|\right]\right]|1\rangle\,,\quad\langle 1|\mathcal{R}^{\text{Lite,SYK}}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}\left[|1\rangle_{T}\langle 1|\right]\right]|1\rangle\,,\] are difficult to evaluate directly, as we will mention in footnote 16. In the next section, we evaluate these matrix elements indirectly from the results of this section. Thus, to see the recovery (4.23), we need to study the behavior of the matrix elements of \(\mathcal{N}^{\dagger}\mathcal{N}\) which appear in the right-hand sides of (4.24), (4.25) and (4.26). In order for the recovery to happen, these have to satisfy \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle\stackrel{{?}}{{\approx}}0, \tag{4.27}\] \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 1|]\right]\right|1\rangle\stackrel{{?}}{{\approx}}1. \tag{4.28}\] We study the behavior of the left-hand sides of (4.27) and (4.28) below. To this end, it is convenient to rewrite the quantities as correlators.
From the definitions of the channels (4.10) and (4.11), we obtain the left-left correlators \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]\right|1\rangle=\frac{1}{Z_{\delta}}\cdot\frac{\left\langle\text{TFD}|\psi_{i,L}(t-i\delta)\left(I_{\tilde{L}}\otimes\rho_{KR}\right)\,\psi_{i,L}(t+i\delta)|\text{TFD}\right\rangle}{\text{tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}, \tag{4.29}\] \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0|\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T}\langle 1|]\right]|1\rangle=\frac{1}{Z_{\delta}}\cdot\frac{\left\langle\text{TFD}|\psi_{i,L}(t-i\delta)\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\,\psi_{i,L}(t+i\delta)|\text{TFD}\right\rangle}{\text{tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}, \tag{4.30}\] where the two fermions are inserted on the left system, and \(\rho_{KR}\) and \(\rho_{\tilde{L}}\) are defined by \[\rho_{\tilde{L}}=\text{tr}_{KR}\left[|\text{TFD}\rangle_{LR}\langle\text{TFD}|\right],\qquad\rho_{KR}=\text{tr}_{\tilde{L}}\left[|\text{TFD}\rangle_{LR}\langle\text{TFD}|\right]. \tag{4.31}\] We give the derivation of the correlators in appendix D. We also note that the numerators in the above correlators can be written as \[\begin{split}&\langle\text{TFD}|\psi_{i,L}(t-i\delta)\,\left(I_{\tilde{L}}\otimes\rho_{KR}\right)\,\psi_{i,L}(t+i\delta)|\text{TFD}\rangle\\ &=\text{tr}_{KR}\left[\text{tr}_{\tilde{L}}\left[\psi_{i,L}(t+i\delta)\left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\psi_{i,L}(t-i\delta)^{\dagger}\right]\rho_{KR}\right]\end{split} \tag{4.32}\] and \[\begin{split}&\langle{\rm TFD}|\psi_{i,L}(t-i\delta)\,\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\,\psi_{i,L}(t+i\delta)|{\rm TFD}\rangle\\ &={\rm tr}_{\tilde{L}}\left[{\rm tr}_{KR}\left[\psi_{i,L}(t+i\delta)\,|{\rm TFD}\rangle_{L,R}\langle{\rm TFD}|\,\psi_{i,L}(t-i\delta)^{\dagger}\right]\rho_{\tilde{L}}\right].\end{split} \tag{4.33}\] These expressions are also useful to see that these quantities are related to "Renyi-2" quantities, as explained below. Below we would like to evaluate these correlators analytically, but the expressions (4.29) and (4.30) are not suitable for an analytic treatment as they are "specific-site" correlators, so we cannot apply the large-\(N\) techniques to evaluate them. However, since we are basically interested in typical behaviors under highly chaotic dynamics in our setup, the specific choice of the embedding would _not_ be essential. Therefore, below we consider the "typical" embedding of the code information into the whole \(\tilde{L}\) system uniformly.
Therefore we replace these correlators with their averages on \(\tilde{L}\), \[\begin{split}\frac{1}{Z_{\delta}}\cdot&\frac{\langle{\rm TFD}|\psi_{i,L}(t-i\delta)\,\left(I_{\tilde{L}}\otimes\rho_{KR}\right)\,\psi_{i,L}(t+i\delta)|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}\\ &\to\frac{1}{N-K}\sum_{i=1}^{N-K}\frac{1}{Z_{\delta}}\cdot\frac{\langle{\rm TFD}|\psi_{i,L}(t-i\delta)\,\left(I_{\tilde{L}}\otimes\rho_{KR}\right)\,\psi_{i,L}(t+i\delta)|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}\end{split} \tag{4.34}\] and \[\begin{split}\frac{1}{Z_{\delta}}\cdot&\frac{\langle{\rm TFD}|\psi_{i,L}(t-i\delta)\,\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\,\psi_{i,L}(t+i\delta)|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}\\ &\to\frac{1}{N-K}\sum_{i=1}^{N-K}\frac{1}{Z_{\delta}}\cdot\frac{\langle{\rm TFD}|\psi_{i,L}(t-i\delta)\,\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\,\psi_{i,L}(t+i\delta)|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]}.\end{split} \tag{4.35}\] These replacements would change the correlators at sub-leading orders in \(N\), but the essential physics would not be changed, because of typicality. These averaged two point functions are special cases of the (right-left) modular-flowed correlators of the form \[\frac{1}{N-K}\sum_{i=1}^{N-K}\frac{\langle{\rm TFD}|\psi_{i,R}(\tau)\left(\rho_{\tilde{L}}^{n-1-k}\otimes\rho_{KR}^{k}\right)\psi_{i,L}(\tau^{\prime})|{\rm TFD}\rangle}{{\rm tr}\left[\rho_{KR}^{n}\right]}, \tag{4.36}\] where one of the fermions is put on the left system and the other one on the right system. In the Euclidean regime, they were computed by using the replica trick in [14] for \(K\ll N\). We use that result to compute the "Renyi-2" (left-left) modular-flowed correlators (4.34) and (4.35) from the Euclidean (right-left) correlator (4.36), by taking the limits \(k\to n-1\) (and \(k\to 0\)), and \(n\to 2\), then analytically continuing to the Lorentzian regime. We note there is a difference between the above correlator (4.36) computed in [14] and our correlators (4.34) and (4.35), namely that in (4.36) the two fermions live on opposite sides, while in our correlators they live on the same side. In our setup, one can relate the correlators to the diagrams of figure 7. We study the correlators in the large \(\beta J\) limit because their analytic expressions are available in that limit. One can instead work in the large \(q\) limit while keeping the value of \(\beta J\) finite. We will not do this here because it is the former limit in which the generalization to two-dimensional CFT is straightforward [16].
The right hand sides of (4.34) and (4.35) in the Euclidean regime are evaluated in the large \(\beta J\) and \(K\ll N\) limit as \[\frac{1}{N-K}\sum_{i=1}^{N-K}\,\frac{\langle{\rm TFD}|\psi_{i,L}(\tau)\,\left(I_{L}\otimes\rho_{KR}\right)\,\psi_{i,L}(\tau^{\prime})|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]} \tag{4.37}\] \[=G_{2\beta}(\tau+2\beta-\tau^{\prime})+2\frac{K}{N}\left(\mathcal{F}(\tau+2\beta,\tau^{\prime};\beta,0)-\mathcal{F}_{0}(\tau+2\beta,\tau^{\prime};\beta,0)\right)+\mathcal{O}\left(\left(\frac{K}{N}\right)^{2}\right),\] \[\frac{1}{N-K}\sum_{i=1}^{N-K}\,\frac{\langle{\rm TFD}|\psi_{i,L}(\tau)\,\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\,\psi_{i,L}(\tau^{\prime})|{\rm TFD}\rangle}{{\rm tr}_{KR}\left[\left(\rho_{KR}\right)^{2}\right]} \tag{4.38}\] \[=G_{2\beta}(\tau+\beta-\tau^{\prime})+2\frac{K}{N}\left(\mathcal{F}(\tau+\beta,\tau^{\prime};\beta,0)-\mathcal{F}_{0}(\tau+\beta,\tau^{\prime};\beta,0)\right)+\mathcal{O}\left(\left(\frac{K}{N}\right)^{2}\right).\] Here, \(G_{2\beta}(\tau)\) is a Euclidean thermal SYK two point function for subsystem \(\tilde{L}\) with periodicity \(2\beta\), and \(\mathcal{F}(\tau_{1},\tau_{2};\tau_{3},\tau_{4})\) is the connected SYK four point function, which is related to the bare one \(\mathcal{F}_{0}(\tau_{1},\tau_{2};\tau_{3},\tau_{4})\) by the so-called ladder kernel \(K_{c}(\tau_{1},\tau_{2};\tau_{3},\tau_{4})\), \[\mathcal{F}(\tau_{1},\tau_{2};\tau_{3},\tau_{4}) =\int d\tau\,\int d\tau^{\prime}\,\frac{1}{1-K_{c}(\tau_{1},\tau_{2};\tau,\tau^{\prime})}\mathcal{F}_{0}(\tau,\tau^{\prime};\tau_{3},\tau_{4}), \tag{4.39}\] \[\mathcal{F}_{0}(\tau_{1},\tau_{2};\tau_{3},\tau_{4}) =G_{2\beta}(\tau_{13})G_{2\beta}(\tau_{42})-G_{2\beta}(\tau_{14})G_{2\beta}(\tau_{32}),\qquad\tau_{ij}=\tau_{i}-\tau_{j},\] \[K_{c}(\tau_{1},\tau_{2};\tau_{3},\tau_{4}) =-J^{2}(q-1)G_{2\beta}(\tau_{13})G_{2\beta}(\tau_{24})\left(G_{2\beta}(\tau_{34})\right)^{q-2}.\] In the SYK model, these two-point and four-point functions are well studied in many papers, e.g., [29; 30; 31; 32; 33; 34]. See also [35; 36] for reviews and references therein. The Euclidean times \(\tau,\tau^{\prime}\) in (4.37) and (4.38) are continued to Lorentzian time with a regularization parameter \(0<\delta\ll 1\): \(\tau\to-it-\delta,\tau^{\prime}\to-it+\delta\). In this way, the correlator (4.37) is continued to Lorentzian time as an out-of-time-ordered correlator (OTOC), \(\tau_{1}>\tau_{3}>\tau_{2}>\tau_{4}\), under the condition \(1\ll\beta J\ll N/K\). The correlator with this ordering is given Figure 7: Diagrams for the path integral calculation of the correlator (4.29) using the relation (4.32) (Top), and of the other correlator (4.30) using (4.33) (Bottom). The red regions in the figure correspond to subsystem \(RK\), and the blue regions correspond to subsystem \(\tilde{L}\). The semicircles correspond to the Euclidean segments that prepare the TFD states. Orange dots represent the insertions of the SYK Majorana fermion with the regularization, \(\psi_{i,L}(t+i\delta)\). The combination of the upper two semicircles with the operator insertions corresponds to the density matrix \(\operatorname{tr}_{\tilde{L}}[\psi_{i,L}\left|\operatorname{TFD}\right>_{L,R}\langle\operatorname{TFD}|\,\psi_{i,L}^{\dagger}]\) (and \(\operatorname{tr}_{KR}[\psi_{i,L}\left|\operatorname{TFD}\right>_{L,R}\langle\operatorname{TFD}|\,\psi_{i,L}^{\dagger}]\)), and the remaining combination represents the other one, \(\rho_{KR}\) (and \(\rho_{\tilde{L}}\)).
Solid green arrows in the figure correspond to \(\beta/2\) Euclidean evolutions. The two insertions are separated by Euclidean time \(2\beta\) (Top), and \(\beta\) (Bottom). These separations are directly related to \(\tau+2\beta\) and \(\tau+\beta\) appearing in (4.37) and (4.38) respectively. by [29; 35], \[\mathcal{F}(\tau_{1},\tau_{2};\tau_{3},\tau_{4})=G_{2\beta}(\tau_{12})G_{2\beta}(\tau_{34})\frac{2\beta J}{q^{2}\pi C}\left[1-\frac{\pi}{2}\frac{\sin\left(\frac{\pi}{\beta}\Delta\tau\right)}{\sin\left(\frac{\pi}{\beta}\cdot\frac{\tau_{12}}{2}\right)\sin\left(\frac{\pi}{\beta}\cdot\frac{\tau_{34}}{2}\right)}\right], \tag{4.40}\] where \(\Delta\tau=(\tau_{1}+\tau_{2})/2-(\tau_{3}+\tau_{4})/2\), and \(C\) is a constant related to the overall constant of the Schwarzian action, derived from the Schwinger-Dyson equation of the SYK model [29; 35]. Thus, we have the following continuation \[\mathcal{F}(\tau+2\beta,\tau^{\prime};\beta,0)\rightarrow\mathcal{F}(-it-\delta+2\beta,-it+\delta;\beta,0) \tag{4.41}\] \[=2G_{2\beta}(2\beta-2\delta)G_{2\beta}(\beta)\cdot\frac{2\beta J}{q^{2}\pi C}\left[1-\frac{\pi}{2}\frac{\cosh\left(\frac{\pi\delta}{\beta}\right)}{\sin\left(\frac{\pi\delta}{\beta}\right)}\right]\] \[\approx-2G_{2\beta}(2\beta-2\delta)G_{2\beta}(\beta)\cdot\frac{\beta J}{2q^{2}C}\cdot\frac{\exp\left(\frac{\pi}{\beta}t\right)}{\sin\left(\frac{\pi\delta}{\beta}\right)}.\] In particular, the correlator grows exponentially in time. On the other hand, the other correlator (4.38) is continued to Lorentzian time with the ordering \(\tau_{3}>\tau_{1}>\tau_{2}>\tau_{4}\) under the condition \(1\ll\beta J\ll N/K\); therefore it is not an OTOC. The correlator with the ordering \(\tau_{3}>\tau_{1}>\tau_{2}>\tau_{4}\) is given by \[\mathcal{F}(\tau_{1},\tau_{2};\tau_{3},\tau_{4}) \tag{4.42}\] \[=-G_{2\beta}(\tau_{12})G_{2\beta}(\tau_{34})\frac{2\beta J}{q^{2}\pi C}\left[\left(\frac{\pi\tau_{12}}{2\beta\tan\left(\frac{\pi}{\beta}\cdot\frac{\tau_{12}}{2}\right)}+\frac{\pi}{\tan\left(\frac{\pi}{\beta}\cdot\frac{\tau_{12}}{2}\right)}-1\right)\left(\frac{\pi\tau_{34}}{2\beta\tan\left(\frac{\pi}{\beta}\cdot\frac{\tau_{34}}{2}\right)}-1\right)\right],\] and its analytic continuation is \[\mathcal{F}(\tau+\beta,\tau^{\prime};\beta,0) \tag{4.43}\] \[\rightarrow\mathcal{F}(-it-\delta+\beta,-it+\delta;\beta,0)=-2G_{2\beta}(\beta-2\delta)G_{2\beta}(\beta)\cdot\frac{2\beta J}{q^{2}\pi C}\left[1-\left(\frac{\pi}{2}-\frac{\pi\delta}{\beta}\right)\tan\left(\frac{\pi\delta}{\beta}\right)\right].\] Clearly, this is time-independent, unlike the previous case. We do not evaluate the bare four point functions \(\mathcal{F}_{0}(\tau_{1},\tau_{2};\tau_{3},\tau_{4})\) for (4.37) and (4.38), because they are particular combinations of the thermal SYK two point functions with power-law behavior in time, and therefore they do not give dominant contributions to the correlators (4.37) and (4.38).
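The contrast between (4.41) and (4.43) can be made explicit numerically in the conformal limit: the OTOC piece grows like \(e^{\pi t/\beta}\) while the time-ordered piece stays constant. A minimal sketch (our own illustration, assuming numpy; the Schwarzian constant \(C\) is kept as an \(\mathcal{O}(1)\) input parameter, and overall sign conventions follow (4.41) and (4.43)):

```python
import numpy as np

def G(tau, period, q, J):
    # conformal SYK two-point function with thermal period `period`, cf. (4.46)
    Delta = 1.0 / q
    b = ((0.5 - Delta) * np.tan(np.pi * Delta) / (np.pi * J**2)) ** Delta
    return b * (np.pi / (period * np.sin(np.pi * tau / period))) ** (2 * Delta)

def otoc_piece(t, beta, q, J, C, delta):
    # Lorentzian continuation (4.41): grows like exp(pi t / beta)
    return (-2 * G(2*beta - 2*delta, 2*beta, q, J) * G(beta, 2*beta, q, J)
            * (beta * J / (2 * q**2 * C)) * np.exp(np.pi * t / beta) / np.sin(np.pi * delta / beta))

def time_ordered_piece(beta, q, J, C, delta):
    # Lorentzian continuation (4.43): time-independent
    return (-2 * G(beta - 2*delta, 2*beta, q, J) * G(beta, 2*beta, q, J)
            * (2 * beta * J / (q**2 * np.pi * C)) * (1 - (np.pi/2 - np.pi*delta/beta) * np.tan(np.pi*delta/beta)))

beta, q, J, C, delta, K_over_N = 1.0, 12, 50.0, 1.0, 0.05, 1e-4
for t in (0.0, 1.0, 2.0, 3.0):
    print(t, K_over_N * otoc_piece(t, beta, q, J, C, delta),
          K_over_N * time_ordered_piece(beta, q, J, C, delta))
# the K/N-suppressed OTOC term becomes O(1) around t ~ (beta/pi) log(N/K), cf. the critical time below
```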
Combining the above results, we can obtain analytic expressions for the quantities (4.37) and (4.38): \[\begin{split}&\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 1|\mathcal{N}_{K,R\to T}^{\text{SYK}\dagger}\left[\mathcal{N}_{T\to K,R}^{\text{SYK}}[|0\rangle_{T}\langle 0|]\right]|1\rangle\\ &\approx\frac{1}{Z_{\delta}}\left[G_{2\beta}(2\beta-2\delta)-G_{2\beta}(2\beta-2\delta)G_{2\beta}(\beta)\cdot\frac{2\beta J}{q^{2}C}\cdot\frac{K}{N}\frac{\exp\left(\frac{\pi}{\beta}t\right)}{\sin\left(\frac{\pi\delta}{\beta}\right)}+\mathcal{O}\left(\left(\frac{K}{N}\right)^{2}\right)\right]\\ &\approx\frac{G_{2\beta}(2\beta-2\delta)}{G_{\beta}(2\delta)}\left[1-\frac{G_{2\beta}(\beta)}{\sin\left(\frac{\pi\delta}{\beta}\right)}\cdot\frac{2\beta J}{q^{2}C}\cdot\frac{K}{N}\exp\left(\frac{\pi}{\beta}t\right)\right],\end{split} \tag{4.44}\] and \[\begin{split}&\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 0|\mathcal{N}_{K,R\to T}^{\text{SYK}\dagger}\left[\mathcal{N}_{T\to K,R}^{\text{SYK}}[|0\rangle_{T}\langle 1|]\right]|1\rangle\\ &\approx\frac{1}{Z_{\delta}}\left[G_{2\beta}(\beta-2\delta)-G_{2\beta}(\beta-2\delta)G_{2\beta}(\beta)\cdot\frac{8\beta J}{q^{2}\pi C}\cdot\frac{K}{N}\left[1-\left(\frac{\pi}{2}-\frac{\pi\delta}{\beta}\right)\tan\left(\frac{\pi\delta}{\beta}\right)\right]+\mathcal{O}\left(\left(\frac{K}{N}\right)^{2}\right)\right]\\ &\approx\frac{G_{2\beta}(\beta-2\delta)}{G_{\beta}(2\delta)}\left[1-G_{2\beta}(\beta)\cdot\frac{8\beta J}{q^{2}\pi C}\cdot\frac{K}{N}\left[1-\left(\frac{\pi}{2}-\frac{\pi\delta}{\beta}\right)\tan\left(\frac{\pi\delta}{\beta}\right)\right]\right],\end{split} \tag{4.45}\] where we ignored would-be sub-leading terms coming from the replacements (4.34) and (4.35) of (4.29) and (4.30), and the sub-sub-leading terms of the averaged correlators. Let us consider the consequences of the above results. First, we focus on the ratios \(G_{2\beta}(2\beta-2\delta)/G_{\beta}(2\delta)\) and \(G_{2\beta}(\beta-2\delta)/G_{\beta}(2\delta)\) appearing in the above results. Since the SYK two point function in the conformal limit \(\beta J\gg 1\) is given by [29], \[G_{\beta}(\tau)=b\left[\frac{\pi}{\beta\sin\frac{\pi\tau}{\beta}}\right]^{2\Delta},\qquad\Delta=\frac{1}{q},\quad J^{2}b^{q}\pi=\left(\frac{1}{2}-\Delta\right)\tan\pi\Delta, \tag{4.46}\] we can evaluate the ratios as follows: \[\frac{G_{2\beta}(2\beta-2\delta)}{G_{\beta}(2\delta)}=\cos^{2\Delta}\left(\frac{\pi\delta}{\beta}\right), \tag{4.47}\] and \[\frac{G_{2\beta}(\beta-2\delta)}{G_{\beta}(2\delta)}=\sin^{2\Delta}\left(\frac{\pi\delta}{\beta}\right). \tag{4.48}\] Thus, these ratios cannot be \(1\) simultaneously for general \(\delta\) and \(\beta\). However, since \(\Delta=1/q\), these ratios become close to \(1\) when \(q\) is large. We give plots of the above two functions for several \(q\) in figure 8. As we can see from figure 8, or directly from (4.47) and (4.48), we need to consider a (relatively) large-\(q\) regime, which implies that the SYK Majorana fermion has a small conformal dimension, \(\Delta=1/q\ll 1\), in order to achieve recovery. One may wonder why we take the large \(q\) limit here, given that the (SYK)\({}_{q}\) model is chaotic for all \(q\geq 4\), so that the identities (4.27), (4.28) are expected to hold for any value of \(q\) in this range.
Nevertheless, here we have to take the large \(q\) limit because we define the code subspace using the SYK Majorana fermion operator \(\psi_{i,L}\), and the calculations of the relevant correlation functions are possible only in the large \(\beta J\) limit, where the entanglement between \(L\) and \(R\) is weak. Because of the weakness of the entanglement, the recovery is only possible when the dimension of the operator that defines the code subspace is small, implying the necessity of taking the large \(q\) limit. Next, we consider the two point function \(G_{2\beta}(\beta)\) appearing in the sub-leading terms. The two point function \(G_{2\beta}(\beta)\) can be written as \[G_{2\beta}(\beta)=b\left[\frac{\pi}{2\beta\sin\frac{\pi}{2}}\right]^{2\Delta}=\left[\left(\frac{1}{2}-\Delta\right)\frac{\pi\tan\pi\Delta}{(2\beta J)^{2}}\right]^{\Delta}. \tag{4.49}\] Figure 8: Plots of the ratios (4.47) and (4.48) as a function of \(\beta J\) for smaller \(q\) (Top), and for larger \(q\) (Bottom). Here, we set \(\delta J=0.1\). For large \(q\), all the ratios become close to 1. The above expression includes \((1/\beta J)^{2\Delta}\), so in the \(\beta J\to\infty\) limit the SYK two point function \(G_{2\beta}(\beta)\) vanishes. We also note the \(q\)-dependence of the SYK two point function. Plots of the above function and of \(\beta JG_{2\beta}(\beta)\) for several \(q=\Delta^{-1}\) are given in figures 9 and 10, respectively. The plots show that as \(q\) increases, the two point function \(G_{2\beta}(\beta)\) and \(\beta JG_{2\beta}(\beta)\) take larger values. Thus, from the above discussion, in the strict \(\beta J\to\infty\) limit14, we have \(G_{2\beta}(\beta)\to 0\), Figure 10: Plots of \(\beta JG_{2\beta}(\beta)\) as a function of \(\beta J\) for several \(q=\Delta^{-1}\). The dotted line is just \(\beta J\), which is equivalent to \(\beta JG_{2\beta}(\beta)\) in the \(q\to\infty\) limit. Figure 9: Plots of the SYK two point function \(G_{2\beta}(\beta)\), (4.49), as a function of \(\beta J\) for several \(q=\Delta^{-1}\). hence the second terms including \(G_{2\beta}(\beta)\) in (4.44) and (4.45) vanish if we keep the exponential factor \(\exp{(\pi t/\beta)}\) in (4.44) fixed. Therefore, in this strict \(\beta J\to\infty\) limit, we cannot have contributions from the second terms including \(G_{2\beta}(\beta)\) in (4.44) and (4.45). These terms are of order \(K/N\) and are crucial for the following discussion. Footnote 14: We note that the \(\beta\)-dependence of \(G_{2\beta}(\beta)\) in (4.49) enters only through the dimensionless combination \(\beta J\). Finally, let us focus on the time dependence of the results (4.44) and (4.45). First, we focus on the second case, (4.45). This result is time-independent at least up to \(K/N\)-order, and the second term is always suppressed by a time-independent factor at \(K/N\)-order, so it is very small compared with the first term. This implies that the quantity (4.45) is almost given by the ratio \(G_{2\beta}(\beta-2\delta)/G_{\beta}(2\delta)\), which becomes close to \(1\) when \(q\) is large. Next, we focus on (4.44). Because of the exponentially time-dependent factor, this correlator has crucially different behavior as a function of time from (4.45). For early times, the exponential in the second term can be approximated by \(1\), and the two are similar.
However, because of the exponentially growing factor, the perturbative expansion with respect to \(K/N\) breaks down, similarly to how perturbative calculations of OTOCs in \(1/N\) become invalid. The time scale of this breakdown can be estimated by equating the second term with the first term in (4.44). From this condition, we can find a critical time \(t_{*}\)15, Footnote 15: In defining the critical time, there is an ambiguity as to which factors should be included in the critical time (or, correspondingly, the scrambling time), e.g., \(\beta J\) and also \(G_{2\beta}(\beta)\). However, as we saw before, the two point function is typically order one, \(G_{2\beta}(\beta)=\mathcal{O}(1)\), so we need not include this factor in the scrambling time. Another factor, \(1/\sin{\left(\frac{\pi\delta}{\beta}\right)}\), can be set to \(\mathcal{O}(1)\) by choosing the cutoff \(\delta\) suitably. For the remaining factor \(\beta J\), since we have the condition \(\beta J\ll N/K\), it cannot give a significant contribution compared to the leading factor \(N/K\), so including it would be redundant. Therefore, the critical time here would be the simplest choice. \[\frac{K}{N}\exp{\left(\frac{\pi}{\beta}t_{*}\right)}\sim 1\qquad\Longrightarrow\quad t_{*}=\frac{\beta}{\pi}\log{\left(\frac{N}{K}\right)}=2t_{\rm Scram}, \tag{4.50}\] where we introduce the usual scrambling time \(t_{\rm Scram}\)[37], given by \[t_{\rm Scram}=\frac{\beta}{2\pi}\log{\left(\frac{N}{K}\right)}\,. \tag{4.51}\] Using this time scale, we can rewrite the correlator (4.44) as \[\begin{split}&\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\langle 1|\mathcal{N}^{\rm SYK\dagger}_{K,R\to T}\left[\mathcal{N}^{\rm SYK}_{T\to K,R}[|0\rangle_{T}\langle 0|]\right]|1\rangle\\ &\approx\frac{G_{2\beta}(2\beta-2\delta)}{G_{\beta}(2\delta)}\left[1-\frac{G_{2\beta}(\beta)}{\sin\left(\frac{\pi\delta}{\beta}\right)}\cdot\frac{2\beta J}{q^{2}C}\,\exp\left(\frac{\lambda_{L}}{2}(t-t_{*})\right)\right], \end{split} \tag{4.52}\] where we introduce the Lyapunov exponent \(\lambda_{L}\) for a black hole with temperature \(\beta\), \[\lambda_{L}=\frac{2\pi}{\beta}. \tag{4.53}\] Thus, around the critical time, which is twice the scrambling time, the overall coefficient of \(G_{2\beta}(2\beta-2\delta)/G_{\beta}(2\delta)\) becomes very small, as for usual OTOCs. This reproduces the expected result (4.27) under the condition \(\beta J\gg 1\). From the discussion so far, we have confirmed that the matrix elements (4.24), (4.25) and (4.26) do behave as we expect them to under the condition \(1\ll\beta J\ll N/K\). ## 5 Expected properties of the Petz-lite under the SYK dynamics So far, we have confirmed that the matrix elements we computed reproduce our expected results (4.27) and (4.28) under the condition of relatively large-\(q\) interactions, after the critical time \(t_{*}=2t_{\rm Scram}\).
Additionally, of course, the following trivial matrix element is equal to 1 by the definition, \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 0|{\cal N}^{ \rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T\to K,R}[|0\rangle_{T} \langle 0|]\right]|0\rangle=1. \tag{109}\] Also, we can obtain the same consequences for two related matrix elements. Let us explain them. First, the matrix element (104), which becomes close to 0, is directly related to \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 1|{\cal N}^{ \rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T\to K,R}[|0 \rangle_{T}\langle 0|]\right]|1\rangle=\left\langle\hat{d}_{\tilde{L}}\right\rangle_{ \beta}\cdot\,\langle 0|{\cal N}^{\rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T \to K,R}[|1\rangle_{T}\langle 1|]\right]|0\rangle \tag{110}\] via the definition of the adjoint channel (7). Thus, this matrix element also becomes close to 0 after the critical time, and the behavior is consistent with our expectation. Next, for the matrix element (105), being almost equal to 1, we have the following relation through the definition of the adjoint channel (7) again, \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\langle 0|{\cal N}^{ \rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T\to K,R}[|0 \rangle_{T}\langle 1|]\right]|1\rangle=\left\langle\hat{d}_{\tilde{L}}\right\rangle_{ \beta}\cdot\langle 1|{\cal N}^{\rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{ \rm SYK}_{T\to K,R}[|1\rangle_{T}\langle 0|]\right]|0\rangle\,. \tag{111}\] Thus, although we have eight non-trivial matrix elements (105) that should be checked, we already know the behavior of the above five matrix elements, and there are still three matrix elements. However, since two of them are related by the complex conjugation, essentially we need to investigate following two matrix elements \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 0|{\cal N}^{ \rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T\to K,R}[|1 \rangle_{T}\langle 0|]\right]|1\rangle\,, \tag{112}\] and \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\langle 1|{\cal N}^{ \rm SYK\dagger}_{K,R\to T}\left[{\cal N}^{\rm SYK}_{T\to K,R}[|1 \rangle_{T}\langle 1|]\right]|1\rangle\,. \tag{113}\] Here, the first matrix element is related to the following one \[\left(\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0 |\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T \to K,R}[|1\rangle_{T}\langle 0|]\right]\right|1\rangle\right)^{*}=\left\langle\hat{d}_{ \tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}^{\text{SYK}\dagger}_{K,R \to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0\rangle_{T} \langle 1|]\right]|0\right\rangle. \tag{101}\] In evaluating these matrix elements, we can not directly use the technique of [14] unlike the cases for the matrix elements (4.29) and (4.30)16. In the upcoming paper [16], we will report their results, but here we explain their expected behaviors from our obtained results. To this end, it would be useful to introduce the Kraus representation of the quantum channel (4.10), Footnote 16: We briefly explain the reason why the evaluations of the matrix elements (100) and (101) are difficult. The reason is that they do not have simple expressions like (4.32) and (4.33) naively. 
Of course, for matrix element (100), we can consider the similar expression like (4.32) with replacing the TFD state with the excited state \(\psi_{i,L}\left|\text{TFD}\right\rangle_{L,R}\), but in that case, we can no longer use the techniques in [14], and we need to consider the modular operator for the excited state. For the other matrix element (101), we naively need to introduce transition matrices, not density matrices, to write it in terms of a correlator. \[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\rho_{T}]=\sum_{m=1}^{d_{\tilde{L}}}E^{ \text{SYK}}_{m}\rho_{T}E^{\text{SYK}\dagger}_{m} \tag{102}\] given by \[E^{\text{SYK}}_{m}=\left.{}_{\tilde{L}}\langle m|\,U_{L}V_{T,L\to L} \left|\text{TFD}\right\rangle_{LR}. \tag{103}\] We can obtain this Kraus representation by introducing an orthonormal basis of the subsystem \(\tilde{L}\) as \(\left\{|m\rangle_{\tilde{L}}\right\}_{m=1}^{d_{\tilde{L}}}\). We also note that the adjoint channel (4.11) can be written as \[\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}[\mathcal{O}_{KR}]=\sum_{m=1}^{d_{ \tilde{L}}}E^{\text{SYK}\dagger}_{m}\mathcal{O}_{KR}E^{\text{SYK}}_{m}. \tag{104}\] Using this Kraus representation, it is possible to extract the very important "typical" relation from our results. Here, the "typical" means that the relation almost does not depend on the detail of a specific state \(|m\rangle_{\tilde{L}}\) in the subsystem \(\tilde{L}\), corresponding to a black hole microstate. First, the matrix elements (100) is equal to \(1\) and can be expressed as \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0 |\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T \to K,R}[|0\rangle_{T}\langle 0|]\right]\right|0\rangle =\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\sum_{m,n=1 }^{d_{\tilde{L}}}\left.{}_{T}\left\langle 0|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n} \left|0\right\rangle_{T}\langle 0|\,E^{\text{SYK}}_{n}E^{\text{SYK}\dagger}_{m}|0 \right\rangle_{T}\] \[=\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\sum_{m,n=1 }^{d_{\tilde{L}}}\left|\,_{T}\left\langle 0|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n} |0\right\rangle_{T}\right|^{2}, \tag{105}\] and we expect the typical relation \[\left.{}_{T}\left\langle 0|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n} |0\right\rangle_{T}\sim\frac{1}{\sqrt{d_{\tilde{L}}\cdot\left\langle\hat{d}_{ \tilde{L}}\right\rangle_{\beta}}}\delta_{mn}. \tag{106}\] Next, we focus on the matrix element (4.28). This matrix element is also equal to \(1\), and we can express the matrix element in terms of the Kraus operators, \[\left\langle\hat{d}_{L}\right\rangle_{\beta}\cdot\left\langle 0|\mathcal{N}^{ \text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T \to K,R}[\ket{0}_{T}\bra{1}]\right]\right]\ket{1}=\left\langle\hat{d}_{L} \right\rangle_{\beta}\sum_{m,n=1}^{d_{L}}\,{}_{T}\left\langle 0|E^{\text{SYK} \dagger}_{m}E^{\text{SYK}}_{n}\ket{0}_{T}\bra{1}E^{\text{SYK}}_{n}E^{\text{ SYK}\dagger}_{m}\ket{1}_{T}. \tag{5.12}\] By using the relation (5.11), we extract a similar relation, \[{}_{T}\left\langle 1|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n}|1\right\rangle _{T}\sim\frac{1}{\sqrt{d_{L}\cdot\left\langle\hat{d}_{L}\right\rangle_{\beta}} }\delta_{mn}. 
\tag{5.13}\] Finally, the time-dependent matrix element (4.27), which almost vanishes around the critical time \(t_{*}\), can be written as \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1| \mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{ SYK}}_{T\to K,R}[\ket{0}_{T}\bra{0}]\right]\right]\ket{1} =\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\sum_{m,n=1} ^{d_{\tilde{L}}}\,{}_{T}\left\langle 1|E^{\text{SYK}\dagger}_{m}E^{\text{ SYK}}_{n}\ket{0}_{T}\bra{0}E^{\text{SYK}}_{n}E^{\text{SYK}\dagger}_{m}\ket{1}_{T} \tag{5.14}\] From this expression, we expect the following relation and its complex conjugation, \[{}_{T}\left\langle 1|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n}|0\right\rangle _{T}\sim 0, \tag{5.15}\] around and/or after the critical time. Combing the above expectations, we obtain the typically expected relation17 Footnote 17: Here, we check the Knill-Laflamme condition from our obtained results. However, in principle, it would be possible to investigate the Knill-Laflamme condition directly by introducing a basis [28]. It would be interesting to investigate this topic. \[{}_{T}\left\langle T\middle|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n}\middle| T^{\prime}\right\rangle_{T}\sim\frac{1}{\sqrt{d_{\tilde{L}}\cdot\left\langle\hat{d}_{ \tilde{L}}\right\rangle_{\beta}}}\delta_{mn}\delta_{TT^{\prime}}\qquad\text{ for }t\gtrsim t_{*}, \tag{5.16}\] which corresponds to the Knill-Laflamme condition [38]. Using this relation, the remaining matrix elements (5.4), (5.5) are expected to behave as follows \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0 |\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{ SYK}}_{T\to K,R}[\ket{1}_{T}\bra{0}]\right]\right]\ket{1} =\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\sum_{m,n=1 }^{d_{\tilde{L}}}\,{}_{T}\left\langle 0|E^{\text{SYK}\dagger}_{m}E^{\text{ SYK}}_{n}\ket{1}_{T}\bra{0}E^{\text{SYK}}_{n}E^{\text{SYK}\dagger}_{m} \ket{1}_{T} \tag{5.17}\] and \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\left\langle 1| \mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T \to K,R}[|1\rangle_{T}\langle 1|]\right]|1\right\rangle =\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\sum_{m,n=1} ^{d_{\tilde{L}}}\,r\left\langle 1|E^{\text{SYK}\dagger}_{m}E^{\text{SYK}}_{n}\,|1 \rangle_{T}\langle 1|\,E^{\text{SYK}}_{n}E^{\text{SYK}\dagger}_{m}|1 \rangle_{T} \tag{111}\] \[\sim 1.\] These results are, of course, consistent with our original expectation (106), but the discussion so far using the typical relation is indirect (110). Nevertheless, since this typicality is strong enough for a highly chaotic theory, we expect that nearly identical results can be obtained by direct calculations of the matrix elements (107) and (108). ## 6 Discussion In this paper, we studied a recovery map for the Hayden-Preskill type scrambling channel \(\mathcal{N}\). We showed that one can use a simplified recovery map, called Petz-lite, consisting of the adjoint channel \(\mathcal{N}^{\dagger}\) with a suitable normalization factor for this purpose. We considered two examples, the Hayden-Preskill setup and the SYK model, and show that in both cases the Petz-lite indeed works as a recovery map. Also, we find that if the Petz-lite for the SYK case is used to recover information of given code subspace, it takes twice the scrambling time for the recovery. 
However, the SYK model case includes we did not evaluate all of the matrix elements necessary to show the recovery because of technical difficulties. Instead, we evaluate them in an indirect way in section 5. In the upcoming paper [16], we will explain their results, and also some generalizations of our results. Let us discuss our results. First, we focus on the physical interpretation of the critical time given by twice the scrambling time, \(t_{*}=2t_{\text{Scram}}\), when the matrix elements gives the input information, \(\mathcal{R}[\mathcal{N}[\rho]]\sim\rho\). It was argued in [37] that information of a diary thrown into a black hole appears after the scrambling time. This means that, after the scrambling time, the HP scrambling channel \(\mathcal{N}\) maps the diary information to Hawking radiation completely. However, even if the diary information appears in the Hawking radiation, it is difficult to get it directly since the information is uniformly embedded into the Hawking radiation. To extract the information, we need a recovery operation given by the Petz-lite \(\mathcal{R}\sim\mathcal{N}^{\dagger}\). Since it is the adjoint of the HP channel \(\mathcal{N}\), it again takes the scrambling time to apply the recovery map. Thus, in total, we need to wait for twice the scrambling time for the identity (106) to get satisfied. Next, let us explain the bulk interpretation of our results18. The bulk interpretation comes from the island prescription [1; 2]. First, the Hayden-Preskill setup concerns post Page time regimes. In these regimes, there is an island, which is a non-trivial entanglement wedge of Hawking radiation in the black hole interior. Thus, if one throws a diary into a black hole and waits for the scrambling time, then the diary enters the island region, implying that the diary is encoded into the Hawking radiation in a very complicated way. The mechanism that the thrown diary is encoded into the Hawking radiation corresponds to our quantum channel \(\mathcal{N}\). To recover the diary information from the Hawking radiation, we need to consider the recovery operation corresponding to the map \(\mathcal{R}\sim\mathcal{N}^{\dagger}\). The recovery map is given by the adjoint channel of the quantum channel \(\mathcal{N}\). In the bulk side, the action of the adjoint channel \(\mathcal{N}^{\dagger}\) means that the "reverse" process of the original quantum channel \(\mathcal{N}\)19. More precisely, the "reverse" process is given as follows: First, we start from the output state provided by the action of the quantum channel \(\mathcal{N}\), implying the diary is located on the island at some time slice \(\Sigma\). The application of the adjoint channel \(\mathcal{N}^{\dagger}\) then is interpreted as replacing the future of this time slice \(\Sigma\) by a white hole. Because of the replacement, the diary on the island region of the original black hole is coming out from the horizon of the white hole. Here, the reason why the white hole appears is that the adjoint channel includes the Hermit conjugation of unitaries \(U\) (and \(U^{\dagger}\)) compared to the quantum channel \(\mathcal{N}\). Thus, the diary thrown into the black hole reappears from the white hole induced by \(\mathcal{N}^{\dagger}\). This bulk interpretation is consistent with the critical time. 
This is because, after throwing the diary, it takes the scrambling time for the diary to enter the island region, and in the "reverse" process, it would also take the scrambling time for the diary to go outside the island region and the horizon. Footnote 19: Here, we note that in these two processes, we need to use two different (remaining) black holes since, in defining the quantum channel, (remaining) black holes are treated as internal degrees of freedom of the quantum channel. Finally, we end with discussing some of our in-progress works and future directions: Analysis in high temperature regime, \(\beta J\ll 1\)In this paper, we have focused on the large \(\beta J\) limit (low-temperature limit) in the SYK model to make the calculation analytic and for the purpose of the generalization to the CFT \({}_{2}\) case. In the limit, we can use emergent conformal symmetry of the SYK model and also we would be able to use semi-classical intuition of the dual Jackiw-Teitelboim gravity, but we have a relatively weak initial entangle state \(|\text{TFD}\rangle_{L,R}\) between the left and right SYK systems. Due to this weak entangle state, we would require some conditions to consider successful recovery protocol, e.g., large-\(q\) regime. Thus, analysis without taking the large \(\beta J\) limit would be interesting. In that case, we would need to consider numerical approaches. Direct bulk analysis and relation to other protocolsIn this paper, we studied the recovery protocol from the boundary CFT perspective. One would be able to consider corresponding bulk computations. Also, it would be interesting to figure out the relation between other proposed protocols e.g., [39; 40; 41; 42] and ours20. Footnote 20: For such protocols, one can characterize protocol by computing “price”, “distance”, etc. [14; 43; 44]. One would be able to find the relation between our results and such quantities. Generalization to (Holographic) CFT\({}_{2}\) and other systemsWhile this paper focuses on the SYK model, which is a \(0+1\)-dimensional quantum system, it can also be interpreted as a spin chain with \(q\)-body SYK interactions. Thus, we can interpret that the SYK model has a spatial direction effectively. As a result, we expect that a similar analysis can be applied to a two-dimensional CFT exhibiting chaos, e.g., holographic CFT\({}_{2}\). Indeed, one of the Hayden-Preskill setups in a two-dimensional holographic CFT is introduced in [27]. Also, there are other possibilities for generalizations to other systems exhibiting chaos. For example, studying the Petz-lite in a chaotic spin chain would be interesting. Chaotic-Integrable transitionIn this paper, the chaotic nature is important for the simplification of the Petz map to the Petz-lite. Thus, if a system do not exhibit the chaotic nature, in other words, the system is integrable, then the Petz-lite (also the original Petz map) is not expected to works correctly. This is because, in an integrable system, the decoupling condition is not expected to hold. In the framework of the SYK model, we can prepare integrable and non-integrable (chaotic) situation by adding two-body interaction [45]. Using the setup, we would be able to study Petz-lite. Higher dimensional code sub-space?The SYK version of the HP setup studied in this paper treats the two-dimensional code sub-space spanned by the vacuum and the excited state. However, in a more realistic situation, one needs to deal with code sub-spaces with dimensions greater than two. 
For example, the interior of a black hole, when it is viewed as a code subspace embedded into the Hawking radiation, the dimension of its Hilbert space has to be large enough to accommodate a part of the semi-classical QFT degrees of freedom to have a geometric interpretation of the black hole interior21. To this end, one would need to consider a more complicated embedding involving for example states like, \(\psi_{i,L}\psi_{j\neq i,L}\left|\text{TFD}\right\rangle_{L,R}\). In that case, we can evaluate corresponding matrix elements in principle, but it would be difficult to them analytically since we encounter higher-point functions. Footnote 21: Of course, the interior degrees of freedom may appear to be infinite, but almost all of them can not contribute due to post-selection [46]. Even in that case, there can be degrees of freedom with Bekenstein-Hawking entropy. Another possibility for higher dimensional code sub-space is to consider a random embedding and the double-scaling limit. For example, we might be able to use the state \(\kappa_{ij}\psi_{i,L}\psi_{j,L}\left|\text{TFD}\right\rangle_{L,R}\), where \(\kappa_{ij}\) is random like observables in the double-scaled SYK model [47]. In this case, by taking the double-scaling limit and using chord diagram techniques, we might be able to evaluate the resulting matrix element analytically. Also, this might open up an interesting connection between QEC in the SYK model and recent discussions of the von Neumann algebra of quantum gravity, in particular, [48]. ###### Acknowledgements. We thank Yoshifumi Nakata for discussions. AM thanks Norihiro Iizuka, Tomoki Nosaka, Masahiro Nozaki and Jia Tian for comments. AM also thanks Chen Bai for related discussions. AM thanks the workshop "Beijing Osaka String/Gravity/Black Hole Workshop" at KITS, where this work was presented. AM also thanks the long-term work shop "Quantum Information, Quantum Matter and Quantum Gravity" YITP-T-23-01 at YITP, where this work was also presented. YN was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123. TU was supported in part by JSPS Grant-in-Aid for Young Scientists 19K14716 and in part by MEXT KAKENHI Grant-in-Aid for Transformative Research Areas A "Extreme Universe" No.21H05184. ## Appendix A Derivation of the Petz like using Kraus representation In this appendix, we derive the Petz-lite with a different normalization factor based on paper [49]. See e.g., SS10.3 of [50] for related reviews. We start with the Kraus representation of the HP channel (3). The Kraus representation can be introduced by expressing the trace as \[\begin{split}\mathcal{N}_{T\to D,B}\left[\rho_{T}\right]& =\text{tr}_{C}\left[(U_{T,A\to C,D}\otimes I_{B})(\rho_{T}\otimes \left|\text{EPR}\right\rangle_{A,B}\langle\text{EPR}|)(U_{T,A\to C,D}^{\dagger}\otimes I_{B})\right]\\ &=\sum_{m=1}^{d_{C}}\,_{C}\left\langle m|(U_{T,A\to C,D} \otimes I_{B})\left|\text{EPR}\right\rangle_{A,B}\,\,\rho_{T\,A,B}\left\langle \text{EPR}\right|(U_{T,A\to C,D}^{\dagger}\otimes I_{B})|m\right\rangle_{C}\\ &=\sum_{m=1}^{d_{C}}E_{m}\rho_{T}E_{m}^{\dagger},\end{split} \tag{100}\] where \(\left|m\right\rangle_{C}\) is an orthonormal basis of subsystem \(C\), and \(E_{m}\) is the Kraus operator defined by \[E_{m}=\,_{C}\left\langle m\right|(U_{T,A\to C,D}\otimes I_{B})\left| \text{EPR}\right\rangle_{A,B}. 
\tag{101}\] Here, we note that since the state \(\left|m\right\rangle_{C}\) is a basis state of the remaining black hole \(C\). We also note that the adjoint HP channel is expressed in terms of the Kraus operators, \[\mathcal{N}\left[\mathcal{O}\right]=\sum_{m=1}^{d_{C}}E_{m}^{\dagger}\mathcal{O }E_{m}. \tag{100}\] Using this Kraus operator, let us investigate the Knill-Laflamme condition [38], \[P_{code}E_{m}^{\dagger}E_{n}P_{code}=\alpha_{mn}P_{code}\quad\left(\alpha_{mn}= \alpha_{nm}^{*}\in\mathbb{C}\,\text{with}\,\,\sum_{m=1}^{d_{C}}\alpha_{mm}=1 \right),\,\text{for}\,\,\forall m,n=1,\cdots,d_{C}. \tag{101}\] where \(P_{code}\) is a projection operator onto a code subspace in general, but in our setup, \(P_{code}\) is assumed to be just given by the identity operator \(P_{code}=I_{T}\), since all input states should be recoverable under the Hayden-Preskill setup. If this condition holds, we can construct a recovery map22. Footnote 22: See e.g., §10.3, in particular, theorem 10.1, of [50] for the review. Under Haar random averaging, we can easily evaluate the Knill-Laflamme condition from the expression (100) and Haar average (17), \[\overline{E_{m}^{\dagger}E_{n}}=\frac{1}{d_{C}}\delta_{mn}I_{T} \tag{102}\] This result appears to imply that the Knill-Laflamme condition holds _always_ under the averaging, but this is not correct. This is because, even if the Knill-Laflamme condition is satisfied, higher moments of the Knill-Laflamme condition, e.g., \(\left|P_{code}E_{m}^{\dagger}E_{n}P_{code}\right|^{2}\), might not hold due to contributions coming from Weingarten calculus. We can see their contributions by directly evaluating the second moment23, Footnote 23: See also [51] for related discussions. \[\overline{\left|P_{code}E_{m}^{\dagger}E_{n}P_{code}\right|^{2}}\approx\frac{ 1}{(d_{C})^{2}}\cdot I_{T}\left[\delta_{mn}+\frac{d_{T}}{d_{D}d_{B}}\right] \tag{103}\] where we used the know result (20) with large-\(d\) approximation. Thus, when we do not have enough Hawking radiation \(D,B\) compared to the diary \(T\), that is, \(d_{D}d_{B}\gtrsim d_{T}\), we can not ignore the second term, implying the break down of the Knill-Laflamme condition. On the other hand, in the opposite limit \(d_{D}d_{B}\gg d_{T}\), where we have enough Hawking radiation, we can ignore the second term, and we get the Knill-Laflamme condition. We note that this is consistent with the decoupling condition (2), since the unitarity means the relation \[\frac{d_{T}}{d_{D}d_{B}}=\frac{1}{d_{C}}\cdot\left(\frac{d_{T}}{d_{D}}\right)^ {2}, \tag{104}\] and the factor \((d_{T}/d_{D})^{2}\) gives an upper bound of the decoupling condition (2). Next, we construct a recovery map for the HP quantum channel. With the Knill-Laflamme condition in mind, we consider the following map, which is equal to the adjoint HP channel up to the overall factor \(d_{C}\), \[\mathcal{R}[\mathcal{O}]\coloneqq d_{C}\sum_{m=1}^{d_{C}}E_{m}^{\dagger} \mathcal{O}E_{m}=d_{C}\mathcal{N}^{\dagger}\left[\mathcal{O}\right]. 
\tag{111}\] Under the Haar random average, this map gives \[\overline{\mathcal{R}[\mathcal{N}\left[\rho_{T}\right]]} =d_{C}\sum_{m,n=1}^{d_{C}}\overline{E_{m}^{\dagger}E_{n}\rho_{T} E_{n}^{\dagger}E_{m}} \tag{112}\] \[\approx d_{C}\sum_{m,n=1}^{d_{C}}\left[\overline{E_{m}^{\dagger}E_{ n}\rho_{T}\overline{E_{n}^{\dagger}E_{m}}}+\overline{E_{m}^{\dagger}\overline{E _{n}\rho_{T}E_{n}^{\dagger}E_{m}}}\right]\] \[=d_{C}\sum_{m,n=1}^{d_{C}}\frac{1}{(d_{C})^{2}}\left[\delta_{mn} \,\rho_{T}+\frac{\operatorname{tr}\left[\rho_{T}\right]}{d_{D}d_{B}}I_{T}\right]\] \[=\rho_{T}+\left(\frac{d_{T}}{d_{D}}\right)^{2}\cdot\frac{1}{d_{T }}I_{T},\] where in the second line we used the fact that in the large-Hilbert space dimension limit, Weingarten calculus reduces to Wick calculus, and in the final line, we used \(\operatorname{tr}\rho_{T}=1\) and the relation \(d_{T}d_{B}=d_{C}d_{D}\). In the third line, we encountered the Knill-Laflamme condition for the first term (108), and the second terms disturb the Knill-Laflamme condition. These two terms in the third line correspond to the first and second terms in (109). Thus, under the situation \(d_{B}d_{D}\gg d_{T}\) where the Knill-Laflamme condition holds (approximately), we can ignore the second term of the above result, implying that the map(111) works as a recovery map. This is a quantum information theoretic derivation of the Petz-lite. However, we note that the recovery map here is little bit different from the one (11) up to the overall factor, but the difference almost vanishes when the condition \(d_{B}d_{D}\gg d_{T}\) is satisfied. Finally, we end this appendix by giving the connection between the Petz map and the Petz-lite in terms of the Kraus operator and the Knill-Laflamme condition. Generally, since the coefficients \((\alpha_{mn})\) is Hermitian, we can diagonalize the Knill-Laflamme condition by some unitary \((U_{mn})\) as follows [50], \[P_{code}F_{m}^{\dagger}F_{n}P_{code}=\lambda_{m}\delta_{mn}P_{code}\quad\left( \lambda_{m}\in\mathbb{R},\text{with }\sum_{m=1}^{d_{C}}\lambda_{m}=1,\,\lambda_{m}>0\right),\,\text{for }\forall m,n=1, \cdots,d_{C}, \tag{113}\] where \(F_{m}=\sum_{n}U_{mn}E_{n}\) is the newly defined Kraus operator. Using this Kraus operator, one can define the following map \[\mathcal{R}[\mathcal{O}]\coloneqq\sum_{m=1}^{d_{C}}\frac{1}{\lambda_{m}}P_{ code}F_{m}^{\dagger}\mathcal{O}F_{m}P_{code}. \tag{114}\] This map can be also expressed in terms of the original quantum channel with introducing some full rank reference state \(\sigma\) as follows [49] \[\mathcal{R}[\mathcal{O}]=\sigma^{1/2}\mathcal{N}^{\dagger}\left[\left(\mathcal{N }[\sigma]\right)^{-1/2}\mathcal{O}\left(\mathcal{N}[\sigma]\right)^{-1/2} \right]\sigma^{1/2},\] (A.12) and this is exactly the Petz map. In the recovery map (A.11), the factor \(\lambda_{m}\) prevents us from directly giving the adjoint channel \(\mathcal{N}^{\dagger}\), and we need to introduce the curios factors \(\left(\mathcal{N}[\sigma]\right)^{-1/2}\) and \(\sigma^{1/2}\). However, for the case where \(\lambda_{m}=1/d_{C}\left(m=1,\cdots,d_{C}\right)\), one can consider the map (A.8) instead of the above map. As we have seen, the Haar random case with the Knill-Laflamme condition (A.5) is certainly this case. ## Appendix B Operator Transpose for the EPR state In this appendix, we derive the relation (3.33) algebraically. 
We can show the relation directly as follows; \[U_{C^{\prime},D^{\prime}\to B,T^{\prime}}^{T}\left|\text{EPR} \right\rangle_{C,C^{\prime}}\otimes\left|\text{EPR}\right\rangle_{D,D^{\prime}}\] (B.1) \[=\frac{1}{\sqrt{d_{C}d_{D}}}\sum_{\tilde{C}=1}^{d_{C}}\sum_{ \tilde{D}=1}^{d_{D}}\left|\tilde{C},\,\tilde{D}\right\rangle_{C,D}\otimes \left(U_{C^{\prime},D^{\prime}\to B,T^{\prime}}^{T}\left|\tilde{C}, \tilde{D}\right\rangle_{C^{\prime},D^{\prime}}\right)\] \[=\frac{1}{\sqrt{d_{C}d_{D}}}\sum_{\tilde{C}=1}^{d_{D}}\sum_{ \tilde{D}=1}^{d_{D}}\sum_{\tilde{B}=1}^{d_{T}}\sum_{\tilde{T}=1}^{d_{T}}\left| \tilde{C},\,\tilde{D}\right\rangle_{C,D}\otimes\left|\tilde{B},\tilde{T} \right\rangle_{B,T^{\prime}}\cdot_{B,T^{\prime}}\left\langle\tilde{B},\tilde{T }\right|U_{C^{\prime},D^{\prime}\to B,T^{\prime}}^{T}\left|\tilde{C}, \tilde{D}\right\rangle_{C^{\prime},D^{\prime}}\] \[=\frac{1}{\sqrt{d_{C}d_{D}}}\sum_{\tilde{C}=1}^{d_{D}}\sum_{ \tilde{D}=1}^{d_{B}}\sum_{\tilde{B}=1}^{d_{T}}\sum_{\tilde{T}=1}^{d_{T}}\left| \tilde{C},\,\tilde{D}\right\rangle_{C,D}\otimes\left|\tilde{B},\tilde{T} \right\rangle_{B,T^{\prime}}\cdot_{C,D}\left\langle\tilde{C},\tilde{D}\right| U_{A,T\to C,D}\left|\tilde{B},\tilde{T}\right\rangle_{A,T}\] \[=\frac{1}{\sqrt{d_{B}d_{T}}}\sum_{\tilde{B}=1}^{d_{B}}\sum_{ \tilde{T}=1}^{d_{T}}\left(U_{A,T\to C,D}\left|\tilde{B},\tilde{T}\right\rangle _{A,T}\right)\otimes\left|\tilde{B},\tilde{T}\right\rangle_{B,T^{\prime}}\] \[=\left(U_{A,T\to C,D}\otimes I_{B}\otimes I_{T^{\prime}}\right) \left|\text{EPR}\right\rangle_{A,B}\otimes\left|\text{EPR}\right\rangle_{T,T^ {\prime}}\] \[=U_{A,T\to C,D}\left|\text{EPR}\right\rangle_{A,B}\otimes\left| \text{EPR}\right\rangle_{T,T^{\prime}},\] where in the fifth equality, we used the unitarity condition of the Hilbert space dimensions \(d_{T}\,d_{B}=d_{C}\,d_{D}\). The above relation implies that the left and right diagrams in figure 11 are equivalent. ## Appendix C Convention in the SYK Hayden-Preskill protocol In this appendix, we gather some important definitions and conventions which we use in section 4. 
Majorana SYK fermions * Anti-commutation relation \[\{\psi_{i},\psi_{j}\}=2\delta_{ij}\] * The unitary time evolution operator \[U_{\alpha}=U_{\alpha}(t)=\exp{(itH_{\alpha})}\quad(\alpha=L,R)\] * Positive direction of time evolutions in left and right SYK systems (in Lorentzian signature) \[\psi_{i,L}(t) \equiv U_{L}\psi_{i,L}(0)U_{L}^{\dagger}=e^{itH_{L}}\psi_{i,L}(0)e ^{-itH_{L}},\] \[\psi_{i,R}(t) \equiv U_{R}^{\dagger}\psi_{i,R}(0)U_{R}=e^{-itH_{R}}\psi_{i,R}(0)e ^{itH_{R}},\] which can be written as \[\psi_{i,\alpha}(t)=\Delta_{L}^{-i\frac{t}{\beta}}\psi_{i,\alpha}(0)\Delta_{L} ^{i\frac{t}{\beta}}=\Delta_{R}^{i\frac{t}{\beta}}\psi_{i,\alpha}(0)\Delta_{R}^ {-i\frac{t}{\beta}}\quad(\alpha=L,R),\] (114) where \(\Delta_{L}=\Delta_{R}^{-1}\) is the modular operator defined by \[\Delta_{L}=\rho_{L}\otimes\rho_{R}^{-1}=e^{-K_{L}}\otimes e^{K_{R}}=e^{-(K_{L} -K_{R})},\qquad K_{\alpha}\equiv\beta H_{\alpha}\quad(\alpha=L,R).\] (115) Here \(\rho_{\alpha}\,(\alpha=L,R)\) is defined by \[\rho_{L}=\mathrm{tr}_{R}\left[\left|\mathrm{TFD}\right\rangle_{L,R}\langle \mathrm{TFD}\right|\right],\quad\rho_{R}=\mathrm{tr}_{L}\left[\left|\mathrm{ TFD}\right\rangle_{L,R}\langle\mathrm{TFD}\right|\right]\] (116) In the Euclidean signature, one can rewrite the above formal formula as \[\psi_{i,\alpha}(\tau)=\Delta_{L}^{\frac{\tau}{\beta}}\psi_{i,\alpha}(0)\Delta _{L}^{-\frac{\tau}{\beta}}\quad(\alpha=L,R),\] (117) and recover the Lorentzian operator by the analytic continuation \(\tau\to-it\). * Euclidean regularization parametrized by the cutoff \(\delta\) \[\psi_{i,L}(t+i\delta)\equiv e^{i(t+i\delta)H_{L}}\psi_{i,L}(0)e^{-i(t+i\delta) H_{L}}=e^{(-\delta+it)H_{L}}\psi_{i,L}(0)e^{(\delta-it)H_{L}}\] (118) This regularized operator is related to the Euclidean evolved operator (117) by continuation \(\tau\to-it+\delta\). Figure 11: Diagrams representing left and right hand sides of the relation (114). **Left**: The left hand side of the relation. **Right**: The right hand side of the relation. The left and right diagrams are equivalent. #### SYK Hayden-Preskill channel * SYK Hayden-Preskill channel (4.10) \[\mathcal{N}^{\text{SYK}}_{T\to K,R}[\rho_{T}]\coloneqq\text{tr}_{\tilde{L}} \left[U_{L}V_{T,L\to L}\left(\rho_{T}\otimes\left|\text{TFD}\right\rangle_{L,R} \langle\text{TFD}\right|\right)V_{T,L\to L}^{\dagger}U_{L}^{\dagger}\right].\] * Adjoint SYK Hayden-Preskill channel (4.11) \[\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T}[\mathcal{O}_{KR}] \coloneqq\text{tr}_{L,R}\left[\left|\text{TFD}\right\rangle_{L,R} \langle\text{TFD}|\left(V_{L\to T,L}^{\dagger}U_{L}^{\dagger}\,\mathcal{O}_{KR} \,U_{L}V_{L\to T,L}\right)\right]\] \[=\,_{L,R}\,\langle\text{TFD}|\Big{(}V_{L\to T,L}^{\dagger}U_{L}^{ \dagger}\,\mathcal{O}_{KR}\,U_{L}V_{L\to T,L}\Big{)}|\text{TFD}\rangle_{L,R}\] ## Appendix D Derivation of correlator from quantum channels In this appendix, we give the derivation of the relation (4.29) and (4.30). We can derive the relation graphically, but below we give an algebraic derivation of the relation. We start with the derivation of the relation (4.29), which can be obtained straightforwardly from the definition of the quantum channels (4.10) and (4.11). 
We first note that, from the definition of the quantum channel (4.10), the state \(\ket{0}_{T}\!\bra{0}\) is mapped to \[\begin{split}\mathcal{N}^{\text{SYK}}_{T\to K,R}[\ket{0}_{T} \!\bra{0}]]&=\text{tr}_{\tilde{L}}\left[U_{L}\left|\text{TFD} \right\rangle_{L,R}\langle\text{TFD}|\,U_{L}^{\dagger}\right]\\ &=U_{R}\,\rho_{KR}\,U_{R}^{\dagger},\end{split}\] (D.1) where we used the fact that \(\left(H_{L}-H_{R}\right)\left|\text{TFD}\right\rangle_{L,R}\) leading to \(U_{L}\left|\text{TFD}\right\rangle_{L,R}=U_{R}\left|\text{TFD}\right\rangle_{ L,R}\), and \(\rho_{KR}\) is defined by (4.31). For this density matrix, we consider the action of the adjoint channel (4.11), and take the following matrix element; \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\,\bra{1}\mathcal{ N}^{\text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[ \ket{0}_{T}\!\bra{0}]]\right]\ket{1}=\frac{\bra{1}\mathcal{N}^{\text{SYK} \dagger}_{K,R\to T}\left[\,U_{R}\,\rho_{KR}\,U_{R}^{\dagger}\,\right]\ket{1}}{ \text{tr}\left[\left(\rho_{KR}\right)^{2}\right]},\] (D.2) where we used the definition (4.16). Using the definition (4.11), we can evaluate the denominator as \[\begin{split}&\bra{1}\mathcal{N}^{\text{SYK}\dagger}_{K,R\to T} \left[\,U_{R}\,\rho_{KR}\,U_{R}^{\dagger}\,\right]\ket{1}\\ &=\big{(}\,_{L,R}\,\langle\text{TFD}|\otimes\,_{T}\bra{1}\rangle \big{(}V_{L\to T,L}^{\dagger}U_{L}^{\dagger}\,U_{R}\,\rho_{KR}\,U_{R}^{ \dagger}\,U_{L}V_{L\to T,L}\Big{)}\big{(}\left|\text{TFD}\right\rangle_{L,R} \otimes\ket{1}_{T}\big{)}\\ &=\frac{1}{Z_{\delta}}\cdot\,_{L,R}\,\langle\text{TFD}|\left( \psi_{i,L}^{\dagger}(-i\delta)\,U_{L}^{\dagger}\,U_{R}\,\rho_{KR}\,U_{R}^{ \dagger}\,U_{L}\,\psi_{i,L}(i\delta)\right)\left|\text{TFD}\right\rangle_{L,R }\\ &=\frac{1}{Z_{\delta}}\cdot\,_{L,R}\,\langle\text{TFD}|\left(U_{R} \psi_{i,L}^{\dagger}(-i\delta)\,U_{L}^{\dagger}\,\rho_{KR}\,U_{L}\,\psi_{i,L}( i\delta)\,U_{R}^{\dagger}\right)\left|\text{TFD}\right\rangle_{L,R}\\ &=\frac{1}{Z_{\delta}}\cdot\,_{L,R}\,\langle\text{TFD}|\left(U_{L} \psi_{i,L}^{\dagger}(-i\delta)\,U_{L}^{\dagger}\,\rho_{KR}\,U_{L}\,\psi_{i,L}( i\delta)\,U_{R}^{\dagger}\right)\left|\text{TFD}\right\rangle_{L,R}\\ &=\frac{1}{Z_{\delta}}\cdot\,_{L,R}\,\langle\text{TFD}|\left( \psi_{i,L}^{\dagger}(t-i\delta)\,\rho_{KR}\,\psi_{i,L}(t+i\delta)\right)\left| \text{TFD}\right\rangle_{L,R},\end{split}\] (D.3) where in the 4-th equality, we used the relation \(U_{L}\left|\text{TFD}\right\rangle_{L,R}=U_{R}\left|\text{TFD}\right\rangle_{L,R}\). Thus, by combining the above expressions, we obtain the relation (4.29), \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 1|\mathcal{N}_{K,R \to T}^{\text{SYK}\dagger}\left[\mathcal{N}_{T\to K,R}^{\text{SYK}}[|0\rangle_ {T}\langle 0|]\right]\right|1\rangle=\frac{1}{Z_{\delta}}\cdot\frac{\left\langle \text{TFD}|\psi_{i,L}(t-i\delta)\left(I_{\tilde{L}}\otimes\rho_{KR}\right)\, \psi_{i,L}(t+i\delta)|\text{TFD}\right\rangle}{\text{tr}_{KR}\left[\left(\rho_ {KR}\right)^{2}\right]}.\] Next, we derive the relation (4.30). Since \(\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}^{-1}=\text{tr}_{KR} \left[\left(\rho_{KR}\right)^{2}\right]\) by the definition (4.16), we focus on the remaining factor \(\left\langle 0|\mathcal{N}_{K,R\to T}^{\text{SYK}\dagger}\left[\mathcal{N}_{T\to K,R}^{ \text{SYK}}[|0\rangle_{T}\langle 1|]\right]|1\rangle\). 
To evaluate the factor, we use the definition of the adjoint channel (2.7), \[\begin{split}&\langle 0|\mathcal{N}_{K,R\to T}^{\text{SYK}\dagger}\left[ \mathcal{N}_{T\to K,R}^{\text{SYK}}[|0\rangle_{T}\langle 1|]\right]|1 \rangle\\ &=\text{tr}_{K,R}\left[\left.\mathcal{N}_{T\to K,R}^{\text{ SYK}}[|0\rangle_{T}\langle 1|]\right]\mathcal{N}_{T\to K,R}^{\text{SYK}}[|1\rangle_{T} \langle 0|\right]\left[\right.\right]\\ &=\text{tr}_{K,R}\left[\left.\text{tr}_{\tilde{L}}\left[U_{L} \left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\psi_{i,L}^{\dagger}(-i \delta)U_{L}^{\dagger}\right]\,\text{tr}_{\tilde{L}}\left[U_{L}\psi_{i,L}(i \delta)\left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,U_{L}^{\dagger }\right]\right]\\ &=\text{tr}_{K,R}\left[\left.\text{tr}_{\tilde{L}}\left[U_{R} \left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\psi_{i,L}^{\dagger}(-i \delta)U_{L}^{\dagger}\right]\,U_{R}U_{R}^{\dagger}\,\text{tr}_{\tilde{L}} \left[U_{L}\psi_{i,L}(i\delta)\left|\text{TFD}\right\rangle_{L,R}\langle\text{ TFD}|\,U_{R}^{\dagger}\right]\right]\\ &=\text{tr}_{K,R}\left[\left.U_{R}\,\text{tr}_{\tilde{L}}\left[ \left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,U_{R}\,\psi_{i,L}^{ \dagger}(-i\delta)U_{L}^{\dagger}\right]\,\text{tr}_{\tilde{L}}\left[U_{L} \psi_{i,L}(i\delta)U_{R}^{\dagger}\left|\text{TFD}\right\rangle_{L,R}\langle \text{TFD}|\right]\left.U_{R}^{\dagger}\right]\\ &=\text{tr}_{K,R}\left[\left.\text{tr}_{\tilde{L}}\left[\left| \text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,U_{L}\,\psi_{i,L}^{\dagger}(-i \delta)U_{L}^{\dagger}\right]\,\text{tr}_{\tilde{L}}\left[U_{L}\psi_{i,L}(i \delta)U_{L}^{\dagger}\left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}| \right]\left.\right]\right]\\ &=\text{tr}_{K,R}\left[\left.\text{tr}_{\tilde{L}}\left[\left| \text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\,\psi_{i,L}^{\dagger}(t-i \delta)\right]\,\text{tr}_{\tilde{L}}\left[\psi_{i,L}(t+i\delta)\left|\text{ TFD}\right\rangle_{L,R}\langle\text{TFD}|\right]\left.\right].\end{split}\] (D.4) By explicitly introducing bases for the traces, we can rewrite the last expression as follows, \[\begin{split}&\text{tr}_{K,R}\left[\left.\text{tr}_{\tilde{L}} \left[\left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\,\psi_{i,L}^{ \dagger}(t-i\delta)\right]\,\text{tr}_{\tilde{L}}\left[\psi_{i,L}(t+i\delta) \left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\right]\left.\right]\\ &=\sum_{\alpha,\alpha^{\prime}=1}^{d_{K}d_{R}}\sum_{a,a^{\prime}=1 }^{d_{\tilde{L}}}\left(\left.\right.\right._{KR}\left\langle\alpha\right| \otimes\left.\left.\left.\left.\tilde{L}\left\langle a\right|\right)\left( \left|\text{TFD}\right\rangle_{L,R}\langle\text{TFD}|\,\,\psi_{i,L}^{\dagger}(t -i\delta)\right)\left(\left|\alpha^{\prime}\right\rangle_{KR}\otimes\left|a \right\rangle_{\tilde{L}}\right)\\ &\hskip 85.358268pt\times\left(\left.\right.\right._{KR}\left\langle \alpha^{\prime}\right|\otimes\left.\left.\left.\tilde{L}\left\langle a^{\prime} \right|\right)\left(\psi_{i,L}(t+i\delta)\left|\text{TFD}\right\rangle_{L,R} \langle\text{TFD}|\right)\left(\left|\alpha\right\rangle_{KR}\otimes\left|a^{ \prime}\right\rangle_{\tilde{L}}\right)\\ &=\sum_{\alpha,\alpha^{\prime}=1}^{d_{K}d_{R}}\sum_{a,a^{\prime}=1 }^{d_{\tilde{L}}}\left.\right._{L,R}\left\langle\text{TFD}|\,\psi_{i,L}^{ \dagger}(t-i\delta)\left(\left|\alpha^{\prime}\right\rangle_{KR}\otimes\left|a \right\rangle_{\tilde{L}}\right)\\ &\hskip 85.358268pt\times\left(\left.\right.\right._{KR}\left\langle 
\alpha\right|\otimes\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \left.\left.\left.\left.\left.\left. Therefore, we get the relation (4.30), \[\left\langle\hat{d}_{\tilde{L}}\right\rangle_{\beta}\cdot\left\langle 0|\mathcal{N}^{ \text{SYK}\dagger}_{K,R\to T}\left[\mathcal{N}^{\text{SYK}}_{T\to K,R}[|0 \rangle_{T}\langle 1|]]\right]|1\right\rangle=\frac{1}{Z_{\delta}}\cdot\frac{\langle \text{TFD}|\psi_{i,L}(t-i\delta)\,\left(\rho_{\tilde{L}}\otimes I_{KR}\right)\, \psi_{i,L}(t+i\delta)|\text{TFD}\rangle}{\text{tr}_{KR}\left[\left(\rho_{KR} \right)^{2}\right]}.\]
2306.05940
Arthur Byron Coble, 1878--1966
A short essay on the life and mathematical heritage of Coble. A substantially edited version will be part of the series of biographical memoirs of past members of the National Academy of Sciences. Version 2: minor changes.
János Kollár
2023-06-09T14:58:04Z
http://arxiv.org/abs/2306.05940v2
# Arthur Byron Coble, 1878-1966 ###### Abstract. A short essay on the life and mathematical heritage of Coble. A substantially edited version will be part of the series of biographical memoirs of past members of the National Academy of Sciences. Coble was born November 3, 1878 in Dauphin County, Pennsylvania, near Harrisburg. He graduated from Pennsylvania College (now Gettysburg College) in 1897. After a year of public school teaching, he studied at the Johns Hopkins University (1898-1902), earning a Ph.D. with Frank Morley. The title of his dissertation was _The quartic curve as related to conics._ He taught for one year at the University of Missouri, then returned to the Johns Hopkins University as a research assistant at the Carnegie Institute, where he was later promoted to research associate and to associate professor. In 1904 he visited Greifswald and Bonn Universities in Germany with the support of the Carnegie Institute. In 1918 Coble accepted a professorship at the University of Illinois at Urbana-Champaign and stayed there save for visiting positions at the University of Chicago (1919) and the Johns Hopkins University (1927-28). He was head of the department from 1933 until his retirement at 1947. He moved back to Dauphin County, Pennsylvania and passed away in Harrisburg on December 8, 1966. Coble was very active in the American Mathematical Society, served on the governing council (1911-14), vice president (1917-20), chair of the Chicago section (1922) and as president (1933-34). He was editor of the AMS Transactions (1920-25), Proceedings of the AMS (1933-34) and Duke Math. Journal (1936-38). Several times he served on the National Research Council and on investigating committees of the American Association of University Professors. He was elected to the National Academy of Sciences in 1924, delivered the AMS Colloquium lectures in 1928, and received an honorary degree from Gettysburg College in 1932. Coble had 27 doctoral students, among them 7 were women, starting with Bessie Miller (the Johns Hopkins University, 1914) and ending with Janie Lapsley Bell (University of Illinois, 1943). He was among the leading advisors for women doctorates in mathematics before 1940. Coble's students did not seem to have continued his work in algebraic geometry. About half of the theses of his over 200 descendants are in applied mathematics and automata theory, the other half in mathematics education. **Mathematical works.** MathSciNet lists 24 publications, Archibald's volume on the history of the AMS (1938) lists 43, the Memorial resolution of the University of Illinois Senate (1968) mentions 'over 60' and Zentralblatt refers to 85. Coble's most important contribution is the book _Algebraic geometry and theta functions_ (American Mathematical Society Colloquium Publications, Vol. 10, AMS, Providence, R.I., 1929). The detailed review by Zariski notes that "thanks to its rich
2310.09644
Classical Shadow Tomography with Mutually Unbiased Bases
Classical shadow tomography, harnessing randomized informationally complete (IC) measurements, provides an effective avenue for predicting many properties of unknown quantum states with sample-efficient precision. Projections onto $2^n+1$ mutually unbiased bases (MUBs) are widely recognized as minimal and optimal IC measurements for full-state tomography. We study how to use MUBs circuits as the ensemble in classical shadow tomography. For the general observables, the variance to predict their expectation value is shown to be exponential to the number of qubits $n$. However, for a special class termed as appropriate MUBs-average (AMA) observables, the variance decreases to $poly(n)$. Additionally, we find that through biased sampling of MUBs circuits, the variance for non-AMA observables can again be reduced to $poly(n)$ with the MUBs-sparse condition. The performance and complexity of using the MUBs and Clifford circuits as the ensemble in the classical shadow tomography are compared in the end.
Yu Wang, Wei Cui
2023-10-14T19:15:06Z
http://arxiv.org/abs/2310.09644v2
# classical shadow tomography with mutually unbiased bases ###### Abstract Classical shadow tomography, harnessing randomized informationally complete (IC) measurements, provides an effective avenue for predicting many properties of unknown quantum states with sample-efficient precision. Projections onto \(2^{n}+1\) mutually unbiased bases (MUBs) are widely recognized as minimal and optimal measurements for tomography. We aim to establish a theoretical framework for conducting classical shadow tomography with MUBs. This approach may offer several advantages over random Clifford measurements [Nat. Phys. 16, 1050 (2020)]. Firstly, it simplifies the random measurement process since \(2^{n}+1\) MUBs circuits are a subset of all \(O(2^{n^{2}})\) Clifford circuits, and significantly reducing the number of all possible classical snapshots. Secondly, MUBs share the same reconstruction channel as Cliffords but with a lower shadow norm square (\(<2\mathrm{tr}(O_{0}^{2})\)), enabling equivalent property predictions with reduced sampling complexity (two-thirds). Thirdly, MUBs exhibit a uniform circuit structure, enhancing coherence with a consistent gate sequence like \(-CZ-P-H-\), which is simpler than that of the Clifford circuits. ## 1 Introduction In the realm of quantum information science, efficiently extracting information from unknown quantum states is pivotal. This is traditionally achieved through quantum state tomography [1; 2; 3], performing IC measurements usually projected on \(\{U_{j}|k\rangle,k=0,\cdots,d-1;j=1\cdots\}\), obtaining experimental data \(\mathrm{tr}(\rho U_{j}|k\rangle\langle k|U_{j}^{\dagger})\), then uniquely reconstructing density matrix \(\rho\) with different methods. It allows for the prediction of various important functions \(f(\rho)\), such as predicting the properties \(\mathrm{tr}(O\rho)\) under certain observable \(O\), along with metrics like purity and entropy [4; 5; 6]. These predictions are central to many-body physics and quantum information theory [7; 8]. However, as quantum systems scale up, such as in the case of \(n\)-qubit quantum systems with dimension \(d=2^{n}\), this method becomes impractical and even infeasible due to the enormous memory requirements. Nevertheless, when computing specific functions \(f(\rho)\), we can avoid the need to accurately calculate all elements of the density matrix with exponential measurements. While shadow tomography was initially proposed with polynomial sampling [9], it required exponential-depth quantum circuits applied to copies of all quantum states, presenting challenges for quantum hardware. Subsequently, Huang et al. introduced classical shadow tomography [10], enabling random measurements on individual quantum states and efficient prediction of various properties with a sampling complexity of \(\log(M)\|\cdot\|_{\mathrm{norm}}^{2}\), where \(M\) represents the number of observables, and \(\|\cdot\|_{\mathrm{norm}}^{2}\) denotes the norm of the corresponding observables. This norm is also influenced by the unitary ensemble \(\{U_{j}\}\) we randomly choose. The initial procedure applies random unitaries from a specific IC ensemble to the system and then performs computational projective measurements, which is equivalent to performing randomly \(3^{n}\) Pauli measurements or all Clifford measurements. 
Pauli measurements are ideal for predicting localized target functions, while Clifford measurements excel in estimating functions with constant Hilbert-Schmidt norms, both offering valuable tools for various quantum tasks. Subsequently, various other ensembles have been explored, including fermionic Gaussian unitaries [11], chaotic Hamiltonian evolutions [12], locally scrambled unitary ensembles [13; 14; 15], and Pauli-invariant unitary ensembles [16]. The concept of randomly selecting multiple sets of projective measurements has been theoretically generalized to one POVM [17; 18]. Up to now, Classical shadow tomography has found applications in diverse fields, including energy estimation [19], entanglement detection [20; 21], and quantum chaos [22], quantum gate engineering cycle [23], and quantum error mitigation [24] to name a few. The minimal number of elements is \(2^{n}+1\) for IC unitary ensembles. Projective measurements onto the set of \(2^{n}+1\) mutually unbiased bases (MUBs) are recognized as the optimal approach for quantum tomography [25; 26; 27]. When vectors are prepared within a specific basis and projected onto any other basis within the MUB set, a uniform distribution is consistently achieved. These measurements are regarded as maximal incompatibility and complementarity [28], finding applications in various aspects of quantum information science, including quantum tomography [29; 30], uncertainty relations [31; 32; 33], quantum key distribution [34; 35], quantum error correction [36; 37; 38], as well as the identification of entanglement and other forms of quantum correlations [39; 40; 41; 42; 43; 44]. In this study, we explore the use of mutually unbiased bases (MUBs) for classical shadow tomography. Specifically, our investigation focuses on the reconstruction channel achieved through random sampling of MUBs, the assessment of associated performance guarantees in terms of norms, and the conduct of numerical simulations. We compare the algorithmic complexity of generating MUBs and Clifford circuits, the number of elementary gates in quantum circuits, and the computation time required for classical shadow generation. Our findings demonstrate that utilizing MUBs in classical shadow tomography offers comprehensive advantages over Clifford measurements. ## 2 Procedure Let \(n\) be the number of qubits, \(\rho\) be the unknown quantum state. Here, we review the procedure of classical shadow tomography in the following [10]. First, one needs to generate the classical shadow of the \(\rho\). Choose an ensemble of unitary matrices \(\{U_{j}\}\) that is information complete. Rotate the state by a randomly chosen unitary matrix \(\rho\to U_{j}\rho U_{j}^{\dagger}\), then followed by a measurement in the computational basis. After performing this rotation-measurement \(N\) times, one obtains a set of \(n-\)bit measurement outcomes \(|\hat{p_{i}}\rangle\) with \(i=1,2,\ldots,N\), which will be stored as \(U_{j}^{+}|\hat{p_{i}}\rangle\langle\hat{p_{i}}|U_{j}\) in classical memory. The subscript \(i\) is index for the \(i\)th rotation-measurement, not the index for the qubit. As explained in [10], each rotation-measurement operation defines a classical snapshot of \(\rho\) by \[\hat{\rho}_{i}=\mathcal{M}^{-1}(U_{j}^{+}|\hat{b_{i}}\rangle\langle\hat{b_{i} }|U_{j}),\quad i=1,2,\ldots,N \tag{1}\] Here, \(\mathcal{M}^{-1}\) is the reconstruction channel depending on the choice of ensemble of unitary transformations. 
For those that are made from the Clifford circuits, the reconstruction channel is \[\mathcal{M}^{-1}(A)=(2^{n}+1)A-\mathrm{tr}(A)I_{n} \tag{2}\] for any \(d\times d\) matrix \(A\). As we will see, the reconstruction channel also works for the ensemble constructed by the MUB matrices. The classical shadow of \(\rho\) is the set of these classical snapshots \[S(\rho,\{U_{k}\},N)=\{\hat{\rho}_{1},\hat{\rho}_{2},\ldots,\hat{\rho}_{N}\} \tag{3}\] with dependence on the number of measurement \(N\) and the ensemble of unitary matrices. As we will demostrate using the numerical simulation below. We expect that the classical shadows constructed from the ensemble of MUB matrices can be more effective or equally effective on predicting observables. Next, we will use the classical shadows obtained above to predict various observables \(\{O_{1},O_{2},\ldots,O_{M}\}\) of the unknown quantum state \(\rho\). The expectation value of them are given by \[o_{i}=\mathrm{tr}(O_{i}\rho),\quad 1\leq i\leq M\.\] which can be approximated by the median of means of the expectation values \[o_{i}\approx\hat{o}_{i}(N,K)=\mathrm{median}\{\hat{o}_{i}^{(1)}(L,1),\hat{o}_ {i}^{(2)}(L,1),\ldots,\hat{o}_{i}^{(K)}(L,1)\}\] where \(L=\lfloor N/K\rfloor\) and \[\hat{o}_{i}^{(k)}(L,1)=\frac{1}{L}\sum_{j=(k-1)L+1}^{kL}\mathrm{tr}(O_{i}\hat {\rho}_{j}),\quad 1\leq k\leq K\.\] This approximation depends on the parameters \(L\) and \(K\). The detailed error analysis of this approach can be found in the original paper [10]. ### New procedure with \(2^{n}+1\) MUBs Denote a pair of two orthonormal bases in a \(d\)-dimensional Hilbert space \(\mathbb{H}_{d}\) as \(\{|e_{j}\rangle\}_{j=0}^{d-1}\) and \(\{|f_{k}\rangle\}_{k=0}^{d-1}\). The two bases are called mutually unbiased if the property holds that \[|\langle e_{j}|f_{k}\rangle|^{2}=\frac{1}{d} \tag{4}\] for all \(j\) and \(k\). For an \(n\)-qubit system, the dimension \(d\) is \(2^{n}\) and there are precisely \(2^{n}+1\) mutually unbiased bases (MUBs). Define \(\mathcal{B}_{0}\) as the canonical one \(\{|0\rangle,\cdots,|2^{n}-1\rangle\}\). Denote the another \(2^{n}\) sets of orthonormal bases as \(\mathcal{B}_{j}\), \(j=1\cdots,2^{n}\). \(\mathcal{B}_{j}=\{|e_{j}^{\prime}\rangle\}_{t=0}^{2^{n}-1}\). We know that there exists a unitary operation such that \(V_{j}\mathcal{B}_{0}=\mathcal{B}_{j}\). Consequently, the unitary ensemble for mutually unbiased bases is \(\{U_{j}=V_{j}^{\dagger}\}_{j=0}^{2^{n}}\), and all potential memorized states can be expressed as \(V_{j}|k\rangle\langle k|V_{j}^{\dagger}\), where \(j=0,\cdots,2^{n}\) and \(k=0,\cdots,2^{n}-1\). In the procedure, we randomly chose \(U_{j}\) from \(2^{n}+1\) unitary circuits and then record the classical measurement outcome \(\vec{b}\). The conditions we currently have are sufficient to calculate the reconstruction channel through random MUBs with classical shadow tomography, including the computation of shadow norms. This will allow us to conduct a rigorous performance analysis, ensuring clarity and precision in our research. Next, we will provide a detailed exploration of forms corresponding to all \(2^{n}(2^{n}+1)\) mutually unbiased bases states, the algorithmic time to obtain each circuit for \(U_{j}\), the structure of circuits, and the number of elementary gates decomposed. We will also delve into the time required for computing classical snapshots and make a comprehensive comparison with the approach of random Clifford measurements. 
## 3 Results ### Reconstruction channel Interestingly, randomly sampling from \(2^{n}+1\) MUBs yields an equivalent reconstruction channel in classical shadow tomography as randomly sampling from the \(O(2^{2n})\) Clifford operations. We can calculate the channel in the following way. \[\mathcal{M}(\rho) =\frac{1}{2^{n}+1}\sum_{t=1}^{2^{n}(2^{n}+1)}\text{tr}(\rho|\phi_ {t}\rangle\langle\phi_{t}|)\cdot|\phi_{t}\rangle\langle\phi_{t}|;\text{ randomly projected to }(2^{n}+1)2^{n}\text{ MUBs states}. \tag{5}\] \[=\frac{1}{2^{n}+1}\sum_{j=0}^{2^{n}-1}\sum_{k=0}^{2^{n}-1}\text{ tr}(\rho V_{j}|k\rangle\langle k|V_{j}^{\dagger})\cdot V_{j}|k\rangle\langle k|V_{j} ^{\dagger};\text{ }V_{j}\mathcal{B}_{0}=\mathcal{B}_{j}. \tag{6}\] As the \(2^{n}+1\) MUBs are informationally complete, each \(\rho\) can be expressed with the form of \[\rho=\sum_{a=0}^{2^{n}}\sum_{b=0}^{2^{n}-1}x_{ab}V_{a}|b\rangle\langle b|V_{a}^ {\dagger}.\] Note here the coefficients \(\{x_{ab}\}\) may not be unique. If we want to guarantee this, we can choose the following \(4^{n}\) rank-\(1\) projections. For the first basis \(\mathcal{B}_{0}\), we keep all of the eigenstates \(\{|k\rangle:k=0,\cdots,2^{n}-1\}\). For the other bases \(\{\mathcal{B}_{j}:j=1,\cdots,2^{n}-1\}\), we remove the last eigenstates and keep the states \(\{U_{j}|k\rangle:k=0,\cdots,2^{n}-2\}\). Then in total, we have \(2^{n}+2^{n}(2^{n}-1)=4^{n}\) eigenstates. It has been proved that the rank-\(1\) projections of these eigenstates are minimal and informationally complete [27]. \[\mathrm{tr}(\rho V_{j}|k\rangle\langle k|V_{j}^{\dagger}) =\mathrm{tr}[(\sum_{a=0}^{2^{n}}\sum_{b=0}^{2^{n}-1}x_{ab}V_{a}|b \rangle\langle b|V_{a}^{\dagger})\cdot V_{j}|k\rangle\langle k|V_{j}^{\dagger}] \tag{7}\] \[=(\sum_{a=j,b=k}+\sum_{a=j,b\neq k}+\sum_{a\neq j})\mathrm{tr}[(x_ {ab}V_{a}|b\rangle\langle b|V_{a}^{\dagger})\cdot V_{j}|k\rangle\langle k|V_{j }^{\dagger}]\] (8) \[=x_{jk}+\underbrace{0+\ldots+0}_{2^{n}-1}+\frac{1}{2^{n}}\sum_{a \neq j}\sum_{b=0}^{2^{n}-1}x_{ab}\] (9) \[=x_{jk}+\frac{1}{2^{n}}\sum_{a=0}^{2^{n}}\sum_{b=0}^{2^{n}-1}x_{ ab}-\frac{1}{2^{n}}\sum_{b=0}^{2^{n}-1}x_{jb}\] (10) \[=x_{jk}+\frac{1}{2^{n}}\mathrm{tr}(\rho)-\frac{1}{2^{n}}\sum_{b= 0}^{2^{n}-1}x_{jb} \tag{11}\] Here Eq.(9) we use the property that the square of the inner product of each two eigenstates in different MUBs is \(1/2^{n}\). \[\mathcal{M}(\rho) =\frac{1}{2^{n}+1}\sum_{j=0}^{2^{n}}\sum_{k=0}^{2^{n}-1}(x_{jk}+ \frac{\mathrm{tr}(\rho)}{2^{n}}-\frac{1}{2^{n}}\sum_{b=0}^{2^{n}-1}x_{jb}) \cdot V_{j}|k\rangle\langle k|V_{j}^{\dagger} \tag{12}\] \[=\frac{1}{2^{n}+1}(\rho+\frac{\mathrm{tr}(\rho)(2^{n}+1)I}{2^{n}} -\sum_{j=0}^{2^{n}}\sum_{b=0}^{2^{n}-1}\frac{x_{jb}}{2^{n}}I)\] (13) \[=\frac{1}{2^{n}+1}(\rho+\frac{\mathrm{tr}(\rho)(2^{n}+1)I}{2^{n}} -\frac{\mathrm{tr}(\rho)}{2^{n}}I)\] (14) \[=\frac{1}{2^{n}+1}(\rho+\mathrm{tr}(\rho)I) \tag{15}\] Here for each \(j\), \(\sum_{k=1}^{2^{n}}V_{j}|k\rangle\langle k|V_{j}^{\dagger}=I\); \(\mathrm{tr}(\rho)=1\). Then we can calculate the inverse channel: \[\mathcal{M}^{-1}(\rho)=(2^{n}+1)\rho-\mathrm{tr}(\rho)I. \tag{16}\] ### Performance guarantees Here we briefly retrospect the history of performance guarantees of classical shadow tomography [10]. 
### Performance guarantees

Here we briefly review the performance guarantees of classical shadow tomography [10]. Classical shadows of size \(N\) suffice to predict \(M\) arbitrary linear target functions \(\mathrm{tr}(O_{1}\rho),\ldots,\mathrm{tr}(O_{M}\rho)\) up to additive error \(\epsilon\) given that \[N\geq O\left(\frac{\log M}{\epsilon^{2}}\max_{1\leq i\leq M}\|O_{i}-\frac{ \mathrm{tr}(O_{i})}{2^{n}}\|_{\mathrm{shadow}}^{2}\right). \tag{17}\] The definition of the norm \(\|\cdot\|_{\mathrm{shadow}}\) depends on the ensemble of unitary transformations used to create the classical shadow, which also plays an important role in defining the space of linear functions that can be predicted efficiently. If the randomly chosen unitary ensemble is that of the \(3^{n}\) Pauli measurements, then \(\|O_{i}\|_{\mathrm{shadow}}^{2}\leq 4^{k}\|O_{i}\|_{\infty}^{2}\), where \(\|\cdot\|_{\infty}\) denotes the operator norm and \(k\) means that \(O_{i}\) acts nontrivially on at most \(k\) qubits. This prediction technique is most powerful when the target functions respect some sort of locality constraint. Prominent examples include \(k\)-point correlators or individual terms in a local Hamiltonian. If the randomly chosen unitary ensemble is that of the \(O(2^{n^{2}})\) Clifford measurements, \(\|\cdot\|_{\mathrm{shadow}}^{2}\) is closely related to the Hilbert-Schmidt norm \(\mathrm{tr}(O^{2})\). As a result, a large collection of (global) observables with a bounded Hilbert-Schmidt norm can be predicted efficiently. Otherwise, if \(\|\cdot\|_{\mathrm{shadow}}^{2}\) is related to the dimension \(2^{n}\), the corresponding observable cannot be efficiently estimated. For example, consider Pauli operators \(O_{i}=\sigma_{i1}\otimes\cdots\otimes\sigma_{in}\), where \(\sigma_{ij}\) is a 1-qubit Pauli matrix with \(\sigma_{ij}^{2}=I\). Then \(\mathrm{tr}(O_{i}^{2})=2^{n}\). Nevertheless, in our numerical experiments, we use rank-1 projectors, \(O_{i}=|\phi\rangle\langle\phi|\) with \(\mathrm{tr}(O_{i}^{2})=1\), where \(|\phi\rangle=\sum_{k=0}^{2^{n}-1}a_{k}|k\rangle\). This prediction technique is most powerful when the target functions have a constant Hilbert-Schmidt norm. In this case, the sampling rate is completely independent of the problem dimension \(2^{n}\). Prominent examples include estimating quantum fidelities (with pure states), or entanglement witnesses. Now we calculate \(\|O-\frac{\operatorname{tr}(O)}{2^{n}}\|_{\text{shadow}}\) when the unitary ensemble is that of the MUBs. \[\|O-\frac{\operatorname{tr}(O)}{2^{n}}\|_{\text{shadow}}=\max_{ \sigma:\text{ state}}\left(\mathbb{E}_{U\sim\mathcal{U}}\sum_{b\in\{0,1\}^{n}}\langle b |U\sigma U^{\dagger}|b\rangle\cdot\langle b|U\mathcal{M}^{-1}(O-\frac{ \operatorname{tr}(O)}{2^{n}})U^{\dagger}|b\rangle^{2}\right)^{1/2} \tag{18}\] Define \(O_{0}=O-\frac{\operatorname{tr}(O)}{2^{n}}\). We have \(\operatorname{tr}(O_{0})=0\).
\[\mathcal{M}^{-1}(O-\frac{\operatorname{tr}(O)}{2^{n}})=(2^{n}+1)(O-\frac{ \operatorname{tr}(O)}{2^{n}})-\operatorname{tr}(O-\frac{\operatorname{tr}(O )}{2^{n}})I=(2^{n}+1)(O-\frac{\operatorname{tr}(O)}{2^{n}})=(2^{n}+1)O_{0}\] \[\|O_{0}\|_{\text{shadow}}^{2} =\max_{\sigma:\text{ state}}\ \frac{1}{2^{n}+1}\sum_{j=0}^{2^{n}}\sum_{k=0}^{2^{n}-1} \operatorname{tr}(\sigma U_{j}^{\dagger}|k\rangle\langle k|U_{j})\cdot \operatorname{tr}^{2}((2^{n}+1)O_{0}U_{j}^{\dagger}|k\rangle\langle k|U_{j}) \tag{19}\] \[=\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{j=0}^{2^{n}} \sum_{k=0}^{2^{n}-1}\langle k|U_{j}O_{0}U_{j}^{\dagger}|k\rangle^{2}\cdot U_{j }^{\dagger}|k\rangle\langle k|U_{j}\right) \tag{20}\] Here we express \(O_{0}\) in the first computational basis. \[O_{0}=\sum_{a=0}^{2^{n}-1}\sum_{b=0}^{2^{n}-1}y_{ab}|a\rangle\langle b|.\] \[\langle k|U_{j}O_{0}U_{j}^{\dagger}|k\rangle^{2}=\begin{cases}y_{ kk}^{2},&\text{if}\ \ j=0\\ \frac{1}{4^{n}}\sum_{a,b=0}^{2^{n}-1}\sum_{a_{1},b_{1}=0}^{2^{n}-1}y_{ab}y_{a_{1 }b_{1}}e^{\sqrt{-1}\,f(a,a_{1},b,b_{1},k)},&\text{else}\end{cases} \tag{21}\] Here \(f(a,a_{1},b,b_{1},k)\) is a real number depending on \(a,a_{1},b,b_{1},k\). Then \[\|O_{0}\|_{\text{shadow}}^{2} \leq\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{j=0}^{2^{n}}\sum_{k=0}^{2^{n}-1} \langle k|U_{j}O_{0}U_{j}^{\dagger}|k\rangle^{2}\cdot U_{j}^{\dagger}|k\rangle \langle k|U_{j}\right) \tag{22}\] \[\leq\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{k=0}^{2^{n}-1}y_{ kk}^{2}|k\rangle\langle k|\right)+\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\frac{\sum_{a,b=0}^{2^{n}-1}\sum_{a_{1},b_{1}=0}^{2^{ n}-1}y_{ab}y_{a_{1}b_{1}}e^{\sqrt{-1}\,f(a,a_{1},b,b_{1},k)}}{4^{n}}2^{n}I\right)\] (23) \[\leq\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{k=0}^{2^{n}-1}y_{ kk}^{2}|k\rangle\langle k|\right)+\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\frac{\sum_{a,b=0}^{2^{n}-1}\sum_{a_{1},b_{1}=0}^{2^{ n}-1}|y_{ab}y_{a_{1}b_{1}}|}{2^{n}}I\right)\] (24) \[=\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{k=0}^{2^{n}-1}y_{ kk}^{2}|k\rangle\langle k|\right)+\frac{\sum_{a,b=0}^{2^{n}-1}\sum_{a_{1},b_{1}=0}^{2^{ n}-1}|y_{ab}y_{a_{1}b_{1}}|}{2^{n}}\] (25) \[\leq\max_{\sigma:\text{ state}}\ \operatorname{tr}\left(\sigma\sum_{k=0}^{2^{n}-1}y_{ kk}^{2}|k\rangle\langle k|\right)+\frac{\sum_{a,b=0}^{2^{n}-1}\sum_{a_{1},b_{1}=0}^{2^{ n}-1}(|y_{ab}|^{2}+|y_{a_{1}b_{1}}|^{2})}{2\cdot 2^{n}} \tag{26}\] Denote the Hilbert-Schmidt norm of \(O_{0}\) as \(\|O_{0}\|_{\text{HS}}=\sqrt{\sum_{a,b=0}^{2^{n}-1}|y_{ab}|^{2}}\). Then we have the following relation. \[\|O_{0}\|_{\text{shadow}}^{2} \leq\max_{\sigma:\text{ state }}\text{tr}\left(\sigma\sum_{k=0}^{2^{n}-1}y_{kk}^{2}|k \rangle\langle k|\right)+\frac{\sum_{a,b=0}^{2^{n}-1}(2^{n}\left|y_{ab}\right|^{ 2}+\|O_{0}\|_{\text{HS}}^{2})}{2\cdot 2^{n}} \tag{27}\] \[\leq\max_{\sigma:\text{ state }}\text{tr}\left(\sigma\sum_{k=0}^{2^ {n}-1}\|O_{0}\|_{\text{HS}}^{2}|k\rangle\langle k|\right)+\frac{\sum_{a,b=0}^{ 2^{n}-1}(2^{n}\left|y_{ab}\right|^{2}+\|O_{0}\|_{\text{HS}}^{2})}{2\cdot 2^{n}}\] (28) \[=\max_{\sigma:\text{ state }}\|O_{0}\|_{\text{HS}}^{2}\text{tr} \left(\sigma\cdot I\right)+\frac{2\cdot 2^{n}\|O_{0}\|_{\text{HS}}^{2}}{2\cdot 2^{n}}\] (29) \[=2\|O_{0}\|_{\text{HS}}^{2}=2\text{tr}(O_{0}^{2}).
\tag{30}\] When the unitary ensemble is selected as the Clifford group, the variance is constrained to be less than or equal to \(3\text{tr}(O_{0}^{2})\). However, an improvement in variance is achieved when transitioning to the mutually unbiased bases ensemble. As a result, the sample complexity associated with MUBs can be reduced to two-thirds of that for the Clifford group. This intuitive improvement can be illustrated by considering a sequence 'A' with elements \(1,3,5,7,9\) and its subsequence 'B' with elements \(3,5,7\): the two have the same expectation value, but 'B' consists of fewer, more concentrated numbers, and its variance (\(8/3\)) is smaller than that of 'A' (\(8\)). Let's denote '\(\mathcal{A}\)' as the set comprising all stabilizer states, represented as \(W_{j}|k\rangle\), where '\(W_{j}\)' ranges over all Clifford circuits and '\(|k\rangle\)' over all computational basis states. The cardinality of the set '\(\mathcal{A}\)' is approximately of the order \(2^{n^{2}/2}\). In contrast, let's denote '\(\mathcal{B}\)' as the set of all mutually unbiased basis states, which contains \(2^{n}(2^{n}+1)\) states. Importantly, '\(\mathcal{B}\)' is a subset of '\(\mathcal{A}\)'. This fundamental distinction underscores the potential advantages and reduced sample complexity associated with MUBs, as compared to the Clifford group.

### Comparison with random Clifford measurements

For the procedure of classical shadow tomography, we should do the following three things no matter which ensemble we choose.

* Uniformly sample a unitary operation \(U_{j}\) from the ensemble.
* Decompose any possible \(U\) into elementary quantum gates. If the gate structures are more uniform, the quantum circuit will require fewer modifications when selecting the next unitary operation \(U_{j}\).
* Calculate the possible projected states \(U_{j}^{\dagger}|k\rangle\langle k|U_{j}\), then calculate the classical snapshot \(\mathcal{M}^{-1}(U_{j}^{\dagger}|k\rangle\langle k|U_{j})\), and predict the value \(\text{tr}(O\mathcal{M}^{-1}(U_{j}^{\dagger}|k\rangle\langle k|U_{j}))\).

The \(n\)-qubit Clifford circuits are the ones generated by the CNOT gate, Hadamard gate (\(H\)), and Phase gate (\(P\)). By incorporating an additional \(\pi/8\) gate, these circuits become universal for quantum computation [45]. Clifford circuits find diverse applications in quantum technology, including quantum error correction [46], measurement-based quantum computing [47; 48], fault-tolerant computation [49; 50], quantum data hiding [51], and randomized benchmarking [52]. The ensemble of \(n\)-qubit Clifford circuits comprises a vast number of elements, precisely \(2^{n^{2}+2n}\prod_{j=1}^{n}(4^{j}-1)\). While one could theoretically list these circuits sequentially and randomly select from that list, the sheer size of the ensemble makes such an approach impractical. To efficiently sample a Clifford circuit, it can be parameterized using the tableau representation of Pauli operators, and the corresponding circuit can be constructed accordingly [53]. Specifically, consider the \(2n\) Pauli operators \(X_{j}\) and \(Z_{j}\), where \(j=1,\cdots,n\). A Clifford circuit \(U\) is uniquely determined, up to a global phase, by these operators. It can be expressed as follows: \(UX_{j}U^{\dagger}=(-1)^{r_{j}}\prod_{i=1}^{n}X_{i}^{\alpha_{ji}}Z_{i}^{\beta_{ ji}}\) and \(UZ_{j}U^{\dagger}=(-1)^{s_{j}}\prod_{i=1}^{n}X_{i}^{\gamma_{ji}}Z_{i}^{\delta_{ ji}}\).
The parameters that define \(U\) are \((\alpha,\beta,\gamma,\delta,r,s)\), where \(\alpha,\beta,\gamma,\delta\) are \(n\times n\) matrices of bits, and \(r,s\) are \(n\)-bit vectors. With these parameters, one can randomly sample and construct the corresponding circuit. Aaronson and Gottesman previously decomposed the Clifford circuits with \(O(n^{2}/\log n)\) elementary gates, organizing them into an 11-stage computation: \(-H-CX-P-CX-P-CX-H-P-CX-P-CX-\)[54]. Here, CX represents the CNOT operation. A more straightforward and time-efficient algorithm was introduced earlier, performing a form of Gaussian elimination with a runtime of \(O(n^{3})\) and producing a circuit with \(O(n^{2})\) gates [55]. Koenig and Smolin utilized a series of \(O(n)\) symplectic transvections to create a random uniform Clifford operator with a time complexity of \(O(n^{3})\)[53]. Maslov and Roetteler simplified the structure into 7 stages of the decomposition: \(-CX-CZ-P-H-P-CZ-CX-\), where CZ represents the Controlled-Z operation [56]. Van den Berg decomposed the circuit into quantum circuits with a maximum of \(5n+2n^{2}\) elementary gates and a maximum depth of \(O(n\log n)\) on fully connected topologies, with a time complexity of \(O(n^{2})\)[57]. Later, Bravyi and Maslov decomposed \(U\) with a time complexity of \(O(n^{2})\) in the canonical form \(F_{1}HSF_{2}\), where \(S\) is a permutation of qubits, \(F_{1}\) corresponds to the \(-CX-CZ-P-\) part, and \(SF_{2}\) corresponds to the \(-P-CZ-CX-\) part [58].

After obtaining the decomposed circuits \(U_{j}\), the next step involves calculating \(U_{j}^{\dagger}|k\rangle\langle k|U_{j}\) to create the classical snapshot \(\mathcal{M}^{-1}(U_{j}^{\dagger}|k\rangle\langle k|U_{j})\). Multiplying the \(O(n^{2})\) \(n\)-qubit gates to obtain \(U_{j}\) requires substantial computation. As previously analyzed [59], the time complexity to obtain \(U_{j}^{\dagger}|k\rangle\) can be of the order \(O(n\cdot 2^{3n})\). To mitigate this complexity and circumvent the need for circuit decomposition on a particular physical platform, they adopted the strategy of randomly sampling all \(O(2^{n^{2}/2})\) potential projected stabilizer states \(U_{j}^{\dagger}|k\rangle\). This approach reduces the computational complexity to \(O(2^{n}n^{3})\) when determining all \(2^{n}\) coefficients of \(U_{j}^{\dagger}|k\rangle\). In comparison, the sampling process for the \(2^{n}+1\) unitary operations of the MUBs can be more efficient. We sample the index \(j\) from \(\{0,\cdots,2^{n}\}\). If \(j=0\), the sampled circuit is \(U_{0}=I\). Otherwise, we associate with \(j\) a bit vector \((j_{1},\cdots,j_{n})\) with \(j_{i}\in\{0,1\}\): for \(j=1\) the corresponding vector is \((0,\cdots,0)\), and for \(j=2^{n}\) it is \((1,\cdots,1)\). We do not need to calculate the \(4^{n}\) coefficients of \(U_{j}\) to obtain \(U_{j}|k\rangle\). For MUBs constructed with the Galois-Fourier approach [60], these states take the following form: \[U_{j^{\prime}+1}^{\dagger}|k\rangle=\frac{1}{\sqrt{2^{n}}}\sum_{l=0}^{2^{n}-1} (-1)^{k\odot l}\prod_{m_{1},m_{2}=0}^{n-1}(\sqrt{-1})^{j^{\prime}\odot(l_{m_{1 }}2^{m_{1}})\odot(l_{m_{2}}2^{m_{2}})}\,|l\rangle,\quad k,j^{\prime}=0,1,\ldots,2^{n}-1. \tag{31}\] The multiplication operator \(\odot\) is defined through the multiplication of two polynomials in \(\mathbb{F}_{2}[x]\) modulo an irreducible polynomial over \(\mathbb{F}_{2}\). We replace \(k\odot l\) by the inner product of the bit vectors \(k\) and \(l\) to simplify the circuit construction.
Each term of \(U_{j}^{\dagger}|k\rangle\) is of the form \(\frac{\alpha}{\sqrt{2^{n}}}\), where \(\alpha=\pm 1,\pm\sqrt{-1}\). The time complexity to calculate \(U_{j}^{\dagger}|k\rangle\) thus decreases to \(O(2^{n}n^{2})\). The predicted value \(\mathrm{tr}(O\mathcal{M}^{-1}(U_{j}^{\dagger}|k\rangle\langle k|U_{j}))\) is equal to \((2^{n}+1)\mathrm{tr}(OU_{j}^{\dagger}|k\rangle\langle k|U_{j})-\mathrm{tr}(O)\). What's more, consider an observable \(O=|\phi_{t}\rangle\langle\phi_{t}|\), where \(|\phi_{t}\rangle\) is sparse with at most \(k\) nonzero coefficients. The time complexity to compute \(U_{j}|k\rangle\) then decreases exponentially, to \(O(kn^{2})\), as we only need to calculate the corresponding nonzero parts. The circuit structure for the MUBs \(U_{j}\), where \(j=1,\cdots,2^{n}\), exhibits greater uniformity [61]. We can decompose it into three stages of the form \(-CZ-P-H-\). Here, \(H\) represents \(n\) Hadamard gates, one applied to each qubit. The number of \(CZ\) operations is at most \(n(n-1)/2\), with an average of \(n(n-1)/4\). For instance, when \(n=3\), there are a total of \(12\) \(CZ\) gates in the circuits \(U_{1},\cdots,U_{8}\). Fig. 1 illustrates the first four MUB circuits for the 4-qubit system. If we choose Pauli \(X\) measurements at each qubit, the unitary ensemble for MUBs can be simplified to \(-CZ-P-\). In this case, \(CZ\) and \(P\) represent diagonal operations, and the gate sequence can be adjusted as needed.

Figure 1: Circuits of the first four MUBs for the 4-qubit case.

In summary, the sampling process, unitary construction, and post-processing time required to calculate the classical shadow may offer greater efficiency when utilizing random MUB measurements, in comparison to random Clifford measurements.

## 4 Application: predicting quantum fidelity

The ensemble of MUB unitary transformations is built from entangling gates. Thus, unlike the Pauli measurements, it is capable of predicting non-local observables. One of the simplest such observables is the quantum fidelity. In this section, we will use the classical shadows based on random MUB measurements to predict the fidelity. We will take both the unknown state and the target state to be an \(n\)-qubit GHZ state \[|\psi_{\rm GHZ}(n)\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes n}+|1 \rangle^{\otimes n}\right). \tag{32}\] It is straightforward to show that the fidelity between two such pure states is equivalent to the expectation value of the non-local observable \(O_{\rm GHZ}=|\psi_{\rm GHZ}(n)\rangle\langle\psi_{\rm GHZ}(n)|\). Following the algorithm summarized in Section 2, we generate the MUB gates using the method in [61] and perform random MUB measurements on the unknown GHZ state. The measurement outcomes, bit strings of 0's and 1's, are stored in classical memory. With them, one can construct the classical shadows and predict the expectation value of \(O_{\rm GHZ}\) for different qubit numbers \(n\). In the numerical experiments, we perform \(10^{4}\) random MUB measurements on the GHZ state and predict the quantum fidelity using classical shadows. We repeat the experiment ten times independently and plot the results in Figure 2. As we can see, the predicted fidelity is very close to the true value \(1\) with only \(10^{4}\) measurements. Note that the quantum state predicted from classical shadows need not be positive semi-definite, so the predicted fidelity can exceed \(1\). However, as the number of samples increases, the predicted quantity converges to the true value.
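For a rank-1 target observable such as \(O_{\rm GHZ}\), each snapshot contributes the prediction \((2^{n}+1)\,|\langle\psi_{\rm GHZ}|U_{j}^{\dagger}|b\rangle|^{2}-\mathrm{tr}(O_{\rm GHZ})\) with \(\mathrm{tr}(O_{\rm GHZ})=1\). A minimal sketch of this estimator (our own illustration, assuming NumPy; the robust variant would use the median of means as above):

```python
import numpy as np

def ghz_state(n):
    """|GHZ(n)> = (|0...0> + |1...1>) / sqrt(2) as a dense vector."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def fidelity_estimate(outcomes, n):
    """outcomes: list of pairs (U, b) with the sampled MUB unitary U and the
    observed bitstring index b. For the rank-1 observable O = |psi><psi|,
    tr(O M^{-1}(U^dag |b><b| U)) = (2^n + 1) |<psi| U^dag |b>|^2 - 1."""
    psi = ghz_state(n)
    preds = [(2 ** n + 1) * abs(psi.conj() @ U.conj().T[:, b]) ** 2 - 1
             for U, b in outcomes]
    return np.mean(preds)     # median-of-means is the robust alternative
```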
We will also introduce a phase error to the GHZ state with probability \(p\in[0,1]\). The density matrix of this noisy unknown state is \[\rho_{p}=(1-p)|\psi^{+}_{\rm GHZ}(n)\rangle\langle\psi^{+}_{\rm GHZ}(n)|+p|\psi ^{-}_{\rm GHZ}(n)\rangle\langle\psi^{-}_{\rm GHZ}(n)|,\quad|\psi^{\pm}_{\rm GHZ }(n)\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle^{\otimes n}\pm|1\rangle^{\otimes n}\right) \tag{33}\]

Figure 2: Quantum fidelity for the perfect GHZ states estimated using the classical shadows constructed by \(10^{4}\) MUB measurements. The shaded regions are the standard deviation over ten independent runs.

By performing random MUB measurements on this state, we obtain a classical representation of \(\rho_{p}\). Following the procedure in Section 2, we predict the fidelity between this noisy state and a pure GHZ state. Figure 3 shows that the classical shadow predictions decrease as the error probability \(p\) increases. For \(p=1\), the unknown state becomes \(|\psi^{-}_{\text{GHZ}}(n)\rangle\), which is orthogonal to the target state \(|\psi^{+}_{\text{GHZ}}(n)\rangle\), and the predicted fidelity approaches zero as expected. These two numerical experiments show that classical shadows based on MUB measurements are an effective method to predict the fidelity and, more generally, non-local observables of unknown quantum states.

## 5 Conclusion

It is well known that the MUBs form an informationally complete (IC) set with a minimal number of unitary circuits. Given a quantum system with \(n\) qubits, there are only \(2^{n}+1\) MUB circuits. Based on ongoing work [61], all MUB gates can be constructed for an arbitrary number of qubits. In this paper, we study how to use the MUB gates to perform classical shadow tomography. One of the key components of this powerful algorithm is the reconstruction channel, which depends on the choice of the ensemble of unitary transformations. We calculate this reconstruction channel for MUB gates and find that it is identical to the reconstruction channel of Clifford gates. Classical shadow tomography is designed to predict the expectation values of observables from a set of measurements without knowing the underlying quantum state. To analyze the performance of the algorithm, we calculate the standard deviation of the predictions made from the MUB measurements. An upper bound on the variance is found to be \(2\text{tr}(O^{2})\), smaller than the \(3\text{tr}(O^{2})\) bound for Clifford measurements. Thus, to attain the same accuracy, MUB measurements may require fewer samples than Clifford measurements. As an application, we use classical shadow tomography based on the MUB ensemble to predict non-local observables of unknown quantum states. We perform numerical experiments to predict the quantum fidelity for both pure and noisy GHZ states. Our results show that the classical shadows constructed from MUB measurements are very effective in predicting the fidelity. The accuracy and scalability are as good as those of the Clifford ensemble. But, as an informationally complete set with a minimal number of unitaries, the classical shadow tomography scheme based on the MUB matrices needs far fewer unitary transformations, is expected to require fewer measurements, and is thus in general much easier to implement than the Clifford scheme.

Figure 3: Estimated quantum fidelity of a noisy GHZ state and a perfect GHZ state. The noise is introduced by Z-errors with probability \(p\in[0,1]\). The classical shadow is constructed from 5000 MUB measurements.

MUB circuits, a subset of all Clifford circuits, exhibit a structured decomposition into three-stage arrangements, exemplified by \(-CZ-P-H-\).
The MUB states enable us to perform more efficient classical shadow calculations via random MUB measurements. In conclusion, our research highlights the potential of MUB-based classical shadow tomography as a more efficient alternative to random Clifford measurements, offering a streamlined and practical approach to prediction tasks in the quantum realm. **Acknowledgements--** This work was supported by the National Natural Science Foundation of China under Grants No. 62001260 (Y.W.), and the Beijing Natural Science Foundation under Grant No. Z220002 (Y.W.).
2301.07128
Quantum gradient evaluation through quantum non-demolition measurements
We discuss a Quantum Non-Demolition Measurement (QNDM) protocol to estimate the derivatives of a cost function with a quantum computer. The cost function, which is supposed to be classically hard to evaluate, is associated with the average value of a quantum operator. Then a quantum computer is used to efficiently extract information about the function and its derivative by evolving the system with a so-called variational quantum circuit. To this aim, we propose to use a quantum detector that allows us to directly estimate the derivatives of an observable, i.e., the derivative of the cost function. With respect to the standard direct measurement approach, this leads to a reduction of the number of circuit iterations needed to run the variational quantum circuits. The advantage increases if we want to estimate the higher-order derivatives. We also show that the presented approach can lead to a further advantage in terms of the number of total logical gates needed to run the variational quantum circuits. These results make the QNDM a valuable alternative to implementing the variational quantum circuits.
Paolo Solinas, Simone Caletti, Giovanni Minuto
2023-01-17T19:00:08Z
http://arxiv.org/abs/2301.07128v2
# Quantum gradient evaluation through quantum non-demolition measurements ###### Abstract We discuss a Quantum Non-Demolition Measurement (QNDM) protocol to estimate the derivatives of a cost function with a quantum computer. The cost function, which is supposed to be classically hard to evaluate, is associated with the average value of a quantum operator. Then a quantum computer is used to efficiently extract information about the function and its derivative by evolving the system with a so-called variational quantum circuit. To this aim, we propose to use a quantum detector that allows us to directly estimate the derivatives of an observable, i.e., the derivative of the cost function. With respect to the standard direct measurement approach, this leads to a reduction of the number of circuit iterations needed to run the variational quantum circuits. The advantage increases if we want to estimate the higher-order derivatives. We also show that the presented approach can lead to a further advantage in terms of the number of total logical gates needed to run the variational quantum circuits. These results make the QNDM a valuable alternative to implementing the variational quantum circuits. ## I Introduction The advent of all-purpose quantum computers able to solve hard computational problems is still decades away. However, it is commonly believed that some problems intractable with a classical computer could be within reach of today's Noisy Intermediate-Scale Quantum (NISQ) computers. The problems for which a quantum advantage can be reached in the next years are the ones that require a large working space such as the simulation of complex physical and chemical systems. The most promising architecture is a hybrid quantum-classical one. Among these hybrid algorithms the most relevant ones are the variational quantum circuit, the variational quantum eigensolver [1; 2] and the quantum approximate optimization algorithm [3]. The computational scheme is the following. Given a certain cost function in a large parameter space to be minimized, we associate it with a Hermitian operator. We run a quantum circuit to evaluate the average values of the Hermitian operator, i.e., the value of the cost function, in a specified point of the parameter space. Since the quantum measurements are probabilistic, we need to iterate the process to reach the desired accuracy. This information is then fed into a classical computer which elaborates it and determines the following steps of the quantum computer. Typically, classical computation consists in an optimization algorithm for which we need information about the derivatives of the cost functions. Therefore, in these schemes, the main quantum computational task is to obtain the derivatives of the cost function by measuring quantum observables with accuracy and minimal cost. From the quantum perspective, the main resource costs come from the iterations needed to have an accurate average and the logical gates needed to run the quantum circuit at every iteration. Because of the limitation in quantum hardware and quantum operations, any reduction of the cost can bring us closer to obtaining a quantum advantage in problems of practical interest. While the usual proposals rely on the direct measurement (DM) of a quantum observable to extract information about the cost function [4; 5; 6], here, we discuss an alternative method to estimate directly the derivatives of the observable. 
A quantum detector is coupled in sequence with the system from which we want to extract the information. The information about the observable, its gradient, or the higher derivatives is stored in the detector phase, which is eventually measured. This technique is often called Quantum Non-Demolition Measurement (QNDM) [7] or full counting statistics [8], and it is rooted in the idea of weak values and weak measurements [9]. The potential advantages of this approach lie in the fact that, with this unconventional measurement, we can directly estimate the average of the _variation_, i.e., the gradient, of quantum observables, which cannot be obtained with a direct measurement. Indeed, the same approach has been used to estimate the variation of charge [8] and energy [10; 11; 12; 13] in quantum systems. Since these are related to observables at different times, they are not associated with any Hermitian operator [14], and the measurement process suffers from conceptual and practical subtleties [10; 13]. In this paper, we reverse the process and identify the QNDM as the ideal approach to extract information about the derivative of an observable. A similar approach was presented in Refs. [15; 16]. Here, we discuss it in a more general framework, extend its applications to the estimation of higher-order derivatives, and discuss the main advantages with respect to the direct measurement approach. As we show below, the QNDM needs fewer resources (in terms of iterations and logical operators) than the DM approach. The advantages increase with the order of the derivatives to be estimated. The paper is organized as follows. In Sec. II, we introduce the quantities to be estimated. In Sec. III, we show how the gradient of an observable can be measured with the QNDM approach and, in Sec. IV, we give an explicit example of the advantages of the QNDM approach when the cost function is related to a complex operator. In Sec. V, we extend our framework to the estimation of second and higher-order derivatives, and in Sec. VI we discuss the advantages in terms of resources needed. Finally, Sec. VII contains the conclusions.
2303.11133
Sturmian and infinitely desubstitutable words accepted by an ω-automaton
Given an $\omega$-automaton and a set of substitutions, we look at which accepted words can also be defined through these substitutions, and in particular if there is at least one. We introduce a method using desubstitution of $\omega$-automata to describe the structure of preimages of accepted words under arbitrary sequences of homomorphisms: this takes the form of a meta-$\omega$-automaton. We decide the existence of an accepted purely substitutive word, as well as the existence of an accepted fixed point. In the case of multiple substitutions (non-erasing homomorphisms), we decide the existence of an accepted infinitely desubstitutable word, with possibly some constraints on the sequence of substitutions (e.g. Sturmian words or Arnoux-Rauzy words). As an application, we decide when a set of finite words codes e.g. a Sturmian word. As another application, we also show that if an $\omega$-automaton accepts a Sturmian word, it accepts the image of the full shift under some Sturmian morphism.
Pierre Béaur, Benjamin Hellouin de Menibus
2023-03-20T14:10:13Z
http://arxiv.org/abs/2303.11133v2
# Sturmian and infinitely desubstitutable words accepted by an \(\omega\)-automaton ###### Abstract Given an \(\omega\)-automaton and a set of substitutions, we look at which accepted words can also be defined through these substitutions, and in particular if there is at least one. We introduce a method using desubstitution of \(\omega\)-automata to describe the structure of preimages of accepted words under arbitrary sequences of homomorphisms: this takes the form of a meta-\(\omega\)-automaton. We decide the existence of an accepted purely substitutive word, as well as the existence of an accepted fixed point. In the case of multiple substitutions (non-erasing homomorphisms), we decide the existence of an accepted infinitely desubstitutable word, with possibly some constraints on the sequence of substitutions (_e.g._ Sturmian words or Arnoux-Rauzy words). As an application, we decide when a set of finite words codes _e.g._ a Sturmian word. As another application, we also show that if an \(\omega\)-automaton accepts a Sturmian word, it accepts the image of the full shift under some Sturmian morphism. Keywords: Substitutions, \(\omega\)-automata, Sturmian words, decidability. ## 1 Introduction One-dimensional symbolic dynamics is the study of infinite words and their associated dynamical structures, and is linked with combinatorics on words. Two classical methods to generate words are the following: on the one hand, a sofic shift is the set of labels of infinite walks on a labeled graph (which can be considered as an \(\omega\)-automaton) [9]; on the other hand, the substitutive approach consists in iterating a word homomorphism on an initial letter. The latter method was introduced by Axel Thue as a way to create counterexamples to conjectures in combinatorics on words [3]. These two constructions usually build words and languages which are of a very different nature. On the one hand, substitutive words tend to have a self-similar structure, and are used to generate minimal aperiodic subshifts; on the other hand, sofic shifts always contain ultimately periodic words and cannot be minimal if they contain a non-periodic word. We aim at deciding when a given \(\omega\)-automaton accepts a word with a given substitutive structure, and study the properties of sets of such accepted words. Carton and Thomas provided a method to decide this question in the case of substitutive or morphic words on Buchi \(\omega\)-automata, using verification theory and semigroups of congruence [5]. This result was partially reproved by Salo [15], using a more combinatorial point of view. For the last 20 years, the substitutive approach (iterating a single homomorphism) has been generalized to the S-adic approach [6] that lets one alternate between multiple substitutions. This more general framework lets us describe other natural classes, such as the family of Sturmian words. In this paper, we develop a new method based on desubstitutions of \(\omega\)-automata. We can express the preimages of an \(\omega\)-automaton by any sequence of substitutions through a meta-\(\omega\)-automaton, whose vertices are \(\omega\)-automata and whose edges are labeled by substitutions. We use this meta-\(\omega\)-automaton to decide whether an \(\omega\)-automaton accepts a purely substitutive word (giving an alternative proof of [5]), or a fixed point of a substitution, or a morphic word, or an infinitely desubstitutable word (by a set of substitutions).
The method is flexible enough to enforce additional constraints on the directive sequences of substitutions, which is powerful enough for example to decide whether an \(\omega\)-automaton accepts a Sturmian word. A consequence is the decidability of whether a given set of finite words codes some Sturmian word (or from any family of words with an \(S\)-adic characterization). We also describe the set of directive sequences of words accepted by some \(\omega\)-automaton, which is an \(\omega\)-regular set. The meta-\(\omega\)-automaton also provides a more combinatorial insight on how Sturmian words and \(\omega\)-regular languages interact: namely, that an \(\omega\)-automaton accepts a Sturmian word if, and only if, it accepts the image of the full shift under a Sturmian morphism. ## 2 Definitions ### Words and \(\omega\)-automata An alphabet \(\mathcal{A}\) is a finite set of symbols. The set of finite words on \(\mathcal{A}\) is denoted as \(\mathcal{A}^{\ast}\), and contains the empty word. A (mono)infinite word is an element of \(\mathcal{A}^{\mathbb{N}}\). It is usual to write \(x=x_{0}x_{1}x_{2}x_{3}\dots\) where \(x_{i}=x(i)\in\mathcal{A}\). If \(x\) is a word, \(|x|\) is the length of the word (if \(x\) is infinite, then \(|x|=\infty\)). For a word \(x\) and \(0\leqslant j\leqslant k<|x|\), \(x_{[j,k]}\) is the word \(x_{j}x_{j+1}x_{j+2}\dots x_{k-1}x_{k}\). We denote \(w\sqsubseteq_{p}x\) when \(w\) is a prefix of \(x\), that is, \(w=x_{[0,k]}\). It is possible to endow \(\mathcal{A}^{\mathbb{N}}\) with a topology, called _the prodiscrete topology_. The prodiscrete topology is defined by the clopen basis \([w]_{n}=\{x\in\mathcal{A}^{\mathbb{N}}\mid x_{n}x_{n+1}\dots x_{n+|w|-1}=w\}\) for \(w\in\mathcal{A}^{\ast}\). To this topology, we can add a dynamic through the shift operator \(S\): \[S:\left(\begin{array}{cc}\mathcal{A}^{\mathbb{N}}&\rightarrow&\mathcal{A}^ {\mathbb{N}}\\ x=x_{0}x_{1}x_{2}x_{3}\dots&\mapsto S(x)=x_{1}x_{2}x_{3}x_{4}\dots\end{array}\right)\] A set \(X\subseteq\mathcal{A}^{\mathbb{N}}\) is called _a shift (space)_ if it is stable by \(S\) and closed for the prodiscrete topology. In particular, \(\mathcal{A}^{\mathbb{N}}\) is a shift space, called _the full shift (space)_. We now introduce the main computational model of this paper: \(\omega\)-automata. Definition 1 (\(\omega\)-automaton): An \(\omega\)-automaton \(\mathfrak{A}\) is a tuple \((\mathcal{A},Q,I,T)\), where \(\mathcal{A}\) is an alphabet, \(Q\) is a finite set of states, \(I\subseteq Q\) is the set of initial states, and \(T\subseteq Q\times\mathcal{A}\times Q\) is the set of transitions of \(\mathfrak{A}\). We extend several classical notions from finite automata. We write transitions as \(q_{s}\xrightarrow{a}q_{t}\in T\). Definition 2 (Computations and walks): For \(n\geq 1\) or \(n=\infty\), a sequence \((q_{k})_{0\leq k\leq n}\) with \(q_{k}\in Q\) is a _walk_ in \(\mathfrak{A}\) if there is \((a_{k})_{1\leq k\leq n}\subseteq\mathcal{A}\) such that for all \(0\leq k\leq n-1\), \(q_{k}\xrightarrow{a_{k+1}}q_{k+1}\in T\). We then write \(q_{0}\xrightarrow{a_{1}}q_{1}\xrightarrow{a_{2}}q_{2}\xrightarrow{a_{3}} \cdots\xrightarrow{a_{n}}q_{n}\). The word \(w=(a_{k})_{1\leq k\leq n}\) _labels_ the walk, and we call a labeled walk a _computation_. If the computation begins with an initial state, \(w\) is _accepted_ by \(\mathfrak{A}\). In the literature, \(\omega\)-automata usually have an acceptance condition (such as the Buchi condition [17]).
In this paper, we will consider \(\omega\)-automata to have the largest acceptance condition: every walk beginning with an initial state is accepting. This is a weaker model than Buchi \(\omega\)-automata. Definition 3 (Language of an \(\omega\)-automaton): Let \(\mathfrak{A}\) be an \(\omega\)-automaton. The language of **finite** words of \(\mathfrak{A}\) is \(\mathcal{L}_{F}(\mathfrak{A})=\{w\in\mathcal{A}^{*}\ |\ w\text{ is accepted by }\mathfrak{A}\}\). The language of **infinite** words of \(\mathfrak{A}\) is \(\mathcal{L}_{\infty}(\mathfrak{A})=\{w\in\mathcal{A}^{\mathbb{N}}\ |\ w\text{ is accepted by }\mathfrak{A}\}\). Then, the language of \(\mathfrak{A}\) is \(\mathcal{L}(\mathfrak{A})=\mathcal{L}_{F}(\mathfrak{A})\cup\mathcal{L}_{ \infty}(\mathfrak{A})\). If all states of \(\mathfrak{A}\) are initial (\(I=Q\)), its language of infinite words is a shift, called a _sofic shift_[9]. ### Substitutions Definition 4 (Homomorphisms and substitutions): A homomorphism is a function \(\sigma:\mathcal{A}^{*}\rightarrow\mathcal{A}^{*}\) such that \(\sigma(uv)=\sigma(u)\sigma(v)\) (concatenation) for all \(u,v\in\mathcal{A}^{*}\). The homomorphism \(\sigma\) is extended to \(\mathcal{A}^{\mathbb{N}}\rightarrow\mathcal{A}^{\mathbb{N}}\) by \(\sigma(x_{0}x_{1}x_{2}\dots)=\sigma(x_{0})\sigma(x_{1})\sigma(x_{2})\dots\) A substitution is a _nonerasing_ homomorphism, that is, \(\sigma(a)\neq\varepsilon\) for all letters \(a\in\mathcal{A}\). Definition 5 (Fixed points, purely substitutive, substitutive and morphic words): Let \(\sigma,\tau:\mathcal{A}^{\mathbb{N}}\rightarrow\mathcal{A}^{\mathbb{N}}\) be two homomorphisms. An infinite word \(x\in\mathcal{A}^{\mathbb{N}}\) is: * a _fixed point_ of \(\sigma\) if \(\sigma(x)=x\); * a _purely substitutive word_ generated by \(\sigma\) if there is a letter \(a\in\mathcal{A}\) such that \(x=\lim\limits_{n\rightarrow\infty}\sigma^{n}(a)\), where the limit is well-defined; * a _morphic word_ generated by \(\sigma\) and \(\tau\) if \(x=\tau(y)\), where \(y\) is a purely substitutive word generated by \(\sigma\); * a _substitutive word_ generated by \(\sigma\) if \(x\) is a morphic word generated by \(\sigma\) and a coding \(\tau\), i.e. \(\tau(\mathcal{A})\subseteq\mathcal{A}\). It is now possible to extend these definitions to the case where we use multiple homomorphisms. However, most of the literature revolves around the use of multiple non-erasing homomorphisms (substitutions), and we will stick to this case. Let \((\sigma_{n})_{n\in\mathbb{N}}\) be a sequence of substitutions. The equivalent of a fixed-point of one homomorphism is an _infinitely desubstituted word_ by a sequence of substitutions: Definition 6 (Infinitely desubstituted words and directive sequences): Let \(\mathcal{S}\) be a finite set of substitutions on a single alphabet \(\mathcal{A}\), and let \((\sigma_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}\). An infinite word \(x\) is _infinitely desubstituted_ by \((\sigma_{n})_{n\in\mathbb{N}}\) (called a _directive sequence_ of \(x\)) if, and only if, there exists a sequence of infinite words \((x_{n})_{n\in\mathbb{N}}\) such that \(x_{0}=x\) and \(x_{n}=\sigma_{n}(x_{n+1})\). An infinite word \(x\) is infinitely desubstituted by \(\mathcal{S}\) if \(x\) is infinitely desubstituted by some directive sequence \((\sigma_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}\). Just like for words, we write \(\sigma_{[\![i,j]\!]}=\sigma_{i}\circ\sigma_{i+1}\circ\cdots\circ\sigma_{j}\).
Then, by compactness of \(\mathcal{A}^{\mathbb{N}}\), \(x\) is infinitely desubstitutable by \((\sigma_{n})_{n\in\mathbb{N}}\) if, and only if, there is a sequence of infinite words \((x_{n})_{n\in\mathbb{N}}\) such that \(x=\sigma_{[\![0,n]\!]}(x_{n+1})\) for all \(n\geq 0\). ## 3 Finding substitutive and infinitely desubstituted words in \(\omega\)-automata ### Desubstituting \(\omega\)-automata In this section, we explain our main technical tool: an effective transformation of \(\omega\)-automata, called desubstitution. We define it for the broad case of possibly erasing homomorphisms. Definition 7 (Desubstitution of an \(\omega\)-automaton): Let \(\mathfrak{A}=(\mathcal{A},Q,I,T)\) be an \(\omega\)-automaton, and \(\sigma\) a homomorphism. We define \(\sigma^{-1}(\mathfrak{A})\) as the \(\omega\)-automaton \((\mathcal{A},Q,I,T^{\prime})\) where, for all \(q_{1},q_{2}\in Q\) and \(a\in\mathcal{A}\), \(q_{1}\xrightarrow{a}q_{2}\in T^{\prime}\) iff \(q_{1}\xrightarrow{\sigma(a)}*q_{2}\) is a computation in \(\mathfrak{A}\). In particular, in this case, we consider that \(q\xrightarrow{\varepsilon}q\) is a computation. Thus, if \(\sigma(a)=\varepsilon\), the desubstituted automaton \(\sigma^{-1}(\mathfrak{A})\) has a loop labeled by \(a\) on every state. For example, consider the following \(\omega\)-automaton \(\mathfrak{A}\) and substitution \(\sigma\) (Figure 1(a,b)). We build the \(\omega\)-automaton \(\sigma^{-1}(\mathfrak{A})\). Start from an empty automaton on the same set of states. For every computation in \(\mathfrak{A}\) labeled by \(01=\sigma(0)\) -- say, \(q\xrightarrow{\sigma(0)}*r\) -- add an edge \(q\xrightarrow{0}r\) to the automaton (Figure 1(c)). To conclude, do this with \(\sigma(1)=0\) (Figure 1(d)). Stability by inverse morphism is a classical concept in the theory of finite automata [8], and desubstitution satisfies the following property: Proposition 1: _An infinite word \(u\) is accepted by \(\sigma^{-1}(\mathfrak{A})\) if and only if \(\sigma(u)\) is accepted by \(\mathfrak{A}\). In other words, \(\mathcal{L}_{\infty}(\sigma^{-1}(\mathfrak{A}))=\sigma^{-1}(\mathcal{L}_{ \infty}(\mathfrak{A}))\)._ Proof: Let \(u\) be accepted by \(\sigma^{-1}(\mathfrak{A})\). Consider the associated accepting walk \((q_{i})_{i\in\mathbb{N}}\). By definition of \(\sigma^{-1}(\mathfrak{A})\), for every \(i\in\mathbb{N}\), there exists a computation \(q_{i}\xrightarrow{\sigma(u_{i})}*q_{i+1}\) in \(\mathfrak{A}\). By concatenating these computations, we get an infinite computation \(q_{0}\xrightarrow{\sigma(u_{0})}*q_{1}\xrightarrow{\sigma(u_{1})}*q_{2} \xrightarrow{\sigma(u_{2})}*\cdots\) in \(\mathfrak{A}\) that accepts \(\sigma(u)\) in \(\mathfrak{A}\). Conversely, suppose there is a word of the form \(\sigma(u)\) accepted by \(\mathfrak{A}\). Consider the states \((q_{i})_{i\in\mathbb{N}}\) obtained after reading each block \(\sigma(u_{i})\). This defines an accepting computation labeled by \(u\) in \(\sigma^{-1}(\mathfrak{A})\). This proof actually provides a similar result for finite words: Proposition 2: _Let \(w\) be a finite word, \(\mathfrak{A}\) an \(\omega\)-automaton and \(\sigma\) a homomorphism. 
Then \(q_{s}\xrightarrow{\sigma(w)}*q_{t}\) is a computation in \(\mathfrak{A}\) iff \(q_{s}\xrightarrow{w}*q_{t}\) is a computation in \(\sigma^{-1}(\mathfrak{A})\)._ An easy but significant property is the composition of desubstitution of \(\omega\)-automata: Proposition 3: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton, and \(\sigma\) and \(\tau\) be two homomorphisms. Then, \((\sigma\circ\tau)^{-1}(\mathfrak{A})=\tau^{-1}(\sigma^{-1}(\mathfrak{A}))\)._ Proof: These two \(\omega\)-automata share the same sets of states and of initial states. We prove that they have the same transitions. We have indeed: \[q_{s}\xrightarrow{a}q_{t}\text{ in }(\sigma\circ\tau)^{-1}( \mathfrak{A}) \iff q_{s}\xrightarrow{\sigma\circ\tau(a)}*q_{t}\text{ in }\mathfrak{A}\] \[\iff q_{s}\xrightarrow{\tau(a)}*q_{t}\text{ in }\sigma^{-1}( \mathfrak{A})\text{, by Proposition 2}\] \[\iff q_{s}\xrightarrow{a}q_{t}\text{ in }\tau^{-1}(\sigma^{-1}( \mathfrak{A}))\text{, by Proposition 2 again.}\] ### The problem of the purely substitutive walk We underline the following property of desubstitutions of \(\omega\)-automata: **Fact 1**: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton, let \(\mathfrak{S}(\mathfrak{A})\) be the set of all \(\omega\)-automata which have the same alphabet, the same set of states and the same initial states as \(\mathfrak{A}\). For any homomorphism \(\sigma\) on \(\mathcal{A}\), \(\sigma^{-1}(\mathfrak{A})\) is an element of \(\mathfrak{S}(\mathfrak{A})\)._

Figure 1: Desubstitution of the \(\omega\)-automaton \(\mathfrak{A}\) by \(\sigma\)

The crucial point is that \(\mathfrak{S}(\mathfrak{A})\) is finite: given \(\mathfrak{A}=(\mathcal{A},Q,I,T)\), an element of \(\mathfrak{S}(\mathfrak{A})\) is identified by its transitions, which form a subset of \((Q\times\mathcal{A}\times Q)\), so \(\mathrm{Card}(\mathfrak{S}(\mathfrak{A}))=2^{|Q|^{2}\times|\mathcal{A}|}\). We could work on a subset of \(\mathfrak{S}(\mathfrak{A})\) by identifying \(\omega\)-automata with the same language [2], but finiteness is sufficient for our results. Given \(\mathfrak{A}\) an \(\omega\)-automaton and \(\sigma\) a homomorphism, \(\sigma^{-1}\) defines a dynamic on the finite set \(\mathfrak{S}(\mathfrak{A})\). By the pigeonhole principle: **Fact 2**: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton, and \(\sigma\) be a homomorphism. Then there exist \(n<m\leq|\mathfrak{S}(\mathfrak{A})|+1\) such that \(\sigma^{-n}(\mathfrak{A})=\sigma^{-m}(\mathfrak{A})\)._ In the remainder of the section, we prove that, given an \(\omega\)-automaton \(\mathfrak{A}\) and a substitution \(\sigma\), the problems of finding a fixed point of \(\sigma\) or a purely substitutive word generated by \(\sigma\) accepted by \(\mathfrak{A}\) are decidable. A purely substitutive word generated by an erasing homomorphism \(\sigma\) is also generated by a non-erasing homomorphism \(\tau\) (that is, a substitution) that can be effectively constructed: remove every erased letter from \(\mathcal{A}\) and from the images of \(\sigma\), and repeat the process. Thus, we assume \(\sigma\) itself is a substitution. Proposition 4: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton, let \(\sigma\) be a substitution and let \(n<m\leq|\mathfrak{S}(\mathfrak{A})|+1\) such that \(\sigma^{-n}(\mathfrak{A})=\sigma^{-m}(\mathfrak{A})\). 
Then, \(\mathfrak{A}\) accepts a fixed point for \(\sigma^{k}\) for some \(k\geq 1\) iff \(\mathcal{L}_{\infty}(\sigma^{-n}(\mathfrak{A}))\) is nonempty._ Proof: If \(\mathcal{L}_{\infty}(\sigma^{-n}(\mathfrak{A}))\) is empty, then, by Propositions 1 and 3, \(\mathcal{L}_{\infty}(\sigma^{-p}(\mathfrak{A}))\) is empty for every \(p\geq n\). Let \(k\geq 1\): if there were a fixed point \(x\) for \(\sigma^{k}\) accepted by \(\mathfrak{A}\), we would have \(x=\sigma^{k}(x)=\sigma^{kn}(x)\) by iterating. So \(x\) would be in \(\mathcal{L}_{\infty}(\sigma^{-kn}(\mathfrak{A}))\) which is empty. By contradiction, there is no fixed point for any \(\sigma^{k}\). If \(\mathcal{L}_{\infty}(\sigma^{-n}(\mathfrak{A}))\) is nonempty, let \(x\) be a word accepted by \(\sigma^{-n}(\mathfrak{A})\). Again by Propositions 1 and 3, because \(\sigma^{-n}(\mathfrak{A})=\sigma^{-m}(\mathfrak{A})=\sigma^{-(m-n)}(\sigma^{- n}(\mathfrak{A}))\), \(x\) is accepted by \(\sigma^{-j(m-n)}(\sigma^{-n}(\mathfrak{A}))=\sigma^{-(n+j(m-n))}(\mathfrak{A})\) for all \(j\in\mathbb{N}\). This means that \(\sigma^{n+j(m-n)}(x)\) is accepted by \(\mathfrak{A}\) for all \(j\in\mathbb{N}\). Consider an adherence value \(\tilde{x}\) of the sequence \((\sigma^{n+j(m-n)}(x))_{j\in\mathbb{N}}\). By compactness of the language of an \(\omega\)-automaton, \(\tilde{x}\in\mathcal{L}_{\infty}(\sigma^{-n}(\mathfrak{A}))\). We define \(Q_{\sigma^{m-n}}\subseteq\mathcal{A}\) the set of quiet letters for \(\sigma^{m-n}\): \(a\in Q_{\sigma^{m-n}}\) if \(|\sigma^{j(m-n)}(a)|=1\) for all \(j\geq 1\). Let \(k=\inf\{i\in\mathbb{N}\mid\tilde{x}_{i}\notin Q_{\sigma^{m-n}}\}\) (\(k\) may be infinite). Then, for \(i<k\), because \(\sigma\) is a (nonerasing) substitution and every letter in \(\tilde{x}_{\llbracket 0,k-1\rrbracket}\) is quiet, \(\sigma^{(m-n)}(\tilde{x})_{i}=\sigma^{(m-n)}(\tilde{x}_{i})\). In addition, because \(\tilde{x}\) is an adherence value of \((\sigma^{n+j(m-n)}(x))_{j\in\mathbb{N}}\), there is \(r(\tilde{x}_{i})\geq 1\) such that \(\sigma^{r(\tilde{x}_{i})\cdot(m-n)}(\tilde{x}_{i})=\tilde{x}_{i}\) for every position \(i<k\). Since \(\mathcal{A}\) is finite, \((r(\tilde{x}_{i}))_{0\leq i<k}\) contains only finitely many values, so we can define \(r=\mathrm{lcm}\{r(\tilde{x}_{i})\}\). When \(k<\infty\), there exists \(q\geq 1\) such that \(|\sigma^{q(m-n)}(\tilde{x}_{k})|>1\) and \(\tilde{x}_{k}\sqsubseteq_{p}\sigma^{q(m-n)}(\tilde{x}_{k})\), for the same reason that \(\tilde{x}\) is an adherence value of \((\sigma^{n+j(m-n)}(x))_{j\in\mathbb{N}}\). If \(k=\infty\), we set \(q=1\). Then, by concatenation, \(\tilde{x}_{\llbracket 0,k\rrbracket}\sqsubseteq_{p}\sigma^{rq(m-n)}(\tilde{x}_{ \llbracket 0,k\rrbracket})\). Thus, \((\sigma^{jrq(m-n)}(\tilde{x}))_{j\in\mathbb{N}}\) has a limit, which is a fixed point for \(\sigma^{rq(m-n)}\), and by compactness of \(\mathcal{L}_{\infty}(\mathfrak{A})\), is accepted by \(\mathfrak{A}\). Because the emptiness of the language of an \(\omega\)-automaton is decidable: Corollary 1: _The following problem is decidable:_ **Input:** _An \(\omega\)-automaton_ \(\mathfrak{A}\) _and a substitution_ \(\sigma\)__ **Question:** _Does \(\mathfrak{A}\) accept a fixed point of \(\sigma^{k}\) for some \(k\)?_ As is, this method alone cannot determine, for instance, whether \(\mathfrak{A}\) accepts a fixed point for \(\sigma\) itself (without power). This problem is still decidable, as we show later in Proposition 6 with a refinement of this method. 
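To make Definition 7 and the emptiness test behind Proposition 4 concrete, here is a minimal Python sketch (our own illustration; the encoding of an \(\omega\)-automaton as a set of transition triples and all function names are ours). `desubstitute` computes the transitions of \(\sigma^{-1}(\mathfrak{A})\), and `live_states` computes the states from which an infinite computation exists, so that \(\mathcal{L}_{\infty}\) is nonempty iff some initial state survives the pruning:

```python
from itertools import product

def desubstitute(states, alphabet, T, sigma):
    """Transitions of sigma^{-1}(A) (Definition 7): add (q, a, r) whenever
    some computation labelled sigma(a) leads from q to r in A. Here T is a
    set of triples (q, a, r) and sigma maps each letter to a finite word."""
    def reachable(frontier, word):
        for letter in word:          # empty image: frontier stays {q}
            frontier = {r for q in frontier
                        for (p, a, r) in T if p == q and a == letter}
        return frontier

    return {(q, a, r)
            for q, a in product(states, alphabet)
            for r in reachable({q}, sigma[a])}

def live_states(states, T):
    """States admitting an infinite outgoing computation: repeatedly prune
    states with no outgoing transition; the survivors can reach a cycle."""
    alive, changed = set(states), True
    while changed:
        changed = False
        for q in list(alive):
            if not any(p == q and r in alive for (p, a, r) in T):
                alive.discard(q)
                changed = True
    return alive
```

Iterating `desubstitute` with a fixed \(\sigma\) must eventually repeat a transition set (Fact 2), which is exactly the cycle detection used in Proposition 4 and Corollary 1.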
In the appendix, we provide examples where \(\mathfrak{A}\) accepts fixed points for some \(\sigma^{k}\) where \(k\) does not correspond to \(m-n\), with \(n<m\) the minimal powers such that \(\sigma^{-n}(\mathfrak{A})=\sigma^{-m}(\mathfrak{A})\). Now, we come back to purely substitutive words. A purely substitutive word generated by \(\sigma\) is also a fixed point for some \(\sigma^{k}\) (in fact, it is a fixed point for every \(\sigma^{j}\) with \(j\geq 1\)). Proposition 5: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton, \(\sigma\) a substitution and \(n<m\leq|\mathfrak{S}(\mathfrak{A})|+1\) such that \(\sigma^{-n}(\mathfrak{A})=\sigma^{-m}(\mathfrak{A})\). Let \(RP_{\sigma}\subseteq\mathcal{A}\) be the set of letters \(b\) that are right-prolongable for \(\sigma\), i.e. \(b\sqsubseteq_{p}\sigma(b)\) and \(b\neq\sigma(b)\). Then, \(\mathfrak{A}\) accepts a purely substitutive word generated by \(\sigma\) iff \(\sigma^{-n}(\mathfrak{A})\) accepts an infinite word beginning with an element of \(RP_{\sigma}\)._ Proof: If \(\mathfrak{A}\) accepts a purely substitutive word \(u\) generated by \(\sigma\), then \(u=\lim_{j\to\infty}\sigma^{j}(b)\) begins with an element of \(RP_{\sigma}\). Since \(\sigma(u)=u\), \(\sigma^{n}(u)\) is accepted by \(\mathfrak{A}\), so \(u\) is accepted by \(\sigma^{-n}(\mathfrak{A})\). Conversely, suppose that \(\sigma^{-n}(\mathfrak{A})\) accepts an infinite word beginning with \(b\in RP_{\sigma}\). Then, \(\sigma^{m-n}(b)\) labels an accepting computation on \(\sigma^{-m}(\mathfrak{A})=\sigma^{-n}(\mathfrak{A})\). By iteration, for every \(k\geq 1\), we have that \(\sigma^{k(m-n)}(b)\) labels an accepting computation on \(\sigma^{-n}(\mathfrak{A})\), so \(\sigma^{n+k(m-n)}(b)\) always labels an accepting computation on \(\mathfrak{A}\). By compactness, \(u=\lim_{k\to\infty}\sigma^{n+k(m-n)}(b)\) is accepted by \(\mathfrak{A}\). Now, because \(b\in RP_{\sigma}\), the word \(\lim_{j\to\infty}\sigma^{j}(b)\) is defined and equal to \(u\). Therefore \(u\), the purely substitutive word generated by \(\sigma\) on the letter \(b\), is accepted by \(\mathfrak{A}\). The following result already appeared in [15], but an erratum clarified that some cases were not covered [16]. It is a parallel to a result in [5]. Our proof is essentially the same, but writing the proof through the lens of desubstitution makes it easier to extend the result to other decision problems. Corollary 2: _The problem of the purely substitutive walk is decidable:_ **Input:** _an \(\omega\)-automaton \(\mathfrak{A}\), a homomorphism \(\sigma\)._ **Question:** _Does \(\mathfrak{A}\) accept some purely substitutive word generated by \(\sigma\)?_ This result extends to morphic words: to find a morphic word generated by \(\sigma\) and \(\tau\) accepted by \(\mathfrak{A}\), find a purely substitutive word generated by \(\sigma\) accepted by \(\tau^{-1}(\mathfrak{A})\). We now extend the method used to prove Proposition 5 to solve the question of finding a pure fixed point for a substitution \(\sigma\) in an \(\omega\)-automaton. This improves on Proposition 4, where we only found a fixed point for some power of \(\sigma\). 
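The test of Proposition 5 can be sketched directly on top of the previous snippet (again a sketch under our encoding, with `initial` the set of initial states and `sigma` a map from letters to words):

```python
def accepts_purely_substitutive(states, alphabet, T, initial, sigma):
    """Decision sketch for Proposition 5 / Corollary 2: iterate
    desubstitution until the transition set repeats (Fact 2), then check
    whether the repeated automaton sigma^{-n}(A) accepts an infinite word
    starting with a right-prolongable letter."""
    seen, current = set(), frozenset(T)
    while current not in seen:
        seen.add(current)
        current = frozenset(desubstitute(states, alphabet, current, sigma))
    # 'current' is now sigma^{-n}(A) for the repeating exponent n.
    RP = {a for a in alphabet if sigma[a].startswith(a) and sigma[a] != a}
    live = live_states(states, current)
    return any(q in initial and a in RP and r in live
               for (q, a, r) in current)
```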
Proposition 6: _The problem of the fixed point walk is decidable:_ **Input:** _an \(\omega\)-automaton \(\mathfrak{A}\), a substitution \(\sigma\)._ **Question:** _Does \(\mathfrak{A}\) accept a fixed point of \(\sigma\)?_ Proof: Let \(x\) be a fixed point of \(\sigma\) and define \(FP_{\sigma}=\{b\in\mathcal{A}\ |\ \sigma(b)=b\}\) as the set of letters which are fixed points under \(\sigma\). There are two cases: 1. \(x\) is an infinite word on the alphabet \(FP_{\sigma}\). 2. there is a letter \(a\) appearing in \(x\) such that \(\sigma(a)\neq a\). Suppose that \(a\) is the first such letter in \(x\). Then \(x\) can be written as \(x=pax^{\prime}\) where \(p\) is a word on \(FP_{\sigma}\). We have that \(x=\sigma(x)=\sigma(p)\sigma(a)\sigma(x^{\prime})=p\sigma(a)\sigma(x^{\prime})\). So \(a\sqsubseteq_{p}\sigma(a)\): \(a\) is right-prolongable for \(\sigma\), so \(\lim_{n\to\infty}\sigma^{n}(a)\) exists. Since \(x=\sigma^{n}(x)=p\sigma^{n}(a)\sigma^{n}(x^{\prime})\) for every \(n\in\mathbb{N}\), by compactness, \(x=p\lim_{n\to\infty}\sigma^{n}(a)\). The algorithm works as follows. First (case 1), check whether \(\mathfrak{A}\) accepts a word on the alphabet \(FP_{\sigma}\). Second (case 2), define a new automaton \(\mathfrak{A}^{\prime}\) which is equal to \(\mathfrak{A}\) except that its set of initial states consists of all the states reachable in \(\mathfrak{A}\) by words on \(FP_{\sigma}\), and check (by the previous algorithm) whether \(\mathfrak{A}^{\prime}\) accepts a purely substitutive word generated by \(\sigma\). The algorithm outputs "yes" if either case is satisfied, and "no" otherwise. ### The problem of the infinitely desubstitutable walk In this section, we suppose that \(\mathfrak{A}\) is an \(\omega\)-automaton and \(\mathcal{S}\) is a finite set of substitutions (i.e. nonerasing homomorphisms, as is usual when studying multiple homomorphisms) on a single alphabet \(\mathcal{A}\). We prove that the problem of finding an infinitely desubstitutable (infinite) word accepted by \(\mathfrak{A}\) is decidable. To study this question, we introduce a meta-\(\omega\)-automaton: each symbol is a substitution, and each state is an \(\omega\)-automaton. Definition 8 (The meta-\(\omega\)-automaton \(\mathcal{S}^{-\infty}(\mathfrak{A})\)): We define the \(\omega\)-automaton \(\mathcal{S}^{-\infty}(\mathfrak{A})=(\mathcal{S},D(\mathfrak{A}),\{\mathfrak{ A}\},\mathcal{T})\) with the alphabet \(\mathcal{S}\), the set of states \(D(\mathfrak{A})=\{\sigma^{-1}(\mathfrak{A}),\ \sigma\in\mathcal{S}^{*}\}\), \(\mathfrak{A}\) the only initial state and the set of transitions \(\mathcal{T}=\{\mathfrak{B}\xrightarrow{\sigma}\sigma^{-1}(\mathfrak{B})\ |\ \mathfrak{B}\in D( \mathfrak{A}),\sigma\in\mathcal{S}\}\). Because \(D(\mathfrak{A})\subseteq\mathfrak{S}(\mathfrak{A})\) is finite (see Fact 1), \(\mathcal{S}^{-\infty}(\mathfrak{A})\) is computable. We prove that directive sequences of words accepted by \(\mathfrak{A}\) correspond to _non-nilpotent_ walks in \(\mathcal{S}^{-\infty}(\mathfrak{A})\), that is, walks \((\mathfrak{B}_{n})_{n\in\mathbb{N}}\) such that \(\mathcal{L}_{\infty}(\mathfrak{B}_{n})\neq\emptyset\) for all \(n\). 
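Under the same encoding, the meta-\(\omega\)-automaton of Definition 8 can be built by a straightforward exploration, with termination guaranteed by the finiteness of \(D(\mathfrak{A})\) (a sketch of ours, reusing `desubstitute` from above; `subs` is a dict from substitution names to letter-to-word maps):

```python
def meta_automaton(states, alphabet, T0, subs):
    """Build S^{-infinity}(A): nodes are transition sets (the automata in
    D(A)), and each substitution labels an edge from B to sigma^{-1}(B)."""
    start = frozenset(T0)
    nodes, edges, stack = {start}, set(), [start]
    while stack:
        T = stack.pop()
        for name, sigma in subs.items():
            T2 = frozenset(desubstitute(states, alphabet, T, sigma))
            edges.add((T, name, T2))
            if T2 not in nodes:
                nodes.add(T2)
                stack.append(T2)
    return nodes, edges, start
```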
Proposition 7: _There exists \(x\) an infinite word infinitely desubstitutable by \((\sigma_{n})_{n\in\mathbb{N}}\) accepted by \(\mathfrak{A}\) if, and only if, there is a non-nilpotent infinite walk in \(\mathcal{S}^{-\infty}(\mathfrak{A})\) labeled by \((\sigma_{n})_{n\in\mathbb{N}}\)._ Corollary 3: _The set of directive sequences of infinitely desubstitutable words accepted by \(\mathfrak{A}\) is the language of some \(\omega\)-automaton._ Proof (of Proposition 7): First, let \(x\) be an infinitely desubstitutable word with directive sequence \((\sigma_{n})_{n\in\mathbb{N}}\), and let \((x_{n})_{n\in\mathbb{N}}\) be the sequence of desubstituted words. Then, by Proposition 1, \(x_{n}\in\mathcal{L}_{\infty}((\sigma_{1}\circ\cdots\circ\sigma_{n-1})^{-1}( \mathfrak{A}))\). So the walk \((\sigma_{\llbracket 0,n\rrbracket}^{-1}(\mathfrak{A}))_{n\in\mathbb{N}}\) is non-nilpotent and labeled by \((\sigma_{n})_{n\in\mathbb{N}}\). Second, let \((\sigma_{n})_{n\in\mathbb{N}}\) label a non-nilpotent infinite walk in \(\mathcal{S}^{-\infty}(\mathfrak{A})\). It means that each language \((\sigma_{1}\circ\cdots\circ\sigma_{k})^{-1}(\mathcal{L}_{\infty}(\mathfrak{A}))\) is nonempty. Now, consider the sequence \(((\sigma_{1}\circ\cdots\circ\sigma_{n})(\mathcal{L}_{\infty}((\sigma_{1}\circ \cdots\circ\sigma_{n})^{-1}(\mathfrak{A}))))_{n\in\mathbb{N}}\). It satisfies the following: 1. each element of the sequence is included in \(\mathcal{L}_{\infty}(\mathfrak{A})\); 2. because \(\mathcal{L}_{\infty}((\sigma_{1}\circ\cdots\circ\sigma_{n})^{-1}(\mathfrak{A})))\) is compact and nonempty, and \((\sigma_{1}\circ\cdots\circ\sigma_{n})\) is continuous, every element of the sequence is compact and nonempty; 3. the sequence is decreasing for inclusion. By Cantor's intersection theorem, there is a point \(x\) in the intersection of every element of the sequence. This point \(x\) is desubstituted by any \(\sigma_{1}\circ\cdots\circ\sigma_{k}\), thus it is infinitely desubstituted by the sequence \((\sigma_{n})_{n\in\mathbb{N}}\). With Proposition 7, we can deduce the decidability of the existence of an infinitely desubstitutable word accepted by an \(\omega\)-automaton \(\mathfrak{A}\). First, build \(\mathcal{S}^{-\infty}(\mathfrak{A})\); second, remove the states corresponding to \(\omega\)-automata with an empty language; last, check whether there is an infinite walk. Proposition 8: _The problem of the infinitely desubstitutable walk is decidable:_ **Input:** _a finite set of substitutions \(\mathcal{S}\), an \(\omega\)-automaton \(\mathfrak{A}\)_ **Question:** _does \(\mathcal{L}_{\infty}(\mathfrak{A})\) contain a word which is infinitely desubstitutable by \(\mathcal{S}\)?_ ### The problem of the Buchi infinitely desubstitutable walk Proposition 8 does not apply directly to Sturmian words. Indeed, the classical characterization of Sturmian words restricts the possible directive sequences. \(\mathcal{S}_{St}\) is the set containing the four following substitutions, called (elementary) Sturmian morphisms, as described by [10]. 
\[L_{0}:\begin{cases}0\mapsto 0\\ 1\mapsto 01\end{cases}\quad L_{1}:\begin{cases}0\mapsto 10\\ 1\mapsto 1\end{cases}\quad R_{0}:\begin{cases}0\mapsto 0\\ 1\mapsto 10\end{cases}\quad R_{1}:\begin{cases}0\mapsto 01\\ 1\mapsto 1\end{cases}\] Theorem 3.1 ([13]): _A word is Sturmian iff it is infinitely desubstitutable by a directive sequence \((\sigma_{n})_{n\in\mathbb{N}}\subset\mathcal{S}_{St}\) that alternates infinitely in type, i.e.: \(\nexists N\in\mathbb{N},(\forall n\geq N,\sigma_{n}\in\{L_{0},R_{0}\})\) or \((\forall n\geq N,\sigma_{n}\in\{L_{1},R_{1}\})\)._ This characterization is usually expressed in the \(S\)-adic framework, but is equivalent in this context [14]. In this section, we generalize Proposition 8 to Sturmian words and more general restrictions on the directive sequence. Proposition 9: _The problem of the Sturmian walk is decidable:_ **Input:** _an_ \(\omega\)_-automaton_ \(\mathfrak{A}\)_._ **Question:** _is there a Sturmian infinite word accepted by_ \(\mathfrak{A}\)_?_ Proof: Consider the associated representation automaton \(\mathcal{S}_{St}^{-\infty}(\mathfrak{A})\). According to Proposition 7 combined with Theorem 3.1, there is a Sturmian infinite word accepted by \(\mathfrak{A}\) if, and only if, there is an infinite computation accepted by \(\mathcal{S}_{St}^{-\infty}(\mathfrak{A})\) labeled by a word \((\sigma_{n})_{n\in\mathbb{N}}\) which alternates infinitely in type. This last condition is decidable: compute the strongly connected components of \(\mathcal{S}_{St}^{-\infty}(\mathfrak{A})\) (restricted to its non-nilpotent states), and check that there is at least one strongly connected component \(C\) which contains two edges labeled by substitutions in \(\{L_{0},R_{0}\}\) and \(\{L_{1},R_{1}\}\), respectively; a sketch of this check is given at the end of this section. In this case, the condition of alternating infinitely in type is easy to check: it can actually be described using a Büchi \(\omega\)-automaton on the alphabet \(\mathcal{S}\). Proposition 9 generalizes to every such condition. Definition 9: Let \(\mathcal{S}\) be a set of substitutions, and \(\mathfrak{R}\) a Büchi \(\omega\)-automaton on the alphabet \(\mathcal{S}\). Define \(X_{\mathfrak{R}}\) as \(\{x\in\mathcal{A}^{\mathbb{N}}\mid\exists(\sigma_{n})_{n\in\mathbb{N}}\in\mathcal{L}_{\infty}(\mathfrak{R}),x\text{ is inf. desub. by }(\sigma_{n})\}\). Proposition 10: _The following problem is decidable:_ **Input:**: _an_ \(\omega\)_-automaton_ \(\mathfrak{A}\)_, a finite set of substitutions_ \(\mathcal{S}\)_, a Büchi_ \(\omega\)_-automaton_ \(\mathfrak{R}\) _on the alphabet_ \(\mathcal{S}\)__ **Question:**: _is there an infinite word of_ \(X_{\mathfrak{R}}\) _accepted by_ \(\mathfrak{A}\)_?_ Proof: The question of the problem is equivalent to: is \(\mathcal{L}_{\infty}(\mathfrak{R})\cap\mathcal{L}_{\infty}(\mathcal{S}^{-\infty}(\mathfrak{A}))\neq\emptyset\)? The intersection between a Büchi \(\omega\)-automaton and an \(\omega\)-automaton is a Büchi \(\omega\)-automaton that can be effectively constructed [11], and checking the non-emptiness of a Büchi \(\omega\)-automaton is decidable. The interest of Proposition 10 is that there exists a zoology of families of words which have a characterization by infinite desubstitution. For instance, Proposition 10 applies to Arnoux-Rauzy words [1] and to minimal dendric ternary words [7].
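As announced in the proof of Proposition 9, here is a minimal Python sketch of the strongly-connected-component check, assuming the trimmed meta-automaton is given as a list of edge triples (state, morphism name, state) with nilpotent states already removed; the encoding and the use of the networkx library are illustrative assumptions.

```python
import networkx as nx

TYPE0, TYPE1 = {"L0", "R0"}, {"L1", "R1"}

def accepts_sturmian(edges, initial):
    """Tests for a reachable strongly connected component of the trimmed
    meta-automaton containing edges of both Sturmian types; such a
    component yields an infinite walk alternating infinitely in type."""
    g = nx.DiGraph((p, q) for p, _, q in edges)
    if initial not in g:
        return False
    reach = {initial} | nx.descendants(g, initial)
    for comp in nx.strongly_connected_components(g):
        if not comp & reach:
            continue
        labels = {m for p, m, q in edges if p in comp and q in comp}
        if labels & TYPE0 and labels & TYPE1:
            return True
    return False
```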
We also characterize the set of allowed directive sequences akin to Corollary 3: the set of directive sequences on \(\mathcal{S}\) accepted by the Büchi \(\omega\)-automaton \(\mathfrak{R}\) that define a word accepted by \(\mathfrak{A}\) is itself recognized by a Büchi \(\omega\)-automaton. Let us translate Proposition 10 into more dynamical terms: Proposition 11: _The following problem is decidable:_ **Input:**: _a set of substitutions_ \(\mathcal{S}\)_, a Büchi_ \(\omega\)_-automaton_ \(\mathfrak{R}\) _on the alphabet_ \(\mathcal{S}\) _and a sofic shift_ \(\mathbb{S}\)_._ **Question:**: _Is \(\mathbb{S}\cap X_{\mathfrak{R}}\) empty?_ ### Application to the coding of Sturmian words Here is an example of a natural question from combinatorics on words that we solve for Sturmian words, even though the method generalizes easily. Let \(W\) be a finite set of finite words on \(\{0,1\}\). Consider \(W^{\omega}\) the set of infinite concatenations of elements of \(W\), i.e. \(W^{\omega}=\{x\in\{0,1\}^{\mathbb{N}}\mid\exists(w_{n})_{n\in\mathbb{N}}\subseteq W,x=\lim_{n\to\infty}w_{0}w_{1}\ldots w_{n}\}\). Proposition 12: _The following problem is decidable:_ **Input:**: _\(W\) a finite set of words on_ {_0,1_}__ **Question:**: _does_ \(W^{\omega}\) _contain a Sturmian word?_ Proof: The language \(W^{\omega}\) is \(\omega\)-regular: there is an \(\omega\)-automaton \(\mathfrak{A}_{W}\) such that \(\mathcal{L}_{\infty}(\mathfrak{A}_{W})=W^{\omega}\). Then, \(W^{\omega}\) contains a Sturmian word iff \(\mathfrak{A}_{W}\) accepts a Sturmian word, which is decidable by Proposition 9. ## 4 About \(\omega\)-automata recognizing Sturmian words In this section, we focus on Sturmian words and show that the language of Sturmian words is as far as possible from being regular, in the sense that an \(\omega\)-automaton may only accept a Sturmian word if it accepts the image of the full shift under a Sturmian morphism. Theorem 4.1: _Let \(\mathcal{S}=\mathcal{S}_{St}\) be the set of elementary Sturmian morphisms as defined earlier, and let \(\mathfrak{A}\) be an \(\omega\)-automaton. If \(\mathfrak{A}\) accepts a Sturmian word, then \(\exists\sigma\in\mathcal{S}_{St}^{*},\sigma(\mathcal{A}^{\mathbb{N}})\subseteq\mathcal{L}_{\infty}(\mathfrak{A})\)._ This is equivalent to the presence of a total automaton in \(\mathcal{S}^{-\infty}(\mathfrak{A})\): an \(\omega\)-automaton \(\mathfrak{A}\) is total if \(\mathcal{L}_{\infty}(\mathfrak{A})=\mathcal{A}^{\mathbb{N}}\). Totality is a stable property under any desubstitution. To prove Theorem 4.1, we introduce the following technical tools. Definition 10: Let \(\mathfrak{A}\) be an \(\omega\)-automaton on \(\mathcal{A}=\{0,1\}\). A state \(q\) of \(\mathfrak{A}\) has property \((H)\) if \((\exists q_{s},q\xrightarrow{0}q_{s}\xrightarrow{\omega}\cdots\in\mathfrak{A})\Leftrightarrow(\exists q_{t},q\xrightarrow{1}q_{t}\xrightarrow{\omega}\cdots\in\mathfrak{A})\), where \(q_{s}\xrightarrow{\omega}\cdots\) means that there is an infinite computation starting from \(q_{s}\) in \(\mathfrak{A}\). If all states of \(\mathfrak{A}\) have property (H), there are two possibilities: if there is no infinite computation starting on an initial state, the infinite language of \(\mathfrak{A}\) is empty; otherwise, \(\mathfrak{A}\) is total. Lemma 1: _Let \(\mathfrak{C}\) be an \(\omega\)-automaton, and \(\phi\in\mathcal{S}_{St}^{*}\) starting with \(L_{0}\) and ending with \(L_{1}\) such that \(\phi^{-1}(\mathfrak{C})=\mathfrak{C}\).
Then, every state of \(\mathfrak{C}\) has property \((H)\)._ Proof (of Lemma 1): Let \(\mathfrak{C}=(\{0,1\},Q_{\mathfrak{C}},I_{\mathfrak{C}},T_{\mathfrak{C}})\), and \(q\in Q_{\mathfrak{C}}\). First, suppose that \(q\xrightarrow{0}q_{t}\xrightarrow{\omega}\cdots\) is a computation in \(\mathfrak{C}\). Then \(q\xrightarrow{0}q_{t}\) is also a transition of \(\phi^{-1}(\mathfrak{C})\). So \(q\xrightarrow{\phi(0)}*q_{t}\) is a computation in \(\mathfrak{C}\). Because \(\phi\) ends with \(L_{1}\), \(\phi(1)\sqsubseteq_{p}\phi(0)\). So \(q\xrightarrow{\phi(1)}*q_{u}\xrightarrow{m}*q_{t}\xrightarrow{\omega}\cdots\) is a computation in \(\mathfrak{C}\), with some \(q_{u}\in Q_{\mathfrak{C}}\) and \(\phi(0)=\phi(1)m\). Now, using \(\mathfrak{C}=\phi^{-1}(\mathfrak{C})\), \(q\xrightarrow{1}q_{u}\xrightarrow{m}*q_{t}\xrightarrow{\omega}\cdots\) is a computation in \(\mathfrak{C}\). Conversely, if \(q\xrightarrow{1}q_{t}\xrightarrow{\omega}\cdots\) is a computation in \(\mathfrak{C}=\phi^{-1}(\mathfrak{C})\), there is also \(q\xrightarrow{\phi(1)}*q_{t}\xrightarrow{\omega}\cdots\). Because \(\phi\) begins with \(L_{0}\), \(\phi(1)=0m\) for some finite word \(m\). So the last computation can be written \(q\xrightarrow{0}q_{u}\xrightarrow{m}*q_{t}\xrightarrow{\omega}\cdots\). Proof (of Theorem 4.1): Let \(x\) be a Sturmian word accepted by \(\mathfrak{A}\). Consider the transformation of \(\omega\)-automata forget : \((\mathcal{A},Q,I,T)\mapsto(\mathcal{A},Q,Q,T)\) which makes all states initial. Then, forget(\(\mathfrak{A}\)) also accepts \(x\), and \(\mathcal{L}_{\infty}(\text{forget}(\mathfrak{A}))\) is a sofic shift. Then \(\overline{\bigcup_{n\geq 0}S^{n}(x)}\), the closure of the orbit of \(x\) under the shift \(S\), is contained in \(\mathcal{L}_{\infty}(\text{forget}(\mathfrak{A}))\). Let \(\chi(x)\) be the Sturmian characteristic word associated with \(x\) (see [12]): it belongs to the closure of the orbit of \(x\), so it is accepted by forget(\(\mathfrak{A}\)). Then, \(\chi(x)=\lim\limits_{n\to\infty}\sigma_{0}\circ\cdots\circ\sigma_{n}(a_{n})\) with \((\sigma_{n})_{n\in\mathbb{N}}\subseteq\mathcal{S}_{St}\) a sequence that alternates infinitely in type (see Theorem 3.1). Besides, because \(\chi(x)\) is a characteristic word, it represents the orbit of zero from the point of view of circle rotation (see [12]): when combined with Proposition 2.7 of [4], it yields that \((\sigma_{n})_{n\in\mathbb{N}}\subseteq\{L_{0},L_{1}\}^{\mathbb{N}}\). By the pigeonhole principle, there is an \(\omega\)-automaton \(\mathfrak{B}\) that appears infinitely often in the sequence \((\sigma_{\llbracket 0,n\rrbracket}^{-1}(\mathrm{forget}(\mathfrak{A})))_{n\in\mathbb{N}}\subseteq\mathfrak{S}(\mathrm{forget}(\mathfrak{A}))\). Thus, we can find a substitution \(\tau\) such that \(\mathfrak{B}=\tau^{-1}(\mathfrak{B})\) and \(\tau\in\{L_{0},L_{1}\}^{*}\backslash(L_{0}^{*}\cup L_{1}^{*})\). Because \(\tau\) contains both \(L_{0}\) and \(L_{1}\), there are two cases: 1. \(L_{1}L_{0}\sqsubseteq_{f}\tau\): we can write \(\tau=p_{\tau}L_{1}L_{0}s_{\tau}\). Let \(\mathfrak{B}^{\prime}=(p_{\tau}\circ L_{1})^{-1}(\mathfrak{B})\) and \(\tau^{\prime}=L_{0}\circ s_{\tau}\circ p_{\tau}\circ L_{1}\): we have that \(\tau^{\prime-1}(\mathfrak{B}^{\prime})=\mathfrak{B}^{\prime}\). 2. \(L_{1}L_{0}\nsubseteq_{f}\tau\): then, \(\tau\) begins with \(L_{0}\) and ends with \(L_{1}\). In both cases, we can come back to the case where \(\tau\) begins with \(L_{0}\) and ends with \(L_{1}\).
Now, we apply Lemma 1 to show that every state of \(\mathfrak{B}\) has property \((H)\). \(\mathfrak{B}\) can be written as \(\psi^{-1}(\mathrm{forget}(\mathfrak{A}))\) for some Sturmian morphism \(\psi\). Since the transformation forget does not modify the transitions of an \(\omega\)-automaton, this yields that every state of \(\psi^{-1}(\mathfrak{A})\) also has property \((H)\). Since by assumption \(\psi^{-1}(\mathfrak{A})\) accepts an infinite word, it follows that it is total. Let \(f\) be the Fibonacci word, i.e. the substitutive word associated with the substitution \(\sigma_{f}(0)=01,\sigma_{f}(1)=0\). Since Lemma 1 holds when \(\phi=\sigma_{f}^{n}\) (\(n\geq 1\)), by adapting the proof of Theorem 4.1, we obtain an equivalent statement for \(f\): Corollary 4: _Let \(\mathfrak{A}\) be an \(\omega\)-automaton which accepts \(f\). Then, there exists \(n\in\mathbb{N}\) such that \(\sigma_{f}^{-n}(\mathfrak{A})\) is total._ This combinatorial result can be thought of in dynamical terms: Corollary 5: _A sofic subshift \(\mathbb{S}\) contains \(f\) iff \(\mathbb{S}\) contains some \(\sigma_{f}^{n}(\mathcal{A}^{\mathbb{N}})\)._ Because the Fibonacci word is aperiodic, containing \(f\) means that there is a substitution \(\tau\) such that \(\tau(\mathcal{A}^{\mathbb{N}})\) is contained in \(\mathbb{S}\). Because the Fibonacci word is Sturmian, Berstel and Seebold [10] established that \(\tau\) has to be a Sturmian morphism. This new analysis specifies that \(\tau\) can be chosen to be a power of \(\sigma_{f}\). ## 5 Open questions * Following Proposition 8, find an algorithm to produce an accepted \(\mathcal{S}\)-adic word. There are technical difficulties in taking into account the growth of the directive sequence, which should be solvable using results from [14]. * Can our methods extend to Büchi \(\omega\)-automata, as in [5]? The difficulty is that the language of a Büchi \(\omega\)-automaton is not always compact, so Proposition 4 does not apply. It may be possible to extend methods from [5]. * For which sets of substitutions does Theorem 4.1 hold?
2301.10454
A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection
Current machine learning models achieve super-human performance in many real-world applications. Still, they are susceptible to imperceptible adversarial perturbations. The most effective solution to this problem is adversarial training, which trains the model with adversarially perturbed samples instead of the original ones. Various methods have been developed over recent years to improve adversarial training, such as data augmentation or modifying training attacks. In this work, we examine the same problem from a new data-centric perspective. For this purpose, we first demonstrate that the existing model-based methods can be equivalent to applying smaller perturbation or optimization weights to the hard training examples. By using this finding, we propose detecting and removing these hard samples directly from the training procedure rather than applying complicated algorithms to mitigate their effects. For detection, we use maximum softmax probability, an effective method in out-of-distribution detection, since we can consider the hard samples as out-of-distribution samples with respect to the whole data distribution. Our results on the SVHN and CIFAR-10 datasets show the effectiveness of this method in improving adversarial training without adding too much computational cost.
Mohammad Azizmalayeri, Arman Zarei, Alireza Isavand, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
2023-01-25T08:13:50Z
http://arxiv.org/abs/2301.10454v1
# A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection ###### Abstract Current machine learning models achieve super-human performance in many real-world applications. Still, they are susceptible to imperceptible adversarial perturbations. The most effective solution to this problem is adversarial training, which trains the model with adversarially perturbed samples instead of the original ones. Various methods have been developed over recent years to improve adversarial training, such as data augmentation or modifying training attacks. In this work, we examine the same problem from a new data-centric perspective. For this purpose, we first demonstrate that the existing model-based methods can be equivalent to applying smaller perturbation or optimization weights to the hard training examples. By using this finding, we propose detecting and removing these hard samples directly from the training procedure rather than applying complicated algorithms to mitigate their effects. For detection, we use maximum softmax probability, an effective method in out-of-distribution detection, since we can consider the hard samples as out-of-distribution samples with respect to the whole data distribution. Our results on the SVHN and CIFAR-10 datasets show the effectiveness of this method in improving adversarial training without adding too much computational cost. Adversarial Training, Attack, Data-Centric, Out-of-Distribution Detection ## I Introduction In recent years, deep neural networks (DNNs) have proven successful in a variety of applications, such as image processing [1], natural language processing [2], etc. Nevertheless, we still cannot fully rely on them, since they are subject to adversarial examples whose perturbations cannot even be recognized by humans [3, 4, 5]. Adversarial examples are generated by adding an optimized \(\ell_{p}\) norm-bounded perturbation to the original samples. Perturbing a sample within an \(\epsilon\)-ball around it may change the prediction and significantly impact the model's performance, while in many applications, such as autonomous driving [6], robustness of the models against these attacks is critical. In order to achieve robust models against adversarial attacks, several approaches have been proposed, with Adversarial Training (AT) [7] proving to be the most effective. The purpose of this method is to learn a robust model by solving a min-max problem. Briefly, AT first tries to find the perturbation within the \(\epsilon\)-ball that causes the maximum loss for each sample, which is referred to as the maximization part. Next, the model loss is minimized on the perturbed samples rather than the original ones to learn features that are more robust against adversarial perturbations, which is referred to as the minimization part. The minimization is done with the gradient descent method, while the maximization is often done using an attack called Projected Gradient Descent (PGD) [7]. AT can be formulated as: \[\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\max_{\left\|x_{i}^{\prime}-x_{i}\right\|_{\infty}\leq\epsilon}\mathcal{L}(f_{\theta}(x_{i}^{\prime}),y_{i}) \tag{1}\] where \(\mathcal{L}(f_{\theta}(x_{i}^{\prime}),y_{i})\) is the prediction loss of the model \(f_{\theta}\) with parameters \(\theta\) on the perturbed sample \((x_{i}^{\prime},y_{i})\) within an \(\epsilon\)-ball around \(x_{i}\). Recently, several approaches have been proposed to enhance adversarial training; as a concrete baseline, a sketch of PGD-based adversarial training is given below.
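For concreteness, the following is a minimal PyTorch-style sketch of Equation (1), with the standard \(\ell_{\infty}\) PGD inner maximization and random start; the hyper-parameter values and helper names here are illustrative assumptions, not the exact training code of this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, steps=10, alpha=2.5 * (8/255) / 10):
    """Inner maximization of Eq. (1): l_inf PGD with a random start."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()   # keep images in the valid range

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization of Eq. (1) on the perturbed batch."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```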
Among the approaches proposed to enhance adversarial training, data augmentation methods improve robustness by applying better augmentations or by adding generated data to the training [9], and perceptual training has been suggested to cover a wider range of training perturbation types [10]. In contrast with these methods, there are other approaches that aim at differentiating between samples by changing the optimization weight or the training perturbation budget for each sample, without changing the other training settings. For instance, [11] improves robustness by setting a sample-based perturbation budget during adversarial training so as to move all the training samples to the model's decision boundaries during training. Following these methods, we want to improve robustness by differentiating between the samples with a new data-centric approach. To this end, we try to modify the original training set itself, rather than using training augmentation techniques, changing the training attack methods, etc. In other words, our effort in this work is to fix the data the code is running on instead of changing the training method, which can be classified as a data-centric approach. For this purpose, we first note that the existing methods that differentiate between samples mainly try to reduce the effect of the samples near the decision boundaries in the training process, as discussed in section III-A. Accordingly, some samples in the training set are hard for the model to learn during training. This can happen when samples are near boundaries, or outside of their class distributions due to reasons such as incorrect labeling. Forcing the model to learn these samples can reduce the generalization ability of the model on the test samples. This problem gets even worse in AT, since an \(\epsilon\)-ball is enforced around those hard samples. To mitigate this issue with a data-centric approach, we propose to identify these samples and improve the data quality by deleting them from the dataset, regardless of the training method and setup. These samples can be detected before the training process, which we call the "offline" method, or adaptively during training, which we call the "online" method. Moreover, for identifying these samples, we utilize the softmax probability as a measure that can determine whether a sample is out-of-distribution [12]. Finally, the remaining samples are used to train the model after the hard samples are detected and removed from the training dataset. This method is shown schematically in Fig. 1. Our results on the CIFAR-10 and SVHN datasets demonstrate that this data-centric strategy can enhance the model's robustness with both the "offline" and "online" methods, without significantly raising the computational cost. We also point out that the softmax probability as the detection method can be substituted with the Mahalanobis distance [13], and the findings still show improvement. We hope this work will be a starting point for using data-centric approaches in adversarial training. ## II Related Works Deep networks are vulnerable to attacks, while defenses attempt to achieve a robust model against them. In the following, some popular adversarial attacks and defenses are explored. ### _Attack_ The threat of adversarial examples was first noticed in image classification models [4]. The accuracy of the model can be significantly reduced when a norm-bounded perturbation is added to the input.
This perturbation can be generated with iterative updates based on the loss function gradient [14] as: \[\delta_{t}=\delta_{t-1}+\alpha\cdot\mathrm{sign}(\nabla_{x}J(\theta,x,y)), \tag{2}\] where \(\delta_{t}\) is the perturbation at step \(t\), and \(J(\theta,x,y)\) is the cost used to train the neural network with parameters \(\theta\). To ensure that the perturbation is imperceptible, it can be projected onto the \(\ell_{p}\)-norm ball, which is known as the Projected Gradient Descent (PGD) attack [8]. There are also some other powerful attacks, but PGD is regarded as a standard attack for training and evaluation. Fig. 1: Overview of our approach. The training set is first divided into \(k\) folds. After that, each time, we train a model using \(k-1\) of the folds, and the CCSP scores (our measure to detect hard training samples) are calculated for the samples of the remaining fold using the trained model. By repeating this for all folds, the CCSP score is calculated for all training samples. Then, we sort the scores, and remove the \(R\) samples with the lowest CCSP scores. Finally, the model is trained on this new purified dataset rather than the original dataset. ### _Defense_ Several methods have tried to stand against adversarial examples, but they mostly give a false sense of robustness due to reasons such as gradient obfuscation or vulnerability to newer attacks [7]. Still, the best existing defense is adversarial training [8], which trains the model with adversarial examples. Due to the effectiveness of adversarial training, recent methods have tried to improve it by methods such as data augmentation [9], training attack modification [15], and model hyper-parameter tuning [16]. ## III Method ### _A new perspective on some of the existing defenses_ As mentioned in section II-B, there are different variants of adversarial training. Taking a closer look at them from a different angle, we can demonstrate that some of them are equivalent to applying smaller perturbation or optimization weights to the hard training examples in the model optimization. To this end, a number of methods are investigated in the following. **CAT**[17] and **IAAT**[11]: These methods hypothesize that the poor generalization of adversarial training is a consequence of a uniform perturbation radius around every training sample. They suggest a sample-specific perturbation radius that is increased until the perturbed sample is misclassified. Thus, the hard training examples would have smaller perturbation budgets during training to simplify the training of the model. **MART**[18]: This method proposes to explicitly differentiate between the misclassified and correctly classified examples during training by setting a lower optimization weight for the misclassified unperturbed samples. Misclassified samples are presumably near the boundary, so they can be considered hard training examples for the model, to which this method suggests assigning a lower optimization weight. **TRADES**[19]: There is a trade-off between clean and adversarial accuracy. To mitigate this issue, TRADES suggests trading adversarial robustness off against accuracy by training the model on clean examples while minimizing the KL divergence between the model predictions for clean and perturbed samples. Hence, if a sample is near the boundary and has a high prediction error, the KL divergence term gets a smaller value and the model focuses on learning the clean example.
**Early stopping**[20]: It has been shown that overfitting to the training set harms robust performance in adversarially robust training. In other words, the model can learn adversarially perturbed training samples with high accuracy, but it does not generalize well to the test samples. As a solution, the training can be stopped early if overfitting occurs, using a validation set. This is equivalent to stopping the training before learning the out-of-distribution or hard samples. **Noisy label**[21]: Similar to standard training, adversarial training also suffers from noisy labels during training. A measure that can help to detect the noisy samples is the number of PGD iterations needed to generate misclassified adversarial examples. In other words, this measure also tries to find the near-boundary training examples, using the number of PGD steps as a distance measure. ### _Proposed Method_ When training a model, we might come across a number of samples that are close to the decision boundaries or even outside of the training class distributions. These samples might have been produced as a result of a labeling error, or other reasons such as conceptual ambiguity in the images, which makes classifying them difficult even for humans. Examples of such samples can be seen in Fig. 2, which shows some images from the CIFAR-10 dataset. Due to the difficulty of training a model on these samples, we refer to them as **Hard Training Samples (HTS)**. Additionally, learning an \(\epsilon\)-ball around these samples makes training even more challenging and raises the risk of problems like overfitting or mixed-up decision boundaries. Therefore, we believe that they may be better avoided during training, as discussed in the previous section on the existing methods. There are many techniques for identifying such near-boundary or out-of-distribution samples. Detecting samples that do not fit into the classes of the training dataset is mainly known as out-of-distribution (OOD) detection in the literature. A simple but effective existing method for OOD detection is Maximum Softmax Probability (MSP) [12]. This method uses \(\max_{c\in\{1,2,\dots,k\}}f_{c}(x)\) as the score that a classifier \(f\) trained on a \(k\)-class dataset assigns to input \(x\) for being an in-distribution sample. Inspired by MSP, we propose Correct Class Softmax Probability (CCSP) as our method to detect HTS. CCSP for sample \(x\) with label \(y\) is defined as: \[CCSP(x,y)=\mathrm{softmax}(z)_{y}=\frac{e^{z_{y}}}{\sum_{j}e^{z_{j}}}, \tag{3}\] where \(z\) is the logit output of the classifier for the input \(x\). The CCSP score can be used as a measure to find the hard samples in comparison with the other ones. This method has no computational overhead during training, and can be used easily. Fig. 2: Some images from the CIFAR-10 dataset whose labels are hard to recognize even by humans. Now that we are able to recognize HTS, we can improve the training process by removing these samples before the training procedure begins. To this end, we need to measure the CCSP score for each sample, which is done through a \(k\)-fold cross-validation. Accordingly, we divide the training dataset into \(k\) folds called \(F_{1},...,F_{k}\). Each time, a single fold \(F_{i}\) is put aside and the rest are used as the training dataset for the model. Afterward, the CCSP is calculated for the samples in the fold \(F_{i}\) using the trained model, as sketched below.
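A minimal PyTorch/scikit-learn sketch of this per-fold scoring is as follows, assuming `train_fn` is a (hypothetical) routine that adversarially trains a fresh model on a given subset; the data-handling details are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset
from sklearn.model_selection import KFold

@torch.no_grad()
def ccsp_scores(model, loader, device="cuda"):
    """Correct Class Softmax Probability (Eq. 3) for every sample in loader."""
    model.eval()
    out = []
    for x, y in loader:
        probs = F.softmax(model(x.to(device)), dim=1).cpu()
        out.append(probs[torch.arange(len(y)), y])
    return torch.cat(out).numpy()

def offline_hard_indices(dataset, R, train_fn, k=4):
    """k-fold CCSP scoring; returns the indices of the R hardest samples."""
    scores = np.empty(len(dataset))
    folds = KFold(n_splits=k, shuffle=True, random_state=0)
    for train_idx, val_idx in folds.split(np.arange(len(dataset))):
        model = train_fn(Subset(dataset, train_idx))    # adversarial training
        loader = DataLoader(Subset(dataset, val_idx), batch_size=256)
        scores[val_idx] = ccsp_scores(model, loader)    # loader keeps val_idx order
    return np.argsort(scores)[:R]
```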
Eventually, the CCSP will have been calculated for all of the samples in the training dataset, and the \(R\) (a hyper-parameter) samples with the lowest CCSP scores can be removed from the dataset. Finally, we can re-train our model using the new dataset that results after removing such samples. This algorithm is called the "offline" modification of samples, since the whole process is done before the training starts. Due to the changes in the decision boundaries during the training process, the samples that make the model's optimization struggle are not constant during training. In this line of thought, we also propose the "online" version of our method. In this version, after each epoch in the training loop, the CCSP scores are calculated for all of the samples in the training dataset, and the \(R\) samples with the lowest scores are removed from the training dataset just for the subsequent epoch. Note that the CCSP scores are calculated for all the training samples in each epoch, and we do not leave out the samples that were eliminated in the previous epochs. The offline version clearly requires more time, while this cost is noticeably diminished in the online version, since the CCSP scores are calculated as a part of the training process rather than through the \(k\)-fold trainings prior to the main training procedure. ## IV Experiments We conduct experiments to demonstrate how our strategy improves model robustness. We also attempt to assess the effectiveness of our method while employing online or offline detection of hard samples. In addition, for the purpose of an ablation study, we use the Mahalanobis distance rather than the softmax probability to identify hard samples. We also design an experiment in the last part to show that the model is capable of learning OOD samples, and conclude that these samples should be avoided in the training. ### _Experimental Setup_ Two different datasets are used for evaluations: CIFAR-10 [22] and SVHN [23]. We train our model on CIFAR-10 for 200 epochs, while training it on SVHN for 100 epochs since it converges faster. Also, PreActResNet18 is used as the base model. In the training, the initial learning rate is set to 0.1 and it is multiplied by 0.1 after 50% and 75% of the epochs. Moreover, SGD with momentum \(=0.9\) and weight decay \(=5\times 10^{-4}\) is used for the optimization. In adversarial training, the standard PGD attack with 10 iterations and a single restart is used to generate the perturbations. The perturbation is initialized randomly in the range \([-\epsilon,\epsilon]\), and bounded in an \(\ell_{\infty}\)-ball with \(\epsilon=\frac{8}{255}\). Also, the attack step size is set according to \(\alpha=2.5\cdot\frac{\epsilon}{N}\), where \(N\) is the number of iterations. ### _Offline_ According to our "offline" proposed method, first, the dataset is divided into four folds. Each time, one fold is selected and the model is trained on the three other folds. Then, the CCSP score is calculated for each sample in the selected fold using the trained model. This is done once for each fold to obtain the CCSP score for all samples. After that, the samples are sorted based on their CCSP scores, and then the \(R\) samples with the lowest CCSP scores are eliminated from the dataset. Please note that setting \(R\) to zero is equivalent to using the base adversarial training, which is our baseline in this work. Results are shown for different values of \(R\) in Table I. As we see, robustness is increased by removing the hard samples.
On the other hand, we also note that setting \(R\) to a large value can remove useful samples in addition to the hard samples. The best performance is achieved by setting \(R\) equal to 500 in CIFAR-10 and 300 in SVHN. Please note that the results are stable and reproducible. For instance, in multiple runs of the experiment where \(R=0\) samples were removed (normal adversarial training), the standard deviation of the clean and robust accuracy for the CIFAR-10 dataset is \(0.16\%\) and \(0.54\%\), respectively. ### _Online_ According to our "online" proposed method, the CCSP scores are calculated for each sample after each epoch. Then, the scores are sorted, and the \(R\) samples with the lowest scores are removed only in the following epoch. \(R=100\) is used in this part to avoid removing useful samples in addition to the hard ones. Moreover, since the samples are removed adaptively during training, removing 100 samples is sufficient. Results are shown in Table II. Accordingly, this method also improves robustness over the baseline (\(R=0\)). So, we can conclude that employing the online strategy also works well. ### _Mahalanobis Distance_ In addition to the softmax-based methods, the distance between HTS and the conditional distributions of the classes can be used to identify them. Two main techniques in this regard are the Mahalanobis distance (MD) [13] and the Relative MD (RMD) [24]. These techniques fit a conditional Gaussian distribution \(\mathcal{N}(\mu_{k},\Sigma)\) to the pre-logit features \(h\) for a distribution with \(K\) classes. The mean vectors and covariance matrix are calculated as: \[\mu_{k}=\frac{1}{N_{k}}\sum_{i:y_{i}=k}h_{i}, \tag{4}\] \[\Sigma=\frac{1}{N}\sum_{k=1}^{K}\sum_{i:y_{i}=k}(h_{i}-\mu_{k})(h_{i}-\mu_{k})^{T}, \tag{5}\] for \(k=1,2,...,K\), where \(N_{k}\) is the number of samples in the class with label \(k\), and \(N\) refers to the number of samples in the dataset. Please note that \(\Sigma\) is shared among all classes. Afterward, the distance of the input \(x\) with pre-logits \(h_{x}\) is calculated as: \[MD_{k}(h_{x})=(h_{x}-\mu_{k})^{T}\Sigma^{-1}(h_{x}-\mu_{k}), \tag{6}\] \[RMD_{k}(h_{x})=MD_{k}(h_{x})-MD_{0}(h_{x}), \tag{7}\] where \(MD_{0}(h_{x})\) represents the Mahalanobis distance of \(h_{x}\) (with \(\Sigma_{0}^{-1}\) in place of \(\Sigma^{-1}\)) to a distribution fitted to the entire training dataset as \(\mathcal{N}(\mu_{0},\Sigma_{0})\). \(\mu_{0}\) and \(\Sigma_{0}\) are calculated as: \[\mu_{0}=\frac{1}{N}\sum_{i=1}^{N}h_{i}, \tag{8}\] \[\Sigma_{0}=\frac{1}{N}\sum_{i=1}^{N}(h_{i}-\mu_{0})(h_{i}-\mu_{0})^{T}. \tag{9}\] According to these descriptions, we can use the Absolute Relative Mahalanobis Distance (ARMD) as an alternative method to CCSP for detecting HTS. ARMD is defined as follows for sample \(x\) with label \(y\) and pre-logit features \(h_{x}\): \[ARMD(x,y)=|RMD_{y}(h_{x})|. \tag{10}\] By determining the ARMD score for each sample, we can sort the samples in accordance with the calculated scores. After that, we can remove the \(R\) samples with the lowest scores to get rid of the hard samples, similarly to the proposed CCSP method. The result of this approach can be seen in Table III, where a noticeable improvement over the baseline (\(R=0\)) can be observed in most cases. As a result, our method is not sensitive to the detection method.
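A small NumPy sketch of Eqs. (4)-(10), assuming the pre-logit features have already been extracted as an \((N,D)\) array; the function name is an illustrative assumption.

```python
import numpy as np

def armd_scores(features, labels, num_classes):
    """Absolute Relative Mahalanobis Distance (Eq. 10) for each sample."""
    mu0 = features.mean(axis=0)                                # Eq. 8
    c0 = features - mu0
    cov0 = c0.T @ c0 / len(features)                           # Eq. 9
    mus = np.stack([features[labels == k].mean(axis=0)
                    for k in range(num_classes)])              # Eq. 4
    d = features - mus[labels]
    cov = d.T @ d / len(features)                              # Eq. 5, shared covariance
    prec, prec0 = np.linalg.inv(cov), np.linalg.inv(cov0)
    md = np.einsum("nd,de,ne->n", d, prec, d)                  # Eq. 6, own-class MD
    md0 = np.einsum("nd,de,ne->n", c0, prec0, c0)              # background MD
    return np.abs(md - md0)                                    # Eqs. 7 and 10
```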
### _Clean training instead of removing_ An alternative to removing the hard samples in our method is to use them without any perturbation in training. This can be useful if all the samples in the dataset correlate well with their assigned labels. To investigate the effectiveness of this method, the experiment in section IV-C is repeated with clean training of the hard samples instead of removing them from the dataset. Results are shown in Table IV. According to this table, this method is effective on the SVHN dataset, but it does not improve the baseline (\(R=0\)) on CIFAR-10. We believe that the reason for this observation is that the quality of some of the CIFAR-10 images is so substandard that even their clean training can be harmful, as can be seen in Fig. 2. ### _Are OOD samples learned?_ In this section, we conduct an experiment to show the capability of the model in learning OOD samples during training, which can cause problems in adversarial training. For this purpose, we add 50 random samples from the CIFAR-100 dataset to the CIFAR-10 training set. These samples are randomly chosen from classes that are not shared between CIFAR-10 and CIFAR-100, and they are randomly assigned to one of the ten classes of CIFAR-10. So, we can consider these samples as OOD samples in the training set. Afterwards, the model is trained using the updated dataset for 50 epochs. Results show that the model classifies 40 out of these 50 samples correctly in the last epoch of training. In other words, the model can learn 80% of the OOD samples during training, which confirms our claims in this work. As a result, the dataset should be purified before training, as has been done in this study. ### _Comparison with other defenses_ We have also made a comparison with some other recent defenses on the CIFAR-10 dataset in Table V. The defense methods in this table are Dynamic [25], TRADES [19], MMA [26], standard adversarial training [8], and our online and offline proposed methods with the best \(R\). Considering the fact that there is a trade-off between clean and robust accuracy [19], the results show that our method is competitive with the other variants of AT. Please note again that our method takes a different, data-centric approach compared to those methods. ## V Conclusion Adversarial training is shown to be the most effective existing defense. Therefore, a lot of effort has been made to improve its results. In this work, we first demonstrated that some of these improvements are made by treating the near-boundary or hard samples differently in the training. Accordingly, from a data-centric point of view, we suggested identifying these samples in order to ignore them during training. For this purpose, we used probability- and distance-based methods to detect the hard samples. After that, these samples are removed from the dataset with two methods, "offline" and "online". The "offline" method removes the hard samples before the training, while the "online" removal is done after each epoch of the training. The results of both methods show improvements over the baseline.
2302.09455
An Open-Source, Physics-Based, Tropical Cyclone Downscaling Model with Intensity-Dependent Steering
An open-source, physics-based tropical cyclone downscaling model is developed, in order to generate a large climatology of tropical cyclones. The model is composed of three primary components: (1) a random seeding process that determines genesis, (2) an intensity-dependent beta-advection model that determines the track, and (3) a non-linear differential equation set that determines the intensification rate. The model is entirely forced by the large-scale environment. Downscaling ERA5 reanalysis data shows that the model is generally able to reproduce observed tropical cyclone climatology, such as the global seasonal cycle, genesis locations, track density, and lifetime maximum intensity distributions. Inter-annual variability in tropical cyclone count and power-dissipation is also well captured, on both basin-wide and global scales. Regional tropical cyclone hazard estimated by this model is also analyzed using return period maps and curves. In particular, the model is able to reasonably capture the observed return period curves of landfall intensity in various sub-basins around the globe. The incorporation of an intensity-dependent steering flow is shown to lead to regionally dependent changes in power dissipation and return periods. Advantages and disadvantages of this model, compared to other downscaling models, are also discussed.
Jonathan Lin, Raphael Rousseau-Rizzi, Chia-Ying Lee, Adam Sobel
2023-02-19T02:10:10Z
http://arxiv.org/abs/2302.09455v2
# An Open-Source, Physics-Based, Tropical Cyclone Downscaling Model with Intensity-Dependent Steering ###### Key Points The development of an open-source, physics-based tropical cyclone downscaling model, based on random seeding, is described. Steering of the tropical cyclone that is intensity-dependent can change tropical cyclone hazard on both local and regional scales. The model reproduces the observed climatology of tropical cyclones, including the seasonal cycle, inter-annual variability, and hazard. ###### Abstract An open-source, physics-based tropical cyclone downscaling model is developed, in order to generate a large climatology of tropical cyclones. The model is composed of three primary components: (1) a random seeding process that determines genesis, (2) an intensity-dependent beta-advection model that determines the track, and (3) a non-linear differential equation set that determines the intensification rate. The model is entirely forced by the large-scale environment. Downscaling ERA5 reanalysis data shows that the model is generally able to reproduce observed tropical cyclone climatology, such as the global seasonal cycle, genesis locations, track density, and lifetime maximum intensity distributions. Inter-annual variability in tropical cyclone count and power dissipation is also well captured, on both basin-wide and global scales. Regional tropical cyclone hazard estimated by this model is also analyzed using return period maps and curves. In particular, the model is able to reasonably capture the observed return period curves of landfall intensity in various sub-basins around the globe. The incorporation of an intensity-dependent steering flow is shown to lead to regionally dependent changes in power dissipation and return periods. Advantages and disadvantages of this model, compared to other downscaling models, are also discussed. ## Plain Language Summary Tropical cyclones are rare and extreme weather systems that can cause a lot of damage to society. Because the most intense tropical cyclones are exceedingly rare, it is difficult to ascertain not only the frequency with which they occur, but also how this frequency might change in the future. This problem is compounded by the fact that even state-of-the-art climate models have trouble representing strong tropical cyclones. This study presents the development of a new, physics-based, tropical cyclone model. The model can rapidly simulate a large number of tropical cyclones given a mean climate, and is shown to reasonably reproduce the general behavior of tropical cyclones observed over the past 43 years. The model is open-source and freely available online. ## 1 Introduction Tropical cyclones are extreme weather systems that are responsible for billions of dollars in damage to society every year (Pielke et al., 2008). As global warming continues, the consensus is that the frequency of intense tropical cyclones will increase (Knutson et al., 2010; Kossin et al., 2020). It follows that wind damage will also increase over time (Emanuel, 2011). Given the societal ramifications of tropical cyclones, it is prudent to understand not only tropical cyclone risk in the current climate, but also how this risk might change with warming. Purely statistical models or statistical-dynamical models (Emanuel et al., 2006) are often used to downscale tropical cyclone activity and estimate risk, instead of explicitly simulating tropical cyclones in reanalysis or climate models. There are a couple of key reasons for this.
First, a large sample size is necessary to robustly calculate the return periods of rare events. Climate models, at resolutions fine enough to explicitly resolve tropical cyclones, are typically run on time scales close to a century. As a result, tropical cyclones with return periods greater than a few decades are not well sampled. Thus, in general, tropical cyclone downscaling models have the desirable property of being able to rapidly simulate a large number of events given a certain climate, allowing for robust sampling of rare events. Furthermore, while the ability of our climate models to represent tropical cyclones has drastically improved over the past decade (Camargo and Wing, 2016), state-of-the-art climate models still have difficulty representing the most intense tropical cyclones, which are often the ones that are of great societal interest (Zhao et al., 2009; Gentry and Lackmann, 2010; Strachan et al., 2013). Thus, there is still much reason to develop, use, and understand tropical cyclone downscaling models. In the past decade, a couple of open-source tropical cyclone downscaling models have been developed (Lee et al., 2018; Bloemendaal et al., 2020). All of these models have their own advantages and disadvantages, using a varying mixture of physics and statistics to generate a large number of synthetic tropical cyclones that are similar to historical tropical cyclones. In this paper, we describe the development of a publicly available, Python-based tropical cyclone downscaling model that synthesizes principles from the MIT tropical cyclone downscaling model (Emanuel et al., 2006, 2008; Emanuel, 2022) and uses the FAST model to simulate tropical cyclone intensity given a large-scale environment (Emanuel, 2017). We have also incorporated a variety of changes to the downscaling model. In particular, we have expanded the FAST intensity model to the global scale, included an intensity-dependent steering-level coefficient in the track model, introduced changes to the calculation of potential intensity to improve transparency, and incorporated a parameterization of tropical cyclone ventilation that was previously evaluated in a tropical cyclone forecasting model (Lin et al., 2020). The proposed model, available online at [https://github.com/linjonathan/tropical_cyclone_risk](https://github.com/linjonathan/tropical_cyclone_risk), will help researchers in tropical cyclone climatology and risk to produce large datasets rapidly and transparently. The model is evaluated in the historical period by downscaling ERA5 reanalysis data (Hersbach, 2016). Section 2 describes the model in detail, including the genesis, track, and intensity algorithms. A thorough comparison with the observational record is shown in section 3. Section 4 explores tropical cyclone hazard on a global scale. Finally, section 5 concludes this study with a summary and discussion. ## 2 Materials and Methods ### Genesis This model uses random seeding, where seeds are randomly placed in space and time and allowed to evolve with the large-scale environment. This approach has been shown to successfully reproduce many aspects of tropical cyclone climatology (Emanuel et al., 2008; Emanuel, 2022). We also include a strong weighting function that depends on the background vorticity, similar to those used in genesis potential indices (Emanuel & Nolan, 2004; Tippett et al., 2011).
We use the function: \[P(\phi)=[(|\phi|-\phi_{0})/12]^{\xi} \tag{1}\] where \(\phi\) is the latitude, \(\phi_{0}\) is a tuning latitude parameter, and \(\xi\) is a power dependence that controls how quickly \(P\) decays towards the equator. \(P\) is not allowed to be smaller than zero or larger than unity. \(P\) weights the random seeding, such that there are no seeds near the equator, where there are no observed tropical cyclones. In this model, we have chosen \(\phi_{0}=2\). However, there is some basin-to-basin variation in the optimal selection of \(\xi\). This is important because \(\xi\) partially controls the frequency of low-latitude genesis, which does exhibit basin-to-basin variation in the observations. Thus, unlike genesis potential indices, which use a globally constant vorticity weighting function, we vary \(\xi\) by basin, as shown in Table 1. This has the favorable effect of improving how well the genesis patterns, inter-annual variability, and return period curves compare to observations. The seeds must also be initialized at a specific axisymmetric intensity, defined to be the azimuthal wind speed at the radius of maximum wind. An additional parameterization converts the axisymmetric wind speed to a maximum wind speed across the entire storm (which is the quantity reported in observations) and is described in the ensuing section. The seeds are initialized with an axisymmetric intensity of \(v_{init}\), and only seeds that have intensified to at least \(v_{2d}=7\) m/s after 2 days, reach an axisymmetric intensity of at least \(v_{min}\), and reach a maximum wind speed of at least \(v_{min}^{*}\) are kept. As in Emanuel (2022), the seeds must be initialized at a weak intensity to provide good statistics. Here, we use \(v_{init}=5\) m/s. In order to accurately compare the downscaling model to observations, we use \(v_{min}=15\) m/s and \(v_{min}^{*}=18\) m/s.
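As a small illustration of Equation (1), the following NumPy sketch rejection-samples seed latitudes with the latitude weighting; the default \(\xi\) and the latitude/longitude ranges are illustrative placeholders, since the basin-dependent values of \(\xi\) are given in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def seed_weight(lat, phi0=2.0, xi=0.76):
    """Latitude weighting P of the random seeding, Eq. (1), clipped to [0, 1]."""
    base = np.maximum((np.abs(lat) - phi0) / 12.0, 0.0)
    return np.minimum(base ** xi, 1.0)

def draw_seeds(n, lat_range=(-45.0, 45.0), lon_range=(0.0, 360.0), xi=0.76):
    """Rejection sampling: uniform proposals, kept with probability P(lat)."""
    seeds = []
    while len(seeds) < n:
        lat = rng.uniform(*lat_range)
        if rng.uniform() < seed_weight(lat, xi=xi):
            seeds.append((lat, rng.uniform(*lon_range)))
    return np.array(seeds)
```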
### Track Model After the seeds are initiated, they move in space and time according to the beta-and-advection model. The beta-and-advection model assumes that a tropical cyclone follows a weighted average of the large-scale winds, plus a north-westward beta-drift correction that is a consequence of non-linear advection of the background vorticity gradient by the tropical cyclone winds (Marks, 1992). Mathematically, this is: \[\mathbf{v}_{t}=(1-\alpha)\mathbf{v}_{250}+\alpha\mathbf{v}_{850}+\mathbf{v}_{\beta}\cos(\phi) \tag{2}\] where \(\mathbf{v}_{t}\) is the tropical cyclone translational vector, \(\mathbf{v}_{250}\) (\(\mathbf{v}_{850}\)) is the large-scale environmental wind at 250-hPa (850-hPa), \(\alpha\) is a steering coefficient, and \(\mathbf{v}_{\beta}\) is the translational speed correction due to beta-drift (Emanuel et al., 2006). In previous studies using this track model for tropical cyclone downscaling, a constant \(\alpha=0.8\) was chosen to minimize the 6-hour track displacement error from observations (Emanuel et al., 2006). Here, we iterate on this track model and provide evidence that the steering coefficient, \(\alpha\), varies with intensity. To show this, we begin by performing "vortex surgery" in reanalysis data, where the winds of the tropical cyclone are removed in order to calculate the background environmental steering winds within which each storm evolves. Since the divergence and vorticity of a tropical cyclone are typically elevated over those of its environment, the tropical cyclone's divergence and vorticity can be isolated from those of the environment and inverted, given suitable boundary conditions. The winds inferred from the inversion can then be subtracted from the full wind field to obtain the environmental wind. The reader is referred to Lin et al. (2020) for more details. We perform this vortex inversion on Atlantic, Eastern Pacific, and Western Pacific tropical cyclones from 2011-2021, using ERA5 reanalysis data over the same period. The tropical cyclones are identified using IBTrACS (K. R. Knapp et al., 2010). Once the 250-hPa and 850-hPa environmental winds are obtained, we calculate the steering-level coefficient \(\alpha\) that maximizes the coefficient of determination, \(r^{2}\), between the observed 6-hourly forward translational velocity and the translational velocity predicted by Equation (2), with \(\mathbf{v}_{\beta}=0\). Note that \(\mathbf{v}_{\beta}\) is typically set to a constant. In this sense, \(\mathbf{v}_{\beta}\) has a larger influence on the mean-squared error and mean bias of the beta-and-advection model, and less so on \(r^{2}\). Figure 1, top, shows the optimal \(\alpha\) that maximizes \(r^{2}\) for intensity bins of 10-kt total width, starting from 20 kts. The optimal \(\alpha\) decreases with intensity, but seems to level off to a constant after an intensity of 100 kts. This empirical relationship, which has also been qualitatively found in early studies of how the depth of the steering flow relates to tropical cyclone intensity (Dong & Neumann, 1986; Velden & Leslie, 1991), indicates that the steering level generally deepens as the tropical cyclone intensity increases. This is qualitatively consistent with the idea that as a tropical cyclone's circulation deepens, it is steered by winds further up in the atmosphere. In light of this analysis, we introduce a simple linear function that describes the dependence of \(\alpha\) on the intensity, \(v^{*}\) (knots): \[\alpha(v^{*})=\min\{\max\{b_{\alpha}-m_{\alpha}v^{*},\alpha_{\min}\},\alpha_{\max}\} \tag{3}\] where \(m_{\alpha}=0.0025\ \mathrm{kts}^{-1}\) and \(b_{\alpha}=0.83\) set the slope and intercept of the linear function, and \(\alpha_{\max}=0.78\) and \(\alpha_{\min}=0.59\) set the upper and lower bounds of \(\alpha\). \(\alpha\) is equal to \(\alpha_{\max}\) at intensities weaker than 20 kts, and decreases linearly with increasing intensity until it is bounded below by \(\alpha_{\min}\). This empirical fit is shown as the dashed black line in Figure 1, top. Figure 1, bottom, compares the \(r^{2}\) of zonal and meridional translational velocities predicted by (solid) Equation 3 and (dashed) a constant \(\alpha=0.8\). Figure 1: (Top): The steering-level coefficient \(\alpha\) that maximizes \(r^{2}\) between predicted and actual 6-hourly forward translational velocity in the (blue) zonal and (red) meridional directions. The solid black line is the mean between the zonal and meridional lines, while the dashed black line is the simple linear function fit to the data. (Bottom): \(r^{2}\) values between the predicted and actual forward translational velocity in the (blue) zonal and (red) meridional directions. Solid lines depict \(r^{2}\) values using \(\alpha=0.8\), while dashed lines are for those using the simple linear function for the intensity-dependent \(\alpha\). The sample set includes Atlantic, Eastern Pacific, and Western Pacific tropical cyclones from 2011-2021, and the bin size is 10 knots.
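A minimal NumPy sketch of Equations (2)-(3), assuming the winds are supplied as (u, v) pairs in m/s and positions are advanced with a simple flat-earth conversion; the one-hour time step and the unit handling are illustrative assumptions, not the model's actual numerics.

```python
import numpy as np

def steering_alpha(v_kts, m_a=0.0025, b_a=0.83, a_min=0.59, a_max=0.78):
    """Intensity-dependent steering coefficient, Eq. (3)."""
    return np.clip(b_a - m_a * v_kts, a_min, a_max)

def track_step(lat, lon, v_kts, uv250, uv850, uv_beta, dt_s=3600.0):
    """One beta-and-advection update, Eq. (2), for a single storm."""
    a = steering_alpha(v_kts)
    coslat = np.cos(np.radians(lat))
    u = (1 - a) * uv250[0] + a * uv850[0] + uv_beta[0] * coslat
    v = (1 - a) * uv250[1] + a * uv850[1] + uv_beta[1] * coslat
    m_per_deg = 111.32e3    # meters per degree of latitude
    return (lat + v * dt_s / m_per_deg,
            lon + u * dt_s / (m_per_deg * coslat))
```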
The inclusion of a simple intensity-dependent \(\alpha\) leads to a significant increase in \(r^{2}\) among all intensity bins. Furthermore, the mean-squared error of the translational velocity decreases for all intensity bins (not shown). The inclusion of the intensity-dependent \(\alpha\) does not degrade or improve the mean statistics shown later in this paper. This is expected, as the majority of storms do not become major hurricanes. However, this finding is significant in the sense that it shifts the modeled spatial distribution of major tropical cyclone activity, as analyzed later in this study. It may also be important in the context of global warming, which is predicted to lead to an increase in tropical cyclone strength (Knutson et al., 2010), an expansion of the tropics (Seidel et al., 2008), and increased poleward latitudes of tropical cyclone genesis (Sharmila and Walsh, 2018). An analysis of these potential effects with warming is left out of the scope of this paper, but will be investigated in future work. Note that there is some variance in the slope of \(\alpha\) with intensity by basin. It is not obvious why this is the case, but one potential source of uncertainty is the fact that linear interpolation between two levels, 250-hPa and 850-hPa, was used in determining the optimal \(\alpha\). Some of the basin-to-basin variations could be explained through differences in the vertical structure of the zonal and meridional environmental winds. However, the inclusion of more vertical levels in between 250- and 850-hPa is left to future work. As in Emanuel et al. (2006), stochastic realizations of the 250- and 850-hPa environmental winds are generated from monthly averages and covariances of daily zonal and meridional winds at those levels. These stochastic realizations of the environmental wind are used to steer the seeded tropical cyclones according to Equations 2 and 3. Since we do not make any changes to the stochastic generation of the environmental wind, the reader is referred to the supplement of Emanuel et al. (2006) for more details. ### Intensity Model To evaluate the intensity of the tropical cyclone along the track, we use the FAST intensity model (Emanuel and Zhang, 2017; Emanuel, 2017), a simplified pair of coupled, non-linear ordinary differential equations that evolve \(v\), the maximum azimuthal wind, and \(m\), a non-dimensional inner-core moisture variable, given a particular environmental forcing. As stated in Emanuel (2017), \(m\) can be thought of as a "kind of relative humidity". The model equations are designed to reduce to the nonlinear analytical model of tropical cyclone intensification derived in Emanuel (2012), under a fully water-saturated inner core and zero environmental wind shear. This model was used successfully in a probabilistic tropical cyclone forecasting model (Lin et al., 2020).
The equations are included below for convenience: \[\frac{dv}{dt}=\frac{1}{2}\frac{C_{k}}{h}\Big{[}\alpha\beta V_{p}^{2}m^{3}-(1-\gamma m^{3})v^{2}\Big{]} \tag{4}\] \[\frac{dm}{dt}=\frac{1}{2}\frac{C_{k}}{h}\Big{[}(1-m)v-\chi Sm\Big{]} \tag{5}\] \[\beta=1-\epsilon-\kappa \tag{6}\] \[\gamma=\epsilon+\alpha\kappa \tag{7}\] \[\epsilon=\frac{T_{s}-T_{o}}{T_{s}} \tag{8}\] \[\kappa=\frac{5}{2}\frac{C_{d}}{C_{d}}\frac{T_{e}}{R_{d}}\frac{g_{e}^{2}}{T_{e}} \tag{9}\] \[\alpha=1-0.87e^{-z} \tag{10}\] \[z=0.01\Gamma^{-0.4}h_{m}u_{T}V_{p}v^{-1} \tag{11}\] \[\chi_{\rm grid}=\frac{s^{*}-s_{m}}{s_{0}^{*}-s^{*}} \tag{12}\] where \(C_{k}\) and \(C_{d}\) are the surface enthalpy and drag coefficients, \(h\) is the atmospheric boundary layer depth, \(V_{p}\) is the potential intensity, \(\alpha\) is an ocean interaction parameter, \(\chi_{\rm grid}\) is the gridded mid-level saturation entropy deficit, \(s^{*}\) (\(s_{0}^{*}\)) is the saturation moist entropy of the free troposphere (sea surface), \(s_{m}\) is the moist entropy of the middle troposphere, \(S\) is the 250-850-hPa vertical wind shear, \(T_{s}\) is the surface temperature, \(T_{o}\) is the outflow temperature, \(L_{v}\) is a constant latent heat of vaporization, \(R_{d}\) is the dry gas constant, \(q_{s}^{*}\) is the surface saturation specific humidity, \(\epsilon\) is the thermodynamic efficiency, \(\Gamma\) is the sub-mixed-layer thermal stratification in \(K\,(100\,\mathrm{m})^{-1}\), \(h_{m}\) is the mixed layer depth, and \(u_{T}\) is the translation speed. The reader is referred to Emanuel (2017) for further details. For simplicity, we take \(\beta\), \(\gamma\), \(\epsilon\), and \(\kappa\) to be constant. As such, the key environmental quantities that drive differences in the intensification of a tropical cyclone in this model are the potential intensity \(V_{p}\), the vertical wind shear \(S\), the environmental entropy deficit \(\chi\), and the ocean interaction parameter \(\alpha\). The vertical wind shear is taken from the synthetic realizations of the upper- and lower-level winds, while the ocean interaction parameter is evolved using climatological profiles of ocean mixed-layer depth and sub-mixed-layer thermal stratification. It is possible that using reanalysis estimates of ocean mixed-layer depth and sub-mixed-layer thermal stratification could lead to improvements in these results. There are some changes made to the calculations of potential intensity and environmental entropy deficit, which are outlined in the next sections. Since the FAST equations are a coupled set of ordinary differential equations, both \(v\) and \(m\) need to be initialized. \(v\) is given by the random seeding approach, and thus we are left with a choice of how to initialize \(m\). Following Emanuel (2017), which initialized \(m\) as a function of the large-scale relative humidity, we choose to initialize \(m\) as a logistic curve of the large-scale monthly mean relative humidity: \[m_{\rm init}=\frac{L}{1+\exp(-k(\mathcal{H}-\mathcal{H}_{0}))}+m_{0} \tag{13}\] where \(L=0.20\), \(k=10\), \(\mathcal{H}_{0}=0.55\), \(m_{0}=0.125\), and \(\mathcal{H}\) is the large-scale relative humidity. This equation was arrived at somewhat empirically, but with the general idea that a moister large-scale environment is more conducive to tropical cyclone genesis. Note that this is different from the initialization of \(m_{\rm init}=1.2\mathcal{H}\) chosen by Emanuel (2017), which leads to intensification rates much larger than observed.
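As an illustration of how Equations (4)-(5) can be integrated along a track, here is a minimal SciPy sketch; the constants and the time-independent environmental forcing are illustrative placeholders, not the model's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fast_rhs(t, state, env, Ck=1.2e-3, h=1500.0, beta=0.7, gamma=0.6):
    """Right-hand side of the FAST system, Eqs. (4)-(5); beta and gamma
    are treated as constants, as in the text (values here are placeholders)."""
    v, m = state
    Vp, S, chi, alpha = env(t)   # environmental forcing along the track
    dv = 0.5 * Ck / h * (alpha * beta * Vp**2 * m**3 - (1 - gamma * m**3) * v**2)
    dm = 0.5 * Ck / h * ((1 - m) * v - chi * S * m)
    return [dv, dm]

# hypothetical fixed environment: Vp = 70 m/s, shear 5 m/s, chi = 1.0, alpha = 0.9
env = lambda t: (70.0, 5.0, 1.0, 0.9)
sol = solve_ivp(fast_rhs, (0.0, 5 * 86400.0), [5.0, 0.4],
                args=(env,), max_step=3600.0)
v_final, m_final = sol.y[:, -1]   # intensity (m/s) and inner-core moisture
```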
Finally, since the FAST equations predict only the axisymmetric wind, \(v\), a conversion to the maximum wind speed \(v^{*}\) (to easily compare with observations) must be performed. We follow the same model optimized in Lin et al. (2020), adding a wind vector that is a function of the translational speed and large-scale environmental wind to convert \(v\) into \(v^{*}\).

#### 2.3.1 Potential Intensity

Along with this Python-based model, we briefly describe a new Python-based algorithm for calculating potential intensity (PI, or \(V_{p}\)). This new algorithm is a version of the MATLAB algorithm introduced by Bister and Emanuel (2002, hereafter BE02), which was modified to run faster and be more modular and transparent. As in previous algorithms, \(V_{p}\) is calculated from environmental soundings using the formula \[V_{p}^{2}=S_{w}^{2}\frac{C_{k}}{C_{d}}\frac{T_{s}}{T_{o}}(CAPE^{*}-CAPE), \tag{14}\] where \(CAPE\) and \(CAPE^{*}\) are respectively the environmental convective available potential energies of a near-surface parcel and of a surface-saturated parcel at temperature \(T_{s}\). \(S_{w}\) is an empirical constant used to reduce PI-estimated wind speeds to the surface wind speeds observed in tropical cyclones. A value \(S_{w}=0.8\) is chosen, loosely based on the work of Powell (1980). In the CAPE computations, the lifting condensation level is computed using the formula of Romps (2017). Model options include computing ascent profiles for CAPE using either pseudoadiabatic (Bryan, 2008) or reversible (Emanuel, 1994) definitions of moist entropy. The new algorithm considers the effects of dissipative heating on storm intensity (Bister & Emanuel, 1998), but not the effect of central pressure drop on eyewall enthalpy transfer (Emanuel, 1988) considered in BE02. While one may argue that by neglecting the iterations on central pressure we are neglecting a physically important mechanism, we find no monotonically increasing difference between \(V_{p}\) computed using our algorithm and \(V_{p}\) computed using the algorithm of BE02 with identical \(S_{w}\) and exchange coefficients. In addition, \(V_{p}\) is not a quantity that can be observed, but instead must be estimated from environmental conditions using different algorithms or even formulas, all subject to different assumptions (Rousseau-Rizzi et al., 2022). Hence, we do not aim for a perfect correspondence between our PI algorithm and that of BE02, but for one sufficient to warrant its use here.

Results from the new PI algorithm and that of BE02 are compared in Fig. 2 for the particularly active hurricane season of 2017. The figure shows that, qualitatively, the two algorithms produce very similar results, with the new algorithm producing somewhat lower PI in subsidence regions, the subtropics, and midlatitudes, and higher PI in strongly convecting regions of the deep tropics. This result suggests that neglecting the effect of central pressure drop on enthalpy transfer in our algorithm is not a problem. If it were, the algorithm of BE02 should produce relatively higher values in the deep tropics, where PI is already high. The differences between the two algorithms are usually less than 5%. A histogram further comparing the values of PI computed using the two algorithms is available in Supporting Information, Figure S1.

Figure 2: 2017 hurricane season (August-September-October) average PI, computed using the new algorithm (top panel) and the algorithm of BE02 (center panel). Average of the difference between the two algorithms for the 2017 hurricane season (bottom panel).
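Given the CAPE values, Eq. (14) itself is a one-line computation. The sketch below assumes \(CAPE\) and \(CAPE^{*}\) have already been obtained from an environmental sounding; the function name, the exchange-coefficient ratio, and the sample values are illustrative only.

```python
import numpy as np

def potential_intensity(cape_sat, cape_env, Ts, To, Ck_over_Cd=0.9, Sw=0.8):
    """Potential intensity from Eq. (14).

    cape_sat : CAPE* of a surface-saturated parcel at temperature Ts [J/kg]
    cape_env : CAPE of an environmental near-surface parcel [J/kg]
    Ts, To   : surface and outflow temperatures [K]
    """
    vp2 = Sw**2 * Ck_over_Cd * (Ts / To) * (cape_sat - cape_env)
    return np.sqrt(max(vp2, 0.0))  # unfavorable soundings give Vp = 0

# Representative tropical sounding: Vp of roughly 50 m/s
print(potential_intensity(cape_sat=5000.0, cape_env=2000.0, Ts=300.0, To=200.0))
```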
Computing CAPE requires inverting moist entropy to obtain parcel temperature profiles on pressure levels, which is a time-consuming computational step. Here, we make use of the fact that the range of temperatures and pressures encountered below the tropopause is not large, and we provide the user with the option to pre-compute tables of temperature in entropy and pressure coordinates. In these tables, each combination of entropy and pressure needs to be inverted only once to obtain temperature. Then, the computationally costly step of inverting the entropy equation to compute CAPE becomes a simple interpolation. We find that interpolation using a pseudoadiabatic entropy table with equally spaced pressure coordinates ranging from 2500 to 105000 Pa and entropy coordinates ranging from 2300 to 3600 J kg\({}^{-1}\) K\({}^{-1}\) yields negligible differences from direct inversion when the table resolution is at least 100\(\times\)100. Reversible entropy interpolation tables require an additional "total water mixing ratio" dimension. Note that computing these tables only requires inverting moist entropy between 1e4 and 1e6 times, while computing \(V_{p}\) globally at a single time for a coarsely resolved (e.g., 2.5 degrees and 15 vertical levels) climate simulation requires inverting moist entropy about 3e6 times. Gilford (2021) estimated that the time required for computing \(V_{p}\) at 1e5 points using the BE02 algorithm is 8.5 s for the original MATLAB implementation and 10 s for their Python implementation. The new algorithm used here runs in 2 seconds for pseudoadiabatic and 3 seconds for reversible thermodynamics (the difference is due to the additional dimension of the reversible entropy interpolation table). In addition, the new algorithm is vectorized and designed to be run in parallel. Computing monthly-mean PI over 100 years of climate simulation at 1 degree resolution (8e7 points) and on 10 cores takes less than 3 minutes.

#### 2.3.2 Entropy Deficit

The ventilation of the tropical cyclone, or drying of the inner core (Tang & Emanuel, 2010), is parameterized in the FAST system through the term \(\chi Sm\). In Emanuel (2017), the entropy deficit is set as a constant, \(\chi=2.2\). However, the entropy deficit increases with warming, and has been shown to play a critical role in controlling the number of tropical cyclones predicted by downscaling models (Emanuel, 2013; Lee et al., 2020), statistical indices extrapolated to future climates (Camargo et al., 2014), and explicit numerical models (Hsieh et al., 2022) under future warming. In the probabilistic tropical cyclone forecasting model of Lin et al. (2020), variations of moisture on daily timescales are important in setting the spatial distribution of the saturation entropy deficit. There, \(\chi\) in the FAST equations is parameterized as the 90th-95th percentile of \(\chi\) values in an annulus of fixed radius centered around the tropical cyclone. This parameterization was shown to lead to skillful forecasts of tropical cyclone intensity. Here, we base the entropy deficit parameterization in this model on that used in Lin et al. (2020), computing \(\chi\) as: \[\chi=\exp\left(\log\chi_{\text{grid}}+\chi_{\sigma}\right)+\chi_{a} \tag{15}\] Since \(\chi\) is approximately log-normally distributed, as in Tang and Emanuel (2012), we add \(\chi_{\sigma}\) to the logarithm of the monthly-mean gridded entropy deficit, \(\chi_{\text{grid}}\), as well as \(\chi_{a}\) to \(\chi\) everywhere. In this study, we assume \(\chi_{\sigma}\) and \(\chi_{a}\) to be constant throughout all months, though future work could try to determine whether this choice is indeed optimal.
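A minimal sketch of Eq. (15) is given below; the offset values of \(\chi_{\sigma}\) and \(\chi_{a}\) are placeholders, not the tuned constants of the model.

```python
import numpy as np

def entropy_deficit(chi_grid, chi_sigma=0.2, chi_a=0.05):
    """Entropy deficit seen by the storm, Eq. (15), from the gridded
    monthly-mean saturation entropy deficit of Eq. (12).
    The offsets used here are illustrative placeholders."""
    return np.exp(np.log(chi_grid) + chi_sigma) + chi_a
```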
## 3 Model Benchmarks

For the purposes of this model development paper, we benchmark the model using a variety of comparisons to observations. Our comparisons of genesis, track, and intensity statistics are carried out on the global scale. We downscale ERA5 reanalysis data from 1979 to 2021, using monthly-averaged daily winds at 250- and 850-hPa, monthly-mean temperature and relative humidity, and monthly-mean sea surface temperature. Potential intensity is calculated using the new algorithm, under pseudoadiabatic lifting. There are some differences in the ensuing results when using the new potential intensity algorithm, as opposed to the original BE02 algorithm, but the differences are not statistically significant. A total of \(\approx\,600{,}000\) tropical cyclone tracks are generated with the downscaling model, such that the sample sets for the resulting analysis are statistically robust. Where applicable, the results are stratified by basin, as defined in the IBTrACS dataset (K. R. Knapp et al., 2010), except for the Southern Hemisphere basins, which are split into the South Indian (\(30^{\circ}E{-}100^{\circ}E\)), Australian (\(100^{\circ}E{-}180^{\circ}E\)), and South Pacific (\(180^{\circ}E-260^{\circ}E\)) basins.

### Genesis Statistics

To begin, we compare the annual density of genesis events to historical observations from 1979 to 2021, using \(3^{\circ}\) by \(3^{\circ}\) boxes to bin events. Figure 3 shows that, in general, the observed tropical cyclone genesis distribution is well simulated by the random seeding method combined with the FAST tropical cyclone intensity simulator (Emanuel et al., 2008; Emanuel, 2017). However, there are a few small biases in the model. For instance, the region of greatest probability of genesis seems to be biased too far eastward in the Western Pacific region, and biased too far westward in the Eastern Pacific region. The downscaling model's genesis rate in the Australian region is also slightly too low, while it is too high in the South Pacific. This is likely caused by genesis that is biased too far eastward in the South Pacific. A detailed comparison of the fraction of global tropical cyclones in each basin is shown in Figure S2. In addition, the downscaling model typically under-predicts genesis in extratropical regions. This bias could have several possible causes. First, monthly-mean moisture is used to drive the model. Moisture anomalies on time scales shorter than a month may be important to capture tropical cyclones in these regions, as they may temporarily elevate the genesis potential. These biases could also arise from the fact that the downscaling model's physics do not explicitly account for any kinetic energy derived from baroclinic instability (Davis & Bosart, 2004).

Figure 3: Number of genesis events per year, from (top) the downscaling model, and (bottom) observations from 1979 to 2021. \(3^{\circ}\) by \(3^{\circ}\) boxes are used to bin genesis events.
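For reference, the binning behind Figure 3 amounts to a two-dimensional histogram. A minimal sketch (assuming the genesis longitudes and latitudes of the synthetic events are available as plain arrays; the helper name is hypothetical) is:

```python
import numpy as np

def genesis_density(lons, lats, n_years, res=3.0):
    """Genesis events per year in res-degree longitude-latitude boxes,
    as used for Figure 3 (longitudes assumed in [0, 360))."""
    lon_edges = np.arange(0.0, 360.0 + res, res)
    lat_edges = np.arange(-90.0, 90.0 + res, res)
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    return counts / n_years
```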
Regardless of these biases, the major tropical cyclone genesis regions are well represented in the downscaling model.

We also investigate the seasonal cycle of tropical cyclone genesis. First, we compute the seed genesis probability, or the chance that a weak seed will undergo tropical cyclone genesis (a maximum wind of greater than 34 knots). In the random seeding approach, a large number of the randomly placed seeds die and are thrown out (Emanuel et al., 2008; Emanuel, 2022), which is reflected in the low seed probabilities shown in Figure 4, left; globally, only around 1 in every 125 randomly placed seeds survives. Since the seeds are also placed randomly in time, the seed genesis probability shown in Figure 4 also reflects the seasonal cycle of each individual tropical cyclone basin. In general, the downscaling model can represent the sharp tropical cyclone seasonal cycle in each basin, though there are slight negative biases in off-peak tropical cyclone months (for instance, May and November in the Atlantic basin). As mentioned earlier, consideration of moisture anomalies on time scales shorter than a month and/or inclusion of baroclinicity into the model physics could alleviate this bias. However, the off-peak tropical cyclones are typically weak, short-lived, and derivatives of baroclinic instabilities, and thus do not contribute strongly to the power dissipation index or the heavy tail of tropical cyclone hazard. The global seasonal cycle of tropical cyclone count is also reasonably represented in the downscaling model. Figure 4, right, shows the global seasonal cycle in tropical cyclone count, with error bars indicating the 95% confidence interval when sub-sampling the downscaling events to the same number as the historical record. Here, it is clear that the downscaling model underpredicts tropical cyclone count during the off-peak months in both hemispheres, namely May to June and November through December. Since the downscaling events are normalized to have the same number of events as the historical period, tropical cyclone count is over-predicted during peak tropical cyclone months and underpredicted during off-peak tropical cyclone months. Nevertheless, the key components of the global seasonal cycle are well reproduced using the downscaling model.

Figure 4: (Left) Probability that a weak seed will undergo tropical cyclone genesis, for each basin. (Right) Comparison of number of tropical cyclones per month between observations and the downscaling model. The downscaling model is normalized such that it has the same annual number of tropical cyclones as the observations. Error bars indicate the 95% confidence interval when sub-sampling downscaling events to the same size as observational events.

Finally, we investigate the latitudinal distribution of genesis, using \(3^{\circ}\) latitude bins. Again, we include the 95% confidence interval from sub-sampling the downscaling events to the same size as the observational record. In general, the downscaling model faithfully represents the latitudinal distribution of genesis, though it underestimates tropical cyclone genesis in the extratropics, as discussed earlier.
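The error bars used throughout this section come from sub-sampling the synthetic events down to the size of the observational record. A minimal sketch of that procedure (the statistic, sample size, and number of draws are placeholders) is:

```python
import numpy as np

def subsample_ci(events, n_obs, statistic, n_draws=1000, seed=0):
    """95% interval from repeatedly sub-sampling synthetic events down to
    the size of the observational record."""
    rng = np.random.default_rng(seed)
    draws = [statistic(rng.choice(events, size=n_obs, replace=False))
             for _ in range(n_draws)]
    return np.percentile(draws, [2.5, 97.5])

# Example: spread in the September share of genesis (months as integers)
# lo, hi = subsample_ci(genesis_months, n_obs=1500,
#                       statistic=lambda m: np.mean(m == 9))
```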
### Track and Intensity Statistics

In this section, we analyze the track and intensity statistics of the tropical cyclones represented in the downscaling model. First, we look at the number of 3-hourly track crossings per year, using \(3^{\circ}\) by \(3^{\circ}\) longitude-latitude boxes. The number of tracks in the downscaling model is normalized such that the model has the same number of tropical cyclones per year as the observations. As shown in Figure 6, the modeled track density distribution qualitatively represents that of the observational record, though there are a few notable biases. The bias of largest magnitude is the negative bias in track density over the Eastern Pacific region, which is most likely attributable to the negative bias in genesis in the eastern portion of that region. In the Western Pacific region, the number of track crossings is of comparable magnitude between the model and the observations, though the tracks of tropical cyclones are biased too far eastward in the downscaling model. There is also a negative bias in track density polewards of around \(30^{\circ}N\) and \(15^{\circ}S\) that, again, could be alleviated through inclusion of baroclinic instability and/or moisture anomalies on time scales shorter than a month into the model physics.

We also show the distribution of the lifetime maximum intensity (LMI) of downscaled tropical cyclones. Figure 7 shows that the modeled lifetime maximum intensity distribution closely follows the observed distribution, with a peak around 50 kt and an exponential decay in probability with increasing LMI. Here, it is important to note that the differences between the modeled and observational distributions are not statistically significant, except for the bimodality in the distribution that is a direct result of rapidly intensifying storms (Lee et al., 2016). We do not make an explicit attempt to account for the bimodality in the LMI distribution, and leave that for future work. We further note that the good performance of the model in reproducing the distribution of tropical cyclone lifetime maximum intensities warrants the use of the simplified rapid algorithm to compute \(V_{p}\).

Figure 5: Distribution of the latitude of genesis. The downscaling distribution is normalized to have the same number of total genesis events as the observations. Error bars indicate the 95% confidence interval when sub-sampling downscaling events to the same size as observational events.

Figure 6: Number of 3-hourly track crossings per year, using \(3^{\circ}\) by \(3^{\circ}\) longitude-latitude boxes, from (top) the downscaling model, and (middle) observations from 1979 to 2021. (Bottom) Difference in number of 3-hourly track crossings per year, between the downscaling model and observations. The color scale is linear from -2 to 2 years, and logarithmic where the magnitude of the difference is greater than 2 years.

### Inter-annual Variability

Finally, we investigate the downscaling model's ability to capture inter-annual variability in tropical cyclone activity. In the ensuing analysis, we consider tropical cyclone activity during winter months (DJF) as occurring during the January/February year, in order to aggregate tropical cyclones in the Northern and Southern hemispheres into the same year. In the random seeding approach, seeds are randomly placed in space and time at a constant rate, such that inter-annual variability in tropical cyclone count is also a measure of inter-annual variations in the probability that a weak seed intensifies into a tropical cyclone.
Thus, in this model, inter-annual variability comes from inter-annual changes in the large-scale environment, which ultimately determine the transition probability of the weak proto-vortex into a tropical cyclone. Figure 8 shows that the downscaling model is also able to reasonably capture inter-annual variability in tropical cyclone count, particularly in the Eastern Pacific and North Atlantic regions, where genesis potential indices have high skill (Camargo et al., 2007). The values of the correlation coefficients are comparable to, if not higher than, those shown in Lee et al. (2018), though the years analyzed in that study were 1981-2012. There is very little correlation in inter-annual variability in the West Pacific basin, which is a documented deficiency of genesis potential indices (Menkes et al., 2012). Though this model is not directly based on a genesis potential index, it uses similar input variables. Finally, there is also decent correlation of inter-annual global tropical cyclone count (\(r=0.31\)), mostly owing to high inter-annual skill in the Eastern Pacific and North Atlantic regions.

Figure 7: Comparison of the lifetime maximum intensity probability density distribution between the downscaling model and observational record, using 10-kt wide bins. Error bars indicate the 95% confidence interval when sub-sampling downscaling events to the same size as observational events.

Figure 8: Inter-annual variability in the number of tropical cyclones for each basin, from the (black) observational record and the (red) downscaling model. Basin tropical cyclone counts in the downscaling model are normalized by the average tropical cyclone count over the historical period in each basin. Pearson correlation coefficients are shown in the top-right of each panel. Only storms where the LMI is greater than 34 knots are considered.

Another metric that is arguably more predictable (or less noisy) than the global tropical cyclone count is the power dissipation index (PDI). The PDI is calculated as the integral of the cube of the storm intensity over its entire lifetime, summed over all tropical cyclones in a year. Thus, PDI accounts for not only tropical cyclone frequency, but also duration and intensity. Figure 9 compares historical inter-annual variations in the global PDI with those predicted by the downscaling model. The correlation coefficient is \(r=0.36\), showing that the downscaling model is also able to decently capture global inter-annual variations in the PDI. The PDI correlation is strongly influenced by outliers in the 1990s; the correlation increases to \(r=0.54\) when subsetting the historical period to years after 2000. We also calculate the storm maximum PDI, which is a simplified version of PDI and is calculated as the sum of the cube of the storm lifetime maximum intensity, over all tropical cyclones in a year. In this sense, storm maximum PDI does not include the overall lifetime of the tropical cyclone. The correlation coefficient is \(r=0.51\), indicating that model skill improves when considering only storm frequency and maximum intensity.
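Both PDI variants reduce to short sums over the simulated tracks. The sketch below assumes each track is stored as a sequence of 3-hourly intensities (the units simply need to be consistent between model and observations):

```python
import numpy as np

def power_dissipation_index(tracks, dt=3 * 3600.0):
    """PDI: integral of the cube of intensity over each storm's lifetime,
    summed over all storms (3-hourly track points assumed)."""
    return sum(np.sum(np.asarray(v) ** 3) * dt for v in tracks)

def storm_maximum_pdi(tracks):
    """Simplified PDI: sum of the cubed lifetime maximum intensities."""
    return sum(np.max(v) ** 3 for v in tracks)
```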
## 4 Tropical Cyclone Hazard

Finally, in this section, we consider global tropical cyclone hazard, which combines information about the genesis, track, and intensity evolution of tropical cyclones. Here, we consider the return period of tropical cyclones that have an intensity of at least 64 knots (Category 1 status). We calculate the return period on \(1^{\circ}\) by \(1^{\circ}\) longitude-latitude boxes, using both the observational data and the downscaling events, ensuring not to double count singular events. Since the sample size of the downscaling events is much larger than that of the historical data, we use a Gaussian kernel of unit standard deviation to smooth the observational counts. The return period, as calculated, is thus defined as how often a \(1^{\circ}\) by \(1^{\circ}\) grid box will observe a tropical cyclone with an intensity of at least 64 knots.

Figure 10 compares the calculated return period of tropical cyclones that have an intensity of at least 64 knots between the downscaling events and the observational record. Note that at the interfaces between areas where tropical cyclones are observed and those where there is no tropical cyclone activity, the downscaling model will tend to overestimate the return period (underestimate hazard), since the sample size of the observational record is much smaller than that of the downscaling model. There is generally very little disagreement in return period in the Western Pacific basin, whereas there seems to be a southward bias in the region of smallest return periods (i.e., where a hurricane is most likely) in the Eastern Pacific basin.

Figure 9: Inter-annual variability in the global (left) power dissipation index and the (right) storm maximum power dissipation index. Only storms where the LMI is greater than 34 knots are considered.

Figure 10: Global map of the return period of tropical cyclones that reach an intensity of at least 64 knots, from the (top) downscaling model and from (middle) observations, using \(1^{\circ}\) by \(1^{\circ}\) longitude-latitude boxes. A Gaussian kernel of unit standard deviation is used to smooth the observations. (Bottom) The difference in return period between downscaling and observations. Blue (red) shading is where hazard is underestimated (overestimated). The color scale is linear from -2 to 2 years, and logarithmic where the magnitude of the return period difference is greater than 2 years.

The magnitude of the return period differences is generally not large in the Atlantic basin either (around 2 years in magnitude on average), except for the western Gulf of Mexico region, where the downscaling model seems to overestimate hazard with respect to the historical record, though the historical record has larger uncertainties at longer return periods. It is also worth commenting on how the intensity-dependent \(\alpha\) changes the general distribution of major tropical cyclone activity. Since \(\alpha\) has the largest differences at the strongest intensities, we use PDI to understand how an intensity-dependent \(\alpha\) influences major tropical cyclone activity. Figure 11 shows the mean PDI in the downscaling model, as well as differences in PDI between the intensity-dependent \(\alpha\) and constant \(\alpha\) experiments. In general, the intensity-dependent \(\alpha\) expands the region of TC activity: the increases are, for the most part, at the margins of the regions of greatest PDI in the control simulation, while the core TC regions see decreases. However, there is also large regional variability in how PDI changes. For instance, PDI decreases in the Caribbean Sea and western Gulf of Mexico, while it generally increases over the North Atlantic Ocean. Furthermore, PDI increases in the South China Sea but decreases over the northern part of the sea.
The latter can be directly attributed to the presence of mean easterlies and southerlies at 250-hPa during boreal summer. Investigation of percent changes to the PDI (Figure 11, bottom) shows that in some regions, an intensity-dependent \(\alpha\) can lead to a 5-10% change in the PDI.

Figure 11: (Top) Mean PDI in the control downscaling experiment over the 43-year reanalysis period, using 3\({}^{\circ}\) by 3\({}^{\circ}\) bins. (Middle) Difference in the mean PDI between the intensity-dependent \(\alpha\) and the constant \(\alpha\) simulations. The scale is linear from -10\({}^{8}\) to 10\({}^{8}\), and logarithmic for differences with magnitude above 10\({}^{8}\). (Bottom) Percent difference in PDI from the intensity-dependent \(\alpha\) to the constant \(\alpha\) simulations, where grid-points with a mean PDI less than 10\({}^{8}\) are removed.

Finally, we calculate return period curves of landfall intensity at various areas around the globe that are prone to tropical cyclones. Return period curves are valuable since they highlight the frequency of the strongest tropical cyclones, which are often the most destructive and costly. Each region is defined following Lee et al. (2018), finding all locations over land that are within 50 km of a coastline. Figure 12 shows return period curves of landfall intensity at various regional locations, calculated from the control and intensity-dependent \(\alpha\) downscaling experiments. The return period curves are benchmarked against return periods estimated from observational data, and are not bias-corrected to the observations. In general, we observe that the return period curves are in agreement with those derived from observations at low intensities, though there are small biases, such as an overestimation of the return period of weak storms in Western Mexico, the Bay of Bengal, and the Caribbean Islands. It is also informative to analyze the difference in return period curves between the control and intensity-dependent \(\alpha\) downscaling experiments, since return period curves magnify the tail of the tropical cyclone distribution. In particular, the return period curves show that the frequency of the most intense storms increases along the Gulf of Mexico, Madagascar, and Western Mexico coastlines, while there are no discernible differences in Australia, the Bay of Bengal, the Caribbean Islands, the Philippines, and China. Note that for some regions, such as the Northern Australia Coast and the Gulf of Mexico, one portion of the coastline sees an increased frequency of major tropical cyclones under the intensity-dependent \(\alpha\) model, while other portions of the same coastline see a decreased frequency of major tropical cyclones. Whether or not differences between these return period curves will increase or decrease with warming is an important and interesting question, and one that will be the subject of future work.

## 5 Summary and Discussion

In this study, we develop an open-source, physics-based tropical cyclone downscaling model. The model synthesizes concepts from the MIT tropical cyclone downscaling model (Emanuel et al., 2004, 2008; Emanuel, 2022), randomly seeding weak vortices in space and time and evolving them within the large-scale environment. The weak seeds translate according to the beta-and-advection model (Marks, 1992), and intensify according to the FAST intensity model (Emanuel, 2017). Only seeds that reach traditionally defined tropical storm strength are kept.
A number of changes are made to the MIT tropical cyclone downscaling model. In particular, we include a dependence of the depth of the steering flow on intensity, introduce a new Python-based algorithm to calculate potential intensity, incorporate a new parameterization of ventilation in the FAST intensity model, and expand the same intensity model to the global scale. Using these methods, the model is shown to reasonably represent the climatology of tropical cyclone activity, as compared to the observational record.

A number of benchmarks are used to evaluate the model. We show that the tropical cyclone downscaling model's seasonal cycle, genesis, track density, and intensity distributions are generally close to the observational record, though there are a few biases as discussed in the main text. Furthermore, correlations in inter-annual tropical cyclone count are comparable to those of genesis potential indices (Camargo et al., 2007). The downscaling model also displays substantial correlation with the historical record of global storm maximum power dissipation index. We also compared return periods of storms that reach an intensity of at least 64 knots, and found general agreement between return periods calculated from the downscaling model and those calculated from historical data.

The genesis method is based on random seeding, as opposed to a statistically trained algorithm that directly reproduces observed tropical cyclone genesis patterns. This should be seen as both a strength and a weakness of this model. For instance, while there are a few biases in the genesis patterns, as shown in Figure 3, the genesis pattern does not depend on the sparse sampling set over the historical period. Research has also shown that tropical cyclone frequency, as predicted by downscaling models, can rapidly diverge in future warming scenarios, depending on whether relative humidity or saturation entropy deficit is used in statistical indices of tropical cyclone genesis (Lee et al., 2020). This is because both quantities vary in synchronicity in the current climate, but diverge in warming scenarios. The random seeding approach does not resolve that issue, but rather presents an alternative approach, as discussed thoroughly by Emanuel (2022). However, it is worth highlighting this model's dependence on both quantities, as relative humidity plays a role in initializing the inner-core moisture of the intensity model, while the saturation entropy deficit modulates the rate at which the inner-core moisture dries through ventilation. Future changes to both variables would play a role in the genesis rate predicted by this model.

Figure 12: Return period curves of landfall intensity at labeled areas around the world, from all tracks in the (blue) control and (red) intensity-dependent \(\alpha\) downscaling model. Return period curves calculated from observations are in black.

Furthermore, while the parameterization of ventilation in the intensity component of the downscaling model has been evaluated in the same intensity model on forecasting time scales (Lin et al., 2020), the success of this parameterization in a forecasting model by no means guarantees its correctness in its response to warming. This is primarily because the temperature dependence of the parameterization cannot easily be tested in the current climate, since temperature fluctuations in the tropics are weak (Sobel et al., 2001).
The ventilation process, however, has support from theory and idealized numerical modeling, though it was primarily tested in mature tropical cyclones (Tang & Emanuel, 2010). Recent work has additionally suggested that ventilation seems to play a large role in modulating tropical cyclone frequency under warming scenarios in numerical models (Hsieh et al., 2020, 2022). Still, an open question is whether or not ventilation (as opposed to some other variable) plays the dominant role in modulating the frequency and intensification rate of precursor tropical disturbances. In this model, the ventilation process has no intensity dependence, i.e., the randomly seeded proto-vortices and the most intense tropical cyclones are equally affected by the environmental saturation entropy deficit. How this assumption modulates this model's response to warming will be the subject of future research.

Despite these open problems, this physics-based downscaling model can be used to understand how physical processes in the large-scale environment play a role in modulating tropical cyclone genesis, track, and intensification. Because this model does not significantly depend on statistical sampling of historical tracks, it can, in principle, reproduce tropical cyclone variability in the climate system on decadal and multi-decadal time scales. This is one advantage of this model. In addition, while we only presented results from downscaling reanalysis data, climate models can also be downscaled, though additional tuning and/or bias correction may be necessary. The behavior of tropical cyclones in different climates (and model representations of those climates) can be linked to specific processes in the atmosphere, given the physical basis of the downscaling model. Furthermore, while the parameters [see Table A1 for a summary] we used in this study lead to reasonable representations of tropical cyclone climatology, they should not be thought of as fixed. The model is freely available online for those interested in exploring the parameter space. Finally, the downscaling model may appeal to those interested in tropical cyclone hazard, since a large number of synthetic events can be rapidly generated.

## Appendix A Additional Model Information

Table A1 shows the summary of parameters used in the downscaling model. All of the variables are described in detail in the main text.

### Open Research Section

The daily ERA5 data for zonal and meridional winds are available at https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels via DOI: 10.24381/cds.bd0915c6 (Hersbach et al., 2018). The monthly-averaged ERA5 data for temperature and specific humidity are available at https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels-monthly-means via DOI: 10.24381/cds.6860a573 (Hersbach et al., 2019a). The monthly-mean ERA5 data for sea-surface temperature and surface pressure fields are available at https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels-monthly-means via DOI: 10.24381/cds.f17050d7 (Hersbach et al., 2019b). The ERA5 reanalysis data are accessible by creating an account with the Climate Data Store service, and usable according to the ECMWF license to use Copernicus products.
The IBTrACS data used for evaluation of the model with observations are available at https://www.ncei.noaa.gov/products/international-best-track-archive via DOI: 10.25921/82ty-9e16 (K. Knapp et al., 2018). The physics-based tropical cyclone risk model is freely available at https://github.com/linjonathan/tropical_cyclone_risk (Lin, 2023). Code to generate the data, as well as instructions to run the model, are all available at the aforementioned link. J. Lin gratefully acknowledges the support of the National Science Foundation through the NSF-AGS Postdoctoral Fellowship. C.-Y. Lee is supported by the Palisades Geophysical Institute (PGI) Young Scientist award from Lamont-Doherty Earth Observatory, Columbia University. C.-Y. Lee and A. Sobel also gratefully acknowledge support from the Swiss Re Foundation. The authors also thank Kerry Emanuel for insightful comments on an earlier version of the manuscript.
2302.06000
Probing modified plasma waves in non-linear electrodynamics
Properties of modified plasma waves in non-linear electrodynamics are investigated. We consider a cold, uniform, collisionless, and magnetized plasma model. Initially, we also assume small amplitude waves and the non-relativistic approximation. For electrostatic waves, we obtain a modified Trivelpiece-Gould dispersion relation with a suitable change in the plasma frequency and analyze the stability of modes. Furthermore, electromagnetic waves related to the generalized Appleton-Hartree equation are established. In this case, we discuss modifications in circularly polarized waves, ordinary and extraordinary modes. After that, we apply our results to particular cases of low-energy quantum electrodynamics and a generalized Born-Infeld model. The correspondent dispersion relations and effects on the propagation regions are determined. Finally, we include the relativistic and large amplitude effects for circularly polarized waves. We obtain the dispersion relation within effective non-linear electrodynamics and examine the behavior of the refractive index when the frequency of the propagating wave converges to the plasma frequency.
Leonardo P. R. Ospedal, Fernando Haas
2023-02-12T21:07:17Z
http://arxiv.org/abs/2302.06000v2
# Probing modified plasma waves in non-linear electrodynamics

###### Abstract

The properties of modified plasma waves in a general non-linear electrodynamics with parity invariance are investigated. We consider a cold, uniform, collisionless and magnetized plasma model. For electrostatic waves, we obtain the modified Trivelpiece-Gould dispersion relation with a suitable change in the plasma frequency and analyze the stability of the modes. Furthermore, electromagnetic waves related to the generalized Appleton-Hartree equation are established. In this case, we discuss the modifications in circularly polarized waves, ordinary and extraordinary modes. Finally, we apply our results to particular cases of low-energy quantum electrodynamics and a generalized Born-Infeld model. The corresponding dispersion relations and effects on the propagation regions are determined. We also provide estimates about the ambient magnetic field in which the contributions of non-linear electrodynamics become more relevant.

non-linear electrodynamics, plasma waves, Trivelpiece-Gould dispersion relation, Appleton-Hartree equation.

pacs: 11.10.Lm, 52.35.Fp, 52.35.Mw

## I Introduction

In 1933, based on the seminal works by Dirac [1; 2; 3] about the quantum theory of the electron and the concept of the quantum vacuum, Halpern pointed out that virtual electron-positron pairs could generate light-by-light scattering [4]. Subsequently, in 1934, Heisenberg published two papers [5; 6], where the connection between quantum vacuum fluctuations and light-by-light scattering was formulated in more detail. With these ideas in mind and the initial development of quantum electrodynamics (QED), in 1935, Euler and Kockel considered low-energy quantum effects and obtained non-linear corrections to Maxwell electrodynamics. Interestingly enough, they also carried out the first calculation of the light-by-light cross section in the low-frequency regime [7]. Thereafter, in 1936, this non-linear electrodynamics was generalized by Heisenberg and Euler [8], where the authors included higher-order quantum corrections in a non-perturbative formulation. In parallel, from a classical viewpoint, Born and Infeld presented a non-linear model with the purpose of avoiding the singularities of Maxwell theory [9]. Since then, other non-linear electrodynamics have been proposed with different motivations, such as effective theories, proposals beyond the Standard Model of elementary particles, novel black-hole solutions, and applications in Cosmology (see, for instance, the reviews [10; 11] and references therein).

It is appropriate to highlight the renewed interest in non-linear electrodynamics due to the upgrade of some experiments. In 2017, the ATLAS Collaboration carried out a first measurement of light-by-light scattering in heavy-ion collisions [12]. After that, in 2018, the CMS Collaboration also described similar measurements [13]. More recently, in 2021, the CMS and TOTEM Collaborations reported a first search for light-by-light scattering in proton-proton collisions [14]. These results can be relevant to test the Standard Model and QED predictions, as well as to constrain a set of non-linear electrodynamics. Along the same line, we refer to ref. [15] for some prospects at future colliders. Furthermore, the development of high-intensity lasers has also triggered a growing motivation to probe photon-photon and photon-plasma interactions [16; 17; 18; 19; 20].
Mention should also be made of astrophysical environments, which provide important situations involving strong magnetic fields, where the impact of non-linear electrodynamics and plasma effects will certainly show up. For more details, we suggest the reviews [21; 22; 23]. A lot of effort has been devoted to studying the properties of general non-linear electrodynamics. For example, in the works of refs. [24; 25], the authors obtained the dispersion relations of photon propagation in a uniform electromagnetic background field. In addition, the effects of non-linear electrodynamics on the optical properties of the vacuum were also evaluated in refs. [26; 27; 28]. However, in the context of plasma, the investigations are mainly restricted to vacuum polarization effects [29], as described by Euler and Kockel electrodynamics. It would be interesting to perform a similar analysis of plasma waves in general non-linear electrodynamics.

Based on these motivations, we pursue some investigations of modified plasma waves in the context of non-linear electrodynamics. The main purpose of this work is to further elaborate on a general electrodynamics with parity invariance, which encompasses most examples in the literature. This approach offers the advantage of understanding some essential aspects without specifying a particular electrodynamics. Furthermore, in a first attempt in this direction, we consider a cold plasma model, consisting of electrons in a homogeneous ionic background. Collisional, relativistic and large amplitude effects are not addressed. Although a simplified plasma model is examined, one may gain further insights to generalize the results under more involved assumptions.

This paper is organized with the following outline. In Section II, we introduce general results of non-linear electrodynamics in the presence of a magnetic background field. Furthermore, we describe the fluid theory under consideration. Subsequently, we investigate some contributions to plasma waves, where the modified Trivelpiece-Gould dispersion relation, the generalized Appleton-Hartree equation, and their corresponding modes are discussed. After that, in Section III, we apply the previous results to Euler-Kockel electrodynamics (low-energy quantum effects) and a Born-Infeld-type model, as well as present some comparisons with the literature. Finally, in Section IV, our conclusions and perspectives are presented. Throughout this work, we adopt SI units, where \(\varepsilon_{0}\) and \(\mu_{0}\) correspond to the vacuum permittivity and permeability, respectively. In addition, \(c=1/\sqrt{\varepsilon_{0}\,\mu_{0}}\) denotes the speed of light.

## II Non-linear electrodynamics in cold plasma

In this section, we present general results of non-linear electrodynamics and the description of the fluid theory. As already mentioned, the main goal is to investigate some modified plasma waves. The first step in this direction is to discuss the essential features of non-linear electrodynamics. For our purpose, the non-covariant formulation is sufficient. Thus, we begin with the general Lagrangian density \[\mathcal{L}=\frac{1}{\mu_{0}}\,L_{nl}-\rho\,\phi+\mathbf{j}\cdot\mathbf{A}\,, \tag{1}\] where \(\rho\) and \(\mathbf{j}\) denote the charge and current densities, respectively, while \(\phi\) and \(\mathbf{A}\) correspond to the electromagnetic potentials such that \(\mathbf{B}=\nabla\times\mathbf{A}\) and \(\mathbf{E}=-\nabla\phi-\partial\mathbf{A}/\partial t\).
In order to preserve the Lorentz and gauge symmetries, we have that \(L_{nl}=L_{nl}(\mathcal{F},\mathcal{G})\) must be a function of the invariants \[\mathcal{F}=\frac{1}{2}\left(\frac{\mathbf{E}^{2}}{c^{2}}-\mathbf{B}^{2}\right)\,,\quad\mathcal{G}=\frac{\mathbf{E}\cdot\mathbf{B}}{c}\,. \tag{2}\] The field equations associated with Eq. (1) are given by \[\nabla\cdot\mathbf{D}=\rho\,, \tag{3}\] \[\nabla\times\mathbf{H}=\mathbf{j}+\frac{\partial\mathbf{D}}{\partial t}\,, \tag{4}\] where we defined the auxiliary fields \[\mathbf{D}=\frac{\partial L_{nl}}{\partial\mathcal{F}}\,\varepsilon_{0}\,\mathbf{E}+\frac{\partial L_{nl}}{\partial\mathcal{G}}\,c\,\varepsilon_{0}\,\mathbf{B}\,, \tag{5}\] \[\mathbf{H}=\frac{\partial L_{nl}}{\partial\mathcal{F}}\,\frac{\mathbf{B}}{\mu_{0}}-\frac{\partial L_{nl}}{\partial\mathcal{G}}\,\frac{\mathbf{E}}{\mu_{0}\,c}\,. \tag{6}\] For example, in the case of Maxwell electrodynamics, we have \(L_{nl}=\mathcal{F}\), which implies the usual expressions \(\mathbf{D}=\varepsilon_{0}\,\mathbf{E}\) and \(\mathbf{H}=\mathbf{B}/\mu_{0}\), without the magnetization and polarization vectors. We highlight that the homogeneous equations remain unchanged, namely, \[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}\,,\quad\nabla\cdot\mathbf{B}=0\,. \tag{7}\] Having made these observations, let us now consider the electromagnetic fields around a uniform and constant magnetic background field \(\mathbf{B}_{0}\), such that \(\mathbf{E}=\delta\mathbf{E}\) and \(\mathbf{B}=\mathbf{B}_{0}+\delta\mathbf{B}\), with \(\delta\mathbf{E}\) and \(\delta\mathbf{B}\) being small perturbations. Keeping this in mind, one can expand Eqs. (5) and (6), which leads to \[\mathbf{D}\approx c_{1}\,\varepsilon_{0}\,\delta\mathbf{E}+c_{2}\,\varepsilon_{0}\,c\,(\mathbf{B}_{0}+\delta\mathbf{B})+d_{2}\,\varepsilon_{0}\,\mathbf{B}_{0}\,(\mathbf{B}_{0}\cdot\delta\mathbf{E})-d_{3}\,\varepsilon_{0}\,c\,\mathbf{B}_{0}\,(\mathbf{B}_{0}\cdot\delta\mathbf{B})\,, \tag{8}\] \[\mathbf{H}\approx\frac{c_{1}}{\mu_{0}}\,\left(\mathbf{B}_{0}+\delta\mathbf{B}\right)-\frac{c_{2}}{\mu_{0}c}\,\delta\mathbf{E}-\frac{d_{1}}{\mu_{0}}\,\mathbf{B}_{0}\,(\mathbf{B}_{0}\cdot\delta\mathbf{B})+\frac{d_{3}}{\mu_{0}c}\,\mathbf{B}_{0}\,(\mathbf{B}_{0}\cdot\delta\mathbf{E})\,, \tag{9}\] where the coefficients \(c_{1},c_{2},d_{1},d_{2}\) and \(d_{3}\) are evaluated at the magnetic background field as follows: \[c_{1}=\left.\frac{\partial L_{nl}}{\partial\mathcal{F}}\right|_{\mathbf{B}_{0}}\;,\;\;c_{2}=\left.\frac{\partial L_{nl}}{\partial\mathcal{G}}\right|_{\mathbf{B}_{0}}\;, \tag{10}\] \[d_{1}=\left.\frac{\partial^{2}L_{nl}}{\partial\mathcal{F}^{2}}\right|_{\mathbf{B}_{0}}\;,\;\;d_{2}=\left.\frac{\partial^{2}L_{nl}}{\partial\mathcal{G}^{2}}\right|_{\mathbf{B}_{0}}\;,\;\;d_{3}=\left.\frac{\partial^{2}L_{nl}}{\partial\mathcal{F}\partial\mathcal{G}}\right|_{\mathbf{B}_{0}}\;. \tag{11}\] We have adopted a notation similar to that of ref. [30]. Moreover, from Eq. (7), we obtain the homogeneous equations for the perturbation fields, \[\nabla\times\delta\mathbf{E}=-\frac{\partial\,\delta\mathbf{B}}{\partial t}\,, \tag{12}\] \[\nabla\cdot\delta\mathbf{B}=0\,. \tag{13}\] At this stage, it is important to discuss some subtleties. First of all, using the previous Eqs. (12) and (13), one can show that the coefficient \(c_{2}\) does not contribute to the linearized field equations (3) and (4).
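As a minimal illustration of the definitions (10)-(11), the coefficients can be evaluated symbolically. The quartic toy Lagrangian below is only an example chosen for demonstration; for parity-even models, \(d_{3}\) vanishes automatically, as discussed next.

```python
import sympy as sp

F, G, B0, a, b = sp.symbols('F G B_0 a b', real=True)

# Toy parity-even effective Lagrangian: L_nl = F + a F**2 + b G**2
L = F + a * F**2 + b * G**2

# Purely magnetic background: F = -B0**2/2, G = 0
bg = {F: -B0**2 / 2, G: 0}

c1 = sp.diff(L, F).subs(bg)       # Eq. (10)
c2 = sp.diff(L, G).subs(bg)
d1 = sp.diff(L, F, 2).subs(bg)    # Eq. (11)
d2 = sp.diff(L, G, 2).subs(bg)
d3 = sp.diff(L, F, G).subs(bg)

print(c1, c2, d1, d2, d3)         # 1 - a*B_0**2, 0, 2*a, 2*b, 0
```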
Furthermore, we consider non-linear electrodynamics with parity symmetry (invariance under the discrete transformation \(\mathbf{x}\rightarrow-\mathbf{x}\)), such that \(L_{nl}=L_{nl}(\mathcal{F},\mathcal{G})\) must be invariant under the exchange \(\mathcal{G}\rightarrow-\mathcal{G}\). For this reason, we restrict to the models in which \(d_{3}=0\). Thereby, only the coefficients \(c_{1},d_{1}\) and \(d_{2}\) will be relevant in our analyses. Here, it is also convenient to observe that \(c_{1}>0\). Indeed, an effective theory of non-linear electrodynamics can be written as a power series of the invariants \(\mathcal{F}\) and \(\mathcal{G}\), given by \[L_{nl}\approx\sum_{i,j}a_{ij}\,\mathcal{F}^{i}\mathcal{G}^{j}\,, \tag{14}\] where \(a_{ij}\) are constants (see, for instance, refs. [26; 27]). The leading-order term in Eq. (14) corresponds to the Maxwell contribution (\(\mathcal{F}\)). Thus, we expect that \(c_{1}\approx 1+\epsilon\), with \(\epsilon\ll 1\) being a dimensionless parameter.

Next, we describe the fluid theory. We shall adopt a methodology similar to that of a previous work, ref. [31]. In a first approach, we consider the cold plasma limit described by \[\frac{\partial n}{\partial t}+\nabla\cdot(n\mathbf{u})=0\,, \tag{15}\] \[\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=-\frac{e}{m}(\mathbf{E}+\mathbf{u}\times\mathbf{B})\,, \tag{16}\] where \(n\) denotes the electron number density and \(\mathbf{u}\) is the electron fluid velocity field. In addition, \(m\) and \(-e\) correspond to the electron mass and charge, respectively. It should be emphasized that we assume the first order approximation in which \({\bf u}=\delta{\bf u}\) (zero equilibrium fluid velocity), \(n=n_{0}+\delta n\) and \({\bf B}={\bf B}_{0}+\delta{\bf B}\), with \(n_{0}\) being the ion background number density, and \({\bf B}_{0}\) represents the equilibrium magnetic field. Both \(n_{0}\) and \({\bf B}_{0}\) are considered to be uniform and constant. For the sake of simplicity, we suppose that the ions are infinitely massive, which is suitable for high frequency waves. Furthermore, we shall also disregard thermal and collisional effects. We are now in position to obtain the modified Trivelpiece-Gould dispersion relation and the generalized Appleton-Hartree equation, which will be discussed in the next two subsections.

### Electrostatic waves

In this subsection, we investigate electrostatic waves (\(\delta{\bf B}={\bf 0}\)) and the corresponding modified Trivelpiece-Gould modes. To accomplish this purpose, let us assume plane wave perturbations proportional to \(\exp[i({\bf k}\cdot{\bf r}-\omega t)]\), where \({\bf k}\) and \(\omega\) are the wave vector and angular wave frequency, respectively. From now on, \(\delta n,\delta{\bf E}\) and \(\delta{\bf u}\) will denote the Fourier amplitudes. By considering Eq. (3) with the charge density \(\rho=e\left(n_{0}-n\right)\), as well as the fluid equations (15) and (16), we obtain the system \[\omega\,\delta n=n_{0}\,{\bf k}\cdot\delta{\bf u}\,, \tag{17}\] \[-i\omega\,\delta{\bf u}=-\frac{e}{m}\,\left(\delta{\bf E}+\delta{\bf u}\times{\bf B}_{0}\right)\,, \tag{18}\] \[ic_{1}\,{\bf k}\cdot\delta{\bf E}+id_{2}\,{\bf B}_{0}\cdot{\bf k}\left({\bf B}_{0}\cdot\delta{\bf E}\right)=-\frac{e}{\varepsilon_{0}}\,\delta n\,. \tag{19}\] Before proceeding with our analysis, we point out that this result also holds for non-linear electrodynamics with \(d_{3}\neq 0\).
Indeed, one can easily check that, using \(\delta{\bf B}={\bf 0}\) in Eq. (8), the coefficient \(d_{3}\) does not contribute to these expressions. At this stage, the auxiliary field \({\bf H}\) in Eq. (9) is not required. The magnetic field perturbation (\(\delta{\bf B}\neq{\bf 0}\)) will be considered in the next subsection. For the unmagnetized case (\({\bf B}_{0}={\bf 0}\)), one can promptly obtain the solution \(\omega^{2}=\widetilde{\omega}_{p}^{2}\), where we adopted the shorthand notation for the modified plasma frequency \[\widetilde{\omega}_{p}=\omega_{p}/\sqrt{c_{1}}\,, \tag{20}\] with \(\omega_{p}=[n_{0}\,e^{2}/(m\varepsilon_{0})]^{1/2}\) being the usual plasma frequency. Here, we remember that \(c_{1}>0\), thus \(\widetilde{\omega}_{p}\) is well-defined.

Now, we consider the magnetized case. Let us assume that \({\bf B}_{0}=B_{0}\,\hat{z}\) and \({\bf k}\parallel\delta{\bf E}\) with \({\bf k}=k\sin\theta\,\hat{x}+k\cos\theta\,\hat{z}\). Solving the system of Eqs. (17)\(-\)(19), we arrive at the modified Trivelpiece-Gould dispersion relation \[\omega^{4}-\left(\bar{\omega}_{p}^{2}+\omega_{c}^{2}\right)\,\omega^{2}+\bar{\omega}_{p}^{2}\,\omega_{c}^{2}\,\cos^{2}\theta=0\,, \tag{21}\] where \(\omega_{c}=eB_{0}/m\) corresponds to the electron cyclotron frequency and \[\bar{\omega}_{p}=\frac{1}{\left(c_{1}+d_{2}\,B_{0}^{2}\,\cos^{2}\theta\,\right)^{1/2}}\,\omega_{p}\,. \tag{22}\] Observe that, for \(\theta=\pi/2\) or \(d_{2}=0\), we recover the modified plasma frequency (20). According to the definition (11), the second condition (\(d_{2}=0\)) always occurs for non-linear models that depend only on \(L_{nl}=L_{nl}({\cal F})\). Furthermore, for a real frequency in Eq. (22), we have the following constraint: \[c_{1}+d_{2}\,B_{0}^{2}\,\cos^{2}\theta>0\,. \tag{23}\] As a consequence, from Eq. (21), we get the solution \[\omega^{2}=\frac{1}{2}\left[\bar{\omega}_{p}^{2}+\omega_{c}^{2}\pm\left(\left(\bar{\omega}_{p}^{2}-\omega_{c}^{2}\right)^{2}+4\,\bar{\omega}_{p}^{2}\,\omega_{c}^{2}\,\sin^{2}\theta\right)^{1/2}\right]\,, \tag{24}\] and, with the condition (23), one can show that the corresponding modes are always stable (\(\omega^{2}\geq 0\)). Therefore, the standard analysis of electrostatic waves holds with the modified plasma frequency. Finally, it should be mentioned that, for Maxwell electrodynamics, we have \(c_{1}=1\) and \(d_{1}=d_{2}=0\), which implies \(\bar{\omega}_{p}\rightarrow\omega_{p}\), and the usual Trivelpiece-Gould dispersion relation is recovered [32].
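In normalized units, the roots of Eq. (21) follow directly from Eq. (24). A short numerical sketch (with \(\omega_{p}\) set to unity and illustrative values of the non-linear coefficients) is:

```python
import numpy as np

def trivelpiece_gould(theta, wp, wc, c1=1.0, d2=0.0, B0=0.0):
    """Both roots omega^2 of the modified Trivelpiece-Gould relation,
    Eq. (21), using the modified plasma frequency of Eq. (22)."""
    wp_bar2 = wp**2 / (c1 + d2 * B0**2 * np.cos(theta)**2)
    disc = np.sqrt((wp_bar2 - wc**2)**2 + 4.0 * wp_bar2 * wc**2 * np.sin(theta)**2)
    return 0.5 * (wp_bar2 + wc**2 + disc), 0.5 * (wp_bar2 + wc**2 - disc)

# Maxwell limit (c1 = 1, d2 = 0) reproduces the usual Trivelpiece-Gould roots
print(trivelpiece_gould(np.pi / 4, wp=1.0, wc=0.5))
```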
### Generalized Appleton-Hartree equation

In this subsection, we turn our attention to including the effects of the magnetic field perturbation (\(\delta{\bf B}\neq{\bf 0}\)). Here, we will be concerned with electrodynamics in which \(d_{3}=0\), namely, the cases with parity invariance. As done before, we consider plane wave perturbations proportional to \(\exp[i({\bf k}\cdot{\bf r}-\omega t)]\). Additionally, we also assume that \({\bf B}_{0}=B_{0}\,\hat{z}\) and \({\bf k}=k\sin\theta\,\hat{x}+k\cos\theta\,\hat{z}\). Following the usual procedure [33; 34], we first manipulate Eq. (16) to isolate the linearized velocity in terms of the electric field amplitude and the frequency, which yields \[\delta u_{x}=\frac{e}{m}\,\frac{(\omega_{c}\,\delta E_{y}+i\omega\,\delta E_{x})}{(\omega_{c}^{2}-\omega^{2})}\,,\quad\delta u_{y}=\frac{e}{m}\,\frac{(-\omega_{c}\,\delta E_{x}+i\omega\,\delta E_{y})}{(\omega_{c}^{2}-\omega^{2})}\,,\quad\delta u_{z}=-\frac{ie}{m\omega}\,\delta E_{z}\,. \tag{25}\] Subsequently, from Eq. (12), one can easily see that \(\delta{\bf B}={\bf k}\times\delta{\bf E}/\omega\). By inserting these results into the modified Ampere-Maxwell law (4) with the current density \({\bf j}=-ne\,{\bf u}\), we finally arrive at the system \[\begin{bmatrix}{\cal S}-\eta^{2}\cos^{2}\theta&-i{\cal D}&\eta^{2}\,\cos\theta\sin\theta\\ i{\cal D}&{\cal S}-\eta^{2}\,\alpha(\theta)&0\\ \eta^{2}\cos\theta\sin\theta&0&{\cal P}-\eta^{2}\sin^{2}\theta\end{bmatrix}\begin{bmatrix}\delta E_{x}\\ \delta E_{y}\\ \delta E_{z}\end{bmatrix}=0\,, \tag{26}\] where we defined the modified Difference (\({\cal D}\)), Sum (\({\cal S}\)) and Plasma (\({\cal P}\)) coefficients, \[{\cal D}=\frac{\omega_{c}\,\widetilde{\omega}_{p}^{2}}{\omega(\omega_{c}^{2}-\omega^{2})}\,,\quad{\cal S}=1+\frac{\widetilde{\omega}_{p}^{2}}{\omega_{c}^{2}-\omega^{2}}\,,\quad{\cal P}=1-\frac{\widetilde{\omega}_{p}^{2}}{\omega^{2}}+\frac{d_{2}}{c_{1}}\,B_{0}^{2}\,, \tag{27}\] with \(\widetilde{\omega}_{p}=\omega_{p}/\sqrt{c_{1}}\) being the modified plasma frequency. In addition, \(\eta=ck/\omega\) denotes the refractive index and \[\alpha(\theta)=1-\frac{d_{1}}{c_{1}}\,B_{0}^{2}\sin^{2}\theta\,. \tag{28}\] The previous definitions clearly show the contributions of non-linear electrodynamics. First of all, we adopted the standard definitions for the coefficients \({\cal D}\) and \({\cal S}\) with \(\omega_{p}\to\widetilde{\omega}_{p}\). However, for the coefficient \({\cal P}\), we have a non-trivial contribution of \(d_{2}B_{0}^{2}/c_{1}\). Furthermore, we would like to point out the angular dependence in Eq. (28) due to the presence of \(d_{1}B_{0}^{2}/c_{1}\). The non-trivial solutions of Eq. (26) are obtained by imposing that the determinant of the matrix vanishes, which leads to \[A\,\eta^{4}-B\,\eta^{2}+C=0\,, \tag{29}\] where the coefficients read \[A=\alpha(\theta)\,\left[{\cal S}\sin^{2}\theta+{\cal P}\cos^{2}\theta\right]\,, \tag{30}\] \[B={\cal RL}\,\sin^{2}\theta+{\cal S}{\cal P}\left[\alpha(\theta)+\cos^{2}\theta\right]\,, \tag{31}\] \[C={\cal P}{\cal RL}\,, \tag{32}\] in which we have also introduced \({\cal R}={\cal S}+{\cal D}\) and \({\cal L}={\cal S}-{\cal D}\) for the modified Right and Left coefficients, respectively.
After some standard manipulations of Eq. (29), we promptly find that \[\eta^{2}=1-\frac{2(A-B+C)}{2A-B\pm\sqrt{B^{2}-4AC}}\,, \tag{33}\] and, by substituting the coefficients (30)\(-\)(32), this expression can be written as \[\eta^{2}=1-\frac{\widetilde{\omega}_{p}^{2}/\omega^{2}}{Q}\,,\quad Q=\left(Q_{0}\pm F\right)/Q_{1}\,, \tag{34}\] with \(F^{2}\equiv B^{2}-4AC\) given by \[F^{2}=\left[{\cal R}{\cal L}-\left(1+\frac{d_{1}}{c_{1}}\,B_{0}^{2}\right){\cal S}{\cal P}\right]^{2}\sin^{4}\theta+4\left[1-\alpha(\theta)\right]{\cal S}^{2}{\cal P}^{2}+4\left[1-\alpha(\theta)\right]{\cal S}{\cal P}{\cal R}{\cal L}\sin^{2}\theta-4\left[1-\alpha(\theta)+\frac{d_{1}}{c_{1}}\,B_{0}^{2}\right]{\cal S}^{2}{\cal P}^{2}\sin^{2}\theta+4\,\alpha(\theta)\,{\cal P}^{2}{\cal D}^{2}\cos^{2}\theta\,, \tag{35}\] and the definitions \[Q_{0}=\left(1-\frac{\widetilde{\omega}_{p}^{2}}{\omega^{2}}+\frac{d_{2}}{c_{1}}B_{0}^{2}\right)\left[\alpha(\theta)-1-(\alpha(\theta)+1)\frac{\widetilde{\omega}_{p}^{2}}{\omega_{c}^{2}-\omega^{2}}\right]+\frac{\widetilde{\omega}_{p}^{2}}{\omega^{2}}\,\sin^{2}\theta\left[(2\alpha(\theta)-1)\frac{\omega_{c}^{2}}{\omega_{c}^{2}-\omega^{2}}-(2\alpha(\theta)-1)\frac{d_{2}}{c_{1}}\,B_{0}^{2}\,\frac{\omega^{2}}{\widetilde{\omega}_{p}^{2}}+\frac{d_{2}}{c_{1}}\,B_{0}^{2}\,\frac{\omega^{2}}{\omega_{c}^{2}-\omega^{2}}\right]\,, \tag{36}\] \[Q_{1}=\frac{2\left(\omega^{2}-\widetilde{\omega}_{p}^{2}+d_{2}\,B_{0}^{2}\,\omega^{2}/c_{1}\right)}{\omega_{c}^{2}-\omega^{2}}\left[1-\alpha(\theta)-\frac{\widetilde{\omega}_{p}^{2}}{\omega^{2}}\right]+2\sin^{2}\theta\left[(\alpha(\theta)-1)\frac{\omega_{c}^{2}}{\omega_{c}^{2}-\omega^{2}}-(\alpha(\theta)-1)\,\frac{\omega^{2}}{\widetilde{\omega}_{p}^{2}}\,\frac{d_{2}}{c_{1}}\,B_{0}^{2}+\frac{\omega^{2}}{\omega_{c}^{2}-\omega^{2}}\,\frac{d_{2}}{c_{1}}\,B_{0}^{2}\right]\,. \tag{37}\] We highlight that Eq. (34) corresponds to the generalized Appleton-Hartree equation with the contributions of non-linear electrodynamics. As expected, in the Maxwell case (\(c_{1}=1\) and \(d_{1}=d_{2}=0\)), we recover the well-known Appleton-Hartree equation [33; 34], where \[Q=1-\frac{\omega_{c}^{2}\,\sin^{2}\theta}{2(\omega^{2}-\omega_{p}^{2})}\pm\left(\frac{\omega_{c}^{4}\,\sin^{4}\theta}{4(\omega^{2}-\omega_{p}^{2})^{2}}+\frac{\omega_{c}^{2}\,\cos^{2}\theta}{\omega^{2}}\right)^{1/2}\,. \tag{38}\] The general solution of Eq. (34) is quite involved. However, the analysis of the principal modes is feasible. In what follows, we consider the situations of propagation parallel or perpendicular to the equilibrium magnetic field \({\bf B}_{0}\). For \(\theta=0\), we obtain \(\alpha(\theta)=1\) and \(Q=1\pm\omega_{c}/\omega\), such that Eq. (34) yields the two modes below: \[\frac{c^{2}k^{2}}{\omega^{2}}=1-\frac{\widetilde{\omega}_{p}^{2}}{\omega(\omega-\omega_{c})}\,, \tag{39}\] \[\frac{c^{2}k^{2}}{\omega^{2}}=1-\frac{\widetilde{\omega}_{p}^{2}}{\omega(\omega+\omega_{c})}\,, \tag{40}\] which correspond to the modified right-hand and left-hand circularly polarized waves (RCP and LCP), respectively. Therefore, only the coefficient \(c_{1}\) contributes to changes in the RCP and LCP modes, and the well-known properties are reproduced here through the replacement of the plasma frequency, \(\omega_{p}\,\rightarrow\,\widetilde{\omega}_{p}=\omega_{p}/\sqrt{c_{1}}\), in the usual results [32; 33; 34].
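A quick numerical check of the parallel modes (39)-(40) only requires the replacement \(\omega_{p}\to\widetilde{\omega}_{p}\); the sketch below uses normalized frequencies and an illustrative value of \(c_{1}\):

```python
import numpy as np

def eta2_parallel(w, wp, wc, c1=1.0, sign=+1):
    """Squared refractive index for parallel propagation, Eqs. (39)-(40).
    sign=+1 selects the RCP branch (omega - omega_c), sign=-1 the LCP branch."""
    wp_tilde2 = wp**2 / c1
    return 1.0 - wp_tilde2 / (w * (w - sign * wc))

# The RCP/LCP cut-offs shift slightly when c1 departs from unity
w = np.linspace(0.5, 3.0, 6)
print(eta2_parallel(w, wp=1.0, wc=0.5, c1=1.001, sign=+1))
```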
(30)\(-\)(32) become proportional to \(\mathcal{P}\). Hence, \(\mathcal{P}=0\) is a possible solution of Eq. (29), which leads to \[\omega=\frac{\omega_{p}}{\sqrt{c_{1}+d_{2}\,B_{0}^{2}}}\;. \tag{41}\] Note that this frequency coincides with \(\bar{\omega}_{p}\) defined in the context of electrostatic waves (see Eq. (22) with \(\theta=0\)). Next, we consider Eq. (34) with \(\theta=\pi/2\). After algebraic manipulations, we obtain two solutions. The first one describes the modified ordinary (\(O\)) mode, given by \[\omega^{2}=\frac{c^{2}k^{2}+\widetilde{\omega}_{p}^{2}}{\left(1+\frac{d_{2}}{ c_{1}}B_{0}^{2}\right)}\,. \tag{42}\] In comparison with the usual \(O-\)mode (\(\omega^{2}=c^{2}k^{2}+\omega_{p}^{2}\)), we get new contributions due to the coefficients \(c_{1}\) and \(d_{2}\). The analysis of the modified \(O-\)mode can be carried out with the definitions of cut-off and resonance, which help us to divide the regions of propagation and non-propagation. We recall that a cut-off occurs whenever the refractive index \(\eta\to 0\), while a resonance happens when \(\eta\rightarrow\infty\). In general, the wave will be reflected at a cut-off and absorbed at a resonance. A qualitative description is displayed in Fig. 1, where we consider a diagram of \(1/\eta^{2}\) versus \(\omega\). In this case, there is one cut-off at \(\omega_{p}/\sqrt{c_{1}+d_{2}B_{0}^{2}}\) and no resonances. The wave only propagates in the region with \(\eta^{2}>0\), namely, for frequency \(\omega>\omega_{p}/\sqrt{c_{1}+d_{2}B_{0}^{2}}\). At this stage, we recall that \(1/\eta^{2}=v_{\phi}^{2}/c^{2}\), where \(v_{\phi}\) denotes the phase velocity. Therefore, the wave travels faster or slower than \(c\) depending on the values of the coefficients \(c_{1}\) and \(d_{2}\) (the red line may lie above or below \(1/\eta^{2}=1\)). For high frequency, \(1/\eta^{2}\) approaches \(1/(1+d_{2}B_{0}^{2}/c_{1})\). The other solution for perpendicular propagation corresponds to the modified extraordinary (\(X\)) mode, described by \[\frac{c^{2}k^{2}}{\omega^{2}}\left(1-\frac{d_{1}}{c_{1}}B_{0}^{2}\right)=1- \frac{\widetilde{\omega}_{p}^{2}}{\omega^{2}}\frac{(\omega^{2}-\widetilde{ \omega}_{p}^{2})}{(\omega^{2}-\widetilde{\omega}_{h}^{2})}\,\,, \tag{43}\] with \(\widetilde{\omega}_{h}\) being the modified upper-hybrid frequency such that \(\widetilde{\omega}_{h}^{2}=\omega_{c}^{2}+\widetilde{\omega}_{p}^{2}\). By comparing with the usual \(X-\)mode, \[\frac{c^{2}k^{2}}{\omega^{2}}=1-\frac{\omega_{p}^{2}}{\omega^{2}}\frac{( \omega^{2}-\omega_{p}^{2})}{(\omega^{2}-\omega_{h}^{2})}\,\,, \tag{44}\] we observe that only the coefficients \(c_{1}\) and \(d_{1}\) will introduce modifications by means of a factor \((1-d_{1}B_{0}^{2}/c_{1})\) and the replacements \(\omega_{p}\to\widetilde{\omega}_{p}\) and \(\omega_{h}\to\widetilde{\omega}_{h}\). It is worthwhile mentioning that Eq. (43) can be recast as \[\frac{c^{2}k^{2}}{\omega^{2}}\left(1-\frac{d_{1}}{c_{1}}B_{0}^{2}\right)=\frac {(\omega^{2}-\widetilde{\omega}_{L}^{2})(\omega^{2}-\widetilde{\omega}_{R}^{2 })}{\omega^{2}(\omega^{2}-\widetilde{\omega}_{h}^{2})}\,\,\,, \tag{45}\] where we define the modified cut-off frequencies \[\widetilde{\omega}_{L} = \frac{1}{2}\left[-\omega_{c}+\sqrt{\omega_{c}^{2}+4\,\widetilde{ \omega}_{p}^{2}}\,\right]\,, \tag{46}\] \[\widetilde{\omega}_{R} = \frac{1}{2}\left[\omega_{c}+\sqrt{\omega_{c}^{2}+4\,\widetilde{ \omega}_{p}^{2}}\,\right]\,.
\tag{47}\] From these results, one can easily understand the behaviour of the modified \(X-\)mode, which is exhibited in Fig. 2. First of all, there is a resonance at \(\widetilde{\omega}_{h}\), and the cut-off frequencies are situated at \(\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{R}\). In the regions \(0<\omega<\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{h}<\omega<\widetilde{\omega}_{R}\), we see that \(1/\eta^{2}\) is negative and, consequently, there is no propagation. On the other hand, we have two regions of propagation, given by \(\widetilde{\omega}_{L}<\omega<\widetilde{\omega}_{h}\) and \(\omega>\widetilde{\omega}_{R}\), which are separated by a stop band. Furthermore, it should be mentioned that the coefficients \(c_{1}\) and \(d_{1}\) also modify the regions in which the wave travels faster or slower than \(c\) (again, the red line may lie above or below \(1/\eta^{2}=1\)). For high frequency, \(1/\eta^{2}\) approaches \((1-d_{1}B_{0}^{2}/c_{1})\). A final comment we would like to raise concerns the limit of a vanishing plasma frequency. From Eqs. (39) and (40), we find the trivial solution \(\omega=ck\). In this limit, it is also expected to recover the well-known dispersion relations of non-linear electrodynamics in vacuum with an external magnetic field. Indeed, from Eqs. (42) and (43), one can promptly arrive at \[\omega \approx \frac{ck}{\sqrt{1+\frac{d_{2}}{c_{1}}B_{0}^{2}}}\,, \tag{48}\] \[\omega \approx ck\,\sqrt{1-\frac{d_{1}}{c_{1}}B_{0}^{2}}\,, \tag{49}\] which agrees with the results in refs. [24; 30]. Notice that the dispersion relations (48) and (49) can be obtained from the asymptotic behaviour for high frequency in Figs. 1 and 2, respectively. Having established the principal modes, we are now equipped to investigate some specific models. This will be done in the next section. ## III Applications So far we have not specified any electrodynamics, except for some considerations about the Maxwell limit. The only assumption is parity invariance, so that we restrict ourselves to the cases with \(d_{3}=0\). In what follows, we apply our results to particular models. Initially, we consider the well-known Euler-Kockel electrodynamics to illustrate our methodology and establish some comparisons with the literature. After that, we pursue an investigation of a generalized Born-Infeld electrodynamics, which has been a subject of intense research. ### Euler-Kockel electrodynamics The effective theory obtained by Euler and Kockel (EK) [7] is one of the most investigated non-linear electrodynamics. This proposal takes into account the low-energy quantum effects of vacuum polarization produced by virtual electron\(-\)positron pairs, which leads to the following corrections \[L_{EK}={\cal F}+\frac{2}{45}\,\frac{\alpha^{2}}{m^{4}}\,\frac{\hbar^{3} \varepsilon_{0}}{c^{3}}\left(4{\cal F}^{2}+7{\cal G}^{2}\right)\,, \tag{50}\] where \(\hbar\equiv h/2\pi\) denotes the reduced Planck constant and \(\alpha\equiv e^{2}/(4\pi\varepsilon_{0}\hbar c)\approx 1/137\) is the fine structure constant. It is appropriate to remark that the EK model holds for frequencies below the Compton frequency (\(\omega<<\omega_{e}=mc^{2}/\hbar\)), and for electromagnetic fields in the weak regime \(|{\bf E}|<<E_{c}\) and \(|{\bf B}|<<B_{c}\), where the critical fields are given by \(E_{c}\equiv m^{2}c^{3}/e\hbar\approx 1.3\times 10^{18}\,{\rm V/m}\) and \(B_{c}\equiv m^{2}c^{2}/e\hbar\approx 4.4\times 10^{9}\,{\rm T}\).
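Before specializing to the coefficients of particular models, it may help to see how the principal-mode relations translate into numbers. The short Python sketch below is our own illustration (the function names and NumPy implementation are not part of the original analysis); it evaluates \(\eta^{2}\) for the RCP/LCP, \(O\) and \(X\) modes of Eqs. (39), (40), (42) and (43) for user-supplied coefficients \(c_{1}\), \(d_{1}\), \(d_{2}\), together with the modified cut-offs (46) and (47), and reduces to the Maxwell results for \(c_{1}=1\), \(d_{1}=d_{2}=0\).

```python
import numpy as np

def modified_modes(omega, omega_p, omega_c, B0, c1=1.0, d1=0.0, d2=0.0):
    """eta^2 = (ck/omega)^2 for the principal modes; Maxwell: c1=1, d1=d2=0."""
    wp2 = omega_p**2 / c1                     # squared modified plasma frequency
    wh2 = omega_c**2 + wp2                    # squared modified upper-hybrid frequency
    eta2_rcp = 1.0 - wp2 / (omega * (omega - omega_c))        # Eq. (39)
    eta2_lcp = 1.0 - wp2 / (omega * (omega + omega_c))        # Eq. (40)
    eta2_o = (1.0 + d2 * B0**2 / c1) - wp2 / omega**2         # from Eq. (42)
    eta2_x = (1.0 - (wp2 / omega**2) * (omega**2 - wp2)
              / (omega**2 - wh2)) / (1.0 - d1 * B0**2 / c1)   # from Eq. (43)
    return eta2_rcp, eta2_lcp, eta2_o, eta2_x

def x_mode_cutoffs(omega_p, omega_c, c1=1.0):
    """Modified cut-offs of Eqs. (46)-(47); the wave propagates where eta^2 > 0."""
    root = np.sqrt(omega_c**2 + 4.0 * omega_p**2 / c1)
    return 0.5 * (-omega_c + root), 0.5 * (omega_c + root)
```

The EK and Born-Infeld-type coefficients derived in the remainder of this section can be plugged into this routine directly.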
Mention should be made that the EK result was generalized by Heisenberg and Euler (HE) electrodynamics [8], where the authors obtained a non-perturbative expression including high-order quantum corrections. For more details on EK and HE electrodynamics, we refer to the reviews [35; 36] and references therein. Here, let us focus on the leading quantum correction, as described in Eq. (50). According to the prescription in Eqs. (10) and (11) with \(L_{nl}=L_{EK}\), we find that \[c_{1} = 1-\frac{2}{45}\,\frac{\alpha}{\pi}\,\left(\frac{B_{0}}{B_{c}} \right)^{2}\,, \tag{51}\] \[d_{1} = \frac{4}{45}\,\frac{\alpha}{\pi}\,\frac{1}{B_{c}^{2}}\,, \tag{52}\] \[d_{2} = \frac{7}{45}\,\frac{\alpha}{\pi}\,\frac{1}{B_{c}^{2}}. \tag{53}\] As already expected due to parity symmetry, we also obtain \(d_{3}=0\). From Eq. (51), one can immediately see that \(c_{1}>0\). In this model, note that the coefficients \(d_{1}\) and \(d_{2}\) do not depend on the equilibrium magnetic field \(B_{0}\). Moreover, we draw attention to the fact that \(d_{2}>0\), so the constraint (23) related to electrostatic waves is automatically satisfied for any \(\theta\)-angle, with the modified plasma frequency \[\bar{\omega}_{p}\approx\left[1+\frac{\alpha}{45\pi}\,\left(\frac{B_{0}}{B_{c} }\right)^{2}\,\left(1-\frac{7}{2}\cos^{2}\theta\right)\right]\omega_{p}\,. \tag{54}\] Next, let us examine the RCP mode. As described in Eq. (39), only \(c_{1}\) will be relevant. Therefore, using the coefficient (51) and the assumption \(B_{0}<<B_{c}\), we obtain \[\eta^{2}\approx 1-\frac{\omega_{p}^{2}}{\omega(\omega-\omega_{c})}-\frac{2}{4 5}\frac{\alpha}{\pi}\left(\frac{B_{0}}{B_{c}}\right)^{2}\frac{\omega_{p}^{2} }{\omega(\omega-\omega_{c})}\,\,, \tag{55}\] which agrees with the result of ref. [37]. Similarly, one can get the LCP mode by substituting \(\omega_{c}\rightarrow-\omega_{c}\) in the previous expression. We now consider the \(O-\)mode. In this case, it is opportune to define the dimensionless parameter \(\xi=(\alpha/90\pi)(B_{0}/B_{c})^{2}\), such that the coefficients (51) and (53) can be written as \(c_{1}=1-4\xi\) and \(d_{2}B_{0}^{2}=14\xi\). Keeping this definition in mind and assuming \(\xi<<1\), from Eq. (42) one can promptly arrive at \[\omega^{2}\approx(1-14\xi)\ c^{2}k^{2}+(1-10\xi)\ \omega_{p}^{2}\,\,, \tag{56}\] which is consistent with refs. [38; 39].
[44], we address the following generalization \[L_{BI-\text{type}}=\frac{\beta^{2}}{c^{2}}\left[1-\left(1-\frac{c^{2}}{p\beta^{ 2}}\,\mathcal{F}-\frac{c^{4}}{2p\beta^{4}}\,\gamma\,\mathcal{G}^{2}\right)^{p }\,\right]\,, \tag{58}\] with \(p\) and \(\gamma\) being dimensionless parameters. One can immediately see that the BI model (57) is recovered when \(p=1/2\) and \(\gamma=1\). Similar models were also analyzed in refs. [45; 46; 47; 30], where the authors discuss the effects of birefringence, dichroism and obtain a finite electric field at the origin for some specific values of the parameters. It is important to point out that in the limit of low-energy fields (\(\beta>>c\,\mathcal{F},c\,\mathcal{G}\)), the Born-Infeld-type model (58) can be approximated by \[L_{BI-\text{type}}\approx\mathcal{F}+\frac{(1-p)}{p}\,\frac{c^{2}}{\beta^{2}} \frac{\mathcal{F}^{2}}{2}+\gamma\,\frac{c^{2}}{\beta^{2}}\frac{\mathcal{G}^{2 }}{2}\,, \tag{59}\] which corresponds to the most general post-Maxwellian model up to second order in the invariants \(\mathcal{F}\) and \(\mathcal{G}\). Notice that the term proportional to \(\mathcal{F}\mathcal{G}\) does not appear due to parity symmetry. In addition, by taking the limit \(p\to\infty\) in Eq. (58), we arrive at the so-called exponential electrodynamics, \[L_{\text{exp}}=\lim_{p\to\infty}L_{BI-\text{type}}=\frac{\beta^{2}}{c^{2}} \left[1-\exp\left(-\frac{c^{2}}{\beta^{2}}\,\mathcal{F}-\frac{c^{4}}{\beta^{4 }}\,\frac{\gamma\,\mathcal{G}^{2}}{2}\right)\right]\,. \tag{60}\] This type of model was initially investigated in the context of black hole solutions [48; 49]. For a detailed review of BI-type and exponential electrodynamics, as well as other generalizations and their properties, we highlight the recent work of ref. [50]. Therefore, the BI-type model (58) allows us to investigate a series of electrodynamics in the literature. We only need to consider different values and limits in the parameter space \((p,\gamma,\beta)\). After these motivations, we now proceed to study the corresponding cold plasma waves. First of all, using the prescription in Eqs. (10) and (11) with \(L_{nl}=L_{BI-{\rm type}}\), we obtain \[c_{1} = \left(1+\frac{c^{2}B_{0}^{2}}{2p\,\beta^{2}}\right)^{p-1}\,, \tag{61}\] \[d_{1} = \frac{(1-p)}{p}\,\frac{c^{2}}{\beta^{2}}\,\left(1+\frac{c^{2}B_{0 }^{2}}{2p\,\beta^{2}}\,\right)^{p-2}\,, \tag{62}\] \[d_{2} = \gamma\,\frac{c^{2}}{\beta^{2}}\,\left(1+\frac{c^{2}B_{0}^{2}}{2 p\,\beta^{2}}\,\right)^{p-1}. \tag{63}\] Note again that \(d_{3}=0\). The requirement that \(c_{1}>0\) is satisfied by \(p>-c^{2}B_{0}^{2}/(2\beta^{2})\). However, from Eq. (58), we note that \(p<0\) would allow field configurations in which \(L_{BI-{\rm type}}\to\pm\infty\). To avoid such singularities, we restrict ourselves to cases where \(p>0\). We begin our analysis with electrostatic waves. Before going into details, it should be mentioned that the BI model was investigated in ref. [51], but the authors focused on large amplitude effects and the maximum values for the electric field and frequency. Here, as already emphasized, we will be interested in the modified Trivelpiece-Gould modes (24). For this purpose, we need to determine the modified plasma frequency (22). In this manner, by using the coefficients (61) and (63), we obtain \[\bar{\omega}_{p}=\omega_{p}\,\left(1+\frac{c^{2}B_{0}^{2}}{2p\,\beta^{2}} \right)^{(1-p)/2}\left(1+\gamma\,\frac{c^{2}B_{0}^{2}}{\beta^{2}}\,\cos^{2} \theta\right)^{-1/2}\,.
\tag{64}\] As a consequence, we observe that negative values for \(\gamma\) are possible with the condition \(|\gamma|\cos^{2}\theta<\beta^{2}/(c^{2}B_{0}^{2})\). For \(\gamma\geq 0\), we have \(d_{2}\geq 0\) and the constraint (23) is satisfied for any \(\theta\)-angle. It is also interesting to consider the weak field approximation \(\beta^{2}>>c^{2}B_{0}^{2}\). Therefore, we can expand Eq. (64), which leads to \[\bar{\omega}_{p}\approx\omega_{p}-\omega_{p}\,\frac{c^{2}B_{0}^{2}}{2\beta^{2 }}\left[\frac{(p-1)}{2p}+\gamma\cos^{2}\theta\right]\,. \tag{65}\] This result clearly shows that \(\bar{\omega}_{p}\approx\omega_{p}\) for some regions in the parameter space \((p,\gamma,\theta)\). In other words, the first order contributions from non-linear electrodynamics cancel out if the parameters satisfy \((1-p)/2p=\gamma\cos^{2}\theta\). For instance, we display specific values in Table 1. Notice that the Born-Infeld model (\(p=1/2\,\) and \(\,\gamma=1\)) obeys this condition only for \(\theta=\pi/4\). In the case of exponential electrodynamics (\(p\to\infty\)), the cancellation occurs for particular values of the parameter \(\gamma\). Moreover, for \(p=1\) and perpendicular propagation (\(\theta=\pi/2\)), the condition is automatically satisfied for any \(\gamma<<\beta^{2}/(c^{2}B_{0}^{2})\). We next address the circularly polarized waves. Let us consider the RCP mode. In this case, only the coefficient \(c_{1}\) will be pertinent. According to Eqs. (39) and (61), we promptly arrive at \[\eta^{2}=1-\frac{\omega_{p}^{2}}{\omega(\omega-\omega_{c})}\left(1+\frac{c^{2}B_ {0}^{2}}{2p\,\beta^{2}}\right)^{1-p}\,. \tag{66}\] In the same way, one can obtain the LCP mode by replacing \(\omega_{c}\to-\omega_{c}\) in the previous expression. For the particular case of BI electrodynamics with the notation \(\kappa=1/\beta\), Eq. (66) reduces to \[(\omega-\omega_{c})\left(\omega-\frac{c^{2}k^{2}}{\omega}\right)=\omega_{p}^{ 2}\,\sqrt{1+\kappa^{2}c^{2}B_{0}^{2}}\,, \tag{67}\] and we recover the result in ref. [52]. As before, we consider the weak field approximation, such that Eq. (66) takes the form \[\eta^{2}\approx 1-\frac{\omega_{p}^{2}}{\omega(\omega-\omega_{c})}-\frac{(1-p) }{2p}\,\frac{c^{2}B_{0}^{2}}{\beta^{2}}\,\frac{\omega_{p}^{2}}{\omega(\omega- \omega_{c})}\,. \tag{68}\] This expression is very similar to the one obtained for EK electrodynamics in Eq. (55). However, we have an additional possibility here, namely, the last term in Eq. (68) can assume positive or negative values depending on whether \(p>1\) or \(0<p<1\). We now investigate the modified \(O-\)mode. By substituting the coefficients (61) and (63) in Eq. (42), we find that \[\eta^{2}=1+\gamma\,\frac{c^{2}B_{0}^{2}}{\beta^{2}}-\left(1+\frac{c^{2}B_{0}^ {2}}{2p\,\beta^{2}}\right)^{1-p}\,\frac{\omega_{p}^{2}}{\omega^{2}}\,\,. \tag{69}\] At this stage, we recall the discussion related to Fig. 1. The coefficients \(c_{1}\) and \(d_{2}\) modify the region where the wave may travel faster or slower than \(c\), according to the horizontal red line defined by \(1/(1+d_{2}B_{0}^{2}/c_{1})\). In the case of the BI-type model, this line is given by \(1/(1+\gamma\,c^{2}B_{0}^{2}/\beta^{2})\). Therefore, we need to analyze the situations with \(\gamma>0\) and \(\gamma<0\), as well as specific values for \(p\) and \(c^{2}B_{0}^{2}/\beta^{2}\). For instance, the plot of \(1/\eta^{2}\) versus \(\omega/\omega_{p}\) with \(\gamma=1\) and \(c^{2}B_{0}^{2}/\beta^{2}=1/2\) is displayed in Fig. 3.
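The behaviour shown in Fig. 3 is straightforward to reproduce numerically. The Python sketch below is our own illustration (not taken from the reference); it evaluates Eq. (69) with \(x=c^{2}B_{0}^{2}/\beta^{2}\), using \((1+x/2p)^{1-p}=1/c_{1}\) from Eq. (61) and handling the exponential limit through \((1+x/2p)^{p-1}\to e^{x/2}\).

```python
import numpy as np

def bi_c1(p, x):
    """Coefficient c1 of Eq. (61) with x = c^2 B0^2/beta^2; p = np.inf gives
    the exponential-electrodynamics limit."""
    return np.exp(x / 2.0) if np.isinf(p) else (1.0 + x / (2.0 * p)) ** (p - 1.0)

def o_mode_eta2(w_over_wp, p, gamma, x):
    """Modified O-mode of Eq. (69); note (1 + x/2p)^(1-p) = 1/c1."""
    return 1.0 + gamma * x - 1.0 / (bi_c1(p, x) * w_over_wp**2)

# Setup of Fig. 3: gamma = 1, x = 1/2, and p = 1/2, 3/4, inf.
w = np.linspace(0.7, 4.0, 200)                 # omega/omega_p grid
for p in (0.5, 0.75, np.inf):
    inv_eta2 = 1.0 / o_mode_eta2(w, p, gamma=1.0, x=0.5)   # quantity plotted in Fig. 3
    # For these values 1/eta^2 -> 1/(1 + gamma*x) = 2/3 at high frequency,
    # so part of the propagation region has v_phi < c.
```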
We consider the particular cases of \(p=1/2\) (usual BI model), \(p=3/4\) and \(p\to\infty\) (Exponential model). Note that the propagating modes are divided into two regions with phase velocity \(v_{\phi}>c\) and \(v_{\phi}<c\). For the sake of comparison, we also exhibit the usual \(O-\)mode associated with Maxwell electrodynamics, where the wave travels only faster than \(c\) (dotted line). Similar behaviour also occurs whenever \(\gamma>0\), but for small values \(\gamma\,c^{2}B_{0}^{2}/\beta^{2}<<1\) the region with \(v_{\phi}<c\) will decrease. On the other hand, for \(\gamma<0\) with the condition \(|\gamma|<\beta^{2}/c^{2}B_{0}^{2}\,\) to guarantee a well-defined cut-off frequency, it is possible to show that \(v_{\phi}>c\) throughout the propagation region. The description of the modified \(X-\)mode can be performed in a similar way. Initially, by using the coefficients (61) and (62) in Eq. (45), we get \[\eta^{2}=\frac{(\omega^{2}-\widetilde{\omega}_{L}^{2})(\omega^{2}-\widetilde{ \omega}_{R}^{2})}{\omega^{2}(\omega^{2}-\omega_{c}^{2}-\widetilde{\omega}_{p} ^{2})}\left[\frac{2p+c^{2}B_{0}^{2}/\beta^{2}}{2p+(2p-1)c^{2}B_{0}^{2}/\beta^{ 2}}\right]\,, \tag{70}\] where the modified plasma frequency is given by \[\widetilde{\omega}_{p}=\omega_{p}\left[1+\frac{c^{2}B_{0}^{2}}{2p\,\beta^{2}} \right]^{(1-p)/2}\,, \tag{71}\] and the modified cut-off frequencies \(\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{R}\) are defined in Eqs. (46) and (47).

Figure 3: Dispersion relations of the modified \(O-\)mode for particular cases of BI-type model with \(p=1/2\) (green line), \(p=3/4\) (red line) and \(p\rightarrow\infty\) (blue line) in Eq. (69). The dotted line represents the usual \(O-\)mode. We assume \(\gamma=1\) and \(c^{2}B_{0}^{2}/\beta^{2}=1/2\). In the region \(2/3<1/\eta^{2}<1\), the wave travels slower than \(c\).

Notice that the dispersion relation (70) does not depend on the parameter \(\gamma\). In addition, we recall that the coefficients \(c_{1}\) and \(d_{1}\) modify the asymptotic behaviour for high frequency, which is obtained by the horizontal red line \((1-d_{1}B_{0}^{2}/c_{1})\) in Fig. 2. For the BI-type model, this expression leads to \[1-\frac{d_{1}B_{0}^{2}}{c_{1}}=\frac{2p+(2p-1)\,c^{2}B_{0}^{2}/\beta^{2}}{2p+c^ {2}B_{0}^{2}/\beta^{2}}\,. \tag{72}\] Therefore, we conclude that the wave travels faster or slower than \(c\) for high frequency, depending on whether \(p>1\) or \(0<p<1\), respectively. Next, we proceed to describe the effects on the allowed and forbidden regions. From Eq. (71), we have that \(\widetilde{\omega}_{p}<\omega_{p}\) when \(p>1\) and, consequently, the modified cut-off frequencies \(\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{R}\) also decrease in comparison with the standard results \(\omega_{L}\) and \(\omega_{R}\). However, one can promptly verify that \(\widetilde{\omega}_{p}\), \(\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{R}\) increase for the parameter \(0<p<1\). Here, it is instructive to consider the weak field regime \(\beta^{2}>>c^{2}B_{0}^{2}\). Bearing this in mind, for the allowed band \(\widetilde{\omega}_{L}<\omega<\widetilde{\omega}_{h}\), we obtain \[\widetilde{\omega}_{h}-\widetilde{\omega}_{L}\approx\omega_{h}-\omega_{L}+ \frac{(p-1)}{2p}\,\omega_{p}^{2}\,\frac{c^{2}B_{0}^{2}}{\beta^{2}}\left[\frac {1}{\sqrt{\omega_{c}^{2}+4\omega_{p}^{2}}}-\frac{1}{2\,\sqrt{\omega_{c}^{2}+ \omega_{p}^{2}}}\right]\,.
\tag{73}\] From this result, one can easily see that the correction term is positive (negative) for the condition \(p>1\) (\(0<p<1\)), leading to a larger (smaller) allowed band. Similarly, for the forbidden band \(\widetilde{\omega}_{h}<\omega<\widetilde{\omega}_{R}\), we find that \[\widetilde{\omega}_{R}-\widetilde{\omega}_{h}\approx\omega_{R}-\omega_{h}+ \frac{(1-p)}{2p}\,\omega_{p}^{2}\,\frac{c^{2}B_{0}^{2}}{\beta^{2}}\left[\frac {1}{\sqrt{\omega_{c}^{2}+4\omega_{p}^{2}}}-\frac{1}{2\,\sqrt{\omega_{c}^{2}+ \omega_{p}^{2}}}\right]\,, \tag{74}\] and arrive at the opposite conditions for the parameter \(p\). These conclusions remain true when the expansion is carried out to arbitrary order. To illustrate the aforementioned results, the plot of \(1/\eta^{2}\) versus \(\omega/\omega_{p}\) with \(c^{2}B_{0}^{2}/\beta^{2}=1/2\) and \(\omega_{c}=\omega_{p}/\sqrt{2}\) is exhibited in Fig. 4. We consider the particular cases of \(p=1/2\) (usual BI model) and \(p\rightarrow\infty\) (Exponential model), described by the red and blue lines. By comparing with the usual \(X-\)mode from Maxwell electrodynamics (dotted line), we clearly see that the modified cut-off frequencies are shifted to the right (left) for \(p=1/2\) (\(p\rightarrow\infty\)). Furthermore, we would like to highlight the asymptotic behaviour for extremely large \(\omega\). In the case of \(p=1/2\), the wave travels with \(v_{\phi}<c\) because \(1/\eta^{2}\to 2/3\). As expected, for the usual \(X-\)mode, the phase velocity approaches \(c\) (\(1/\eta^{2}\to 1\)). On the other hand, for \(p\rightarrow\infty\) we have that \(v_{\phi}>c\) and \(1/\eta^{2}\to 3/2\). Finally, it is important to provide some physical estimates. According to the work [50] and references therein, the \(\beta-\)parameter of BI-type electrodynamics is estimated in the range \(10^{19}-10^{20}\,\mathrm{V/m}\), or equivalently, \(\beta/c\sim 10^{11}-10^{12}\,\mathrm{T}\). Throughout this subsection, we have shown that \(c^{2}B_{0}^{2}/\beta^{2}\) plays a fundamental role in the description of the principal modes. Therefore, new contributions of BI-type models become more relevant with magnetic fields of the order \(10^{10}-10^{12}\,\mathrm{T}\) such that \(c^{2}B_{0}^{2}/\beta^{2}\lesssim 1\). For example, this magnitude can be achieved in astrophysical environments of neutron stars, supernovae and gamma-ray bursts. However, in these scenarios, the analyses need to be complemented with a more detailed treatment by including an equation of state, as well as the relativistic effects and other contributions. Nevertheless, the relevance of the dimensionless parameter \(c^{2}B_{0}^{2}/\beta^{2}\) will certainly appear together with additional features. ## IV Concluding Comments and Perspectives In this contribution, we have investigated some modified plasma waves in the context of non-linear electrodynamics. We have considered a magnetized plasma with electrons and a uniform ionic background within a cold fluid model. In addition, we disregarded the collisional, relativistic and high amplitude effects. The dispersion relations were obtained in terms of the coefficients \(c_{1}\), \(d_{1}\) and \(d_{2}\), which are determined by specifying the electrodynamics under consideration, as described in Eqs. (10) and (11). Here, we assumed that \(d_{3}=0\) to preserve the parity symmetry. In what follows, our results are summarized.
For electrostatic waves, we arrived at the modified Trivelpiece-Gould dispersion relation by substituting the plasma frequency \(\omega_{p}\rightarrow\bar{\omega}_{p}\,\), defined in Eq. (22). To guarantee the stability of the corresponding modes, one must satisfy the constraint \(c_{1}+d_{2}B_{0}^{2}\cos^{2}\theta>0\), where \(\theta\) denotes the angle between the wave propagation direction and the magnetic field \({\bf B}_{0}\). We have also found a generalized Appleton-Hartree equation and investigated its principal modes. For circularly polarized waves (RCP and LCP modes), the standard analysis applies with the replacement \(\omega_{p}\rightarrow\widetilde{\omega}_{p}=\omega_{p}/\sqrt{c_{1}}\). Moreover, for the modified ordinary (\(O\)) mode, Eq. (42), we identified a cut-off at \(\omega=\omega_{p}/\sqrt{c_{1}+d_{2}B_{0}^{2}}\) and obtained the asymptotic behaviour for high frequency, where \(1/\eta^{2}\) approaches \(1/(1+d_{2}B_{0}^{2}/c_{1})\), with \(\eta\) being the refractive index. The qualitative description is exhibited in Fig. 1. Similarly, for the modified extraordinary (\(X\)) mode, we determined the resonance and cut-off frequencies with the same change in the plasma frequency (\(\omega_{p}\rightarrow\widetilde{\omega}_{p}\)). Keeping this in mind, a resonance occurs at the modified upper-hybrid frequency \(\widetilde{\omega}_{h}\) and the cut-off frequencies are located at \(\widetilde{\omega}_{L}\) and \(\widetilde{\omega}_{R}\), defined in Eqs. (46) and (47). Again, we observed a modification in the asymptotic behaviour for high frequency, given by \(1/\eta^{2}\rightarrow(1-d_{1}B_{0}^{2}/c_{1})\). The qualitative description is illustrated in Fig. 2. As expected, the usual dispersion relations from Maxwell electrodynamics are recovered when \(c_{1}=1\) and \(d_{1}=d_{2}=0\). To check the consistency of our results and clarify the methodology, we have considered the well-known Euler-Kockel electrodynamics and showed that the corresponding dispersion relations are in agreement with the literature. Furthermore, we also investigated the Born-Infeld-type electrodynamics, which encompasses a set of models defined in the parameter space \((p,\gamma,\beta)\). For each dispersion relation, we found some constraints involving these parameters. Below, the main results are pointed out. In the case of electrostatic waves, the dispersion relation is well-defined for \(\gamma\geq 0\). It is also possible to have \(\gamma<0\) with the constraint \(|\gamma|\cos^{2}\theta<\beta^{2}/c^{2}B_{0}^{2}\). Interestingly enough, in the weak field regime, the new effects cancel out when the parameters satisfy \((1-p)/2p=\gamma\cos^{2}\theta\). As already mentioned, for circularly polarized waves, the new contribution is incorporated into the modified plasma frequency \(\widetilde{\omega}_{p}\). By considering weak fields, we obtained a similar expression to that of Euler-Kockel electrodynamics and showed that the corresponding contribution may assume positive or negative values depending on whether \(p>1\) or \(0<p<1\). Next, for the modified ordinary mode, an interesting situation occurs whenever \(\gamma>0\), namely, the propagation region allows phase velocities smaller than \(c\), as described in Fig. 3. We recall that this does not happen for the usual \(O-\)mode. Last but not least, we analyzed the modified extraordinary mode in which the parameter \(p\) plays a fundamental role.
First, the asymptotic behaviour for high frequency is altered, such that the wave travels faster or slower than \(c\) in accordance with \(p>1\) and \(0<p<1\), respectively. We also emphasized that these conditions provide different effects on the allowed and forbidden regions, which may become smaller or larger than the standard ones. The particular cases with \(p=1/2\) and \(p\rightarrow\infty\) are displayed in Fig. 4. To conclude, we would like to point out some perspectives. As a first approach, we have inspected the effects of non-linear electrodynamics in a restricted cold plasma model. With this in mind, we wish to include additional features, such as the collisional and relativistic contributions. In principle, it is also possible to consider large amplitude effects by taking into account high-order expansions in Eqs. (8) and (9). We hope that the results discussed here can be useful to pursue some investigations in these directions and to find applications in astrophysical scenarios. In addition, as mentioned before, we have assumed that \(d_{3}=0\). Although this assumption covers most non-linear electrodynamics in the literature, it is opportune to point out that the presence of dark matter candidates may generate effective models in which \(d_{3}\neq 0\) (see, for instance, ref. [53]). Therefore, as a further step, we shall consider the coefficient \(d_{3}\) and its effects on the modified plasma waves. This could be helpful to provide new phenomenological results and to develop some strategies for an indirect dark matter search. We expect to report on that elsewhere. **Acknowledgments:** the authors acknowledge the support of _Conselho Nacional de Desenvolvimento Científico e Tecnológico_ (CNPq). In particular, L. P. R. Ospedal is grateful for a post-doctoral fellowship under grant 166386/2020-0, during which part of this work was carried out. **Data Availability Statement:** the data that support the findings of this study are available from the corresponding author upon reasonable request.
2308.13330
Enhancing Signal Space Diversity for SCMA Over Rayleigh Fading Channels
Sparse code multiple access (SCMA) is a promising technique for the enabling of massive connectivity in future machine-type communication networks, but it suffers from a limited diversity order which is a bottleneck for significant improvement of error performance. This paper aims for enhancing the signal space diversity of sparse code multiple access (SCMA) by introducing quadrature component delay to the transmitted codeword of a downlink SCMA system in Rayleigh fading channels. Such a system is called SSD-SCMA throughout this work. By looking into the average mutual information (AMI) and the pairwise error probability (PEP) of the proposed SSD-SCMA, we develop novel codebooks by maximizing the derived AMI lower bound and a modified minimum product distance (MMPD), respectively. The intrinsic asymptotic relationship between the AMI lower bound and proposed MMPD based codebook designs is revealed. Numerical results show significant error performance improvement in the both uncoded and coded SSD-SCMA systems.
Qu Luo, Zilong Liu, Gaojie Chen, Pei Xiao
2023-08-25T12:05:53Z
http://arxiv.org/abs/2308.13330v1
# Enhancing Signal Space Diversity for SCMA Over Rayleigh Fading Channels ###### Abstract Sparse code multiple access (SCMA) is a promising technique for the enabling of massive connectivity in future machine-type communication networks, but it suffers from a limited diversity order which is a bottleneck for significant improvement of error performance. This paper aims for enhancing the signal space diversity of sparse code multiple access (SCMA) by introducing quadrature component delay to the transmitted codeword of a downlink SCMA system in Rayleigh fading channels. Such a system is called SSD-SCMA throughout this work. By looking into the average mutual information (AMI) and the pairwise error probability (PEP) of the proposed SSD-SCMA, we develop novel codebooks by maximizing the derived AMI lower bound and a modified minimum product distance (MMPD), respectively. The intrinsic asymptotic relationship between the AMI lower bound and proposed MMPD based codebook designs is revealed. Numerical results show significant error performance improvement in both uncoded and coded SSD-SCMA systems. _Index Terms:_ Sparse code multiple access (SCMA), signal space diversity (SSD), average mutual information (AMI), lower bound, modified minimum product distance (MMPD), codebook design. ## I Introduction The widespread proliferation of wireless services and internet-of-things (IoT) devices is challenging the legacy human-centric mobile networks. For higher spectral efficiency and lower communication latency, there has been a paradigm shift in recent years in the study of non-orthogonal multiple access (NOMA), where the same time/frequency resources are shared to support several times more active users [1, 2]. Among many others, this work is concerned with a representative code-domain NOMA (CD-NOMA) technique called sparse code multiple access (SCMA), where multiple users communicate concurrently with distinctive sparse codebooks [3]. At the transmitter, the incoming message bits of each SCMA user are directly mapped to a multi-dimensional sparse codeword drawn from a carefully designed codebook [4, 5]. As pointed out in [1], the conventional SCMA (C-SCMA) suffers from a small diversity order which is a critical bottleneck for fundamental improvement of the SCMA error performance. Therefore, it is pivotal to look for new and affordable SCMA transmission schemes for significant enhancement of its signal space diversity (SSD). ### _Related Works_ SSD, as an effective transmission scheme for higher diversity, has received sustained research attention in the past decades. A power- and bandwidth-efficient way to acquire SSD was proposed in [6] by coordinate interleaving with constellation rotation. For significant performance gain, the rotation angles of different constellations were investigated in [7, 8] for uncoded systems and in [9, 10] for bit-interleaved coded modulation (BICM) systems. In [11], a different approach to attain full diversity was proposed by judiciously permuting a one-dimensional real constellation through combinatorial optimization to form a multi-dimensional codebook. This was soon followed by [12] with a low-complexity list-based detection algorithm that works for SSD with both partial- and full-diversity multi-dimensional codebooks. It is noted that the above works [6, 7, 8, 9, 10, 11, 12] were mainly conducted in single-user systems with OMA transmission.
As far as multiuser communication is concerned, there have been some works on exploiting SSD in power-domain NOMA (PD-NOMA) systems [13, 14, 15, 16, 17]. In these works, the composite constellations with different rotation angles were obtained for two or more users by optimizing certain criteria such as minimum distance, maximum mutual information (MI), and minimum pairwise error probability (PEP). To the best of our knowledge, however, no results are known on SSD assisted SCMA. An important aspect of SCMA system optimization is sparse codebook design in order to achieve excellent error rate performances in different channel conditions. Existing codebook designs mainly follow a multi-stage design optimization by first constructing a common multidimensional constellation, called a mother constellation (MC), upon which certain user-specific operations (e.g., interleaving, permutation, shuffling and phase rotations) are applied to obtain codebooks for multiple users [18, 19, 20, 21, 22]. In general, the MC and user-specific operations can be designed by minimizing the PEP conditioned on certain channel conditions [18, 19, 20, 21, 22, 23] or maximizing the system capacity [24, 25, 26, 27, 28]. By looking into the PEP over Gaussian and Rayleigh fading channels, it is desirable to maximize the minimum Euclidean distance (MED) and minimum product distance (MPD) of a MC or a codebook. Following this spirit, [18] considered Star-QAM as the MC for enlarged MED of the superimposed codewords in downlink SCMA systems. Golden angle modulation (GAM) constellation was adopted in [19] to construct SCMA codebooks with low peak-to-average power ratio properties. In [20], near-optimal codebooks for different channel conditions were investigated by choosing suitable MCs with large MPD. A uniquely decomposed constellation group based codebook design approach was proposed in [21] by maximizing the MED at each resource node and the MPD of the MC. A downlink quaternary sparse codebook with large MED was obtained in [23] by solving a non-convex optimization problem. Recently, a novel class of low-projection SCMA codebooks for ultra-low decoding complexity was developed in [22] by maximizing the proposed distance metric over Rician fading channels. SCMA codebooks can also be optimized from the capacity perspective, as shown in [24, 25, 26, 27, 28]. In [24], a gradient based algorithm was proposed to optimize the average mutual information (AMI), where the AMI was calculated by the Monte Carlo method due to the unavailability of its closed form [24]. To avoid the prohibitively high-complexity AMI computation, the cutoff rate was considered in SCMA codebook optimization in [25, 26, 27]. Specifically, [25] proposed a performance criterion based on the cutoff rate of the equivalent multiple-input multiple-output SCMA system for uplink Rayleigh fading channels. In [26], new MCs were obtained by looking into the constellation constrained sum rate capacity. The cutoff rate combined with constellation shaping gain was considered in [27]. More recently, a novel sparse codebook was obtained in [28] by maximizing the derived lower bound of AMI. However, a closed-form lower bound of the AMI for Rayleigh fading channels is still missing. It is noted that the \(M\)-order pulse-amplitude modulation (\(M\)-PAM) was employed as the basic constellation in [26, 27, 28], thus their resultant codebooks exhibit certain similarity.
### _Motivations and Contributions_ Against the aforementioned background, the motivations of this work are two-fold: 1) As SSD can provide enhanced diversity gain over fading channels, a fundamental investigation on the amalgamation of SSD and SCMA, referred to as SSD-SCMA, is needed to establish the theoretical trade-offs and design guidelines; 2) Albeit there are numerous SCMA codebook designs based on PEP or capacity, these codebooks may not be optimal for SSD-SCMA. The main novelties and contributions of the paper are summarized as follows: * We introduce quadrature component delay to the superimposed codeword of a downlink SCMA for efficient acquisition of SSD, where the resultant system is called SSD-SCMA. Interestingly, we show that the resultant diversity order is doubled compared to that in conventional SCMA, thus leading to significantly improved reliability in Rayleigh fading channels. To guide the system design, an AMI lower bound and a PEP upper bound are derived. * Based on the derived AMI lower bound and the modified minimum product distance (MMPD) from the proposed PEP upper bound, we formulate systematic design metrics including the MC design, sparse codebook optimization, and bit labeling from both the PEP and AMI perspectives. In addition, we fill a gap in the current SCMA literature on the asymptotic relationship between PEP and AMI based design metrics, thus establishing the fundamental connection between these two SCMA codebook design techniques. * We develop an enhanced GAM (E-GAM) as the \(N\)-dimensional MC for the proposed AMI based codebooks (AMI-CBs). The joint optimization of MC and rotation angles by maximizing the AMI lower bound is carried out with an interior point method (IPM) with random initial values and Monte Carlo sample estimation. For the proposed PEP based codebooks (P-CBs), we advocate permuting a basic one-dimensional constellation with large MED to construct the \(N\)-dimensional MC. The rotation angles for different users are optimized based on the proposed multi-stage search. * We conduct extensive numerical experiments to show the superiority of the proposed SSD-SCMA systems and the proposed codebooks in both uncoded and BICM with iterative demapping and decoding (BICM-IDD) systems. The simulations indicate that significant error performance gains are achieved for SSD-SCMA with the proposed AMI-CBs and P-CBs compared to the C-SCMA systems with the state-of-the-art codebooks. ### _Organization_ The rest of the paper is organized as follows. In Section II, the system model of downlink SSD-SCMA along with the multiuser detection technique are presented. Section III analyzes the AMI and PEP of the SSD-SCMA system in Rayleigh fading channels. In Section IV, the codebook design problems for SSD-SCMA are formulated in terms of the AMI and PEP. The detailed design of AMI-CB and P-CB is elaborated in Section V. The numerical results are given in Section VI. Finally, conclusions are drawn in Section VII. ### _Notation_ The \(n\)-dimensional complex, real and binary vector spaces are denoted as \(\mathbb{C}^{n}\), \(\mathbb{R}^{n}\) and \(\mathbb{B}^{n}\), respectively. Similarly, \(\mathbb{C}^{k\times n}\), \(\mathbb{R}^{k\times n}\) and \(\mathbb{B}^{k\times n}\) denote the \((k\times n)\)-dimensional complex, real and binary matrix spaces, respectively. \(\mathbf{I}_{n}\) denotes an \(n\times n\)-dimensional identity matrix. \(\mathbf{tr}(\mathbf{X})\) denotes the trace of a square matrix \(\mathbf{X}\).
\(\text{diag}(\mathbf{x})\) gives a diagonal matrix with the diagonal vector of \(\mathbf{x}\). \((\cdot)^{\mathcal{T}}\), \((\cdot)^{\dagger}\) and \((\cdot)^{\mathcal{H}}\) denote the transpose, the conjugate and the Hermitian transpose operation, respectively. \(\|\mathbf{x}\|_{2}\) and \(|x|\) return the Euclidean norm of vector \(\mathbf{x}\) and the absolute value of \(x\), respectively. \(\mathbf{x}_{\mathrm{I}}\) and \(\mathbf{x}_{\mathrm{Q}}\) return the in-phase (I) and quadrature (Q) components of the vector, respectively. ## II Introduction to the proposed SSD-SCMA ### _Introduction to SCMA_ We consider a downlink SCMA system where \(J\) users communicate over \(K\) orthogonal resources. The overloading factor, defined as \(\lambda=\frac{J}{K}\), is larger than \(100\%\). On the transmitter side, each user maps \(\log_{2}(M)\) binary bits to a length-\(K\) codeword \(\mathbf{x}_{j}\) drawn from a pre-defined codebook \(\boldsymbol{\mathcal{X}}_{j}\in\mathbb{C}^{K\times M}\), where \(M\) denotes the modulation order. The mapping relationship is expressed as \(f_{j}:\mathbb{B}^{\log_{2}M\times 1}\rightarrow\mathbf{\mathcal{X}}_{j}\in\mathbb{C}^{K \times M}\), i.e., \(\mathbf{x}_{j}=f_{j}(\mathbf{b}_{j})\), where the codeword set for the \(j\)th user is given by \(\mathbf{\mathcal{X}}_{j}=\{\mathbf{x}_{j,1},\mathbf{x}_{j,2},\ldots,\mathbf{x}_{j,M}\}\) and \(\mathbf{b}_{j}=[b_{j,1},b_{j,2},\ldots,b_{j,\log_{2}M}]^{\mathcal{T}}\in \mathbb{B}^{\log_{2}M\times 1}\) stands for the \(j\)th user's instantaneous input binary message vector. The \(K\)-dimensional complex codewords in the SCMA codebook are sparse vectors with \(N\) non-zero elements and \(N<K\). The sparsity of the codebooks enables the low complexity message passing algorithm (MPA) detection at the receiver. Let \(\mathbf{c}_{j}\) be a length-\(N\) vector drawn from \(\mathbf{\mathcal{C}}_{j}\subset\mathbb{C}^{N\times M}\), where \(\mathbf{\mathcal{C}}_{j}\) is obtained by removing all the zero elements in \(\mathbf{\mathcal{X}}_{j}\). We further define the mapping from \(\mathbb{B}^{\log_{2}M}\) to \(\mathbf{\mathcal{C}}_{j}\) as \(g_{j}:\mathbb{B}^{\log_{2}M\times 1}\mapsto\mathbf{\mathcal{C}}_{j}\), i.e., \(\mathbf{c}_{j}=g_{j}(\mathbf{b}_{j})\). The SCMA mapping now can be re-written as \[f_{j}\coloneqq\mathbf{V}_{j}g_{j},\quad\text{ i.e., }\mathbf{x}_{j}=\mathbf{V}_{j}g_{j}( \mathbf{b}_{j}), \tag{1}\] where \(\mathbf{V}_{j}\in\mathbb{B}^{K\times N}\) is a mapping matrix that maps the \(N\)-dimensional vector to a \(K\)-dimensional sparse codeword. The sparse structure of the \(J\) SCMA codebooks can be represented by the indicator matrix (factor graph) \(\mathbf{F}_{K\times J}=[\mathbf{f}_{1},\ldots,\mathbf{f}_{J}]\subset\mathbb{ B}^{K\times J}\) where \(\mathbf{f}_{j}=\text{diag}(\mathbf{V}_{j}\mathbf{V}_{j}^{\mathcal{T}})\). An element of \(\mathbf{F}\) is defined as \(f_{k,j}\) which takes the value of \(1\) if and only if the user node \(u_{j}\) is connected to resource node \(r_{k}\) and 0 otherwise. Fig. 1 illustrates an SCMA factor graph with \(J=6\), \(K=4\) and \(N=2\). ### _Proposed SSD-SCMA_ The key idea of SSD-SCMA is to introduce a delay \(d\) for the quadrature component of the SCMA codeword, where the delay time \(d\) is assumed to be larger than the channel coherence time [9, 10]. After the component delay (CD), the transmit signal of the \(j\)th user is represented by \(\mathbf{x}_{\text{CD},j}\). The block diagram of the proposed SCMA system with CD is shown in Fig. 2.
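To make the sparse mapping concrete, the short Python sketch below is our own illustration (not the authors' code): it builds an indicator matrix with \(K=4\), \(J=6\), \(N=2\) and three colliding users per resource, consistent with Fig. 1 up to a possible reordering of users, and implements \(\mathbf{x}_{j}=\mathbf{V}_{j}\mathbf{c}_{j}\) from Eq. (1). The toy codeword values are arbitrary.

```python
import numpy as np

# One common (4 x 6) indicator matrix with N = 2 nonzeros per column and
# d_f = 3 users per resource; the user ordering in Fig. 1 may differ.
F = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
K, J = F.shape

def mapping_matrix(j):
    """V_j in B^{K x N}: lifts user j's N-dim codeword onto its resources."""
    rows = np.flatnonzero(F[:, j])            # varphi_j: resources of user j
    V = np.zeros((K, rows.size))
    V[rows, np.arange(rows.size)] = 1.0
    return V

c_0 = np.array([0.7 + 0.3j, -0.5 - 0.8j])     # toy 2-dim codeword of user 0
x_0 = mapping_matrix(0) @ c_0                 # sparse K-dim codeword, Eq. (1)
```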
After the CD module, the transmitted vector is obtained by \(\mathbf{r}_{\text{CD}}=\sum_{j=1}^{J}\mathbf{x}_{\text{CD},j}\). Accordingly, the received signal at the \(j\)th user can be written as \[\mathbf{y}_{\text{CD},j}=\text{diag}\left(\mathbf{h}_{\text{CD},j}\right) \mathbf{r}_{\text{CD}}+\mathbf{z}_{j}, \tag{2}\] where \(\mathbf{h}_{\text{CD},j}\in\mathbb{C}^{K\times 1}\) is the channel coefficient vector between the base station and the \(j\)th user, and \(\mathbf{z}_{j}\in\mathbb{C}^{K\times 1}\) is the complex additive white Gaussian noise (AWGN) vector with zero mean and variance \(N_{0}\). We assume that perfect CSI is available at the receiver. After the phase equalizer, the received signal is transformed into \(\frac{\text{diag}\left(\mathbf{h}_{\text{CD},j}^{\dagger}\right)}{|\text{ diag}\left(\mathbf{h}_{\text{CD},j}\right)|}\mathbf{y}_{\text{CD},j}=|\text{diag} \left(\mathbf{h}_{\text{CD},j}\right)|\mathbf{r}_{\text{CD}}+\frac{\text{diag} \left(\mathbf{h}_{\text{CD},j}^{\dagger}\right)}{|\text{diag}\left(\mathbf{h} _{\text{CD},j}\right)|}\mathbf{z}_{j}\)[6, 7, 8, 9, 10, 11, 12]. Since the noise \(\mathbf{z}_{j}\) is circularly symmetric, \(\frac{\text{diag}\left(\mathbf{h}_{\text{CD},j}^{\dagger}\right)}{|\text{diag} \left(\mathbf{h}_{\text{CD},j}\right)|}\mathbf{z}_{j}\) has the same distribution as \(\mathbf{z}_{j}\). Denote by \(\overline{\mathbf{y}}_{j}\) the received signal after delaying the in-phase component of the received \(K\)-dimensional vector. We further let \(\mathbf{h}_{j}^{\text{I}}=\left[h_{j,1}^{\text{I}},h_{j,2}^{\text{I}},\ldots,h _{j,K}^{\text{I}}\right]^{\mathcal{T}}\) and \(\mathbf{h}_{j}^{\text{Q}}=\left[h_{j,1}^{\text{Q}},h_{j,2}^{\text{Q}},\ldots,h _{j,K}^{\text{Q}}\right]^{\mathcal{T}}\) be the channel gains associated with the I and Q components of the transmitted vector \(\mathbf{r}\), respectively. The elements of \(\mathbf{h}_{j}^{\text{I}}\) and \(\mathbf{h}_{j}^{\text{Q}}\) are independent Rayleigh distributed random variables (arising from complex Gaussian fades with zero mean and unit variance). Then, \(\overline{\mathbf{y}}_{j}\) can be demultiplexed into two independent parallel channels [29]: \[\mathbf{y}_{j}=\mathbf{H}_{j}\mathbf{w}+\mathbf{n}_{j}=\left[\begin{array}{cc}\text{diag}\left(\mathbf{h}_{j}^{\text{I}}\right)&0\\ 0&\text{diag}\left(\mathbf{h}_{j}^{\text{Q}}\right)\end{array}\right]\left[ \begin{array}{c}\mathbf{r}_{\text{I}}\\ \mathbf{r}_{\text{Q}}\end{array}\right]+\left[\begin{array}{c}\overline{ \mathbf{z}}_{j,\text{I}}\\ \overline{\mathbf{z}}_{j,\text{Q}}\end{array}\right], \tag{3}\] where \(\mathbf{y}_{j}=[\overline{\mathbf{y}}_{j,\text{I}}^{\mathcal{T}},\overline{ \mathbf{y}}_{j,\text{Q}}^{\mathcal{T}}]^{\mathcal{T}}\in\mathbb{R}^{2K\times 1}\), \(\mathbf{r}=\sum_{j=1}^{J}\mathbf{x}_{j}\), and \(\mathbf{w}=\left[\mathbf{r}_{\text{I}}^{\mathcal{T}},\mathbf{r}_{\text{Q}}^{ \mathcal{T}}\right]^{\mathcal{T}}\). \(\mathbf{n}_{j}=\left[\overline{\mathbf{z}}_{\text{I}}^{\mathcal{T}},\overline{ \mathbf{z}}_{\text{Q}}^{\mathcal{T}}\right]^{\mathcal{T}}\) is the real Gaussian noise vector with zero mean and variance \(\frac{N_{0}}{2}\), and \(\overline{\mathbf{z}}_{j}=\frac{\text{diag}\left(\mathbf{h}_{j}^{\dagger}\right)}{| \text{diag}\left(\mathbf{h}_{j}\right)|}\mathbf{z}_{j}\). For simplicity, the subscript \(j\) in (3) is omitted whenever no ambiguity arises. As observed from (3), the I and Q components of the transmitted codewords experience independent Rayleigh fading channels.
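The equivalent model (3) is easy to simulate. The sketch below is our own minimal illustration, assuming ideal alignment of the delayed components, unit-power Rayleigh fades and arbitrary toy codeword values:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N0 = 4, 0.1

r = rng.normal(size=K) + 1j * rng.normal(size=K)     # superimposed codeword (toy)
w = np.concatenate([r.real, r.imag])                 # w = [r_I^T, r_Q^T]^T

def rayleigh(k):
    """Rayleigh amplitudes from complex Gaussian fades with E|h|^2 = 1."""
    h = (rng.normal(scale=np.sqrt(0.5), size=k)
         + 1j * rng.normal(scale=np.sqrt(0.5), size=k))
    return np.abs(h)

# The Q-component delay makes the I and Q parts see independent fades:
H = np.diag(np.concatenate([rayleigh(K), rayleigh(K)]))  # block-diagonal gains of Eq. (3)
n = rng.normal(scale=np.sqrt(N0 / 2.0), size=2 * K)      # real noise, variance N0/2
y = H @ w + n                                            # 2K-dim real received vector
```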
We will show later that the proposed SSD-SCMA along with efficient codebook design can significantly improve the communication reliability in fading channels. ### _MPA Detection_ The received signal \(\overline{\mathbf{y}}\) is fed into the MPA decoder for efficient multi-user detection. The MPA detector exploits the connections between the user nodes and resource nodes, and passes the belief information along the edges of the factor graph. Define the sets \(\varphi_{j}=\{k:f_{k,j}=1\}\), representing all the resource nodes on which user \(j\) has active transmission, and \(\phi_{k}=\{j:f_{k,j}=1\}\), consisting of all the users colliding over resource node \(k\). Following the basic principle of the MPA, at the \(t\)th iteration, the belief message propagating from resource node \(r_{k}\) to user node \(u_{j}\), denoted by \(I_{r_{k}\to u_{j}}^{t}(\mathbf{x}_{j})\), and the belief message propagating from user node \(u_{j}\) to resource node \(r_{k}\), denoted by \(I_{u_{j}\to r_{k}}^{t}(\mathbf{x}_{j})\), can be expressed respectively as [30] \[I_{r_{k}\to u_{j}}^{t}(\mathbf{x}_{j})=\sum_{\begin{subarray}{c}i\in\phi_{k} \backslash\{j\}\\ \mathbf{x}_{i}\in\mathcal{X}_{i}\end{subarray}}p\left(\overline{y}_{k}| \mathbf{x}_{i}\right)\prod_{i\in\phi_{k}\backslash\{j\}}I_{u_{i}\to r_{k}}^{(t-1 )}(\mathbf{x}_{i}), \tag{4}\] and \[I_{u_{j}\to r_{k}}^{(t)}(\mathbf{x}_{j})=\alpha_{j}\prod_{\ell\in\varphi_{j} \backslash\{k\}}I_{r_{\ell}\to u_{j}}^{t}(\mathbf{x}_{j}), \tag{5}\] where \(\overline{y}_{k}\) is the \(k\)th entry of \(\overline{\mathbf{y}}\), \(\alpha_{j}\) is a normalization factor and the probability distribution function \(p\left(\overline{y}_{k}|\mathbf{x}_{i}\right)\) is given by \[p\left(\overline{y}_{k}|\mathbf{x}_{i}\right)=\frac{1}{\sqrt{2\pi N_{0}}}\text {exp}\left(-\frac{\sum_{l\in\{\mathrm{I},\mathrm{Q}\}}\left|\overline{y}_{l,k} -|h_{k}^{l}|\sum_{i\in\phi_{k}}x_{l,k,i}\right|^{2}}{2N_{0}}\right). \tag{6}\]

Fig. 1: Factor representation of a \((4\times 6)\) SCMA system.

Fig. 2: Block diagram for the proposed SSD-SCMA system with quadrature component delay in a downlink Rayleigh fading channel.

## III AMI Derivation and Error Performance Analysis This section first derives the AMI and its lower bound of the proposed SSD-SCMA system, followed by the error performance analysis based on the PEP. ### _The AMI of the Proposed SSD-SCMA_ Let \(\boldsymbol{\mathcal{X}}=\left\{\boldsymbol{\mathcal{X}}_{1},\boldsymbol{ \mathcal{X}}_{2},\ldots,\boldsymbol{\mathcal{X}}_{J}\right\}\) denote the \(J\) users' sparse codebooks. For the input vector \(\mathbf{w}\) given in (3), the AMI of SSD-SCMA is given by [24] \[\mathcal{I}_{AMI}^{\boldsymbol{\mathcal{X}}} =\mathcal{H}\left(\mathbf{w}\right)-\mathcal{H}\left(\mathbf{w} |\mathbf{y},\mathbf{H}\right) \tag{7}\] \[=J\log_{2}(M)-\mathbb{E}_{\mathbf{w},\mathbf{y},\mathbf{H}} \left\{\log_{2}\frac{\sum_{\hat{\mathbf{w}}}p(\mathbf{y} |\hat{\mathbf{w}},\mathbf{H})}{p(\mathbf{y}|\mathbf{w},\mathbf{H})} \right\}\] \[=J\log_{2}(M)-\frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\mathbb{E}_{ \mathbf{H},\mathbf{n}}\left\{\log\sum_{p=1}^{M^{J}}\exp\left(-d_{m,p}\right) \right\},\] where \(\mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right)\) denotes the entropy of \(\mathbf{w}\) conditioned on \(\mathbf{H}\) and \(\mathbf{y}\), and \[d_{m,p}=\frac{\|\mathbf{H}\left(\mathbf{w}_{p}-\mathbf{w}_{m}\right)+\mathbf{ n}\|^{2}-\|\mathbf{n}\|^{2}}{N_{0}}.
\tag{8}\] The AMI bounds the maximal information rate of the codebook set \(\boldsymbol{\mathcal{X}}=\left\{\boldsymbol{\mathcal{X}}_{1},\boldsymbol{ \mathcal{X}}_{2},\ldots,\boldsymbol{\mathcal{X}}_{J}\right\}\) that can be reliably transmitted with equiprobable inputs. In general, it is challenging to obtain the AMI in closed form. As an alternative solution, we derive an analytical lower bound of the AMI to evaluate the transmit efficiency with finite input \(\boldsymbol{\mathcal{X}}\). **Lemma 1:** The AMI of the SSD-SCMA system in downlink Rayleigh fading channels is upper bounded by \[\mathcal{I}_{UP}^{\boldsymbol{\mathcal{X}}}=J\log(M)-\frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\log \left(\sum_{p=1}^{M^{J}}\exp\left(-\frac{\|\mathbf{r}_{p}-\mathbf{r}_{m}\|^{2 }}{N_{0}}\right)\right). \tag{9}\] _Proof:_ For given \(\mathbf{H}\), we have \[\mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right) \stackrel{(\mathrm{i})}{\geq}\frac{1}{M^{J}}\sum_{m=1}^{M^{J}} \log\left(\sum_{p=1}^{M^{J}}\exp\left(\mathbb{E}_{\mathbf{n}}\left\{-d_{m,p} \right\}\right)\right) \tag{10}\] \[=\frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\log\left(\sum_{p=1}^{M^{J}}\exp\left(-\frac{ \|\mathbf{H}\left(\mathbf{w}_{p}-\mathbf{w}_{m}\right)\|^{2}}{N_{0}}\right) \right),\] where (i) is obtained by applying Jensen's inequality since the log-sum-exp function is a convex function of \(d_{m,p}\). Upon taking the expectation over \(\mathbf{H}\) on both sides of (10), we have \[\mathcal{H}\left(\mathbf{w}|\mathbf{y}\right)=\mathbb{E}_{\mathbf{H}}\left\{ \mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right)\right\} \tag{11}\] \[\stackrel{(\mathrm{i})}{\geq}\frac{1}{M^{J}}\sum_{m=1}^{M^{J}} \log\left(\sum_{p=1}^{M^{J}}\exp\left(\mathbb{E}_{\mathbf{H}}\left\{-\frac{\| \mathbf{H}\left(\mathbf{w}_{p}-\mathbf{w}_{m}\right)\|^{2}}{N_{0}}\right\} \right)\right)\] \[=\frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\log\left(\sum_{p=1}^{M^{J}}\exp\left(-\frac{\| \mathbf{r}_{p}-\mathbf{r}_{m}\|^{2}}{N_{0}}\right)\right).\] Substituting (11) into \[\mathcal{I}(\mathbf{w};\mathbf{y})=J\log_{2}(M)-\mathbb{E}_{\mathbf{H}}\left\{ \mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right)\right\} \tag{12}\] yields the upper bound in (9). **Lemma 2:** The AMI of the SSD-SCMA system in downlink Rayleigh fading channels is lower bounded by \[\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}=2J\log(M)-K\left(\frac{1}{\ln 2}-1 \right)-\log\left(\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{K}\gamma_{k,m,p}\right), \tag{13}\] where \[\gamma_{k,m,p}=\prod_{l\in\{\mathrm{I},\mathrm{Q}\}}\left(1+\frac{\left|\sum_{j \in\phi_{k}}\left(x_{j,m,l}[k]-x_{j,p,l}[k]\right)\right|^{2}}{4N_{0}}\right)^{-1}. \tag{14}\] _Proof:_ By taking the noise term \(\frac{\|\mathbf{n}\|^{2}}{N_{0}}\) in \(d_{m,p}\) out of the summation, we can reformulate \(\mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right)\) in (7) as [31] \[\mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right) \tag{15}\] \[=\mathbb{E}_{\mathbf{n}}\log\exp(\frac{\|\mathbf{n}\|^{2}}{N_{0}} )+\frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\mathbb{E}_{\mathbf{n}}\left\{\log\sum_{p=1}^{M ^{J}}\exp\left(e_{m,p}\right)\right\}\] \[\stackrel{(\mathrm{i})}{\leq}\frac{K}{\ln 2}+\log\left( \frac{1}{M^{J}}\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\mathbb{E}_{\mathbf{n}} \left\{\exp\left(e_{m,p}\right)\right\}\right),\] where \(e_{m,p}=-\frac{\|\mathbf{H}(\mathbf{w}_{p}-\mathbf{w}_{m})+\mathbf{n}\|^{2}}{N_ {0}}\). Carrying out the Gaussian integral over \((-\infty,\infty)\) for the first term on the right-hand side, we have \(\mathbb{E}_{\mathbf{n}}\left\{\log\exp(\|\mathbf{n}\|^{2}/N_{0})\right\}=\frac{K} {\ln 2}\).
Since \(\log\left(x\right)\) is a concave function, a lower bound for the AMI is derived by applying Jensen's inequality in (15), i.e., step (i). The expectation over \(\mathbf{n}\) in (15) is given by \[\mathbb{E}_{\mathbf{n}}\left\{\exp\left(e_{m,p}\right)\right\} \tag{16}\] \[=\int\frac{1}{(\pi N_{0})^{K}}\exp\left(\frac{-\|\mathbf{n}\|^{2 }}{N_{0}}\right)\exp\left(e_{m,p}\right)\mathrm{d}\mathbf{n}\] \[=\frac{1}{(\pi N_{0})^{K}}\prod_{k=1}^{2K}\int_{n_{k}}\exp\left(- \frac{|n_{k}+|h_{k}|\left(w_{p,k}-w_{m,k}\right)|^{2}+|n_{k}|^{2}}{N_{0}}\right) \mathrm{d}n_{k}\] \[\stackrel{(\mathrm{i})}{=}\frac{1}{2^{K}}\prod_{k=1}^ {2K}\exp\left(-\frac{|h_{k}|^{2}\delta_{p,m,k}^{2}}{2N_{0}}\right),\] where step (i) is derived based on (2.33.1) in [32], \(\delta_{p,m,k}^{2}=\left(w_{p,k}-w_{m,k}\right)^{2}\), \(|h_{k}|\) is the \(k\)th entry of \(\text{diag}(\mathbf{H})\), and \(w_{m,k}\) is the \(k\)th entry of \(\mathbf{w}_{m}\). Then, substituting (16) into (15) and taking the expectation over \(\mathbf{H}\) on both sides, we obtain \[\mathcal{H}\left(\mathbf{w}|\mathbf{y}\right)=\mathbb{E}_{\mathbf{H}} \left\{\mathcal{H}\left(\mathbf{w}|\mathbf{y},\mathbf{H}\right)\right\}\] \[\leq\frac{K}{\ln 2}-J\log(M)\] \[+\mathbb{E}_{\mathbf{H}}\left\{\log\left(\sum_{m=1}^{M^{J}}\sum_ {p=1}^{M^{J}}\frac{1}{2^{K}}\prod_{k=1}^{2K}\exp\left(-\frac{|h_{k}|^{2}\delta _{p,m,k}^{2}}{2N_{0}}\right)\right)\right\}\] \[\leq K\left(\frac{1}{\ln 2}-1\right)-J\log(M)\] \[+\log\left(\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{2K} \mathbb{E}_{\mathbf{H}}\left\{\exp\left(-\frac{|h_{k}|^{2}\delta_{p,m,k}^{2}}{ 4N_{0}}\right)\right\}\right)\] \[\overset{(\mathrm{i})}{=}K\left(\frac{1}{\ln 2}-1\right)-J\log(M)\] \[+\log\bigg(\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{K} \prod_{l\in\{\mathrm{I},\mathrm{Q}\}}\bigg(1+\frac{\left|\sum_{j\in\phi_{k}}\left(x_{j,m,l}[k ]-x_{j,p,l}[k]\right)\right|^{2}}{4N_{0}}\bigg)^{-1}\bigg), \tag{17}\] where step (i) is obtained based on the fact that \(s=h_{k}^{2}\) has a chi-square probability distribution with its moment generating function, defined as \(\mathbb{E}\left[e^{-st}\right]\), given by \(M_{s}(t)=\frac{1}{1+t}\). Substituting (17) into (12) leads to the lower bound in (13). **Remark 1**: _Following a similar derivation to the above for Lemma 2, we can obtain the lower bound of AMI for conventional SCMA, which has the same form as (13), but with a different expression of \(\gamma_{k,m,p}\) given by_ \[\gamma_{k,m,p}=\left(1+\frac{\left|\sum\limits_{j\in\phi_{k}}\left(x_{j,m}[k]-x_{ j,p}[k]\right)\right|^{2}}{4N_{0}}\right)^{-1}. \tag{18}\] **Remark 2**: _For \(N_{0}\rightarrow\infty\) and \(N_{0}\to 0\), \(\mathcal{I}_{LB}\) approaches \(-K\left(1/\ln 2-1\right)\) and \(J\log M-K\left(1/\ln 2-1\right)\), respectively. This indicates that at low and high signal-to-noise ratio (SNR) regions, there exists a constant gap \(-K\left(1/\ln 2-1\right)\) between \(\mathcal{I}_{LB}\) and \(\mathcal{I}_{AMI}^{\mathcal{X}}\). The lower bound with a constant shift can well approximate the AMI, particularly in the low and high SNR regions. In the medium SNR region, the gap between the lower bound and AMI is intractable. In fact, maximizing the lower bound is still an efficient approach to improve the AMI in the medium SNR region [28, 31]._ ### _Error Performance Analysis of the Proposed SSD-SCMA_ Assume that the erroneously decoded codeword is \(\hat{\mathbf{w}}\) when \(\mathbf{w}\) is transmitted, where \(\hat{\mathbf{w}}\neq\mathbf{w}\).
### _Error Performance Analysis of the Proposed SSD-SCMA_

Assume that the erroneously decoded codeword is \(\hat{\mathbf{w}}\) when \(\mathbf{w}\) is transmitted, where \(\hat{\mathbf{w}}\neq\mathbf{w}\). Furthermore, let us define the element-wise distance \(\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)=|w_{k}-\hat{w}_{k}|^{2}\) [1]. Then, the pairwise error probability (PEP) conditioned on the channel fading vector for a maximum-likelihood receiver is given as [1] \[\text{Pr}\{\mathbf{w}\rightarrow\hat{\mathbf{w}}|\mathbf{H}\}=Q\left(\sqrt{\frac{\sum_{k=1}^{2K}h_{k}^{2}\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)}{2N_{0}}}\right), \tag{19}\] where \(Q(x)=\frac{1}{\pi}\int_{0}^{+\infty}\frac{e^{-\frac{x^{2}}{2}(t^{2}+1)}}{t^{2}+1}dt\) is the Gaussian \(Q\)-function [33]. By letting \(\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)=\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)/4N_{0}\) and taking the expectation over the channel vector, one has \[\text{Pr}\{\mathbf{w}\rightarrow\hat{\mathbf{w}}\}=\frac{1}{\pi}\int_{0}^{+\infty}\frac{\mathbb{E}_{\mathbf{H}}\left\{\exp\left\{-\left(t^{2}+1\right)\sum\limits_{k=1}^{2K}h_{k}^{2}\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)\right\}\right\}}{t^{2}+1}dt \tag{20}\] \[=\frac{1}{\pi}\int_{0}^{+\infty}\frac{1}{t^{2}+1}\prod_{k=1}^{2K}\frac{1}{1+\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)(t^{2}+1)}dt.\] To proceed, define the set \(\eta_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}\triangleq\left\{k:\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)\neq 0,1\leq k\leq 2K\right\}\), and let \(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}\) be the cardinality of \(\eta_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}\). We further define \(\delta_{k}\triangleq\frac{\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)}{1+\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)}\) and \(\delta_{\text{min}}=\min\left\{\delta_{k}:k\in\eta_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}\right\}\). Thus, the PEP can be written as [34] \[\text{Pr}\{\mathbf{w}\rightarrow\hat{\mathbf{w}}\}\leq B(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}},\delta_{\text{min}})\prod_{k=1}^{2K}\frac{1}{1+\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)^{2}}, \tag{21}\] where \[B(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}},\delta_{\text{min}})=\frac{1}{4^{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}}\sum_{l=0}^{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}-1}\binom{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}-1+l}{l}\left(\frac{2}{1+\delta_{\text{min}}}\right)^{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}-l}. \tag{22}\] Note that under the asymptotic condition \(N_{0}\to 0\), (21) is tighter than the Chernoff bound1 by the factor \(B(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}},\delta_{\text{min}})\). Footnote 1: The Chernoff bound can be obtained by setting \(B(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}},\delta_{\text{min}})=\frac{1}{2}\).
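For a single codeword pair, (21)-(22) translate directly into code. The sketch below mirrors (21) exactly as printed, and the layout of the input vector of element-wise distances is an assumption:

```python
import numpy as np
from math import comb

def pep_upper_bound(tau, N0):
    """PEP upper bound of (21)-(22) for one codeword pair, given the
    element-wise distances tau(k), k = 1..2K."""
    tau = np.asarray(tau, dtype=float)
    gamma = tau / (4 * N0)
    eta = gamma[tau > 0]                     # the set eta_{w -> w_hat}
    G = eta.size                             # its cardinality G_{w -> w_hat}
    delta_min = np.min(eta / (1 + eta))
    B = sum(comb(G - 1 + l, l) * (2 / (1 + delta_min)) ** (G - l)
            for l in range(G)) / 4 ** G      # eq. (22)
    return B * np.prod(1.0 / (1.0 + gamma ** 2))   # eq. (21) as printed
```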
_Diversity order:_ At sufficiently large SNR, \(\gamma_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)\rightarrow\infty\), so \(\delta_{\text{min}}\approx 1\) and it follows that \[B(G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}},\delta_{\text{min}})\leq\frac{1}{4^{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}}\binom{2G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}-1}{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}. \tag{23}\] Hence, one can approximate the right-hand side of (21) as \[\text{Pr}\{\mathbf{w}\rightarrow\hat{\mathbf{w}}\}\leq G_{c}\left(\mathbf{w}\rightarrow\hat{\mathbf{w}}\right)N_{0}^{G_{d}(\mathbf{w}\rightarrow\hat{\mathbf{w}})}, \tag{24}\] where \(G_{d}(\mathbf{w}\rightarrow\hat{\mathbf{w}})\) is the resulting exponent of \(N_{0}\) for the pair \((\mathbf{w},\hat{\mathbf{w}})\) and \[G_{c}(\mathbf{w}\rightarrow\hat{\mathbf{w}})=\binom{2G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}-1}{G_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}\prod_{k\in\eta_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}|\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)|^{-2}. \tag{25}\] From (24) and (25), define the diversity order (DO) of the SSD-SCMA system as \(G_{d}\triangleq\min_{\mathbf{w}\neq\hat{\mathbf{w}}}G_{d}(\mathbf{w}\rightarrow\hat{\mathbf{w}})\) and the coding gain as \(G_{c}\triangleq\min_{\mathbf{w}\neq\hat{\mathbf{w}},\,G_{d}(\mathbf{w}\rightarrow\hat{\mathbf{w}})=G_{d}}G_{c}(\mathbf{w}\rightarrow\hat{\mathbf{w}})\) [35]. The union bound on the average bit error rate (ABER) of the SSD-SCMA system is given as \[P_{\text{b}}\leq\frac{1}{M^{J}\cdot J\log_{2}(M)}\sum_{\mathbf{w}}\sum_{\hat{\mathbf{w}}\neq\mathbf{w}}n_{\text{e}}(\mathbf{w},\hat{\mathbf{w}})\cdot\text{Pr}\{\mathbf{w}\rightarrow\hat{\mathbf{w}}\}, \tag{26}\] where \(n_{\text{e}}\left(\mathbf{w},\hat{\mathbf{w}}\right)\) is the Hamming distance between \(\mathbf{w}\) and \(\hat{\mathbf{w}}\). Based on (24) and (26), we introduce the following lemma: **Lemma 3**: _The proposed SSD-SCMA enjoys a maximum DO of \(2N\)._ _Proof:_ The worst case arises when all users' codewords are decoded correctly except for a codeword of only one user. Hence, the proposed SSD-SCMA system can achieve a maximum DO of \(2N\). Specifically, the diversity comes from the following two aspects: whilst the non-zero elements of the sparse codebooks provide a diversity of \(N\) (the same as in traditional SCMA), the signal space diversity in the I/Q components contributes a further twofold increase. _Remark 3:_ The proposed SSD-SCMA, with a DO of \(2N\), generally enjoys improved error performance with a steeper ABER slope against SNR compared to the C-SCMA system. As observed from (24) and (26), it is ideal to maximize the DO and the coding gain of a codebook to improve the ABER. In addition, the PEP also depends on the element-wise distance \(\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)\); hence, it is also desirable to enlarge \(\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)\) for improved ABER.
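Summing the pairwise bounds as in (26) then gives an inexpensive ABER estimate. The sketch below reuses `pep_upper_bound` from the previous sketch and assumes the superimposed codewords and their bit labels have been precomputed:

```python
import numpy as np

def aber_union_bound(W, bits, N0, J, M):
    """Union-bound ABER of (26), with Pr{w -> w_hat} from pep_upper_bound.

    W    : (M**J, 2K) real array; rows are superimposed codewords w in the
           2K-dimensional real-valued signal representation.
    bits : (M**J, J*log2(M)) binary labels of the rows of W.
    """
    n_cw = W.shape[0]
    total = 0.0
    for m in range(n_cw):
        for p in range(n_cw):
            if m == p:
                continue
            tau = (W[p] - W[m]) ** 2          # element-wise distances tau(k)
            n_e = np.sum(bits[m] != bits[p])  # Hamming distance n_e(w, w_hat)
            total += n_e * pep_upper_bound(tau, N0)
    return total / (M ** J * J * np.log2(M))
```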
## IV Multidimensional codebook design: design metrics

In this section, we formulate the codebook design metrics according to the AMI and PEP analysis. The main idea is to maximize the lower bound of the AMI or minimize the upper bound of the PEP.

### _AMI-CB Design by Maximizing the Lower Bound of AMI_

Our optimization goal is to maximize the AMI lower bound derived in Lemma 2. Thus, we formulate the codebook design of the SSD-SCMA system with structure \(\mathcal{S}(\mathcal{V},\mathcal{G};J,M,N,K)\), \(\mathcal{V}:=[\mathbf{V}_{j}]_{j=1}^{J}\) and \(\mathcal{G}:=[g_{j}]_{j=1}^{J}\), in downlink Rayleigh fading channels as follows: \[\mathcal{P}_{1-1}:\mathcal{V}^{*},\mathcal{G}^{*}=\arg\max_{\mathcal{V},\mathcal{G}}\ \mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}} \tag{27}\] \[\text{subject to}\quad\sum_{j=1}^{J}\text{Tr}\left(\boldsymbol{\mathcal{X}}_{j}^{H}\boldsymbol{\mathcal{X}}_{j}\right)=MJ. \tag{27a}\]

#### IV-A1 MC figure of merit

In general, the users' sparse codebooks \(\boldsymbol{\mathcal{X}}_{j},j=1,2,\ldots,J\), are generated from a common MC, denoted as \(\boldsymbol{\mathcal{C}}_{MC}\in\mathbb{C}^{N\times M}\); thus, the design of the MC is crucial. The MC-based MI (MC-MI) determines the maximum information rate that can be reliably transmitted for a single user. Naturally, we target maximizing the AMI lower bound of the MC, which can be expressed as \[\mathcal{I}_{LB}^{\boldsymbol{\mathcal{C}}_{MC}}=\log_{2}(M)-\frac{N}{\ln 2}-\frac{1}{M}\log\left(\sum_{m=1}^{M}\sum_{p=1}^{M}\prod_{n=1}^{N}\left(1+\frac{|c_{m,n}-c_{p,n}|^{2}}{4N_{0}}\right)^{-1}\right), \tag{28}\] where \(c_{m,n}\) is the \(n\)th entry of the \(m\)th codeword of \(\boldsymbol{\mathcal{C}}_{MC}\). Hence, in Rayleigh fading channels, we choose the following cost function for the MC design: \[\mathfrak{G}=\sum_{m=1}^{M}\sum_{p=1}^{M}\prod_{n=1}^{N}\bigg{(}1+\frac{|c_{m,n}-c_{p,n}|^{2}}{4N_{0}}\bigg{)}^{-1}. \tag{29}\]

#### IV-A2 Labeling figure of merit

Labeling is crucial for the BICM-IDD system. The effect of the mapping on the performance of SSD-SCMA in BICM-IDD systems can be characterized by the simplified criterion [36, 37]: \[\Upsilon_{\mathrm{Ray}}=\frac{1}{mM}\sum_{i=1}^{m}\sum_{b=0}^{1}\sum_{\mathbf{x}\in\mathcal{X}_{i}^{b}}\prod_{k=1}^{K}\bigg{\{}\frac{1}{1+\frac{1}{4N_{0}}|\mathfrak{Re}\left(x_{k}-\hat{x}_{k}\right)|^{2}}\times\frac{1}{1+\frac{1}{4N_{0}}|\mathfrak{Im}\left(x_{k}-\hat{x}_{k}\right)|^{2}}\bigg{\}}, \tag{30}\] where \(\mathcal{X}_{i}^{b}\) denotes the set of codewords with bit \(b\) at the \(i\)th position. Note that (30) is employed as the cost function to design the labeling of the AMI-CBs.

### _P-CB Design by Minimizing the Upper Bound of PEP_

According to Remark 3 in Subsection III-B, the ABER is dominated by the DO and the coding gain \(G_{c}\). The DO of user \(j\) is given by \[\text{DO}(\boldsymbol{\mathcal{X}}_{j})=\min_{i\neq l}\bigg{[}\sum_{k=1}^{K}\text{Ind}\left(|\mathfrak{Re}\left(x_{k,i}-x_{k,l}\right)|\right)+\sum_{k=1}^{K}\text{Ind}\left(|\mathfrak{Im}\left(x_{k,i}-x_{k,l}\right)|\right)\bigg{]}, \tag{31}\] where \(\text{Ind}(x)\) takes the value of one if \(x\) is nonzero and zero otherwise. We now formulate the design metric by maximizing the coding gain. For any \(\mathbf{w}\) and \(\hat{\mathbf{w}}\) such that \(G_{d}(\mathbf{w}\rightarrow\hat{\mathbf{w}})=G_{d}\), the term \(\prod\limits_{k\in\eta_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}}|\tau_{\mathbf{w}\rightarrow\hat{\mathbf{w}}}(k)|^{-2}\) in (25) equals the product distance of a single user [1]. To proceed, let us define the modified MPD (MMPD) of the \(j\)th user as \[d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}_{j}}=\min_{i\neq l,\,1\leq i,l\leq M}\ d_{\mathrm{P},i,l}^{\boldsymbol{\mathcal{X}}_{j}}=\min_{i\neq l,\,1\leq i,l\leq M}\Big{\{}\prod_{n\in\rho_{1}\left(\mathbf{x}_{i},\mathbf{x}_{l}\right)}|\mathfrak{Re}\left(x_{n,i}-x_{n,l}\right)|^{2}\times\prod_{n\in\rho_{2}\left(\mathbf{x}_{i},\mathbf{x}_{l}\right)}|\mathfrak{Im}\left(x_{n,i}-x_{n,l}\right)|^{2}\Big{\}}, \tag{32}\] where \(\rho_{1}(\mathbf{x}_{i},\mathbf{x}_{l})\) and \(\rho_{2}(\mathbf{x}_{i},\mathbf{x}_{l})\) denote the sets of indices in which \(\mathfrak{Re}(x_{n,i})\neq\mathfrak{Re}(x_{n,l})\) and \(\mathfrak{Im}(x_{n,i})\neq\mathfrak{Im}(x_{n,l})\), respectively. Note that the product distance of one user may differ from that of another due to the different user constellation operators. Hence, we take the minimum MMPD over all users, which is expressed as \[d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}=\min_{j=1,2,\ldots,J}\ d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}_{j}}. \tag{33}\] Obviously, improving the \(G_{c}\) of a codebook is equivalent to maximizing its \(d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}\).
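The MMPD in (32)-(33) reduces to a pairwise search over codewords. A minimal sketch, in which a small numerical tolerance stands in for the exact non-zero tests defining \(\rho_{1}\) and \(\rho_{2}\):

```python
import numpy as np

def mmpd(Xj, tol=1e-9):
    """Modified minimum product distance (32) of one user's codebook.

    Xj : (K, M) complex array whose columns are the user's codewords.
    """
    K, M = Xj.shape
    best = np.inf
    for i in range(M):
        for l in range(M):
            if i == l:
                continue
            dre = np.abs(np.real(Xj[:, i] - Xj[:, l]))
            dim = np.abs(np.imag(Xj[:, i] - Xj[:, l]))
            # Products run only over the index sets rho_1 and rho_2.
            prod = np.prod(dre[dre > tol] ** 2) * np.prod(dim[dim > tol] ** 2)
            best = min(best, prod)
    return best

def codebook_mmpd(X):
    """Minimum MMPD over all users, i.e. d_MMPD of (33)."""
    return min(mmpd(Xj) for Xj in X)
```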
In addition, we further define the minimum element-wise distance \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\) as \[\tau_{\min}^{\boldsymbol{\mathcal{X}}}=\min\big{\{}\tau_{\mathbf{w}_{m}\rightarrow\mathbf{w}_{n}}(k)\ \big{|}\ \mathbf{w}_{m},\mathbf{w}_{n}\in\Phi,\ \mathbf{w}_{n}\neq\mathbf{w}_{m},\ 1\leq k\leq 2K\big{\}}, \tag{34}\] where \(\Phi\) denotes the set of superimposed codewords. Based on Remark 3 in Subsection III-B, it is desirable to design codebooks that achieve full DO, large MMPD and large \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\). Hence, the codebook design problem of SSD-SCMA is formulated as \[\mathcal{P}_{2-1}:\mathcal{V}^{*},\mathcal{G}^{*}=\arg\max_{\mathcal{V},\mathcal{G}}\left\{d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}},\tau_{\min}^{\boldsymbol{\mathcal{X}}}\right\} \tag{35}\] \[\text{subject to}\quad\text{DO}(\boldsymbol{\mathcal{X}}_{j})=2N,\ j=1,2,\ldots,J, \tag{35a}\] \[\sum_{j=1}^{J}\text{Tr}\left(\boldsymbol{\mathcal{X}}_{j}^{H}\boldsymbol{\mathcal{X}}_{j}\right)=MJ. \tag{35b}\]

#### IV-B1 MC figure of merit

Similar to the case of the MI-aided MC design, the MC-constrained PEP is employed to guide the design of the MC. Specifically, the MC design is formulated as \[\boldsymbol{\mathcal{C}}_{MC}^{*}=\arg\max\ \underbrace{\min_{i\neq l}\prod_{n\in\rho(\mathbf{c}_{i},\mathbf{c}_{l})}|c_{n,i}-c_{n,l}|^{2}}_{d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}},\quad\text{subject to}\quad\text{Tr}\left(\boldsymbol{\mathcal{C}}_{MC}^{H}\boldsymbol{\mathcal{C}}_{MC}\right)=NM,\quad|\rho(\mathbf{c}_{i},\mathbf{c}_{l})|=N, \tag{36}\] where \(\rho(\mathbf{c}_{i},\mathbf{c}_{l})\) denotes the set of indices in which \(c_{n,i}\neq c_{n,l}\), and \(d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}\) is the MPD of \(\boldsymbol{\mathcal{C}}_{MC}\). The rationale for employing \(d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}\) is that an MC with large \(d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}\) can also enlarge \(d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}_{j}}\) after applying the user-specific operations, such as phase rotation. In addition, it is well known that improving \(d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}\) also improves the ABER of a C-SCMA system.

#### IV-B2 Labeling figure of merit

The labeling rules should be designed to minimize the MC-constrained PEP. Under Rayleigh fading channels, the labeling metric is given as [20] \[\Pi_{\text{R}}(\xi_{j})=\sum_{i=1}^{M-1}\sum_{l=i+1}^{M}N_{i,l}(\xi_{j})\frac{1}{d_{\mathrm{P},i,l}^{\boldsymbol{\mathcal{X}}_{j}}}, \tag{37}\] where \(N_{i,l}(\xi_{j})\) denotes the number of differing labeling bits between \(\mathbf{x}_{j,i}\) and \(\mathbf{x}_{j,l}\) under the considered labeling rule \(\xi_{j}\).
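The labeling metric (37) can be evaluated directly from the pairwise product distances. A sketch, with the label-array layout assumed:

```python
import numpy as np
from itertools import combinations

def labeling_metric(Xj, labels, tol=1e-9):
    """Rayleigh labeling metric (37) for one user's codebook.

    Xj     : (K, M) complex array of codewords (columns).
    labels : (M, log2(M)) binary array; labels[i] is the bit label of
             codeword i under the rule xi_j.
    """
    M = Xj.shape[1]
    metric = 0.0
    for i, l in combinations(range(M), 2):
        d = Xj[:, i] - Xj[:, l]
        re, im = np.abs(d.real), np.abs(d.imag)
        # Pairwise product distance d_P,i,l of (32).
        d_p = np.prod(re[re > tol] ** 2) * np.prod(im[im > tol] ** 2)
        n_il = np.sum(labels[i] != labels[l])   # differing label bits
        metric += n_il / d_p
    return metric
```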
### _Asymptotic Relationship Between the P-CB and AMI-CB_

First, let us look at the AMI lower bound \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\) in (13), whose value depends on \(\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{K}\gamma_{k,m,p}\). For sufficiently high SNR, we have \[\gamma_{k,m,p}\approx\prod_{l\in\{\text{I},\text{Q}\}}\left(\frac{\left|\sum\limits_{j\in\phi_{k}}x_{j,m,l}[k]-x_{j,p,l}[k]\right|^{2}}{4N_{0}}\right)^{-1}. \tag{38}\] The term \(\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{K}\gamma_{k,m,p}\) is dominated by its most significant term. Thus, for \(N_{0}\to 0\), we have the following lower bound: \[\sum_{m=1}^{M^{J}}\sum_{p=1}^{M^{J}}\prod_{k=1}^{K}\gamma_{k,m,p}\geq\frac{1}{M^{J}}\max_{1\leq m,p\leq M^{J}}\prod_{k=1}^{K}\gamma_{k,m,p}. \tag{39}\] It is noted that the right-hand side of (39) essentially corresponds to the MMPD of the codebook, i.e., \(d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}\) in (33). Hence, maximizing the \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\) of a codebook is equivalent to maximizing its \(d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}\) at sufficiently high SNR. _Remark 4:_ Similar results can be obtained for the C-SCMA system. Namely, a codebook with a larger MPD results in a higher \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\), with \(\gamma_{k,m,p}\) given in (18), at sufficiently high SNR values.

## V Multidimensional codebook design: implementation issues

This section presents the detailed sparse codebook design for SSD-SCMA based on the design metrics proposed in Section IV. Specifically, Subsection V-A first introduces a constellation superposition scheme, which generates multiple sparse codebooks based on the MC. Then, we present the solutions of \(\mathcal{P}_{1-1}\) and \(\mathcal{P}_{2-1}\), i.e., (27) and (35), in Subsections V-B and V-C, respectively.

### _Constellation Superposition Scheme_

As mentioned, multiple sparse codebooks are generated from a common MC (\(\boldsymbol{\mathcal{C}}_{MC}\in\mathbb{C}^{N\times M}\)) by user-specific operations. The detailed design of \(\boldsymbol{\mathcal{C}}_{MC}\) will be discussed later. Once \(\boldsymbol{\mathcal{C}}_{MC}\) is determined, phase rotations are applied to design the multi-dimensional constellations of the different users. Therefore, the \(j\)th user's codebook with non-zero elements is generated as \(\boldsymbol{\mathcal{C}}_{j}=\mathbf{R}_{j}\boldsymbol{\mathcal{C}}_{MC}\), where \(\mathbf{R}_{j}\) denotes the diagonal phase rotation matrix of the \(j\)th user. For example, for a \(\boldsymbol{\mathcal{C}}_{MC}\) with \(N=2\), we have \(\mathbf{R}_{j}=\begin{bmatrix}e^{j\theta_{1}}&0\\ 0&e^{j\theta_{2}}\end{bmatrix}\). Based on the mapping matrix \(\mathbf{V}_{j}\), the \(j\)th user's codebook can now be generated as \(\boldsymbol{\mathcal{X}}_{j}=\mathbf{V}_{j}\mathbf{R}_{j}\boldsymbol{\mathcal{C}}_{MC}\). \(\mathbf{V}_{j}\) can be constructed based on the \(j\)th column of \(\mathbf{F}\). Specifically, according to the positions of the '0' elements of \(\mathbf{f}_{j}\), we insert all-zero row vectors into the identity matrix \(\mathbf{I}_{N}\). For example, for the \(\mathbf{F}\) in Fig. 1, we have \[\mathbf{V}_{1}=\begin{bmatrix}0&0\\ 1&0\\ 0&0\\ 0&1\end{bmatrix},\quad\mathbf{V}_{2}=\begin{bmatrix}1&0\\ 0&0\\ 0&1\\ 0&0\end{bmatrix}. \tag{40}\] It is noted that the phase rotation matrix \(\mathbf{R}_{j}\) and the mapping matrix \(\mathbf{V}_{j}\) can be combined into a column vector, i.e., \(\mathbf{s}^{j}=\mathbf{V}_{j}\mathbf{R}_{j}\mathbf{1}_{N}\), where \(\mathbf{1}_{N}\) denotes the all-one vector of length \(N\). Hence, the codebooks of the \(J\) users can be represented by the signature matrix \(\mathbf{S}_{K\times J}=\left[\mathbf{s}^{1},\mathbf{s}^{2},\ldots,\mathbf{s}^{J}\right]\). In this paper, we consider the following signature matrix: \[\mathbf{S}_{4\times 6}=\begin{bmatrix}0&e^{j\theta_{3}}&e^{j\theta_{1}}&0&e^{j\theta_{2}}&0\\ e^{j\theta_{2}}&0&e^{j\theta_{3}}&0&0&e^{j\theta_{1}}\\ 0&e^{j\theta_{2}}&0&e^{j\theta_{1}}&0&e^{j\theta_{3}}\\ e^{j\theta_{1}}&0&0&e^{j\theta_{2}}&e^{j\theta_{3}}&0\end{bmatrix}. \tag{41}\]
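A minimal sketch of this superposition scheme follows. The factor-graph column `f1` in the usage example is an assumption read off column 1 of (41), not a definitive transcription of Fig. 1, and the mother constellation shown is a placeholder:

```python
import numpy as np

def mapping_matrix(f_j, N):
    """Build V_j: start from I_N and insert all-zero rows at the zeros of f_j."""
    V = np.zeros((f_j.size, N))
    V[f_j != 0, :] = np.eye(N)
    return V

def user_codebook(f_j, thetas, C_mc):
    """Sparse codebook X_j = V_j R_j C_MC of one user."""
    V = mapping_matrix(f_j, C_mc.shape[0])
    R = np.diag(np.exp(1j * np.asarray(thetas)))   # diagonal phase rotations
    return V @ R @ C_mc

# Usage: user 1 of (41) occupies resources 2 and 4 (column 1 of S_{4x6}).
f1 = np.array([0, 1, 0, 1])
C_mc = np.array([[1 + 1j, -1 - 1j, 1 - 1j, -1 + 1j],
                 [1.0, -1.0, 1j, -1j]])            # placeholder 2 x 4 MC
X1 = user_codebook(f1, [0.3, -0.3], C_mc)
```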
### _Design of AMI-CB_

#### V-B1 Proposed Design of \(\boldsymbol{\mathcal{C}}_{MC}\)

GAM is a novel, shape-versatile and circularly symmetric modulation scheme which can offer enhanced MI performance over pulse-amplitude modulation and square quadrature amplitude modulation (QAM) designs. This motivates us to employ GAM as the basic MC to design the AMI-CBs. In an \(N_{p}\)-point disc-shaped GAM, the \(n\)th constellation point is generated according to \(a_{n}=r_{n}e^{i2\pi\varphi n}\), where \(r_{n}=c_{\text{norm}}\sqrt{n}\), \(c_{\text{norm}}=\sqrt{\frac{P^{2}}{N_{p}+1}}\), \(P\) is the power constraint, and \(\varphi=\frac{1-\sqrt{5}}{2}\) is the golden angle in radians. The \((\xi,\psi)\)-GAM is defined as \(a_{n}=c_{\text{norm}}\sqrt{n+\xi}e^{i2\pi(\varphi+\psi)n}\) [19], where \(\xi\) and \(\psi\) are the two parameters of the MC to be optimized. Here, we propose an enhanced scheme to construct the MC based on GAM, termed E-GAM. The design scheme mainly includes the following three steps, and the detailed process is given in **Algorithm 1**.

_Step 1_. Generate the \(N\)-dimensional constellation based on GAM, denoted as \(\mathcal{A}^{M}=[\mathbf{a}_{1}^{\mathcal{T}},\mathbf{a}_{2}^{\mathcal{T}},\ldots,\mathbf{a}_{N}^{\mathcal{T}}]^{\mathcal{T}}\in\mathbb{C}^{N\times M}\).

_Step 2_. Perform permutations to obtain a higher gain based on the criterion given in (29). For \(n=1,\ldots,N\), let \(\boldsymbol{\pi}_{n}:\{1,\ldots,M\}\rightarrow\{1,\ldots,M\}\) denote the permutation mapping of dimension \(n\); namely, \(\pi_{n}\left(\mathbf{a}_{n}\right)\) is the permutation operation on the \(n\)th dimension of \(\mathcal{A}^{M}\). Then the \(N\)-dimensional constellation can be obtained as \[\boldsymbol{\mathcal{C}}_{MC}=\left[\pi_{1}\left(\mathbf{a}_{1}^{\mathcal{T}}\right),\pi_{2}\left(\mathbf{a}_{2}^{\mathcal{T}}\right),\ldots,\pi_{N}\left(\mathbf{a}_{N}^{\mathcal{T}}\right)\right]^{\mathcal{T}}. \tag{42}\] The construction of an \(N\)-dimensional constellation is thus equivalent to finding the \(N\) permutations \(\pi_{n},n=1,2,\ldots,N\), which can be solved efficiently by using the binary switching algorithm (BSA) [37, 38].

_Step 3_. Dimension switching. Note that, unlike the conventional codebook design [20], where a basic one-dimensional constellation is repeated in each dimension of the MC, \(\boldsymbol{\mathcal{C}}_{MC}\) is directly designed based on GAM, which introduces an inherent power difference among the \(N\) dimensions. For the \(n\)th dimension, the total energy is \[E_{n}=c_{\text{norm}}^{2}M\left(\xi+n+N\left(\frac{M}{2}-1\right)\right), \tag{43}\] and the power difference between two consecutive dimensions is \(E_{n+1}-E_{n}=c_{\text{norm}}^{2}M,1\leq n\leq N-1\). To further exploit this power difference in the sparse codebooks, dimension switching is applied to \(\boldsymbol{\mathcal{C}}_{MC}\) for some users. Specifically, let \(\mathbf{c}_{n}\in\mathbb{C}^{1\times M}\) be the \(n\)th row of \(\boldsymbol{\mathcal{C}}_{MC}\); the switched MC can then be written as \[\boldsymbol{\mathcal{C}}_{MC}^{\prime}=\left[\mathbf{c}_{N}^{\mathcal{T}},\mathbf{c}_{N-1}^{\mathcal{T}},\ldots,\mathbf{c}_{1}^{\mathcal{T}}\right]^{\mathcal{T}}. \tag{44}\]
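A sketch of Steps 1-3 is given below. It assumes that the \(N_{p}=NM\) GAM points are interleaved across the \(N\) dimensions, which reproduces the per-dimension energy ordering behind (43) up to indexing conventions; the BSA permutations of Step 2 are replaced here by caller-supplied (identity) permutations:

```python
import numpy as np

def gam_points(Np, xi=0.0, psi=0.0, P=1.0):
    """(xi, psi)-GAM: a_n = c_norm * sqrt(n + xi) * exp(i*2*pi*(phi + psi)*n)."""
    phi = (1 - np.sqrt(5)) / 2                 # golden angle in radians
    n = np.arange(1, Np + 1)
    c_norm = np.sqrt(P ** 2 / (Np + 1))
    return c_norm * np.sqrt(n + xi) * np.exp(1j * 2 * np.pi * (phi + psi) * n)

def egam_mc(M, N, xi, psi, perms, switch=False):
    """E-GAM mother constellation: interleave the N*M GAM points over the N
    dimensions (Step 1, assumed layout), permute each dimension (Step 2),
    and optionally reverse the row order, i.e. dimension switching per (44)
    (Step 3)."""
    pts = gam_points(N * M, xi, psi)
    C = np.vstack([pts[n::N][perms[n]] for n in range(N)])
    return C[::-1].copy() if switch else C

# Identity permutations shown; in practice they come from the BSA.
C_mc = egam_mc(M=4, N=2, xi=0.66, psi=-0.53, perms=[np.arange(4)] * 2)
```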
_Remark 5_: The major differences between the proposed E-GAM and [19] are: (1) the proposed E-GAM avoids the complex mapping from the GAM points to the MC through a mapping table; (2) a mathematical design criterion is incorporated in the proposed E-GAM, which is missing in [19]; (3) a novel dimension switching scheme is further proposed to improve the power diversity. _2) Generate users' sparse codebooks:_ Based on \(\boldsymbol{\mathcal{C}}_{MC}\) and \(\boldsymbol{\mathcal{C}}_{MC}^{\prime}\), user \(j\)'s sparse codebook is obtained as \[\boldsymbol{\mathcal{X}}_{j}=\left\{\begin{matrix}\mathbf{V}_{j}\mathbf{R}_{j}\boldsymbol{\mathcal{C}}_{MC},&j\text{ is odd,}\\ \mathbf{V}_{j}\mathbf{R}_{j}\boldsymbol{\mathcal{C}}_{MC}^{\prime},&j\text{ is even.}\end{matrix}\right. \tag{45}\] (45) introduces a row-based energy difference in (41) through \(E_{1}\neq E_{2}\), which is useful for improving the distance profile of the superimposed codewords. For example, according to (41), users \(2\), \(3\), and \(5\) superimpose over the first resource node with individual energies of \(E_{2}\), \(E_{1}\), and \(E_{1}\), respectively. The MC parameters \(\xi,\psi\) and the rotation angles \(\theta_{i},i=1,2,\ldots,d_{f}\), should be optimized according to (27). Let \(\boldsymbol{\Theta}=\left[\theta_{1},\theta_{2},\ldots,\theta_{d_{f}},\xi,\psi\right]^{\mathcal{T}}\) denote all the parameters to be optimized. Based on the proposed MC and constellation superposition scheme, the optimization problem \(\mathcal{P}_{1-1}\) is reformulated as \[\mathcal{P}_{1-2}:\boldsymbol{\Theta}^{*}=\arg\max_{\boldsymbol{\Theta}}\quad v_{\mathrm{Obj}}(\boldsymbol{\Theta})=\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}} \tag{46}\] \[\text{subject to}\quad v_{i}(\boldsymbol{\Theta})=\left\{\begin{array}{l}-\theta_{i}\leq 0\\ \theta_{i}-\pi\leq 0,\ i=1,2,\ldots,d_{f},\end{array}\right.\] \[v_{d_{f}+1}(\boldsymbol{\Theta})=\left\{\begin{array}{l}-\frac{M}{2}-\xi\leq 0\\ \xi-\frac{M}{2}\leq 0,\end{array}\right.\] \[v_{d_{f}+2}(\boldsymbol{\Theta})=\left\{\begin{array}{l}-\frac{\pi}{M}-\psi\leq 0\\ \psi-\frac{\pi}{M}\leq 0.\end{array}\right.\] Obviously, there are \(d_{f}+2\) parameters to be optimized in (46). Unfortunately, \(v_{\mathrm{Obj}}(\boldsymbol{\Theta})\) is a non-convex function, and its computational cost is also high, especially for \(M\geq 8\). To solve (46), we propose a sample-based primal-dual IPM with random initial values.
We first transform (46) into a standard barrier problem with the perturbed Karush-Kuhn-Tucker conditions given by [39] \[\nabla v_{\mathrm{Obj}}(\boldsymbol{\Theta})+\sum\nolimits_{p=1}^{d_{f}+2}u_{p}\nabla v_{p}(\boldsymbol{\Theta})=0,\quad u_{p}v_{p}(\boldsymbol{\Theta})=-1/t,\quad u_{p}\geq 0,\ v_{p}(\boldsymbol{\Theta})\leq 0,\ p=1,\ldots,d_{f}+2, \tag{47}\] where \(t>0\) is the barrier parameter.

**Algorithm 2** AMI-CB design based on the AMI lower bound \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\)

```
0:  J, K, V_j, M, N_0
1:  for i_1 = 1 : I_1 do
2:    Randomly choose an initial value of Theta that satisfies the constraints.
3:    for i_2 = 1 : I_2 do
4:      For the given xi, psi in Theta, generate C_MC based on Algorithm 1.
5:      Perform dimension switching and generate X_j according to (45).
6:      Compute (48) with the proposed sub-optimal estimation, and obtain the
        update directions Delta-Theta, Delta-u.
7:      Use a line search to determine the update factor varsigma by minimizing
        ||r_dual|| + ||r_cent||, and then update Theta = Theta + varsigma * Delta-Theta,
        u = u + varsigma * Delta-u.
8:    end for
9:    Preserve the current results of Theta and v_Obj(Theta).
10: end for
11: Choose the best result Theta* , i.e., the one with the maximum value of
    v_Obj(Theta*), and generate X_j based on Theta*.
12: Perform the bit labeling for X_j based on the criterion given in (30).
```

**Algorithm 3** P-CB design based on the PEP upper bound in (24)

### _Design of P-CB_

#### V-C1 Design of \(\boldsymbol{\mathcal{C}}_{MC}\)

In contrast to the case of the AMI-based MC design, an MC with a large MPD, i.e., \(d_{\mathrm{MPD}}^{\boldsymbol{\mathcal{C}}_{MC}}\), is required for the PEP-based design. To this end, we first choose a one-dimensional basic constellation with a large MED, denoted as \(\mathbf{p}\), and then permute \(\mathbf{p}\) to obtain \(\boldsymbol{\mathcal{C}}_{MC}\).
Namely, the \(N\)-dimensional constellation can be obtained as \[\boldsymbol{\mathcal{C}}_{MC}=\left[\pi_{1}\left(\mathbf{p}\right),\pi_{2}\left(\mathbf{p}\right),\ldots,\pi_{N}\left(\mathbf{p}\right)\right]^{\mathcal{T}}. \tag{50}\] Similar to the AMI-based codebook design, the permutations are obtained by performing the BSA according to the criterion given in (36).

#### V-C2 Generate sparse codebooks

Multiple sparse codebooks are generated by \(\boldsymbol{\mathcal{X}}_{j}=\mathbf{V}_{j}\mathbf{R}_{j}\boldsymbol{\mathcal{C}}_{MC}\) based on the signature matrix given in (41). Let \(p_{i}\) denote the \(i\)th signal point in \(\mathbf{p}\). Based on (41), the superimposed constellation on a resource node can be obtained as \[\boldsymbol{\mathcal{S}}_{\text{sum}}=\Big{\{}p_{i}^{(1)}e^{j\theta_{1}}+p_{i}^{(2)}e^{j\theta_{2}}+\ldots+p_{i}^{(d_{f})}e^{j\theta_{d_{f}}}\ \Big{|}\ \forall p_{i}^{(l)}\in\mathbf{p},\ l=1,2,\ldots,d_{f}\Big{\}}. \tag{51}\] Accordingly, the \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\) in (34) can be simplified to \[\tau_{\min}^{\boldsymbol{\mathcal{X}}}=\min_{m\neq n}\big{\{}\min\big{\{}|\mathfrak{Re}(s_{m}-s_{n})|^{2},|\mathfrak{Im}(s_{m}-s_{n})|^{2}\big{\}}\big{\}}, \tag{52}\] where \(s_{m}\) is the \(m\)th point of \(\boldsymbol{\mathcal{S}}_{\text{sum}}\). According to (35), the rotation angles \(\boldsymbol{\theta}=\big{[}\theta_{1},\theta_{2},\ldots,\theta_{d_{f}}\big{]}\) should be designed to achieve full DO, a large MMPD \(\big{(}d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}\big{)}\), and a large \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\), i.e., \[\mathcal{P}_{2-2}:\boldsymbol{\theta}^{*}=\arg\max_{\boldsymbol{\theta}}\ \Big{\{}d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}},\tau_{\min}^{\boldsymbol{\mathcal{X}}}\Big{\}}\quad\text{subject to (35a) and (35b)}. \tag{53}\] Since it is difficult to maximize the two objectives jointly, it is preferable to maximize \(d_{\mathrm{MMPD}}^{\boldsymbol{\mathcal{X}}}\) first and then improve \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\). Hence, we consider a feasible way of transforming (53) into a multi-stage design problem. Specifically, the optimal rotation angles that achieve full DO and the optimal MMPD of \(\mathbf{p}\) are first obtained. Then, the remaining angles are determined by maximizing \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\). The detailed design process is summarized in **Algorithm 3**.

#### V-C3 Perform bit labeling

Similar to the AMI-based labeling scheme, the bit labeling is performed according to the metric given in (37).
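The quantities driving the multi-stage search of Algorithm 3 can be evaluated as follows; the sketch enumerates all \(M^{d_{f}}\) superpositions of (51), which is practical only for small \(M\) and \(d_{f}\):

```python
import numpy as np
from itertools import product

def superimposed_constellation(p, thetas):
    """All superimposed points of (51) on one resource node."""
    rotated = [np.asarray(p) * np.exp(1j * th) for th in thetas]
    return np.array([sum(combo) for combo in product(*rotated)])

def tau_min(p, thetas):
    """Minimum element-wise distance tau_min of (52)."""
    s = superimposed_constellation(p, thetas)
    d = s[:, None] - s[None, :]
    mask = ~np.eye(s.size, dtype=bool)       # exclude m == n
    dmin2 = np.minimum(np.real(d) ** 2, np.imag(d) ** 2)
    return np.min(dmin2[mask])
```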
## VI Numerical results

In this section, we conduct numerical evaluations of the proposed SSD-SCMA in both uncoded and coded systems with various codebooks. We first evaluate the computational complexity of the proposed codebook design schemes in Subsection VI-A. Then, Subsection VI-B compares the proposed PEP upper bound with the exact ABER, and the AMI lower bound with the exact AMI. The proposed AMI-CBs and P-CBs are presented in Subsection VI-C. Accordingly, with the proposed codebooks, we evaluate the BER performance of SSD-SCMA and C-SCMA in both uncoded and BICM-IDD systems in Subsection VI-D. The indicator matrix presented in Fig. 1 with \(K=4\), \(J=6\) and \(\lambda=150\%\) is employed. For comparison, we consider the original GAM (OGAM) codebook [19], the StarQAM codebook [18], Chen's codebook [20]2, Huang's codebook [23] and Jiang's codebook [28], because all these codebooks are designed for downlink channels and achieve good BER performance. Footnote 2: For \(M=8\), the NS-QAM proposed in [20] is employed.

### _Computational Complexity_

#### VI-A1 AMI-CB design

As discussed in Subsection III-A, the direct calculation of the AMI in (7) is intractable and is often carried out by the tedious Monte Carlo method. In general, at least \(N_{\text{s}}=10^{3}\) noise and channel samples are required to accurately estimate the AMI and its gradient at each iteration. The resulting computational complexity of the AMI can be approximated as \(\mathcal{O}\left(N_{\text{s}}M^{2J}\right)\). In contrast, the computational complexity of the proposed lower bound can be approximated as \(\mathcal{O}\left(M^{2J}\right)\), which is significantly smaller. In addition, compared to the codebook design scheme proposed in [28], our proposed lower bound of the AMI in Rayleigh fading channels has a closed-form expression and involves simpler \(\log(\cdot)\) computations.

#### VI-A2 P-CB design

The computational complexity of calculating the MMPD is negligible, and the main computational complexity of **Algorithm 3** is imposed by the operations of permutation, labeling, and the maximization of \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\). With the aid of the BSA, the computational complexity of permutation and labeling can be reduced to \(\mathcal{O}\left(M^{2}\right)\). In addition, the complexity of determining \(\theta_{3}\) by maximizing \(\tau_{\min}^{\boldsymbol{\mathcal{X}}}\) can be approximated as \(\mathcal{O}\left(KM^{d_{f}}\right)\) [20]. In general, the design complexity of the P-CBs is lower than that of the AMI-CBs.

### _Evaluations of PEP upper bound and AMI lower bound_

We now compare the Chernoff bound with the proposed PEP upper bound in (21) and (26) by employing the StarQAM codebook in SSD-SCMA. As shown in Fig. 4, for \(M=4\), the proposed upper bound, i.e., the line "Prop.", is close to the simulated ABER at high SNR, while the Chernoff bound is about \(2\) dB away from the simulated ABER. For \(M=8\), the Chernoff bound is about \(3\) dB away from the simulated ABER; however, the proposed PEP upper bound still holds very tight to the simulated ABER in the high \(\text{E}_{\text{b}}/\text{N}_{0}\) regime. As discussed in Subsection III-A, the direct calculation of the AMI in (7) is intractable, and hence the Monte Carlo method is applied. Supposing that about \(N_{\text{s}}\) noise and channel samples are required to accurately estimate the AMI, the computational complexity can be approximated as \(\mathcal{O}\left(N_{\text{s}}M^{2J}\right)\), which may not be affordable for \(M\geq 8\). As mentioned in _Remark 2_, there exists a constant gap between the AMI and the proposed lower bound, and this can be leveraged for AMI analysis. Fig. 5 shows the AMI obtained by Monte Carlo simulation and the proposed AMI lower bound with a constant shift for both SSD-SCMA and C-SCMA with \(M=4\). One can see that the proposed AMI lower bound plus a constant fits the simulated AMI well over the low and high SNR ranges. For the middle SNR range, there exists a small gap between the estimated AMI and the shifted AMI lower bound. However, it is still effective to improve the AMI by maximizing the lower bound.

### _The Proposed Codebooks_

#### VI-C1 The proposed P-CBs

We consider the following one-dimensional basic constellations \(\mathbf{p}\) for the PEP-based codebooks: quadrature phase shift keying (QPSK), non-square quadrature amplitude modulation (NS-QAM), and the constellation drawn from a lattice of equilateral triangles [40], denoted as \(M\)-TRI.
The MED of each basic constellation, denoted by \(d_{\min}\), is also given. Based on QPSK, \(4\)-TRI, NS-QAM and \(8\)-TRI, the resultant codebooks are denoted as P-CB1, P-CB2, P-CB3 and P-CB4, respectively. The labeling is carried out at \(\text{E}_{\text{b}}/\text{N}_{\text{o}}=15\) dB for both \(M=4\) and \(M=8\). The rotation angles and the MMPDs corresponding to \(\theta_{\text{opt}}\) are summarized in Table I.

\begin{table}
\begin{tabular}{c|c|c|c}
\hline
Codebook & Basic constellation & \([\theta_{1},\theta_{2},\theta_{3}]\) & MMPD \\
\hline
P-CB1 & QPSK & \([0.172\pi,-0.172\pi,0.068\pi]\) & 0.86 \\
P-CB2 & 4-TRI & \([0.083\pi,-0.083\pi,0.336\pi]\) & 0.50 \\
P-CB3 & 8-NS-QAM & \([0.072\pi,-0.072\pi,0.125\pi]\) & 0.35 \\
P-CB4 & 8-TRI & \([0.075\pi,-0.075\pi,0.378\pi]\) & 0.36 \\
\hline
\end{tabular}
\end{table} TABLE I: The proposed P-CBs.

_2) The proposed AMI-CBs:_ In Algorithm 2, we set \(I_{1}=10\) and \(I_{2}=25\) to generate the AMI-CBs. Table II compares the \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\) shifted by the constant \(K\left(1/\ln 2-1\right)\) for various codebooks. The proposed AMI-CBs achieve larger values of \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\), especially for the SSD-SCMA system. As discussed in Section IV-C, a codebook with a larger MMPD (MPD) yields a larger \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\) for SSD-SCMA (conventional SCMA). We now give an example to illustrate this relationship. Considering the C-SCMA system, the MPDs of the proposed P-CB1, Chen's codebook and the StarQAM codebook are \(1\), \(1\) and \(0.72\), respectively, whereas the shifted \(\mathcal{I}_{LB}^{\boldsymbol{\mathcal{X}}}\) values of these codebooks at \(\text{E}_{\text{b}}/\text{N}_{\text{o}}=12\) dB are \(11.89\), \(11.89\) and \(11.78\), respectively. However, this may not hold for low and mid-range SNRs, as can be seen in Table II. In general, the asymptotic relationship holds for \(\text{E}_{\text{b}}/\text{N}_{\text{o}}\geq 12\) dB and \(\text{E}_{\text{b}}/\text{N}_{\text{o}}\geq 16\) dB for \(M=4\) and \(M=8\), respectively. The optimized parameters \([\psi,\xi,\theta_{1},\theta_{2},\theta_{3}]\) for AMI-CB1 and AMI-CB2 are \([-0.525,0.659,-0.109,0.272,0.555]\) and \([-0.452,0.700,-0.131,0.152,0.455]\), respectively. Interested readers can find the proposed AMI-CB1 in Appendix A, and more relevant results at our GitHub project3. Footnote 3: [https://github.com/ethanlab/SCMA-codebook](https://github.com/ethanlab/SCMA-codebook)

### _BER performance of the proposed SSD-SCMA_

_1) Uncoded system:_ We first evaluate the uncoded BER performance of the SSD-SCMA system with P-CBs, as shown in Fig. 7 and Fig. 8. The dashed lines denote the BERs of conventional SCMA, whereas the solid lines are those of the proposed SSD-SCMA. The main observations are summarized as follows: * The proposed SSD-SCMA outperforms C-SCMA for both \(M=4\) and \(M=8\). For \(M=4\), a gain of \(5.5\) dB is observed for SSD-SCMA with the proposed P-CB1 over C-SCMA with Chen's codebook at BER \(=10^{-5}\), whereas a \(4\) dB gain is achieved for \(M=8\) with the proposed P-CB3. In addition, SSD-SCMA exhibits steeper BER slopes than C-SCMA due to the larger DO of the former. * Not all codebooks achieve such large gains in the SSD-SCMA system. For example, only a \(2\) dB gain is observed for the Star-QAM codebook. Interestingly, the proposed SSD-SCMA can improve the BER performance of Huang's codebook by \(6\) dB.
It is noted that the advantage of Huang's codebook lies in Gaussian channels; however, we show that its BER performance is significantly improved by the proposed SSD-SCMA over Rayleigh fading channels. * The proposed P-CB1 and P-CB3 achieve the best BER performance for both the SSD-SCMA and C-SCMA systems, owing to the well-optimized MPD, MMPD, DO and labeling.

Fig. 4: ABER against various upper bounds for the SSD-SCMA system.

Fig. 5: The AMI and the lower bound of the AMI for SSD-SCMA and C-SCMA systems.

Fig. 6: The adopted basic one-dimensional constellations \(\mathbf{p}\) with large MED (\(d_{\min}\)). The resultant codebooks are respectively denoted as P-CB1, P-CB2, P-CB3 and P-CB4.

#### VI-C2 BICM-IDD system

Next, we evaluate the BER performance of coded SSD-SCMA with AMI-CBs under the BICM-IDD receiver structure. Specifically, the 5G NR LDPC code specified in TS38.212 [41], with a block length of \(1024\) and a rate of \(5/6\), is considered. The Eb/No for optimizing a codebook is determined according to the code rate [9]. Specifically, the Eb/No that achieves an AMI of \(rJ\log(M)\) is considered, where \(r\) denotes the code rate. Namely, for a code rate of \(5/6\), we employ the codebooks of \(M=4\) and \(M=8\) that are optimized at \(\text{Eb/No}=4\) dB and \(\text{Eb/No}=12\) dB, respectively. The number of MPA iterations is \(3\), the maximum number of LDPC decoding iterations is \(25\), and the number of BICM-IDD iterations is \(4\). Fig. 9 and Fig. 10 show the block error rate (BLER) performances of various codebooks for \(M=4\) and \(M=8\), respectively. The main observations are summarized as follows: * The proposed SSD-SCMA with the proposed AMI-CBs achieves the best BLER performance among all benchmarking codebooks due to its optimized AMI and labeling. Specifically, the proposed SSD-SCMA with AMI-CB1 and AMI-CB2 achieves about \(1\) dB and \(0.75\) dB gains compared to C-SCMA with the StarQAM codebook for \(M=4\) and \(M=8\), respectively. * It is worth mentioning that a codebook that achieves better BER performance in an uncoded system may not outperform in a BICM-IDD system, and vice versa. For example, the proposed P-CB1 and Chen's codebook achieve better BER performance than the Star-QAM and OGAM codebooks and the proposed AMI-CB1 in uncoded C-SCMA with \(M=4\); however, their error performances deteriorate under the BICM-IDD system. This is because the performance metrics and codebook design criteria differ between the two settings.

Fig. 7: ABER comparison between SSD-SCMA and C-SCMA systems with various codebooks for \(M=4\).

Fig. 8: ABER comparison between SSD-SCMA and C-SCMA systems with various codebooks for \(M=8\).

Fig. 9: BLER comparison between SSD-SCMA and C-SCMA systems with various codebooks for \(M=4\).

Fig. 10: BLER comparison between SSD-SCMA and C-SCMA systems with various codebooks for \(M=8\).

## VII Conclusions

In this paper, we have introduced a novel SSD-SCMA that can significantly improve the error performance of SCMA in downlink Rayleigh fading channels. We have analyzed the AMI and PEP of SSD-SCMA, and derived the AMI lower bound and the PEP upper bound. Based on the proposed bounds, systematic codebook design metrics have been established for both AMI-CBs and P-CBs. The asymptotic relationship between the two codebook designs based on PEP and AMI has been revealed. Furthermore, a novel E-GAM has been proposed as the MC to design the AMI-CBs, whereas an efficient approach based on permuting a basic constellation has been introduced to design the P-CBs.
Numerical results have demonstrated the advantages of the proposed SSD-SCMA with the proposed AMI-CBs and P-CBs in both uncoded and BICM-IDD systems.

## Appendix A The proposed codebooks

The entries of the proposed AMI-CB1, as well as the other proposed codebooks, can be found at: [https://github.com/ethaniq/SCMA-codebook](https://github.com/ethaniq/SCMA-codebook); the codebooks can also be reconstructed from the parameters reported in Section VI. For \(M=4\), the four columns of \(\boldsymbol{\mathcal{X}}_{j},j=1,2,\ldots,J\), denote the codewords labelled by \(00\), \(01\), \(10\), and \(11\), respectively.
2310.09122
Equirectangular image construction method for standard CNNs for Semantic Segmentation
360° spherical images have the advantage of a wide field of view, and are typically projected onto a plane for processing; the result is known as an equirectangular image. Object shapes in equirectangular images can be distorted and lack translation invariance. In addition, there are few publicly available datasets of equirectangular images with labels, which presents a challenge for standard CNN models to process equirectangular images effectively. To tackle this problem, we propose a methodology for converting a perspective image into an equirectangular image. The inverse transformation of the spherical center projection and the equidistant cylindrical projection are employed. This enables standard CNNs to learn the distortion features at different positions in the equirectangular image and thereby gain the ability to semantically segment the equirectangular image. The parameter φ, which determines the projection position of the perspective image, has been analyzed using various datasets and models, such as UNet, UNet++, SegNet, PSPNet, and DeepLab v3+. The experiments demonstrate that an optimal value of φ for effective semantic segmentation of equirectangular images is 6π/16 for standard CNNs. Compared with the other three types of methods (supervised learning, unsupervised learning and data augmentation), the method proposed in this paper has the best average IoU value of 43.76%. This value is 23.85%, 10.7% and 17.23% higher than those of the other three methods, respectively.
Haoqian Chen, Jian Liu, Minghe Li, Kaiwen Jiang, Ziheng Xu, Rencheng Sun, Yi Sui
2023-10-13T14:11:33Z
http://arxiv.org/abs/2310.09122v1
# Equirectangular image construction method for standard CNNs for Semantic Segmentation

###### Abstract

360\({}^{\circ}\) spherical images have the advantage of a wide field of view, and are typically projected onto a plane for processing; the result is known as an equirectangular image. Object shapes in equirectangular images can be distorted and lack translation invariance. In addition, there are few publicly available datasets of equirectangular images with labels, which presents a challenge for standard CNN models to process equirectangular images effectively. To tackle this problem, we propose a methodology for converting a perspective image into an equirectangular image. The inverse transformation of the spherical center projection and the equidistant cylindrical projection are employed. This enables standard CNNs to learn the distortion features at different positions in the equirectangular image and thereby gain the ability to semantically segment the equirectangular image. The parameter \(\varphi\), which determines the projection position of the perspective image, has been analyzed using various datasets and models, such as UNet, UNet++, SegNet, PSPNet, and DeepLab v3+. The experiments demonstrate that an optimal value of \(\varphi\) for effective semantic segmentation of equirectangular images is 6\(\pi\)/16 for standard CNNs. Compared with the other three types of methods (supervised learning, unsupervised learning and data augmentation), the method proposed in this paper has the best average IoU value of 43.76%. This value is 23.85%, 10.7% and 17.23% higher than those of the other three methods, respectively.

Keywords: 360\({}^{\circ}\) spherical images; Equirectangular images; Semantic segmentation; Standard CNNs

## 1 Introduction

360\({}^{\circ}\) spherical images have increasingly wide applications in the fields of automated driving, drones, VR, and more [1]. Equidistant cylindrical projection is often used to map a 360\({}^{\circ}\) spherical image to a 2D plane. As shown in Fig.1, the distorted shapes of objects can be clearly observed in the equirectangular image. The closer to the upper and lower ends of the image, the more severe the distortion of object shapes; in contrast, the distortion in the middle area is milder [2, 3]. Different locations in the equirectangular image thus show different object shapes and structural features, which poses challenges for object detection, semantic segmentation, and depth estimation on equirectangular images. At present, Convolutional Neural Networks (CNNs) perform excellently in image classification, image segmentation, target recognition, and other fields. In particular, CNN models pre-trained on large-scale labeled datasets (COCO [4], ImageNet [5], Cityscapes [6]), such as VGG [7] and ResNet [8], can be used as initial models for solving new problems and fine-tuned on new datasets. The data processed by CNNs are usually perspective images, which are characterized by the same object having the same shape at different locations (i.e., translational invariance). Thus, CNNs use regular square convolution kernels. However, equirectangular images do not have this translational invariance. To solve this problem, some studies have proposed distorting the convolution kernel by adaptively adjusting its shape according to the degree of distortion of the object shape.
Coors et al.[9] and Tateno et al.[10] changed the shape of the convolution kernel through spherical projection. As shown in Fig. 2, the closer to the middle of the equirectangular image, the closer the shape of the convolution kernel is to a square; the closer to the top and bottom of the equirectangular image, the more the left and right boundaries of the convolution kernel incline toward the sides to adapt to the changes in object shape in those areas. The use of distorted convolutional kernels is premised on supervised learning, which assumes the existence of a large labeled dataset of equirectangular images. After the widespread adoption of 360\({}^{\circ}\) cameras, there has been a significant increase in the volume of 360\({}^{\circ}\) spherical images and videos available. Despite this, there remains a shortage of labeled equirectangular image data, primarily due to the costly nature of annotation. To address this issue, several studies have implemented domain-adaptive methods for processing equirectangular images. In their study, Su et al.[11] utilized the tangent-plane correspondence between perspective images and equirectangular images to train a convolutional neural network (CNN). They used a pre-trained CNN for the perspective image on the tangent plane, while for the equirectangular image they employed a CNN whose parameters needed to be optimized; they continuously optimized the parameters of the latter so that its output would be similar to that of the pre-trained CNN. Although this method does not require a labeled equirectangular image dataset, the model has a large number of parameters (up to the GB level), making it difficult to train and use. The method described by the authors of [12] has been optimized to reduce the number of model parameters. However, the effectiveness of the improved method is determined by the performance of the pre-trained model: if the pre-trained model fails to perform well, the performance of the model on equirectangular images is negatively affected. Ma et al.[13] proposed a semantic segmentation model for equirectangular images using generative adversarial networks (GANs), which is similar to the domain-adaptive methods. The model takes labeled perspective images and equirectangular images as inputs, which undergo semantic segmentation through a segmentation network (Generator G) to produce segmentation results. The objective is to minimize the disparity between the two segmentation results. To achieve this, the segmentation results are evaluated by a discriminator network (D) that classifies them as coming from either perspective images or equirectangular images. Generator G is trained using the segmentation loss and the backpropagated adversarial loss to generate segmentation results comparable to those of perspective images. The method proposed in that study does not rely on labeled equirectangular images. However, it specifically focuses on the central region of the equirectangular image, which experiences less distortion. The study does not investigate the upper and lower regions of the image, which are known to exhibit significant distortion. Our findings indicate that the method is not effective in handling the severely distorted portions of equirectangular images.

Figure 1: Equirectangular image obtained by equidistant projection of a 360\({}^{\circ}\) image

Figure 2: Adaptive adjustment of the convolutional kernel shape of CNNs
Training a network to process equirectangular images using a limited perspective image dataset can be challenging, especially when extracting shape features from severely distorted objects in these images. To overcome this challenge, we present a novel approach to transform perspective image datasets into labeled equirectangular image datasets. This conversion method enables standard CNN models to effectively process equirectangular images, including the upper and lower regions where object shape features are highly distorted. We propose an optimal projection position for converting perspective image datasets into equirectangular image datasets for semantic segmentation. Our proposed method surpasses other existing approaches, demonstrating superior performance.

## 2 Method

### Coordinate transformation of spherical images and equirectangular images

As depicted in Fig.3(a), the viewpoint of the spherical image is at the center of the sphere, denoted as \(S\), which is a unit sphere. A point \(s(\theta,\varphi)\) on the sphere can be uniquely determined by the azimuthal angle \(\theta\), with range \([-\pi,\pi]\), and the zenith angle \(\varphi\), with range \([-\frac{\pi}{2},\frac{\pi}{2}]\). Equidistant cylindrical projection samples the spherical image at equal intervals (\(\Delta_{\theta}\) and \(\Delta_{\varphi}\)). These sampling points are evenly mapped onto a planar image, as shown in Fig.3(b). Each point on the sphere \(S\) corresponds to a point on the plane. Suppose the width and the height of the plane are denoted as \(w\) and \(h\), respectively. Then \(\Delta_{\theta}=\frac{2\pi}{w}\) and \(\Delta_{\varphi}=\frac{\pi}{h}\) represent the horizontal and vertical lengths of each unit on the plane. When processing planar images, it is common to convert the spherical coordinate system to a Cartesian coordinate system. As illustrated in Fig.3(c), a point \(s(\theta,\varphi)\) on the sphere corresponds to a point \(p(x,y)\) on the planar image, and the coordinate conversion equations are as follows: \[x=\frac{(\theta+\pi)w}{2\pi} \tag{1}\] \[y=\frac{(\pi-2\varphi)h}{2\pi} \tag{2}\]

Figure 3: Spherical image coordinate system and equidistant cylindrical projection

### Spherical images and tangent planes

For any point on a sphere, it is possible to construct a tangent plane that touches the sphere at that point; the point of contact is known as the tangent point. In Fig.4(a), we select the point (0,0) on the spherical surface as the tangent point and construct a tangent plane, denoted as \(I_{r}\), of size \(n\times n\) at (0,0). By utilizing the spherical center projection [14], we obtain a projection region, denoted as \(I_{s}\), of \(n\times n\) points on the sphere. Fig.4(b) illustrates the one-to-one correspondence between the points on \(I_{r}\) and the points on \(I_{s}\): the orange tangent plane is projected onto the blue area of the sphere.

Figure 4: Tangent plane \(I_{r}\) and its projection region \(I_{s}\) on the sphere

The tangent plane \(I_{r}\), sampled at intervals \(\Delta_{\theta}\) and \(\Delta_{\varphi}\) in the horizontal and vertical directions, is depicted in Fig.5. The Cartesian coordinates of a point \(r(i,j)\) on the tangent plane \(I_{r}\) are given by Equ.3-Equ.6, where \(i,j\in\left[0,\frac{n-1}{2}\right]\) and \(n\geq 3\):

Figure 5: Taking n=9 as an example, a schematic diagram of the sampling points on the tangent plane

\[r(0,0)=(0,0) \tag{3}\] \[r(\pm i,0)=(\pm\ i\tan\Delta_{\theta},0) \tag{4}\] \[r(0,\pm j)=(0,\pm\ j\tan\Delta_{\varphi}) \tag{5}\] \[r(\pm i,\pm j)=(\pm\ i\tan\Delta_{\theta},\pm\ j\tan\Delta_{\varphi}) \tag{6}\]
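A minimal sketch of the plane coordinates (1)-(2) and the tangent-plane sampling (3)-(6); the function and variable names are our own:

```python
import numpy as np

def sphere_to_plane(theta, phi, w, h):
    """Equations (1)-(2): spherical coordinates to equirectangular pixels."""
    x = (theta + np.pi) * w / (2 * np.pi)
    y = (np.pi - 2 * phi) * h / (2 * np.pi)
    return x, y

def tangent_plane_grid(n, w, h):
    """Equations (3)-(6): Cartesian coordinates of the n x n sampling points
    on the tangent plane, with indices i, j in [-(n-1)/2, (n-1)/2]."""
    d_theta, d_phi = 2 * np.pi / w, np.pi / h
    half = (n - 1) // 2
    idx = np.arange(-half, half + 1)
    return np.meshgrid(idx * np.tan(d_theta), idx * np.tan(d_phi))
```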
By maintaining the shape of the tangent plane at the tangent point (0,0), it is possible to displace the tangent plane along the surface of the sphere. The location of the tangent point \(s(\theta,\varphi)\) on the sphere can be selected arbitrarily, and the points on the tangent plane can be projected onto the spherical surface through an inverse transformation of the spherical center projection. This process is illustrated in Fig.6.

Figure 6: The tangent plane is displaced along the surface of the sphere

As the relative positions between the points on the tangent plane remain unchanged during the movement, their coordinates \(r\) can be derived from Equ.3-Equ.6. Furthermore, the azimuth angle \(\theta(i,j)\) and the zenith angle \(\varphi(i,j)\) of the corresponding point mapped onto the sphere can be calculated using Equ.7-Equ.10: \[\theta(i,j)=\theta+\tan^{-1}\left(\frac{i\sin v}{\rho\cos\varphi\cos v-j\sin\varphi\sin v}\right) \tag{7}\] \[\varphi(i,j)=\sin^{-1}\left(\cos v\sin\varphi+\frac{j\sin v\cos\varphi}{\rho}\right) \tag{8}\] \[\rho=\sqrt{i^{2}+j^{2}} \tag{9}\] \[v=\tan^{-1}\rho \tag{10}\] where \(\theta\) and \(\varphi\) are the coordinates of the tangent point.

### Transformation from perspective images to equirectangular images

Based on the principles presented in Section 2.1 and Section 2.2, the transformation from perspective images to equirectangular images can be accomplished through the following steps (a code sketch implementing them is given after this discussion):

(1) Determine the width \(w\) and height \(h\) of the desired equirectangular image. Calculate the sampling intervals \(\Delta_{\theta}\) and \(\Delta_{\varphi}\) for the horizontal and vertical directions of the equirectangular image.

(2) Adjust the size of the perspective image to \(n\times n\) for convenience. Use the perspective image as the tangent plane at a point \(s(\theta,\varphi)\) on the unit sphere. Calculate the coordinates of each point within the \(n\times n\) region of the perspective image using Equ.3-Equ.6.

(3) Employ the inverse transformation of the spherical center projection to map each sampling point \(r(i,j)\) of the perspective image onto the spherical surface, obtaining the spherical coordinates of the corresponding point \(s^{\prime}\big{(}\theta(i,j),\varphi(i,j)\big{)}\).

(4) Utilize the coordinate transformation between the spherical image and the equirectangular image (Equ.1-Equ.2) to convert the spherical coordinates \(s^{\prime}(\theta(i,j),\varphi(i,j))\) of each mapped point into the Cartesian coordinates \(p(x,y)\) of the corresponding point in the planar image.

Fig.7 illustrates this process: the yellow point \(s\) represents the tangent point; the red point \(r(i,j)\) represents a sampled point of the tangent-plane image \(I_{r}\), which corresponds to the black point \((\theta(i,j),\varphi(i,j))\) on the sphere; subsequently, the points on the sphere are mapped to the red point \(p(x,y)\) in the equirectangular image. The choice of the tangent point \(s(\theta,\varphi)\) plays a significant role in determining the level of distortion of the object's shape in the equirectangular image resulting from the transformation. When \(\varphi\) approaches \(\pi/2\) or \(-\pi/2\) (the poles of the sphere), the object's shape becomes more distorted in the equirectangular image. Conversely, when \(\varphi\) is closer to 0, the object's shape resembles its shape in the perspective image. On the other hand, the value of \(\theta\) determines the horizontal position of the projection area in the equirectangular image and does not impact the object's shape. Consequently, during the conversion process, \(\theta\) can be set to 0 without affecting the object's shape.
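Putting the pieces together, steps (1)-(4) amount to the following sketch, which reuses `tangent_plane_grid` from the previous sketch; interpolation of pixel values at the resulting coordinates is omitted:

```python
import numpy as np

def perspective_to_equirect_coords(n, w, h, theta0, phi0):
    """Steps (1)-(4): map the n x n sampling points of a perspective image,
    tangent at s(theta0, phi0), to pixel coordinates of a w x h
    equirectangular image via (7)-(10) and (1)-(2)."""
    gx, gy = tangent_plane_grid(n, w, h)           # eqs. (3)-(6)
    rho = np.sqrt(gx ** 2 + gy ** 2)               # eq. (9)
    v = np.arctan(rho)                             # eq. (10)
    theta = theta0 + np.arctan2(                   # eq. (7)
        gx * np.sin(v),
        rho * np.cos(phi0) * np.cos(v) - gy * np.sin(phi0) * np.sin(v))
    with np.errstate(invalid="ignore"):            # eq. (8); rho = 0 at center
        phi = np.arcsin(np.cos(v) * np.sin(phi0)
                        + gy * np.sin(v) * np.cos(phi0) / rho)
    phi = np.where(rho == 0, phi0, phi)            # the tangent point itself
    x = (theta + np.pi) * w / (2 * np.pi)          # eq. (1)
    y = (np.pi - 2 * phi) * h / (2 * np.pi)        # eq. (2)
    return x, y
```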
The choice of the tangent point \(s(\theta,\varphi)\) plays a significant role in determining the level of distortion of an object's shape within the equirectangular image resulting from the transformation. When \(\varphi\) approaches \(\pi/2\) or \(-\pi/2\) (the poles of the sphere), the object's shape becomes more distorted in the equirectangular image. Conversely, when \(\varphi\) is closer to 0, the object's shape resembles its shape in the perspective image. On the other hand, the value of \(\theta\) determines the horizontal position of the projection area in the equirectangular image and does not impact the object's shape. Consequently, during the conversion process, \(\theta\) can be set to 0 without affecting the object's shape.

## 3 Datasets and Experiments

In our experiments, we utilize the CityScapes [6] and CamVid [15] datasets along with various standard CNN semantic segmentation models. The objectives of our study are twofold: (1) finding the optimal projection \(\varphi\) value for converting perspective images to equirectangular images, so that standard CNN semantic segmentation models can effectively learn object distortion features; and (2) assessing the effectiveness of the proposed equirectangular image construction method by comparing it with alternative approaches, in order to improve the semantic segmentation performance of standard CNNs on equirectangular images. Through these objectives, we seek to enhance the understanding and application of semantic segmentation models for equirectangular images.

### Datasets

CityScapes consists of street view images with a total of 34 categories, while CamVid contains street scene images with 32 categories. For our experiments, we select six common categories (roads, buildings, vegetation, sky, cars, and pedestrians) present in both datasets. To transform these perspective image datasets into equirectangular images, we employ the method proposed in Section 2, resulting in the Omni-CityScapes and Omni-CamVid datasets, which are used for testing. We design two experimental schemes: (1) CityScapes is used as the training set and the Omni-CamVid dataset serves as the test set; (2) CamVid is used as the training set and the Omni-CityScapes dataset serves as the test set. During the construction of the test sets, the projection parameter \(\varphi\) is set to \(\pi/2\), resulting in fully distorted object shapes in the test images. When creating the training sets, \(\varphi\) values within the range \((0,\pi/2)\) are selected in order to identify the optimal \(\varphi\) value that enables standard CNNs to learn effectively from the equirectangular images. The training images have a resolution of 224\(\times\)224 pixels, with 700 images in the training set and 1000 images in the test set.

Figure 7: Transformation of perspective images to equirectangular images

Our experiments focus on the upper region of the equirectangular image, since the degree of distortion is the same for the upper and lower regions; considering only the upper region simplifies the analysis. To ensure that the test set's image layout differs from the training set, we crop the perspective images before applying the spherical projection for the transformation. The cropping is guided by the principle of projecting each category to different locations within the region of the equirectangular image formed after the spherical transformation, which ensures that the test set contains distortion at various positions. To maintain consistency, the projected image is always positioned in the upper part of the divided equirectangular image.
Consequently, any excess portion of the projected image is cropped. For instance, Fig. 8 demonstrates this process using the car category as an example: the red box marks the area where the car is located and is cropped to generate a new image; through spherical projection, the car is then positioned at different locations within the equirectangular projection area, and any excess portion is cropped accordingly.

Figure 8: A transformation and cropping example of perspective images

### Determination of the projection \(\varphi\) value

(1) Experiments on different datasets with the same CNNs

For the training set, we set the projection \(\varphi\) values to specific increments: \(\pi\)/16, 2\(\pi\)/16, 3\(\pi\)/16, 4\(\pi\)/16, 5\(\pi\)/16, 6\(\pi\)/16, 7\(\pi\)/16, and 8\(\pi\)/16. The resulting equirectangular images obtained by converting the perspective images are depicted in Fig. 9. It is evident that as the \(\varphi\) value increases, the degree of object shape distortion in the transformed equirectangular image becomes more pronounced, demonstrating the impact of \(\varphi\) on the level of distortion experienced by objects in the equirectangular representation.

For the semantic segmentation model, we select the UNet [16] architecture with VGG16, pre-trained on the ImageNet dataset, as the encoder. To account for the object shape distortion in equirectangular images, the encoder module uses distorted convolutional kernels while the decoder module uses normal convolutional kernels. This design allows us to analyze the impact of these different convolutional kernels on the performance of the semantic segmentation model under object shape distortion. We train for 40 epochs, with the initial learning rate set to \(1\times 10^{-4}\) for the first 25 epochs and reduced to \(1\times 10^{-5}\) for the remaining epochs. Model performance is evaluated with the Intersection over Union (IoU) metric, a commonly used measure for semantic segmentation tasks.

The CamVid dataset is projected at different \(\varphi\) values, and the resulting equirectangular images are used to train a UNet model with distorted convolutional kernels. Table 1 presents the IoU results of this model on the Omni-CityScapes dataset. Among the different projection positions, when the perspective image is projected with a \(\varphi\) value of 6\(\pi\)/16, the model achieves the highest average IoU value of 29.76. The IoU values for the "buildings" and "sky" categories are the highest, reaching 45.62 and 40.36, respectively, while the IoU values for "roads" and "vegetation" are slightly lower than the best values, with differences of 0.26 and 0.02, respectively. Table 2 shows the results of the model trained with normal convolutional kernels. Similar to the distorted-kernel model, when projected with a \(\varphi\) value of 6\(\pi\)/16, the average IoU value increases to 33.76, surpassing the other projection positions. Although the IoU values for the individual categories are not the best, they are within a close range of the best values, differing by 0.05, 0.66, 1.26, 3.03, 0, and 7.08, respectively.
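For reference, the training schedule and the evaluation metric described above can be sketched as follows; the optimizer choice and the scoring of classes with an empty union are our assumptions, as the paper does not specify them.

```python
import numpy as np
import torch

def train(model, loader, device="cuda"):
    """Hypothetical training skeleton: 40 epochs, lr 1e-4 dropped to 1e-5 at epoch 25."""
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer assumed
    for epoch in range(40):
        if epoch == 25:
            for group in optimizer.param_groups:
                group["lr"] = 1e-5
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

def per_class_iou(pred, target, num_classes=6):
    """IoU (%) per category over label maps, as reported in Tables 1-4."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(100.0 * inter / union if union > 0 else 0.0)
    return ious  # the tables' "average" column is the mean over the six classes
```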
Indeed, the observation that UNet models with normal convolutional kernels outperform those with distorted convolutional kernels is intriguing. It implies that the training dataset obtained through the transformation process can significantly enhance the model's ability to learn and understand the distorted features of objects in equirectangular images. With normal convolutional kernels, the model appears to better capture and represent the complex spatial relationships and shape variations that arise from the distortion inherent in equirectangular images. This highlights the importance of appropriate training data and network architecture in effectively addressing the challenges posed by object shape distortion in semantic segmentation tasks.

Figure 9: The perspective image is projected onto the plane at different locations to form equirectangular images

The results in Table 3 show the performance of the UNet model with distorted convolutional kernels trained on the CityScapes dataset, projected to various locations, and tested on Omni-CamVid. Notably, when the perspective image is projected with a \(\varphi\) value of 6\(\pi\)/16, the model achieves the highest average IoU value of 23.96. Furthermore, the IoU values for roads, buildings, and sky are also higher than at other projection positions, reaching 31.87, 28.02, and 54.90, respectively, while the IoU values for vegetation, pedestrians, and cars fall short of the highest values by 1.29, 0, and 1.84, respectively. Table 4 presents the segmentation results of the UNet model with normal convolutional kernels trained on CityScapes and tested on Omni-CamVid, with images projected to different locations. When projected to 4\(\pi\)/16, buildings achieve the highest IoU value of 30.92 and cars achieve the highest IoU value of 7.64; when projected to 8\(\pi\)/16, roads and vegetation achieve the highest IoU values of 30.96 and 29.15, respectively; and when projected to 6\(\pi\)/16, the sky exhibits the highest IoU value of 58.89. In terms of the average IoU over all six categories, projecting to 6\(\pi\)/16 yields the highest value of 25.03, surpassing the other projection positions, with the per-category IoU values for roads, buildings, vegetation, pedestrians, and cars deviating from the highest ones by 0.87, 3.50, 1.29, 0, and 1.70, respectively. Comparing Table 3 and Table 4, it is evident that, across the eight projection positions, the average IoU of the model using normal convolutional kernels consistently exceeds that of the model using distorted convolutional kernels.
This finding aligns with the observations from the previous experiments, reinforcing the notion that employing normal convolutional kernels facilitates improved performance in handling the challenges associated with object shape distortion in semantic segmentation tasks.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(\varphi\) & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \(\pi\)/16 & 27.68 & 15.63 & 9.74 & 9.13 & 0 & 6.77 & 11.49 \\ \hline \(2\pi\)/16 & 33.31 & 27.31 & 19.30 & 16.03 & 0 & 4.83 & 16.80 \\ \hline \(3\pi\)/16 & 36.38 & 35.68 & 17.94 & 22.13 & 0 & 7.56 & 19.95 \\ \hline \(4\pi\)/16 & 37.72 & 38.43 & 27.27 & 29.63 & 0 & 10.85 & 23.98 \\ \hline \(5\pi\)/16 & 37.64 & 40.00 & 44.89 & 35.09 & 0 & **12.07** & 28.28 \\ \hline \(6\pi\)/16 & 38.48 & **45.62** & 47.24 & **40.36** & 0 & 6.88 & **29.76** \\ \hline \(7\pi\)/16 & 38.13 & 35.93 & **47.26** & 37.47 & 0 & 6.17 & 27.49 \\ \hline \(8\pi\)/16 & **38.74** & 37.19 & 45.71 & 37.50 & 0 & 3.70 & 27.14 \\ \hline \hline \end{tabular} \end{table} Table 1: IoU (%) of the UNet model with distorted convolution kernel when the CamVid dataset is projected to different locations (best values in bold)

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(\varphi\) & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \(\pi\)/16 & 26.14 & 15.21 & 38.59 & 21.04 & 0 & 12.18 & 18.86 \\ \hline \(2\pi\)/16 & 28.68 & 20.40 & 41.09 & 24.45 & 0 & 12.24 & 21.14 \\ \hline \(3\pi\)/16 & 38.59 & 47.88 & 42.14 & 28.80 & 0 & **19.44** & 29.48 \\ \hline \(4\pi\)/16 & 39.03 & 46.40 & 52.81 & 35.72 & 0 & 11.05 & 30.84 \\ \hline \(5\pi\)/16 & **39.41** & **52.16** & 54.47 & 39.06 & 0 & 11.13 & 32.71 \\ \hline \(6\pi\)/16 & 39.36 & 51.50 & 58.28 & 41.08 & 0 & 12.36 & **33.76** \\ \hline \(7\pi\)/16 & 38.96 & 43.02 & **59.54** & 37.77 & 0 & 13.65 & 32.16 \\ \hline \(8\pi\)/16 & 38.94 & 47.07 & 58.00 & **44.11** & 0 & 12.08 & 33.37 \\ \hline \hline \end{tabular} \end{table} Table 2: IoU (%) of the UNet model with normal convolution kernel when the CamVid dataset is projected to different locations (best values in bold)

Based on the analysis of the IoU results presented in Tables 1-4, the highest average IoU value is achieved when the perspective images are projected to approximately \(6\pi/16\). Subdividing the \(\varphi\) value further does not significantly change the IoU values, as the differences become very small. Hence, \(6\pi/16\) can be considered the approximate optimal projection position. To provide visual evidence, Fig. 10 and Fig. 11 illustrate the segmentation outputs of the UNet model with normal convolutional kernels trained on the CamVid and CityScapes datasets, respectively, at different projection \(\varphi\) values. The visualizations indicate that when \(\varphi\) is set to \(6\pi/16\), the overall segmentation performance of the model is superior to the other projection positions. However, the segmentation of small objects, such as pedestrians and cars, remains challenging, as it does in perspective image processing tasks. This highlights an area that requires further improvement and attention in future research.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \(\varphi\) & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \(\pi/16\) & 24.26 & 1.68 & 0.84 & 8.23 & 0 & 4.23 & 6.54 \\ \hline \(2\pi/16\) & 25.13 & 7.35 & 1.87 & 17.95 & 0 & 4.10 & 9.40 \\ \hline \(3\pi/16\) & 28.02 & 13.52 & 11.32 & 16.26 & 0 & 6.02 & 12.52 \\ \hline \(4\pi/16\) & 31.07 & 22.26 & 11.39 & 49.28 & 0 & **7.69** & 20.28 \\ \hline \(5\pi/16\) & 31.29 & 26.38 & **24.42** & 46.96 & 0 & 6.61 & 22.61 \\ \hline \(6\pi/16\) & **31.87** & **28.02** & 23.13 & **54.90** & 0 & 5.85 & **23.96** \\ \hline \(7\pi/16\) & 29.07 & 16.04 & 24.12 & 45.39 & 0 & 6.36 & 20.16 \\ \hline \(8\pi/16\) & 28.76 & 16.76 & 23.12 & 27.33 & 0 & 7.12 & 17.18 \\ \hline \end{tabular} \end{table} Table 3: IoU (%) of the UNet model with distorted convolution kernel when the CityScapes dataset is projected to different locations (best values in bold)

\begin{table} \begin{tabular}{c c c c c c c c} \hline \(\varphi\) & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \(\pi/16\) & 23.27 & 4.14 & 2.20 & 19.39 & 0 & 4.88 & 8.98 \\ \hline \(2\pi/16\) & 24.61 & 4.79 & 2.91 & 26.15 & 0 & 5.36 & 10.64 \\ \hline \(3\pi/16\) & 25.79 & 30.61 & 18.21 & 51.55 & 0 & 5.24 & 21.90 \\ \hline \(4\pi/16\) & 27.19 & **30.92** & 16.98 & 47.12 & 0 & **7.64** & 21.64 \\ \hline \(5\pi/16\) & 28.26 & 27.93 & 24.34 & 55.89 & 0 & 6.32 & 23.79 \\ \hline \(6\pi/16\) & 30.09 & 27.42 & 27.86 & **58.89** & 0 & 5.94 & **25.03** \\ \hline \(7\pi/16\) & 29.48 & 22.16 & 25.01 & 41.45 & 0 & 6.19 & 20.72 \\ \hline \(8\pi/16\) & **30.96** & 30.31 & **29.15** & 38.26 & 0 & 5.13 & 22.30 \\ \hline \end{tabular} \end{table} Table 4: IoU (%) of the UNet model with normal convolution kernel when the CityScapes dataset is projected to different locations (best values in bold)

(2) Same dataset, different CNNs

The experimental setup uses the CamVid dataset as the training set and the Omni-CityScapes dataset as the test set. Various CNN models, including UNet, UNet++ [17], SegNet [18], PSPNet [19], and DeepLab v3+ [20], are employed with normal convolutional kernels. The results of these experiments are presented in Tables 5 to 9, with segmentation outputs shown in Fig. 12 to Fig. 16. Based on the results in Tables 5 to 9, when the \(\varphi\) value is set to 6\(\pi\)/16, the average IoU values obtained with the UNet, SegNet, and PSPNet models are higher than at other \(\varphi\) values. For the UNet++ and DeepLab v3+ models, the highest average IoU value is achieved at a \(\varphi\) value of 5\(\pi\)/16; nevertheless, the difference in average IoU between 5\(\pi\)/16 and 6\(\pi\)/16 is very small, only 0.03 and 0.33, respectively. This suggests that even though some models do not achieve their highest IoU exactly at the 6\(\pi\)/16 projection position, the difference is minimal. Hence, across different semantic segmentation models, 6\(\pi\)/16 can be considered the practical optimal projection position.
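As an illustration, four of the five compared architectures can be instantiated with the `segmentation_models_pytorch` package; this library choice is ours, not necessarily the authors', and SegNet is omitted because the package does not provide it.

```python
import segmentation_models_pytorch as smp

# Hypothetical instantiation of four of the five compared models with the
# backbones used in Tables 5-9 (six output classes, ImageNet-pretrained encoders).
models = {
    "UNet":       smp.Unet(encoder_name="vgg16", encoder_weights="imagenet", classes=6),
    "UNet++":     smp.UnetPlusPlus(encoder_name="resnet34", encoder_weights="imagenet", classes=6),
    "PSPNet":     smp.PSPNet(encoder_name="resnet34", encoder_weights="imagenet", classes=6),
    "DeepLabV3+": smp.DeepLabV3Plus(encoder_name="resnet34", encoder_weights="imagenet", classes=6),
}
```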
\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \(\varphi\) & backbone & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \multirow{2}{*}{\(\pi/16\)} & VGG16 & 24.37 & 17.04 & 15.55 & 25.11 & 0 & 6.16 & 14.71 \\ \cline{2-8} & ResNet34 & 19.31 & 18.41 & 19.15 & 22.80 & 0 & 7.89 & 14.59 \\ \hline \multirow{2}{*}{\(2\pi/16\)} & VGG16 & 22.23 & 19.66 & 28.84 & 32.73 & 0 & 5.57 & 18.17 \\ \cline{2-8} & ResNet34 & 19.44 & 6.04 & 3.58 & 27.08 & 0 & 5.89 & 10.34 \\ \hline \multirow{2}{*}{\(3\pi/16\)} & VGG16 & 35.93 & 47.66 & 39.40 & 36.30 & 0 & 11.65 & 28.49 \\ \cline{2-8} & ResNet34 & 32.07 & 34.41 & 36.73 & 27.20 & 0 & 9.43 & 23.31 \\ \hline \multirow{2}{*}{\(4\pi/16\)} & VGG16 & 37.85 & **49.58** & 53.40 & 32.47 & 0 & 7.35 & 30.11 \\ \cline{2-8} & ResNet34 & 33.55 & 35.05 & 38.17 & 37.70 & 0 & 8.70 & 25.53 \\ \hline \multirow{2}{*}{\(5\pi/16\)} & VGG16 & **39.08** & 48.80 & **55.57** & 34.75 & 0 & 6.70 & **30.82** \\ \cline{2-8} & ResNet34 & 38.41 & 46.30 & 40.45 & 38.67 & 0 & 9.07 & 28.82 \\ \hline \multirow{2}{*}{\(6\pi/16\)} & VGG16 & 38.66 & 46.99 & 52.06 & 35.03 & 0 & 12.01 & 30.79 \\ \cline{2-8} & ResNet34 & 38.74 & 36.40 & 48.42 & 35.07 & 0 & **15.08** & 28.95 \\ \hline \multirow{2}{*}{\(7\pi/16\)} & VGG16 & 37.71 & 37.71 & 52.29 & 32.12 & 0 & 11.43 & 28.54 \\ \cline{2-8} & ResNet34 & 38.65 & 33.71 & 47.87 & 34.52 & 0 & 10.68 & 27.57 \\ \hline \multirow{2}{*}{\(8\pi/16\)} & VGG16 & 37.80 & 37.47 & 52.31 & **38.78** & 0 & 9.71 & 29.35 \\ \cline{2-8} & ResNet34 & 38.58 & 29.41 & 50.67 & 36.03 & 0 & 13.52 & 28.04 \\ \hline \hline \end{tabular} \end{table} Table 7: IoU(%) of SegNet network (backbone networks are VGG16 and ResNet34)

\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \(\varphi\) & backbone & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \multirow{2}{*}{\(\pi/16\)} & VGG16 & 26.14 & 15.21 & 38.59 & 21.04 & 0 & 12.18 & 18.86 \\ \cline{2-8} & ResNet34 & 9.76 & 7.60 & 3.51 & 17.09 & 0 & 3.19 & 6.86 \\ \hline \multirow{2}{*}{\(2\pi/16\)} & VGG16 & 28.68 & 20.40 & 41.09 & 24.45 & 0 & 12.24 & 21.14 \\ \cline{2-8} & ResNet34 & 12.15 & 11.77 & 4.64 & 30.98 & 0 & 1.59 & 10.19 \\ \hline \multirow{2}{*}{\(3\pi/16\)} & VGG16 & 38.59 & 47.88 & 42.14 & 28.80 & 0 & **19.44** & 29.48 \\ \cline{2-8} & ResNet34 & 31.30 & 23.80 & 33.03 & 22.97 & 0 & 7.95 & 19.84 \\ \hline \multirow{2}{*}{\(6\pi/16\)} & VGG16 & 39.36 & 51.50 & 58.28 & 41.08 & 0 & 12.36 & **33.76** \\ \cline{2-8} & ResNet34 & 38.66 & 42.70 & 42.76 & 35.15 & 0 & 3.68 & 27.16 \\ \hline \multirow{2}{*}{\(7\pi/16\)} & VGG16 & 38.96 & 43.02 & **59.54** & 37.77 & 0 & 13.65 & 32.16 \\ \cline{2-8} & ResNet34 & 39.07 & 45.83 & 44.69 & 39.38 & 0 & 7.59 & 29.43 \\ \hline \multirow{2}{*}{\(8\pi/16\)} & VGG16 & 38.94 & 47.07 & 58.00 & **44.11** & 0 & 12.08 & 33.37 \\ \cline{2-8} & ResNet34 & 37.74 & 31.02 & 44.17 & 31.88 & 0 & 8.17 & 25.50 \\ \hline \hline \end{tabular} \end{table} Table 6: IoU(%) of UNet++ network (backbone networks are VGG16 and ResNet34)

\begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline \(\varphi\) & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline 8\(\pi\)/16 & 37.48 & 17.62 & 24.61 & **30.13** & 0 & 3.93 & 18.96 \\ \hline \hline \end{tabular} \end{table} Table 8: IoU(%) of PSPNet network (backbone networks are VGG16 and ResNet34)

\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \(\varphi\) & backbone & roads & buildings & vegetation & sky & pedestrians & cars & average \\ \hline \multirow{2}{*}{\(\pi\)/16} & VGG16 & 5.56 & 0.15 & 0.20 & 0.35 & 0 & 0 & 1.04 \\ \cline{2-8} & ResNet34 & 8.28 & 14.68 & 2.30 & 1.76 & 0 & 0.04 & 4.51 \\ \hline \multirow{2}{*}{2\(\pi\)/16} & VGG16 & 9.07 & 5.29 & 11.74 & 3.20 & 0 & 0 & 4.88 \\ \cline{2-8} & ResNet34 & 18.99 & 34.52 & 23.03 & 9.87 & 0 & 0.18 & 14.43 \\ \hline \multirow{2}{*}{3\(\pi\)/16} & VGG16 & 29.68 & 48.73 & 29.61 & 8.51 & 0 & 0 & 19.42 \\ \cline{2-8} & ResNet34 & 35.71 & 47.04 & 9.80 & 23.02 & 0 & 0 & 19.26 \\ \hline \multirow{2}{*}{4\(\pi\)/16} & VGG16 & 33.97 & **52.35** & 43.24 & 13.66 & 0 & 0 & 23.87 \\ \cline{2-8} & ResNet34 & 35.46 & 50.54 & 31.83 & 23.90 & 0 & 0.26 & 23.67 \\ \hline \multirow{2}{*}{5\(\pi\)/16} & VGG16 & 30.30 & 48.70 & 46.42 & 19.66 & 0 & 1.01 & 24.35 \\ \cline{2-8} & ResNet34 & 36.96 & 52.20 & 47.73 & 32.88 & 0 & 0.46 & 28.37 \\ \hline \multirow{2}{*}{6\(\pi\)/16} & VGG16 & 33.22 & 52.14 & 37.02 & 29.10 & 0 & 1.09 & 25.43 \\ \cline{2-8} & ResNet34 & 37.82 & 51.09 & **55.81** & 32.90 & 0 & **3.74** & **30.23** \\ \hline \multirow{2}{*}{7\(\pi\)/16} & VGG16 & 35.29 & 50.59 & 53.83 & 33.00 & 0 & 3.66 & 29.40 \\ \cline{2-8} & ResNet34 & 37.76 & 42.36 & 53.24 & 29.44 & 0 & 3.38 & 27.70 \\ \hline \multirow{2}{*}{8\(\pi\)/16} & VGG16 & 35.77 & 44.72 & 48.20 & 34.38 & 0 & 1.06 & 27.36 \\ \cline{2-8} & ResNet34 & **38.12** & 42.37 & 53.67 & **36.78** & 0 & 2.84 & 28.96 \\ \hline \end{tabular} \end{table} Table 9: IoU(%) of DeepLab v3+ network (backbone networks are VGG16 and ResNet34)

Figure 12: The segmentation results of UNet with backbone VGG16

Figure 13: The segmentation results of UNet++ with backbone VGG16

Figure 14: The segmentation results of SegNet with backbone VGG16

Figure 15: The segmentation results of PSPNet with backbone ResNet34

Figure 16: The segmentation results of DeepLab v3+ with backbone ResNet34

### Methods Comparison

In the comparison experiments, three alternative approaches, namely supervised learning, unsupervised learning, and data augmentation, are evaluated alongside the method proposed in this paper. The experiments use the CamVid dataset as the training set and the Omni-CityScapes dataset as the test set, with 700 and 1000 images, respectively. The UNet model with VGG16 as the backbone network is chosen as the semantic segmentation model.

(1) Tangent plane image method [10]: This approach uses normal convolutional kernels for training and distorted convolutional kernels for testing. Tangent plane images are generated from equirectangular images in the Omni-CityScapes dataset using the spherical center projection and are used as input for training the UNet network. During testing, the weights of the normal convolutional kernels in the UNet network are copied to the distorted convolutional kernels for segmenting equirectangular images. This method utilizes the labeled Omni-CityScapes dataset, making it a form of supervised learning.

(2) UNet-P2PDA [12]: This method employs an adversarial generative network that takes both perspective images and equirectangular images as input, with the goal of minimizing the difference between the segmentation results of the two types of images. Since no labeled equirectangular images are required for training, this approach falls under the category of unsupervised learning.

(3) Image enhancement: This method performs operations such as cropping, upscaling, and left/right rotations on the perspective images to increase the size of the dataset and improve the learning capability of the UNet model; a sketch is given below.
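A plausible torchvision rendering of this image enhancement baseline is sketched below; the exact crop, scale, and rotation parameters are our assumptions.

```python
from torchvision import transforms

# Hypothetical "image enhancement" pipeline: cropping, upscaling, and
# left/right rotations of the perspective images; parameters are illustrative.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # crop and rescale to 224x224
    transforms.RandomHorizontalFlip(p=0.5),               # left/right mirroring
    transforms.RandomRotation(degrees=15),                # small rotations
])
```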
The method proposed in this paper uses normal convolutional kernels for both training and testing. The experimental results of this method and of the three comparison methods above are presented in Table 10. Table 10 shows that the proposed method achieves the highest IoU values among the four methods, with an average IoU of 33.76, and therefore outperforms the other methods in overall segmentation effectiveness. Specifically, the tangent plane image method has a significantly lower average IoU value of 19.91. This can be attributed to the generation of numerous similar tangent plane images, which leads to overfitting and reduced generalization ability; consequently, the model performs poorly on the test dataset, with lower IoU values and inferior segmentation quality. For the UNet-P2PDA method, the average IoU value is 23.06, indicating that the adversarial generative approach is less effective in handling the upper and lower regions of equirectangular images. The image enhancement method achieves the highest IoU values of 42.80 and 42.04 for roads and sky, respectively. It is noteworthy that the training set construction procedure of the proposed method can itself be seen as a specialized form of image enhancement, targeted specifically at learning the distortion features of the upper and lower regions of equirectangular images; consequently, the model can better capture the distorted features of these regions and achieve superior performance. Considering individual categories, the proposed method achieves the highest IoU values of 51.50, 58.28, and 12.36 for buildings, vegetation, and cars, respectively. This is attributed to projecting perspective images onto equirectangular images, which enables the model to learn the distorted object shape features during training and improves its semantic segmentation capability.

## 4 Conclusion

360-degree spherical images offer the advantage of a wide field of view and are commonly projected onto a plane as an equirectangular image for further processing. However, object shapes in equirectangular images can be distorted, and they often lack translation invariance. Moreover, the scarcity of publicly available datasets containing labeled equirectangular images makes it challenging for standard convolutional neural network (CNN) models to process such images effectively. To address these challenges, we propose a methodology that converts a perspective image into an equirectangular image through the inverse transformation of the spherical center projection together with the equidistant cylindrical projection. By employing these transformations, standard CNN models can learn the distortion features at different positions within the equirectangular image, enabling them to perform semantic segmentation effectively.
The parameter \(\varphi\), which determines the projection position of the perspective image, has been thoroughly analyzed using various datasets and CNN models, including UNet, UNet++, SegNet, PSPNet, and DeepLab v3+. The experimental results consistently demonstrate that the optimal value of \(\varphi\) for effective semantic segmentation of equirectangular images with standard CNNs is 6\(\pi\)/16. Comparing our proposed method with three other types of approaches, namely supervised learning, unsupervised learning, and data augmentation, we find that our method outperforms them in terms of average Intersection over Union (IoU). Specifically, our method achieves an average IoU value of 33.76%, surpassing the corresponding values of the other three methods by 13.85%, 10.70%, and 7.23%, respectively, which highlights the superiority of the proposed methodology for semantically segmenting equirectangular images.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Methods & roads & buildings & vegetation & sky & pedestrians & cars & Average \\ \hline Tangent plane image method & 28.07 & 21.31 & 32.37 & 34.55 & 0.12 & 3.03 & 19.91 \\ \hline UNet-P2PDA & 30.80 & 38.58 & 34.87 & 27.38 & 0 & 6.74 & 23.06 \\ \hline Image enhancement & **42.80** & 35.85 & 35.52 & **42.04** & **0.18** & 2.76 & 26.53 \\ \hline Method in this paper (\(\varphi=6\pi/16\)) & 39.36 & **51.50** & **58.28** & 41.08 & 0 & **12.36** & **33.76** \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison of the method in this paper with other methods (IoU)

In future research, we intend to explore the adaptability of the proposed methodology to other tasks such as equirectangular image classification, object detection, and depth prediction. By extending our methodology to these areas, we hope to contribute to the broader field of computer vision and enable more comprehensive analysis and understanding of 360-degree spherical images.

The authors Haoqian Chen, Rencheng Sun and Yi Sui developed the initial idea for the study. The authors Haoqian Chen, Jian Liu and Minghe Li designed the research methodology, including data collection and experimental procedures. The authors Kaiwen Jiang, Minghe Li and Ziheng Xu created the figures and tables for the presentation of results. The authors Haoqian Chen and Yi Sui wrote the initial version of the manuscript. The authors Kaiwen Jiang and Yi Sui revised and edited the manuscript. The author Yi Sui provided oversight and guidance throughout the research. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

This research was supported by the Young Scientists Fund of the National Natural Science Foundation of China (41706198) and the Qingdao Independent Innovation Major Special Project (21-1-2-1hy).

All data generated or analyzed during this study are included in this published article.

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.
2303.06329
MetaViewer: Towards A Unified Multi-View Representation
Existing multi-view representation learning methods typically follow a specific-to-uniform pipeline, extracting latent features from each view and then fusing or aligning them to obtain the unified object representation. However, manually pre-specified fusion functions and the view-private redundant information mixed in features potentially degrade the quality of the derived representation. To overcome them, we propose a novel bi-level-optimization-based multi-view learning framework, where the representation is learned in a uniform-to-specific manner. Specifically, we train a meta-learner, namely MetaViewer, to learn fusion and model the view-shared meta representation in outer-level optimization. Starting with this meta representation, view-specific base-learners are then required to rapidly reconstruct the corresponding view in inner-level optimization. MetaViewer eventually updates by observing reconstruction processes from uniform to specific over all views, and learns an optimal fusion scheme that separates and filters out view-private information. Extensive experimental results in downstream tasks such as classification and clustering demonstrate the effectiveness of our method.
Ren Wang, Haoliang Sun, Yuling Ma, Xiaoming Xi, Yilong Yin
2023-03-11T07:17:28Z
http://arxiv.org/abs/2303.06329v1
# MetaViewer: Towards A Unified Multi-View Representation

###### Abstract

Existing multi-view representation learning methods typically follow a specific-to-uniform pipeline, extracting latent features from each view and then fusing or aligning them to obtain the unified object representation. However, manually pre-specified fusion functions and the view-private redundant information mixed in features potentially degrade the quality of the derived representation. To overcome them, we propose a novel bi-level-optimization-based multi-view learning framework, where the representation is learned in a uniform-to-specific manner. Specifically, we train a meta-learner, namely MetaViewer, to learn fusion and model the view-shared meta representation in outer-level optimization. Starting with this meta representation, view-specific base-learners are then required to rapidly reconstruct the corresponding view in inner-level optimization. MetaViewer eventually updates by observing reconstruction processes from uniform to specific over all views, and learns an optimal fusion scheme that separates and filters out view-private information. Extensive experimental results in downstream tasks such as classification and clustering demonstrate the effectiveness of our method.

## 1 Introduction

Multi-view representation learning mines a unified representation from multiple views of the same entity [27, 39, 55, 58]. Each view, acquired by a different sensor or source, contains both view-shared consistency information and view-specific information. The view-specific information in turn consists of complementary and redundant components: the former can be considered a supplement to the consistency information, while the latter is highly specific and may be adverse for the unified representation [8, 16]. Therefore, a high-quality representation is required to retain the consistency and complementary information, as well as to filter out the view-private redundant information [48].

Given data containing two views \(x_{1}\) and \(x_{2}\), the prevailing multi-view representation methods typically follow a _specific-to-uniform_ pipeline and can be roughly characterized as:

\[H:=f(x_{1};W_{f})\circ g(x_{2};W_{g}), \tag{1}\]

where \(f\) and \(g\) are encoding (or embedding [28]) functions that map the original view data into the corresponding latent features with the trainable parameters \(W_{f}\) and \(W_{g}\). These latent features are subsequently aggregated into the unified representation using the designed aggregation operator \(\circ\). With different aggregation strategies, existing approaches can be further subdivided into joint, alignment, and combined shared-specific (S\(\&\)S) representations [23, 27, 3]. Joint representation focuses on the integration of complementary information by directly fusing latent features, where \(\circ\) is realized by fusion strategies such as graph-based modules [41, 34], neural networks [33, 10], or other elaborate functions [7, 18, 37].

Figure 1: (a), (b) and (c) show three multi-view learning frameworks following the _specific-to-uniform_ pipeline, where the unified representation is obtained by fusing or concatenating view-specific features. (d) illustrates our _uniform-to-specific_ manner, where a meta-learner learns to fuse by observing the reconstruction from the unified representation to the specific views.
Alignment representation, in contrast, performs alignment between view-specific features to retain the consistency information, specifying \(\circ\) as an alignment operator measured by distance [11, 26], similarity [14, 25], or correlation [44, 1, 4]. As a trade-off, S\(\&\)S representation explicitly separates latent features into shared and specific parts and only aligns the shared part [23, 21, 50, 30]. Fig. 1 (a) - (c) show the above three branches of the _specific-to-uniform_ paradigm.

Despite demonstrating promising results, this aggregation manner inherently suffers from potential risks in two aspects: (1) the derived unified representation is usually the concatenation or fusion of learned latent features under manually pre-specified rules, which makes these methods hard to apply generally in practice, since the appropriate fusion scheme varies significantly with the downstream task and the training views [36]; (2) even given a well-performing fusion scheme, view-private redundant information mixed in the latent features degrades the quality of the fused unified representation. Several studies have noticed the second issue and attempted to distinguish the redundant information from view features via multi-level feature modeling [48] or matrix factorization [57]. However, recent works indicate that the view-specific information cannot be automatically separated at the feature level [23, 35]. In addition, the first issue has received little attention.

In this paper, we propose a novel multi-view representation learning framework based on bi-level-optimization meta-learning. In contrast to the _specific-to-uniform_ pipeline, our model emphasizes learning the unified representation in a _uniform-to-specific_ manner, as illustrated in Fig. 1 (d). In detail, we build a meta-learner, namely MetaViewer, to learn the fusion and model a unified meta representation in the outer-level optimization. Based on this meta representation, view-specific base-learners are required to rapidly reconstruct the corresponding view in the inner level. MetaViewer eventually updates by observing the reconstruction processes over all views, thus learning an optimal fusion rule and addressing the first issue. On the other hand, the rapid reconstruction from the uniform representation to specific views in the inner-level optimization essentially models the information that cannot be fused, i.e., the view-private parts, solving the second issue. After alternating training, the resulting meta representation is closer to each view, which indicates that it contains as much view-shared information as possible while avoiding the hindrance of redundant information.

Extensive experiments on multiple benchmarks validate the performance of our MetaViewer: the unified meta representation learned from multiple views achieves performance comparable to state-of-the-art methods in downstream tasks such as clustering and classification. The core contributions of this paper are as follows.

1. We propose a novel insight for multi-view representation learning, where the unified representation is learned in a _uniform-to-specific_ manner.

2. Our MetaViewer achieves data-driven fusion of view features in a meta-learning paradigm. To the best of our knowledge, this could be the first meta-learning-based work in multi-view representation scenarios.

3. MetaViewer decouples the modeling of view-shared and view-private information via bi-level optimization, alleviating the hindrance of redundant information.
4. Extensive experimental results validate the performance of our approach in several downstream tasks.

## 2 Related Work

### Multi-view learning

Multi-view representation learning is not a new topic and has been widely used in downstream tasks such as retrieval, classification, and clustering. This work focuses on multi-view representation in the unsupervised deep learning scope, and related works can be summarized into two main categories [51]. One is the deep extension of traditional methods, with representative examples including deep canonical correlation analysis (DCCA) [1] and its variants [44, 40, 54]. DCCA discovers nonlinear mappings of two views into a common space in which their canonical correlations are maximally preserved. These methods benefit from a sound theoretical foundation, but usually place strict restrictions on the number and form of views. The other is the multi-view deep network. Early deep approaches attempted to build different architectures for handling multi-view data, such as CNN-based [12, 38, 53] and GAN-based models [19, 49]. Some recent approaches focus on better parameter constraints using mutual information [2, 8], contrastive information [28, 52], etc. Most of these existing methods follow the _specific-to-uniform_ pipeline; in contrast, our MetaViewer learns from uniform to specific. The most related work is MFLVC [48], which separates view-private information from latent features at the parameter level. The essential difference is that we model the view-private information in the inner-level optimization, allowing the outer level to observe the modeling process and further meta-learn the optimal fusion scheme.

### Meta-learning

Optimization-based meta-learning is a classic application of bi-level optimization designed to learn task-level knowledge to quickly handle new tasks [20, 22]. A typical work, MAML [13], learns a set of initialization parameters to solve different tasks with a few update steps. A similar meta paradigm has been used to learn other manually designed parts, such as the network structure [29], the optimizer [45, 59], and even sample weights [32, 36]. Similarly, we meta-learn the fusion of multi-view features for a unified representation. There also exist works that consider both meta-learning paradigms and multi-view data [15, 31, 42]; however, they are dedicated to exploiting the rich information contained in multiple views to improve the performance of the meta-learner in few-shot tasks or self-supervised scenarios. Instead, we train a meta-learner to derive high-quality shared representations from multi-view data via bi-level optimization. To the best of our knowledge, this is the first work to learn multi-view representations with a meta-learning paradigm.

## 3 MetaViewer

Consider an unlabeled multi-view dataset \(\mathcal{D}=\{x_{i}\in\mathbb{R}^{d_{x}}\}_{i=1}^{N}\), where \(N\) is the number of samples and each sample entity \(x_{i}=\{x_{i}^{1},x_{i}^{2},\ldots,x_{i}^{V}\}\) contains \(V\) views. The view-incomplete scenario means that some views of some samples are missing or unavailable, i.e., \(\mathcal{D}_{inc}=\{\mathcal{D}_{c},\{\mathcal{D}_{u}^{v}\}_{v=1}^{V}\}\), where \(\mathcal{D}_{c}\) and \(\mathcal{D}_{u}^{v}\) denote the subsets of samples with complete views and with the \(v\)-\(th\) view unavailable, respectively.
Our goal is to learn a unified high-quality representation \(H\) for each entity by observing the available views and filtering out view-private information as much as possible. The overall framework of our MetaViewer is shown in Fig. 2, comprising three main modules and a bi-level optimization process. The outer level trains a meta-learner to learn an optimal fusion function and derive the unified representation \(H\), while the inner level reconstructs the original views from \(H\) in a few update steps, which explicitly models and separates view-private information and ensures the representation quality. In the following subsections, we first introduce the entire structure of the MetaViewer and then elaborate on the bi-level optimization process.

### The entire structure

**Embedding module** aims to transform each heterogeneous view into a latent feature space in which the transformed view embeddings share the same dimension. To this end, we construct a view-specific embedding function \(f_{v}\) for each view, \(v=1,2,\ldots,V\). Given the \(v\)-\(th\) view data \(x^{v}\) of the entity \(x\), the corresponding embedding \(z^{v}\) is computed by

\[z^{v}=f_{v}(x^{v},\phi_{f_{v}}), \tag{2}\]

where \(z^{v}\in\mathbb{R}^{d}\) and \(f_{v}\) is typically instantiated as a multi-layer neural network with learnable parameters \(\phi_{f_{v}}\).

**Representation learning module** maps the obtained embeddings to view representations, consisting of view-specific base-learners \(\{b_{v}\}_{v=1}^{V}\) and a view-shared meta-learner \(m\) (i.e., MetaViewer). The former learns a representation for each view embedding, while the latter takes all embeddings as input and outputs the unified representation \(H\) that is ultimately used for downstream tasks. Meanwhile, the base-learners are required to be initialized from the parameters of the meta-learner to learn the view-shared meta representation (see Sec. 3.2); thus the two types of learners should share their structure rather than be individually designed. To meet both requirements simultaneously, MetaViewer is implemented as a channel-oriented \(1\)-\(d\) convolutional layer (C-Conv) with a non-linear function (e.g., ReLU [17]), as shown in Fig. 2 (c). On the one hand, as a meta-learner, we first concatenate the embeddings at the channel level, i.e., the number of channels in the concatenated feature equals the number of views, and then train the MetaViewer to learn the fusion of cross-view information \(H\in\mathbb{R}^{d_{h}}\) by

\[H=m(z^{cat},\omega), \tag{3}\]

where \(z^{cat}\in\mathbb{R}^{d\times V}\) denotes the concatenated embedding and \(\omega\) is the parameter of the MetaViewer. On the other hand, the base-learners can be initialized and trained for learning the \(v\)-\(th\) representation \(h_{base}^{v}\) via

\[h_{base}^{v}=b_{v}(z^{v},\theta_{b_{v}}(\omega_{sub})), \tag{4}\]

where \(h_{base}^{v}\in\mathbb{R}^{d_{h}}\) and \(\theta_{b_{v}}(\omega_{sub})\), or \(\theta_{b_{v}}(\omega)\) for short, means that the base-learner's parameter \(\theta_{b_{v}}\) is initialized from the channel(view)-related part \(\omega_{sub}\) of the MetaViewer's parameters. Note that this sub-network mechanism also provides a convenient way to handle incomplete views.
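The following is a minimal PyTorch sketch of the C-Conv meta-learner and the sub-network initialization of Eq. (4); the kernel size, the exact slicing rule for \(\omega_{sub}\), and the simplification \(d_{h}=d\) are our assumptions.

```python
import torch
import torch.nn as nn

class MetaViewerLayer(nn.Module):
    """Channel-oriented 1-d convolution (C-Conv) fusing V view embeddings."""
    def __init__(self, num_views, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(num_views, 1, kernel, padding=kernel // 2)
        self.act = nn.ReLU()

    def forward(self, z_list):                  # V tensors of shape (B, d)
        z_cat = torch.stack(z_list, dim=1)      # (B, V, d): views as channels
        return self.act(self.conv(z_cat)).squeeze(1)  # unified H of shape (B, d)

    def make_base_learner(self, v):
        """Initialize the v-th base-learner from the view-related slice
        omega_sub of the meta parameters (theta_b(omega) in Eq. 4)."""
        k = self.conv.kernel_size[0]
        base = nn.Conv1d(1, 1, k, padding=k // 2)
        with torch.no_grad():
            base.weight.copy_(self.conv.weight[:, v:v + 1, :])
            base.bias.copy_(self.conv.bias)
        return base

meta = MetaViewerLayer(num_views=2)
H = meta([torch.randn(8, 256), torch.randn(8, 256)])  # H: (8, 256)
```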
**Self-supervised module** conducts pretext tasks to provide effective self-supervised objectives for model training, represented by different heads. Typically, the reconstruction head \(r\) achieves the reconstruction objective by mapping the representation back to the original view space, i.e.,

\[x_{rec}^{v}=r_{v}(z_{rec}^{v},\phi_{r_{v}}), \tag{5}\]

where \(x_{rec}^{v}\in\mathbb{R}^{d_{x}}\) is the reconstruction result and \(r_{v}\) is the reconstruction function with learnable parameters \(\phi_{r_{v}}\). Besides that, we can also attach a contrastive or correlation head to the meta-learner to mine the associations across views. Similar self-supervised objectives [54, 52, 28, 40] have been extensively studied in multi-view learning, and they are not the focus of this work.

### Training via bi-level optimization

We have now constructed the entire structure, which could be trained end-to-end to derive the unified multi-view representation, even for incomplete views, in the _specific-to-uniform_ manner like most existing approaches. However, the data-driven fusion and the view-private redundant information still cannot be handled well in this way. We therefore turn to the opposite _uniform-to-specific_ way, using a bi-level optimization process inspired by the meta-learning paradigm. The inner level focuses on training the view-specific modules for the corresponding views, and the outer level updates the meta-learner to find the optimal fusion rule by observing the learning over all views. Before the detailed description, we introduce a meta-learning-style split of the multi-view data.

**Meta-split of multi-view data**. Consider a batch of multi-view samples \(\{\mathcal{D}^{v}_{batch}\}_{v=1}^{V}\) from \(\mathcal{D}\). For the bi-level updating, we randomly and proportionally divide it into two disjoint subsets, marked as the support set \(S\) and the query set \(Q\), respectively. As shown in Fig. 2 (a), the support set is used in the inner level for learning view-specific information; thus the sample attributes in it can be ignored. In contrast, the query set retains both view and sample attributes for meta-learner training in the outer-level optimization. This meta-split, which decouples views from samples, can be naturally transferred to data with incomplete views \(\mathcal{D}_{inc}=\{\mathcal{D}_{c},\{\mathcal{D}^{v}_{u}\}_{v=1}^{V}\}\), where the subsets with incomplete views \(\{\mathcal{D}^{v}_{u}\}_{v=1}^{V}\) are used as the support set and the complete part \(\mathcal{D}_{c}\) is left as the query set.

**Inner-level optimization**. Without loss of generality, take the inner-level update after the \(o\)-\(th\) outer-level optimization as an example. Let \(\omega^{o}\) be the latest parameters of the MetaViewer, and let \(\phi^{o}_{v}=\{\phi^{o}_{f_{v}},\phi^{o}_{r_{v}}\}\) denote the latest parameters of the embedding and self-supervised modules for brevity. We first initialize the base-learner from the meta-learner, i.e., \(\theta^{0}_{b_{v}}=\omega^{o}\), and make a copy \(\tilde{\phi}_{v}\) of \(\phi^{o}_{v}\). Note that the _copy_ means that gradients with respect to \(\phi^{o}_{v}\) will not be back-propagated to \(\tilde{\phi}_{v}\) and vice versa. Thus, \(\theta^{0}_{b_{v}}\) and \(\tilde{\phi}_{v}\) form the initial state for the inner-level optimization. Suppose \(\mathcal{L}^{v}_{inner}\) is the inner-level loss function with respect to the \(v\)-\(th\) view; the corresponding update goals are then

\[\theta^{*}_{b_{v}}(\omega),\tilde{\phi}^{*}_{v}=\arg\min\mathcal{L}^{v}_{inner}\left(\theta_{b_{v}}(\omega^{o}),\tilde{\phi}_{v};S^{v}\right). \tag{6}\]
Figure 2: The overall framework of MetaViewer contains (a) the meta-split of multi-view data and three modules: (b) embedding module, (c) representation module, and (d) self-supervised module. These modules are trained with a bi-level optimization. The inner level (dark gray arrows) learns the view-specific reconstruction on the support set, and the outer level (red arrows) updates the entire model to learn the fusion scheme and the unified representation \(H\) for downstream tasks by validating on the query set.

Figure 3: Intuitively, the _specific-to-uniform_ manner observes view features and learns a representation that falls in a reconstructable space. MetaViewer observes the reconstruction process and seeks a unified meta representation that is as close as possible to each view.

Considering a gradient descent strategy (e.g., SGD [4]), we can further write the update process of \(\theta_{b_{v}}\) as:

\[\theta_{b_{v}}^{i}=\theta_{b_{v}}^{i-1}-\beta\frac{\partial\mathcal{L}_{inner}^{v}}{\partial\theta_{b_{v}}^{i-1}},\quad\ldots,\quad\theta_{b_{v}}^{0}=\omega^{o}, \tag{7}\]

where \(\beta\) and \(i\) denote the learning rate and the iteration step of the inner-level optimization, respectively.

**Outer-level optimization**. After several inner-level updates, we obtain a set of optimal view-specific parameters on the support set. The outer level then updates the meta-learner, embedding, and head modules by training on the query set. With the loss function \(\mathcal{L}_{outer}\), the outer-level optimization goal is

\[\omega^{*},\{\phi_{v}^{*}\}_{1}^{V}=\arg\min\mathcal{L}_{outer}\left(\theta^{*}(\omega),\{\phi_{v}\}_{1}^{V};Q\right). \tag{8}\]

By alternately optimizing Eq. 6 and Eq. 8, we end up with the optimal meta-parameters \(\omega^{*}\) and a set of view-specific parameters \(\phi_{v}^{*}\). For a test sample \(x_{test}\), its representation is derived by feeding it sequentially through the embedding functions and the meta-learner. The overall training procedure of our MetaViewer is summarized in Alg. 1.

```
0: Training dataset \(\mathcal{D}\), meta parameters \(\omega\), base parameters \(\{\theta_{v}\}_{v=1}^{V}\), view-specific parameters \(\{\phi_{v}\}_{v=1}^{V}\), the number of views \(V\), the number of inner-level iteration steps \(T\).
1: Initialize \(\omega\), \(\{\phi_{v}\}_{v=1}^{V}\);
2: while not done do
3:   # Outer-level
4:   Sample and meta-split a batch set from \(\mathcal{D}\):
5:   \(\{\mathcal{D}_{batch}^{v}\}_{v=1}^{V}=\{S,Q\}\).
6:   for \(t=1,\ldots,T\) do
7:     for \(v=1,\ldots,V\) do
8:       # Inner-level
9:       Initialize \(\theta_{v}=\theta_{v}(\omega)\), \(\tilde{\phi_{v}}=\phi_{v}\);
10:      Optimize \(\theta_{v}(\omega)\) and \(\tilde{\phi_{v}}\) via Eq. 6.
11:     end for
12:   end for
13:   Optimize \(\omega\) via Eq. 8.
14:   Optimize \(\{\phi_{v}\}_{v=1}^{V}\) via Eq. 8.
15: end while
```
**Algorithm 1** The framework of our MetaViewer.

### Specific-to-uniform versus uniform-to-specific

We discuss the difference between the _specific-to-uniform_ and _uniform-to-specific_ paradigms through the update of the fusion parameters \(\omega\) with a reconstruction loss \(\mathcal{L}_{rec}^{v}\). Using the same structure described in Sec. 3.1, the _specific-to-uniform_ manner generally optimizes \(\omega\) by minimizing the reconstruction losses over all views, i.e., \(\omega^{*}=\arg\min\sum_{v=1}^{V}\mathcal{L}_{rec}^{v}(r_{v}(m(z^{cat},\omega),\phi_{r_{v}}),x_{v})\), and \(\omega\) is updated (with SGD) by

\[\omega\leftarrow\omega-\alpha\sum_{v=1}^{V}\nabla_{\omega}\mathcal{L}_{rec}^{v}(\omega). \tag{9}\]
The optimal \(\omega^{*}\) observes all views and derives the unified representation \(H\); that is, it goes from the particular to the general. In contrast, the update of \(\omega\) in our _uniform-to-specific_ manner can be written as

\[\omega\leftarrow\omega-\alpha\sum_{v=1}^{V}\nabla_{\omega}\mathcal{L}_{rec}^{v}\left(\theta_{b_{v}}^{*}(\omega)\right). \tag{10}\]

Note that \(\theta_{b_{v}}^{*}(\omega)\) contains the inner-level optimization process of each view as in Eq. (6), which means that the optimal \(\omega^{*}\) is updated by observing the reconstruction from the unified representation to the specific views. Fig. 3 intuitively demonstrates the difference between these two manners.

### The instances of the objective function

Our uniform-to-specific framework emphasizes learning from reconstruction in the inner level; thus \(\mathcal{L}_{inner}^{v}\) is specified as the reconstruction loss [56, 28]

\[\mathcal{L}_{inner}^{v}=\mathcal{L}_{rec}^{v}(S^{v},S_{rec}^{v})=\|S^{v}-S_{rec}^{v}\|_{F}^{2}. \tag{11}\]

The parameters updated in the outer level can be constrained by richer self-supervised feedback, as mentioned in Sec. 3.1. Here we provide two instances of the outer-level loss function to demonstrate how MetaViewer can be extended with different learning objectives.

**MVer-R** adopts the same reconstruction loss as the inner level, \(\mathcal{L}_{outer}=\sum_{v}\mathcal{L}_{rec}^{v}(Q^{v},Q_{rec}^{v})\), which is the purest implementation of MetaViewer.

**MVer-C** additionally utilizes a contrastive objective, where the similarities of views belonging to the same entity (i.e., positive pairs) should be maximized and those of different entities (i.e., negative pairs) should be minimized, i.e., \(\mathcal{L}_{outer}=\sum_{v}\left(\mathcal{L}_{rec}^{v}+\sum_{v^{\prime},v^{\prime}\neq v}\mathcal{L}_{con}^{v,v^{\prime}}\right)\). Following previous work [19, 48, 52], the contrastive loss \(\mathcal{L}_{con}^{v,v^{\prime}}\) is formed as

\[\mathcal{L}_{con}^{v,v^{\prime}}=-\frac{1}{N_{Q}}\sum_{i=1}^{N_{Q}}\log\frac{e^{d(q_{i}^{v},q_{i}^{v^{\prime}})/\tau}}{\sum_{j=1,j\neq i}^{N_{Q}}e^{d(q_{i}^{v},q_{j}^{v})/\tau}+\sum_{j=1}^{N_{Q}}e^{d(q_{i}^{v},q_{j}^{v^{\prime}})/\tau}}, \tag{12}\]

where \(q_{i}^{v}\) is the \(v\)-\(th\) view of the \(i\)-\(th\) query sample and \(d\) is a similarity metric (e.g., cosine similarity [6]). \(N_{Q}\) and \(\tau\) denote the number of query-set samples and the temperature parameter, respectively. Note that the derived meta representation can also be used in contrastive learning as an additional novel view.
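To tie Eq. (6)-(10) together, here is a condensed, differentiable sketch of one bi-level step under the MVer-R objective, using the `MetaViewerLayer` sketched earlier; for brevity only the base-learner parameters are adapted in the inner loop (the copies \(\tilde{\phi}_{v}\) of the embedding and head parameters are omitted), and all names and hyper-parameters are our assumptions. `outer_opt` is assumed to hold the parameters of `meta`, `embed`, and `recon`.

```python
import torch
import torch.nn.functional as F

def bilevel_step(meta, embed, recon, support, query, outer_opt,
                 inner_steps=3, inner_lr=1e-2):
    """One alternation of Alg. 1 (MVer-R loss, base-learner-only inner loop)."""
    outer_loss = 0.0
    for v, (s_v, q_v) in enumerate(zip(support, query)):
        # theta_b initialized from the view-related slice of omega (Eq. 4)
        w, b = meta.conv.weight[:, v:v + 1, :], meta.conv.bias
        pad = w.shape[-1] // 2
        for _ in range(inner_steps):            # inner level, Eq. 6-7
            h = F.relu(F.conv1d(embed[v](s_v).unsqueeze(1), w, b, padding=pad))
            loss = ((recon[v](h.squeeze(1)) - s_v) ** 2).mean()  # Eq. 11
            gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
            w, b = w - inner_lr * gw, b - inner_lr * gb
        # outer level, Eq. 8: validate the adapted learner on the query set
        h_q = F.relu(F.conv1d(embed[v](q_v).unsqueeze(1), w, b, padding=pad))
        outer_loss = outer_loss + ((recon[v](h_q.squeeze(1)) - q_v) ** 2).mean()
    outer_opt.zero_grad()
    outer_loss.backward()   # gradients flow back to omega through the inner steps
    outer_opt.step()
```

In the full model the embedding and reconstruction copies are adapted in the inner loop as well, and the MVer-C variant adds the contrastive term of Eq. (12) to the outer loss.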
## 4 Experiments

In this section, we present extensive experimental results to validate the quality of the unified representation derived from our MetaViewer. The remainder of this section is organized as follows: Subsection 4.1 lists the datasets, compared methods, and implementation details; Subsection 4.2 compares the performance of our method with classical and state-of-the-art methods on two common downstream scenarios, clustering and classification; the comparison with manually designed fusion and the ablation studies are presented in Subsections 4.3 and 4.4, respectively.

### Experimental Setup

**Datasets**. To comprehensively evaluate the effectiveness of our MetaViewer, we employ six multi-view benchmarks in the experiments. All datasets are scaled to \([0,1]\) and split into training, validation, and test sets in the ratio of \(6:2:2\), as shown in Tab. 1. More details of the datasets are given in Appendix A.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Datasets & \#Views & \#Classes & \#Samples (train, val, test) & View Dimensions \\ \hline BDGP & 2 & 5 & 2500 (1500, 500, 500) & 1750; 79 \\ Handwritten & 2 & 10 & 2000 (1200, 400, 400) & 240; 216 \\ RGB-D & 2 & 50 & 500 (300, 100, 100) & 12288; 4096 \\ Fashion-MV & 3 & 10 & 10000 (6000, 2000, 2000) & 784; 784; 784 \\ MSRA & 6 & 7 & 210 (126, 42, 42) & 1302; 48; 512; 100; 256; 210 \\ Caltech101-20 & 6 & 20 & 2386 (1425, 469, 492) & 48; 40; 254; 1984; 512; 928 \\ \hline \hline \end{tabular} \end{table} Table 1: The attributes of all datasets used in our experiments.

* **BDGP** is an image and text dataset of \(2,500\) drosophila embryo images in \(5\) categories. Each image is described by a \(79\)-D textual feature vector and a \(1,750\)-D visual feature vector [5].
* **Handwritten** contains \(2,000\) handwritten digit images from \(0\) to \(9\). Two types of descriptors, i.e., \(240\)-D pixel averages in \(2\times 3\) windows and \(216\)-D profile correlations, are selected as the two views [56].
* **RGB-D** contains visual and depth images of \(300\) distinct objects across \(50\) categories [43, 60]. The two views are obtained by flattening the \(64\times 64\times 3\) color images and \(64\times 64\) depth images.
* **Fashion-MV** is an image dataset that contains \(10\) categories with a total of \(30,000\) fashion products. It has three views, each of which consists of 10,000 gray images sampled from the same category [47].
* **MSRA** [46] consists of \(210\) scene recognition images
* **Caltech101-20** is a subset of the Caltech101 image set [9], which consists of \(2,386\) images of \(20\) subjects. Six features are used, including Gabor, Wavelet Moments, CENTRIST, HOG, GIST, and LBP.

**Compared methods.** We compare the performance of MetaViewer with five representative multi-view learning methods, including two classical methods (DCCA [1] and DCCAE [44]) and three state-of-the-art methods (MIB [8], MFLVC [48] and DCP [28]). Among them, DCCA and DCCAE are deep extensions of traditional correlation strategies. MIB is a typical generative method with mutual information constraints. DCP learns a unified representation from both complete and incomplete views. In particular, MFLVC also notices the view-private redundant information and designs a multi-level feature network for clustering tasks.

**Implementation details.** For a fair comparison, all methods are trained from scratch and share the same backbone listed in Appendix B. We concatenate the latent features of all views in the compared methods to obtain the unified representation \(H\) with the same dimension \(d_{h}=256\), and verify their performance in clustering and classification tasks using K-means and a linear SVM, respectively. For MetaViewer, we train for \(2,000\) epochs on all benchmarks, and set the batch size to \(32\) for RGB-D and MSRA and \(256\) for the others. The learning rates in the outer and inner levels are set to \(10^{-3}\) and \(10^{-2}\), respectively. All experiments are conducted with the PyTorch library on a single RTX3090.

### Performance on downstream tasks

**Clustering results**. Tab. 2 lists the results of the clustering task, where the performance is measured by three standard evaluation metrics, i.e., Accuracy (ACC), Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI). A higher value of these metrics indicates better clustering performance. It can be observed that (1) our MVer-C variant significantly outperforms the other compared methods on all benchmarks except MSRA; (2) the second-best results appear between MVer-R and MFLVC, both of which explicitly separate the view-private information; (3) a larger number of categories and views is the main reason for the degradation of clustering performance, and our MetaViewer improves most significantly in such scenarios (e.g., Fashion-MV and Caltech101-20).

**Classification results**. Tab. 3 lists the results of the classification task, where three common metrics are used, including Accuracy, Precision, and F-score. A higher value indicates better classification performance. Similar to the clustering results, the two variants of MetaViewer significantly outperform the comparison methods. It is worth noting that (1) DCP learns a unified generic representation and therefore achieves the second-best results instead of MFLVC.
(2) The number of categories is the main factor affecting the classification performance, and our method obtains the most significant improvement on the RGB-D dataset with \(50\) classes. More results, including incomplete views, are deferred to Appendix C.

\begin{table} \begin{tabular}{c|c|c c c c c c c} \hline \hline Datasets & Metrics & DCCA [1] & DCCAE [44] & MIB [8] & MFLVC [48] & DCP [28] & MVer-R & MVer-C \\ \hline \multirow{3}{*}{BDGP} & ACC & 0.9840 & **0.9865** & 0.8900 & 0.9820 & 0.9720 & 0.9860 & 0.9800 \\ & Precision & 0.9842 & 0.9863 & 0.9005 & 0.9822 & 0.9726 & **0.9871** & 0.9859 \\ & F-score & 0.9840 & 0.9850 & 0.8884 & 0.9820 & 0.9720 & **0.9859** & 0.9802 \\ \hline \multirow{3}{*}{Handwritten} & ACC & 0.8825 & 0.9000 & 0.7900 & 0.9400 & 0.9725 & 0.9700 & **0.9775** \\ & Precision & 0.8920 & 0.9048 & 0.8390 & 0.9420 & 0.9730 & 0.9708 & **0.9790** \\ & F-score & 0.8805 & 0.8992 & 0.7852 & 0.9401 & 0.9724 & 0.9700 & **0.9775** \\ \hline \multirow{3}{*}{RGB-D} & ACC & 0.3000 & 0.2400 & 0.3300 & 0.4400 & 0.3700 & 0.5100 & **0.5600** \\ & Precision & 0.2110 & 0.1600 & 0.2850 & 0.4609 & 0.2887 & **0.5365** & 0.5520 \\ & F-score & 0.2204 & 0.1691 & 0.2737 & 0.4181 & 0.3078 & 0.4873 & **0.5278** \\ \hline \multirow{3}{*}{Fashion-MV} & ACC & 0.8490 & 0.8535 & 0.8680 & 0.9650 & 0.8925 & 0.9685 & **0.9770** \\ & Precision & 0.8522 & 0.8597 & 0.8680 & 0.9652 & 0.8206 & 0.9637 & **0.9678** \\ & F-score & 0.8354 & 0.8384 & 0.8655 & 0.9649 & 0.8290 & 0.9648 & **0.9707** \\ \hline \multirow{3}{*}{MSRA} & ACC & 0.2381 & 0.2429 & 0.3619 & 0.6905 & 0.9048 & **0.9371** & 0.9270 \\ & Precision & 0.2053 & 0.2204 & 0.2498 & 0.7129 & 0.9153 & **0.9393** & 0.9317 \\ & F-score & 0.2422 & 0.2357 & 0.2773 & 0.6895 & 0.9037 & **0.9391** & 0.9277 \\ \hline \multirow{3}{*}{Caltech101-20} & ACC & 0.7154 & 0.7154 & 0.7272 & 0.8537 & **0.9248** & 0.9228 & 0.9216 \\ & Precision & 0.4527 & 0.6057 & 0.6164 & 0.7183 & 0.8941 & 0.8946 & **0.9068** \\ & F-score & 0.3981 & 0.4325 & 0.5247 & 0.6907 & 0.8458 & 0.8421 & **0.8572** \\ \hline \hline \end{tabular} \end{table} Table 3: Classification results of all methods on six datasets. Bold and underline denote the best and second-best results, respectively.

### Comparison with manually designed fusion

As mentioned in Sec. 3.3, MetaViewer essentially learns to learn an optimal fusion function that filters out the view-private information. To verify this, we compare it with commonly used fusion strategies [27], including _sum_, _maxima_, _concatenation_, _linear layer_ and _C-Conv_. The former three are specified fusion rules without trainable parameters, and the remaining two are trainable fusion layers trained in the _specific-to-uniform_ manner. Tab. 4 lists the clustering results and an additional MSE score on the Handwritten dataset with the same embedding and reconstruction network (see Tab. 1). We can observe that (1) trainable fusion layers outperform the hand-designed rules, and our MetaViewer yields the best performance; (2) the MSE scores listed in the last column indicate that the quality of the unified representation cannot be measured and guaranteed by the reconstruction constraint alone, due to the view-private redundant information mixed into the view-specific latent features.

### Ablation Studies

**Meta-learner structures**. We implement the meta-learner as channel-level convolution layers in this work.
Albeit simple, this layer can be regarded as a universal approximator for almost any continuous function [36], and thus can fit a wide range of conventional fusion functions. To investigate the effect of network depth, width, and convolution kernel size on the performance of the representation, we alternately fix the number of kernels at \(32\) and the kernel size at \(1\times 3\), and show the classification results on the Handwritten data in Fig. 4. It is clear that (1) the meta-learner works well with just a shallow structure as shown in Fig. 4 (a), instead of gradually overfitting to the training data as the network deepens or widens; (2) our MetaViewer is stable and insensitive to the hyper-parameters within reasonable ranges.

**Meta-split ratios**. Fig. 5 (a) shows the impact of the meta-split mentioned in Sec. 3.2 on the classification performance, where the proportion of the support set ranges from \(0.1\) to \(0.9\) in steps of \(0.1\), and the rest is the query set. In addition to the single views, we also compare the _sum_ and _concat._ fusion as baselines. MetaViewer consistently surpasses all baselines over the experimented proportions. In addition, the fusion baselines depend more on the better-performing view at lower proportions and become unstable as the available query samples decrease.

**Inner-level update steps**. Another hyper-parameter is the number of iteration steps in the inner-level optimization. More iterations mean a larger gap from the learned meta representation to the specific view space, i.e., coarser modeling of the view-private information. Fig. 5 (b) shows the classification results with various steps, where \(n\) steps means that the inner-level optimization is updated \(n\) times throughout the training. MetaViewer achieves the best results when using \(1\) step, and remains stable within \(15\) steps.

## 5 Conclusion

This work introduced a novel meta-learning perspective for multi-view learning, and proposed a meta-learner, namely MetaViewer, to derive a high-quality unified representation for downstream tasks. In contrast to the prevailing _specific-to-uniform_ pipeline, MetaViewer observes the reconstruction process from the unified representation to the specific views and essentially learns an optimal fusion function that separates and filters out meaningless view-private information. Extensive experimental results on clustering and classification tasks demonstrate the quality of the meta-learned unified representation.

\begin{table} \begin{tabular}{c|c|c c c|c} \hline \hline Strategies & Rules & ACC\(\uparrow\) & NMI\(\uparrow\) & ARI\(\uparrow\) & MSE\(\downarrow\) \\ \hline Sum & \(z^{x}+z^{y}\) & 69.25 & 71.89 & 59.02 & 1.84 \\ Max & \(\max(z^{x},z^{y})\) & 80.75 & 73.93 & 63.75 & - \\ Concat. & \(\mathrm{cat}[z^{x},z^{y}]\) & 78.75 & 72.02 & 61.52 & **1.77** \\ Linear & \(l(z^{x},z^{y},\theta_{i})\) & 85.00 & 77.40 & 69.71 & 4.74 \\ C-Conv & \(m(z^{x},z^{y},\omega)\) & 69.75 & 65.21 & 51.33 & 2.37 \\ MetaViewer & _meta-learning_ & **86.25** & **78.96** & **72.25** & 2.45 \\ \hline \hline \end{tabular} \end{table} Table 4: Clustering results of different fusion strategies on the Handwritten dataset.

Figure 4: Effect of meta-learner architectures with different depth, width, and kernel size on classification accuracy.

Figure 5: Effect of (a) different meta-division ratios and (b) the number of inner-loop iterations on classification accuracy.
2310.06211
On convergence rates of proximal alternating direction method of multipliers
In this paper we consider from two different aspects the proximal alternating direction method of multipliers (ADMM) in Hilbert spaces. We first consider the application of the proximal ADMM to solve well-posed linearly constrained two-block separable convex minimization problems in Hilbert spaces and obtain new and improved non-ergodic convergence rate results, including linear and sublinear rates under certain regularity conditions. We next consider proximal ADMM as a regularization method for solving linear ill-posed inverse problems in Hilbert spaces. When the data is corrupted by additive noise, we establish, under a benchmark source condition, a convergence rate result in terms of the noise level when the number of iterations is properly chosen.
Qinian Jin
2023-10-09T23:52:25Z
http://arxiv.org/abs/2310.06211v1
# On convergence rates of proximal alternating direction method of multipliers

###### Abstract.

In this paper we consider from two different aspects the proximal alternating direction method of multipliers (ADMM) in Hilbert spaces. We first consider the application of the proximal ADMM to solve well-posed linearly constrained two-block separable convex minimization problems in Hilbert spaces and obtain new and improved non-ergodic convergence rate results, including linear and sublinear rates under certain regularity conditions. We next consider proximal ADMM as a regularization method for solving linear ill-posed inverse problems in Hilbert spaces. When the data is corrupted by additive noise, we establish, under a benchmark source condition, a convergence rate result in terms of the noise level when the number of iterations is properly chosen.

Key words and phrases: proximal alternating direction method of multipliers, linearly constrained convex programming, linear inverse problems, convergence rates

## 1. **Introduction**

The alternating direction method of multipliers (ADMM) was introduced and developed in the 1970s by Glowinski, Marrocco [16] and Gabay, Mercier [15] for the numerical solution of partial differential equations. Due to its decomposability and superior flexibility, ADMM and its variants have gained renewed interest in recent years and have been widely used for solving large-scale optimization problems that arise in signal/image processing, statistics, machine learning, inverse problems and other fields, see [5, 17, 21]. Because of their popularity, many works have been devoted to the analysis of ADMM and its variants, see [5, 8, 10, 14, 19, 26, 33] for instance. In this paper we are devoted to deriving convergence rates of ADMM in two aspects: its application to solve well-posed convex optimization problems and its use to solve linear ill-posed inverse problems as a regularization method. In the first part of this paper we consider ADMM for solving linearly constrained two-block separable convex minimization problems. Let \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{Z}\) be real Hilbert spaces with possibly infinite dimensions. We consider the convex minimization problem of the form \[\begin{split}\text{minimize}&\quad H(x,y):=f(x)+g(y)\\ \text{subject to}&\quad Ax+By=c,\end{split} \tag{1.1}\] where \(c\in\mathcal{Z}\), \(A:\mathcal{X}\to\mathcal{Z}\) and \(B:\mathcal{Y}\to\mathcal{Z}\) are bounded linear operators, and \(f:\mathcal{X}\to(-\infty,\infty]\) and \(g:\mathcal{Y}\to(-\infty,\infty]\) are proper, lower semi-continuous, convex functions. The classical ADMM solves (1.1) approximately by constructing an iterative sequence via alternately minimizing the augmented Lagrangian function \[\mathscr{L}_{\rho}(x,y,\lambda):=f(x)+g(y)+\langle\lambda,Ax+By-c\rangle+\frac{\rho}{2}\|Ax+By-c\|^{2}\] with respect to the primal variables \(x\) and \(y\) and then updating the dual variable \(\lambda\); more precisely, starting from an initial guess \(y^{0}\in\mathcal{Y}\) and \(\lambda^{0}\in\mathcal{Z}\), an iterative sequence \(\{(x^{k},y^{k},\lambda^{k})\}\) is defined by \[x^{k+1} =\arg\min_{x\in\mathcal{X}}\left\{f(x)+\langle\lambda^{k},Ax\rangle+\frac{\rho}{2}\|Ax+By^{k}-c\|^{2}\right\}, \tag{1.2}\] \[y^{k+1} =\arg\min_{y\in\mathcal{Y}}\left\{g(y)+\langle\lambda^{k},By\rangle+\frac{\rho}{2}\|Ax^{k+1}+By-c\|^{2}\right\},\] \[\lambda^{k+1} =\lambda^{k}+\rho(Ax^{k+1}+By^{k+1}-c),\] where \(\rho>0\) is a given penalty parameter.
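To make the alternating structure of (1.2) concrete, consider the toy problem \(\min\{\mu\|x\|_{1}+\frac{1}{2}\|y-d\|^{2}:x-y=0\}\) in \(\mathbb{R}^{n}\), i.e. \(A=I\), \(B=-I\) and \(c=0\), for which both subproblems admit closed-form solutions. The following NumPy sketch (ours, purely illustrative; the problem instance and all parameter values are assumptions, not taken from the paper) runs the iteration (1.2):

```python
import numpy as np

def soft(v, t):
    # proximal map of t * ||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_toy(d, mu=0.5, rho=1.0, iters=200):
    # classical ADMM (1.2) for min mu*||x||_1 + 0.5*||y - d||^2 s.t. x - y = 0
    x = np.zeros_like(d); y = np.zeros_like(d); lam = np.zeros_like(d)
    for _ in range(iters):
        x = soft(y - lam / rho, mu / rho)        # x-subproblem in closed form
        y = (d + lam + rho * x) / (1.0 + rho)    # y-subproblem in closed form
        lam = lam + rho * (x - y)                # multiplier update
    return x, y, lam

x, y, lam = admm_toy(np.array([2.0, -0.3, 1.2, 0.1]))
print(x)  # approaches soft(d, mu) = [1.5, 0, 0.7, 0], the minimizer
```

In this toy instance both subproblems are explicit; as discussed next, this is not the case in general.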
The implementation of (1.2) requires determining \(x^{k+1}\) and \(y^{k+1}\) by solving two convex minimization problems during each iteration. Although \(f\) and \(g\) may have special structures so that their proximal mappings are easy to determine, solving the minimization problems in (1.2) in general is highly nontrivial due to the appearance of the terms \(\|Ax\|^{2}\) and \(\|By\|^{2}\). In order to avoid this implementation issue, one may consider adding certain proximal terms to the \(x\)-subproblem and \(y\)-subproblem in (1.2) to remove the terms \(\|Ax\|^{2}\) and \(\|By\|^{2}\). For any bounded linear positive semi-definite self-adjoint operator \(D\) on a real Hilbert space \(\mathcal{H}\), we will use the notation \[\|u\|_{D}^{2}:=\langle u,Du\rangle,\quad\forall u\in\mathcal{H}.\] By taking two bounded linear positive semi-definite self-adjoint operators \(P:\mathcal{X}\to\mathcal{X}\) and \(Q:\mathcal{Y}\to\mathcal{Y}\), we may add the terms \(\frac{1}{2}\|x-x^{k}\|_{P}^{2}\) and \(\frac{1}{2}\|y-y^{k}\|_{Q}^{2}\) to the \(x\)- and \(y\)-subproblems in (1.2) respectively to obtain the following proximal alternating direction method of multipliers ([4, 9, 19, 20, 22, 33]) \[x^{k+1} =\arg\min_{x\in\mathcal{X}}\left\{f(x)+\langle\lambda^{k},Ax\rangle+\frac{\rho}{2}\left\|Ax+By^{k}-c\right\|^{2}+\frac{1}{2}\|x-x^{k}\|_{P}^{2}\right\}, \tag{1.3}\] \[y^{k+1} =\arg\min_{y\in\mathcal{Y}}\left\{g(y)+\langle\lambda^{k},By\rangle+\frac{\rho}{2}\left\|Ax^{k+1}+By-c\right\|^{2}+\frac{1}{2}\|y-y^{k}\|_{Q}^{2}\right\},\] \[\lambda^{k+1} =\lambda^{k}+\rho(Ax^{k+1}+By^{k+1}-c).\] The advantage of (1.3) over (1.2) is that, with wise choices of \(P\) and \(Q\), it is possible to remove the terms \(\|Ax\|^{2}\) and \(\|By\|^{2}\) and thus make the determination of \(x^{k+1}\) and \(y^{k+1}\) much easier. In recent years, various convergence rate results have been established for ADMM and its variants in either the ergodic or the non-ergodic sense. In [19, 25] the ergodic convergence rate \[|H(\bar{x}^{k},\bar{y}^{k})-H_{*}|=O\left(\frac{1}{k}\right)\quad\text{and}\quad\|A\bar{x}^{k}+B\bar{y}^{k}-c\|=O\left(\frac{1}{k}\right) \tag{1.4}\] has been derived in terms of the objective error and the constraint error, where \(H_{*}\) denotes the minimum value of (1.1), \(k\) denotes the number of iterations, and \[\bar{x}^{k}:=\frac{1}{k}\sum_{j=1}^{k}x^{j}\qquad\text{and}\qquad\bar{y}^{k}:=\frac{1}{k}\sum_{j=1}^{k}y^{j}\] denote the ergodic iterates of \(\{x^{k}\}\) and \(\{y^{k}\}\) respectively; see also [4, Theorem 15.4]. A criticism of ergodic results is that they may fail to capture the features of the sought solution of the underlying problem because the ergodic iterate has the tendency to average out the expected properties and thus destroy the features of the solution. This is in particular undesired in sparsity optimization and low-rank learning. In contrast, the non-ergodic iterate tends to share structural properties with the solution of the underlying problem. Therefore, the use of non-ergodic iterates is more favorable in practice. In [20] a non-ergodic convergence rate has been derived for the proximal ADMM (1.3) with \(Q=0\) and the result reads as \[\|x^{k+1}-x^{k}\|_{P}^{2}+\|B(y^{k+1}-y^{k})\|^{2}+\|\lambda^{k+1}-\lambda^{k}\|^{2}=o\left(\frac{1}{k}\right). \tag{1.5}\]
By exploiting the connection with the Douglas-Rachford splitting algorithm, the non-ergodic convergence rate \[|H(x^{k},y^{k})-H_{*}|=o\left(\frac{1}{\sqrt{k}}\right)\quad\text{and}\quad\|Ax^{k}+By^{k}-c\|=o\left(\frac{1}{\sqrt{k}}\right) \tag{1.6}\] in terms of the objective error and the constraint error has been established in [8] for the ADMM (1.2) and an example has been provided to demonstrate that the estimates in (1.6) are sharp. However, the derivation of (1.6) in [8] relies on some unnatural technical conditions involving the convex conjugates of \(f\) and \(g\), see Remark 2.1. Note that the estimate (1.5) implies the second estimate in (1.6); however, it does not imply directly the first estimate in (1.6). In Section 2 we will show, by a simpler argument, that a similar estimate to (1.5) can be established for the proximal ADMM (1.3) with arbitrary positive semi-definite \(Q\). Based on this result and some additional properties of the method, we will further show that the non-ergodic rate (1.6) holds for the proximal ADMM (1.3) with arbitrary positive semi-definite \(P\) and \(Q\). Our result does not require any technical conditions as assumed in [8]. In order to obtain faster convergence rates for the proximal ADMM (1.3), certain regularity conditions should be imposed. In the finite dimensional situation, a number of linear convergence results have been established. In [9] some linear convergence results of the proximal ADMM have been provided under a number of scenarios involving the strong convexity of \(f\) and/or \(g\), the Lipschitz continuity of \(\nabla f\) and/or \(\nabla g\), together with further full row/column rank assumptions on \(A\) and/or \(B\). Under a bounded metric subregularity condition, in particular under the assumption that both \(f\) and \(g\) are convex piecewise linear-quadratic functions, a global linear convergence rate has been established in [32] for the proximal ADMM (1.3) with \[P:=\tau_{1}I-\rho A^{*}A\succ 0\quad\text{and}\quad Q:=\tau_{2}I-\rho B^{*}B\succ 0, \tag{1.7}\] where \(A^{*}\) and \(B^{*}\) denote the adjoints of \(A\) and \(B\) respectively; the condition (1.7) plays an essential role in the convergence analysis in [32]. We will derive faster convergence rates for the proximal ADMM (1.3) in the general Hilbert space setting. To this end, we need first to consider the weak convergence of \(\{(x^{k},y^{k},\lambda^{k})\}\) and demonstrate that every weak cluster point of this sequence is a KKT point of (1.1). This may not be an issue in finite dimensions. However, this is non-trivial in infinite dimensional spaces because extra care is required in dealing with weak convergence. In [6] the weak convergence of the proximal ADMM (1.3) has been considered by transforming the method into a proximal point method and the result there requires restrictive conditions, see [6, Lemma 3.4 and Theorem 3.1]. These restrictive conditions have been weakened later in [31] by using machinery from the maximal monotone operator theory. We will explore the structure of the proximal ADMM and show by an elementary argument that every weak cluster point of \(\{(x^{k},y^{k},\lambda^{k})\}\) is indeed a KKT point of (1.1) without any additional conditions.
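To illustrate how the choice (1.7) is exploited computationally, the following sketch (again ours and purely illustrative; the sparse recovery instance and all parameter values are assumptions) applies the proximal ADMM (1.3) with \(P=\tau I-\rho K^{*}K\) and \(Q=0\) to \(\min\{\mu\|x\|_{1}+\frac{1}{2}\|y-d\|^{2}:Kx-y=0\}\); the proximal term cancels the \(\|Kx\|^{2}\) part of the augmented Lagrangian, so the \(x\)-update collapses to a single soft-thresholding step:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
d = K @ x_true

mu, rho = 0.1, 1.0
tau = rho * np.linalg.norm(K, 2) ** 2 + 1e-3   # ensures P = tau*I - rho*K'K > 0

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, y, lam = np.zeros(60), np.zeros(30), np.zeros(30)
for _ in range(500):
    # linearized x-step: a proximal (soft-thresholding) step at a shifted point
    x = soft(x - K.T @ (lam + rho * (K @ x - y)) / tau, mu / tau)
    y = (d + lam + rho * K @ x) / (1.0 + rho)  # y-step in closed form (Q = 0)
    lam += rho * (K @ x - y)                   # multiplier update
print(np.linalg.norm(K @ x - y))               # constraint residual tends to 0
```

No linear system involving \(K^{*}K\) is ever solved here, which is the practical payoff of the proximal term.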
We will then consider the linear convergence of the proximal ADMM under a bounded metric subregularity condition and obtain the linear convergence for any positive semi-definite \(P\) and \(Q\); in particular, we obtain the linear convergence of \(|H(x^{k},y^{k})-H_{*}|\) and \(\|Ax^{k}+By^{k}-c\|\). We also consider deriving convergence rates under a bounded Hölder metric subregularity condition which is weaker than the bounded metric subregularity. This weaker condition holds if both \(f\) and \(g\) are semi-algebraic functions and thus a wider range of applications can be covered. We show that, under a bounded Hölder metric subregularity condition, among other things the convergence rates in (1.6) can be improved to \[\|Ax^{k}+By^{k}-c\|=O(k^{-\beta})\quad\text{ and }\quad|H(x^{k},y^{k})-H_{*}|=O(k^{-\beta})\] for some number \(\beta>1/2\); the value of \(\beta\) depends on the properties of \(f\) and \(g\). To further weaken the bounded (Hölder) metric subregularity assumption, we introduce an iteration-based error bound condition which is an extension of the one in [27] to the general proximal ADMM (1.3). It is interesting to observe that this error bound condition holds under any one of the scenarios proposed in [9]. Hence, we provide a unified analysis for deriving convergence rates under the bounded (Hölder) metric subregularity or the scenarios in [9]. Furthermore, we extend the scenarios in [9] to the general Hilbert space setting and demonstrate that some conditions can be weakened and the convergence result can be strengthened; see Theorem 2.11. In the second part of this paper, we consider using ADMM as a regularization method to solve linear ill-posed inverse problems in Hilbert spaces. Linear inverse problems have a wide range of applications, including medical imaging, geophysics, astronomy, signal processing, and more ([12, 18, 28]). We consider linear inverse problems of the form \[Ax=b,\quad x\in\mathcal{C}, \tag{1.8}\] where \(A:\mathcal{X}\to\mathcal{H}\) is a compact linear operator between two Hilbert spaces \(\mathcal{X}\) and \(\mathcal{H}\), \(\mathcal{C}\) is a closed convex set in \(\mathcal{X}\), and \(b\in\mathrm{Ran}(A)\), the range of \(A\). In order to find a solution of (1.8) with desired properties, a priori available information on the sought solution should be incorporated into the problem. Assume that, under a suitable linear transform \(L\) from \(\mathcal{X}\) to another Hilbert space \(\mathcal{Y}\) with domain \(\mathrm{dom}(L)\), the feature of the sought solution can be captured by a proper convex penalty function \(f:\mathcal{Y}\to(-\infty,\infty]\). One may consider instead of (1.8) the constrained optimization problem \[\min\{f(Lx):Ax=b,\ x\in\mathcal{C},\ x\in\mathrm{dom}(L)\}. \tag{1.9}\] A challenging issue related to the numerical resolution of (1.9) is its ill-posedness in the sense that the solution of (1.9) does not depend continuously on the data and thus a small perturbation of the data can lead to a large deviation in the solutions. In practical applications, the exact data \(b\) is usually unavailable; instead, only a noisy data \(b^{\delta}\) is at hand with \[\|b^{\delta}-b\|\leq\delta\] for some small noise level \(\delta>0\). To overcome ill-posedness, regularization methods should be introduced to produce reasonable approximate solutions; one may refer to [7, 12, 23, 29] for various regularization methods.
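The instability caused by ill-posedness is easy to observe numerically. In the following small experiment (ours, purely illustrative; the operator, noise level and all other choices are assumptions) the matrix \(A\) discretizes a Gaussian integral operator, which is compact in the continuum limit, and a data perturbation far below plotting accuracy already destroys a naive inversion:

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)
# discretized Gaussian smoothing (integral) operator; compact in the limit
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.03 ** 2)) / n

x_true = np.sin(2 * np.pi * t)
b = A @ x_true
b_delta = b + 1e-6 * np.random.default_rng(1).standard_normal(n)  # delta ~ 1e-6

# naive inversion (the tiny shift only keeps the linear solve numerically defined)
x_naive = np.linalg.solve(A + 1e-14 * np.eye(n), b_delta)
print(np.linalg.cond(A))                 # enormous condition number
print(np.linalg.norm(x_naive - x_true))  # error many orders above the noise level
```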
The common use of ADMM to solve (1.9) with noisy data \(b^{\delta}\) first considers the variational regularization \[\min_{x\in\mathcal{C}}\left\{\frac{1}{2}\|Ax-b^{\delta}\|^{2}+\alpha f(Lx)\right\}, \tag{1.10}\] then uses the splitting technique to rewrite (1.10) into the form (1.1), and finally applies the ADMM procedure to produce approximate solutions. The parameter \(\alpha>0\) is the so-called regularization parameter which should be adjusted carefully to achieve reasonably good performance; consequently one has to run ADMM to solve (1.10) for many different values of \(\alpha\), which can be time-consuming. In [21, 22] the ADMM has been considered to solve (1.9) directly to reduce the computational load. Note that (1.9) can be written as \[\left\{\begin{array}{l}\min f(y)+\iota_{\mathcal{C}}(x)\\ \text{subject to }Az=b,\ Lz-y=0,\ z-x=0,\ z\in\text{dom}(L),\end{array}\right.\] where \(\iota_{\mathcal{C}}\) denotes the indicator function of \(\mathcal{C}\). With the noisy data \(b^{\delta}\) we introduce the augmented Lagrangian function \[\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z,y,x,\lambda,\mu,\nu) :=f(y)+\iota_{\mathcal{C}}(x)+\langle\lambda,Az-b^{\delta}\rangle+\langle\mu,Lz-y\rangle+\langle\nu,z-x\rangle\] \[\quad+\frac{\rho_{1}}{2}\|Az-b^{\delta}\|^{2}+\frac{\rho_{2}}{2}\|Lz-y\|^{2}+\frac{\rho_{3}}{2}\|z-x\|^{2},\] where \(\rho_{1}\), \(\rho_{2}\) and \(\rho_{3}\) are preassigned positive numbers. The proximal ADMM proposed in [22] for solving (1.9) then takes the form \[z^{k+1} =\arg\min_{z\in\text{dom}(L)}\left\{\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z,y^{k},x^{k},\lambda^{k},\mu^{k},\nu^{k})+\frac{1}{2}\|z-z^{k}\|_{Q}^{2}\right\}, \tag{1.11}\] \[y^{k+1} =\arg\min_{y\in\mathcal{Y}}\left\{\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z^{k+1},y,x^{k},\lambda^{k},\mu^{k},\nu^{k})\right\},\] \[x^{k+1} =\arg\min_{x\in\mathcal{X}}\left\{\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z^{k+1},y^{k+1},x,\lambda^{k},\mu^{k},\nu^{k})\right\},\] \[\lambda^{k+1} =\lambda^{k}+\rho_{1}(Az^{k+1}-b^{\delta}),\] \[\mu^{k+1} =\mu^{k}+\rho_{2}(Lz^{k+1}-y^{k+1}),\] \[\nu^{k+1} =\nu^{k}+\rho_{3}(z^{k+1}-x^{k+1}),\] where \(Q\) is a bounded linear positive semi-definite self-adjoint operator. The method (1.11) is not a 3-block ADMM. Note that the variables \(y\) and \(x\) are not coupled in \(\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z,y,x,\lambda,\mu,\nu)\). Thus, \(y^{k+1}\) and \(x^{k+1}\) can be updated simultaneously, i.e. \[(y^{k+1},x^{k+1})=\arg\min_{y\in\mathcal{Y},x\in\mathcal{X}}\left\{\mathscr{L}_{\rho_{1},\rho_{2},\rho_{3}}(z^{k+1},y,x,\lambda^{k},\mu^{k},\nu^{k})\right\}.\] This demonstrates that (1.11) is a 2-block proximal ADMM. It should be highlighted that all well-established convergence results on proximal ADMM for well-posed optimization problems are not applicable to (1.11) directly. Note that (1.11) uses the noisy data \(b^{\delta}\). If the convergence theory for well-posed optimization problems could be applicable, one would obtain a solution of the perturbed problem \[\min\left\{f(Lx):Ax=b^{\delta},\ x\in\mathcal{C},\ x\in\mathrm{dom}(L)\right\} \tag{1.12}\] of (1.9). Because \(A\) is compact, it is very likely that \(b^{\delta}\not\in\mathrm{Ran}(A)\) and thus (1.12) makes no sense as the feasible set is empty. Even if \(b^{\delta}\in\mathrm{Ran}(A)\) and (1.12) has a solution, this solution could be far away from the solution of (1.9) because of the ill-posedness.
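For concreteness, the following sketch (ours, not the authors' code; the choices \(Q=0\), \(L=I\), \(f=\|\cdot\|_{1}\), \(\mathcal{C}=\{x:x\geq 0\}\) and all parameter values are illustrative assumptions) implements one instantiation of (1.11): the \(z\)-step reduces to a linear solve, the \(y\)-step to soft-thresholding and the \(x\)-step to a projection, while the iteration count plays the role of the regularization parameter and must be tied to the noise level, as discussed next.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def padmm_inverse(A, b_delta, iters, r1=1.0, r2=1.0, r3=1.0):
    # sketch of (1.11) with Q = 0, L = I, f = ||.||_1, C = nonnegative cone
    m, n = A.shape
    z = np.zeros(n); y = np.zeros(n); x = np.zeros(n)
    lam = np.zeros(m); mu = np.zeros(n); nu = np.zeros(n)
    M = r1 * A.T @ A + (r2 + r3) * np.eye(n)   # normal operator of the z-step
    for _ in range(iters):
        rhs = A.T @ (r1 * b_delta - lam) + (r2 * y - mu) + (r3 * x - nu)
        z = np.linalg.solve(M, rhs)            # z-subproblem
        y = soft(z + mu / r2, 1.0 / r2)        # y-subproblem: prox of ||.||_1
        x = np.maximum(z + nu / r3, 0.0)       # x-subproblem: projection onto C
        lam += r1 * (A @ z - b_delta)          # dual updates
        mu += r2 * (z - y)
        nu += r3 * (z - x)
    return x

# usage with A and b_delta as in the previous sketch; the stopping index
# `iters` must be chosen in dependence on the noise level delta:
# x_rec = padmm_inverse(A, b_delta, iters=50)
```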
Therefore, if (1.11) is used to solve (1.9), a better result cannot be expected even if a larger number of iterations is performed. Instead, like all other iterative regularization methods, when (1.11) is used to solve (1.9) it exhibits the semi-convergence property, i.e., the iterate becomes close to the sought solution at the beginning; however, after a critical number of iterations, the iterate moves away from the sought solution as the iteration proceeds. Thus, properly terminating the iteration is important to produce acceptable approximate solutions. One may hope to determine a stopping index \(k_{\delta}\), depending on \(\delta\) and/or \(b^{\delta}\), such that \(\|x^{k_{\delta}}-x^{\dagger}\|\) is as small as possible and \(\|x^{k_{\delta}}-x^{\dagger}\|\to 0\) as \(\delta\to 0\), where \(x^{\dagger}\) denotes the solution of (1.9). This has been done in our previous work [21, 22] in which early stopping rules have been proposed for the method (1.11) to render it into a regularization method and numerical results have been reported to demonstrate its nice performance. However, the work in [21, 22] does not provide convergence rates, i.e. the estimate on \(\|x^{k_{\delta}}-x^{\dagger}\|\) in terms of \(\delta\). Deriving convergence rates for iterative regularization methods involving general convex regularization terms is a challenging question and only a limited number of results are available. In order to derive a convergence rate of a regularization method for ill-posed problems, a certain source condition should be imposed on the sought solution. In Section 3, under a benchmark source condition on the sought solution, we will provide a partial answer to this question by establishing a convergence rate result for (1.11) if the iteration is terminated by an _a priori_ stopping rule. We conclude this section with some notation and terminology. Let \(\mathcal{V}\) be a real Hilbert space. We use \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) to denote its inner product and the induced norm. We also use "\(\to\)" and "\(\rightharpoonup\)" to denote strong convergence and weak convergence respectively. For a function \(\varphi:\mathcal{V}\to(-\infty,\infty]\) its domain is defined as \(\mathrm{dom}(\varphi):=\{x\in\mathcal{V}:\varphi(x)<\infty\}\). If \(\mathrm{dom}(\varphi)\neq\emptyset\), \(\varphi\) is called proper. For a proper convex function \(\varphi:\mathcal{V}\to(-\infty,\infty]\), its modulus of convexity, denoted by \(\sigma_{\varphi}\), is defined to be the largest number \(c\) such that \[\varphi(tx+(1-t)y)+ct(1-t)\|x-y\|^{2}\leq t\varphi(x)+(1-t)\varphi(y)\] for all \(x,y\in\mathrm{dom}(\varphi)\) and \(0\leq t\leq 1\). We always have \(\sigma_{\varphi}\geq 0\). If \(\sigma_{\varphi}>0\), \(\varphi\) is called strongly convex. For a proper convex function \(\varphi:\mathcal{V}\to(-\infty,\infty]\), we use \(\partial\varphi\) to denote its subdifferential, i.e. \[\partial\varphi(x):=\{\xi\in\mathcal{V}:\varphi(y)\geq\varphi(x)+\langle\xi,y-x\rangle\ \text{for all}\ y\in\mathcal{V}\},\quad x\in\mathcal{V}.\] Let \(\mathrm{dom}(\partial\varphi):=\{x\in\mathcal{V}:\partial\varphi(x)\neq\emptyset\}\). It is easy to see that \[\varphi(y)-\varphi(x)-\langle\xi,y-x\rangle\geq\sigma_{\varphi}\|y-x\|^{2}\] for all \(y\in\mathcal{V}\), \(x\in\mathrm{dom}(\partial\varphi)\) and \(\xi\in\partial\varphi(x)\) which in particular implies the monotonicity of \(\partial\varphi\), i.e.
\[\langle\xi-\eta,x-y\rangle\geq 2\sigma_{\varphi}\|x-y\|^{2}\] for all \(x,y\in\mathrm{dom}(\partial\varphi)\), \(\xi\in\partial\varphi(x)\) and \(\eta\in\partial\varphi(y)\).

## 2. **Proximal ADMM for convex optimization problems**

In this section we will consider the proximal ADMM (1.3) for solving the linearly constrained convex minimization problem (1.1). For the convergence analysis, we will make the following standard assumptions. **Assumption 2.1**.: \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{Z}\) are real Hilbert spaces, \(A:\mathcal{X}\to\mathcal{Z}\) and \(B:\mathcal{Y}\to\mathcal{Z}\) are bounded linear operators, \(P:\mathcal{X}\to\mathcal{X}\) and \(Q:\mathcal{Y}\to\mathcal{Y}\) are bounded linear positive semi-definite self-adjoint operators, and \(f:\mathcal{X}\to(-\infty,\infty]\) and \(g:\mathcal{Y}\to(-\infty,\infty]\) are proper, lower semi-continuous, convex functions. **Assumption 2.2**.: The problem (1.1) has a Karush-Kuhn-Tucker (KKT) point, i.e. there exists \((\bar{x},\bar{y},\bar{\lambda})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) such that \[-A^{*}\bar{\lambda}\in\partial f(\bar{x}),\quad-B^{*}\bar{\lambda}\in\partial g(\bar{y}),\quad A\bar{x}+B\bar{y}=c.\] It should be mentioned that, to guarantee that the proximal ADMM (1.3) is well-defined, certain additional conditions need to be imposed to ensure that the \(x\)- and \(y\)-subproblems do have minimizers. Since the well-definedness can be easily seen in concrete applications, to make the presentation more succinct we will not state these conditions explicitly. By the convexity of \(f\) and \(g\), it is easy to see that, for any KKT point \((\bar{x},\bar{y},\bar{\lambda})\) of (1.1), there hold \[0\leq f(x)-f(\bar{x})+\langle\bar{\lambda},A(x-\bar{x})\rangle,\quad\forall x\in\mathcal{X},\] \[0\leq g(y)-g(\bar{y})+\langle\bar{\lambda},B(y-\bar{y})\rangle,\quad\forall y\in\mathcal{Y}.\] Adding these two inequalities and using \(A\bar{x}+B\bar{y}-c=0\), it follows that \[0\leq H(x,y)-H(\bar{x},\bar{y})+\langle\bar{\lambda},Ax+By-c\rangle,\quad\forall(x,y)\in\mathcal{X}\times\mathcal{Y}. \tag{2.1}\] This in particular implies that \((\bar{x},\bar{y})\) is a solution of (1.1) and thus \(H_{*}:=H(\bar{x},\bar{y})\) is the minimum value of (1.1). Based on Assumptions 2.1 and 2.2 we will analyze the proximal ADMM (1.3). For ease of exposition, we set \(\widehat{Q}:=\rho B^{*}B+Q\) and define \[Gu:=(Px,\widehat{Q}y,\lambda/\rho),\quad\forall u:=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\] which is a bounded linear positive semi-definite self-adjoint operator on \(\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\).
Then, for any \(u:=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) we have \[\|u\|_{G}^{2}:=\langle u,Gu\rangle=\|x\|_{P}^{2}+\|y\|_{\widehat{Q}}^{2}+ \frac{1}{\rho}\|\lambda\|^{2}.\] For the sequence \(\{u^{k}:=(x^{k},y^{k},\lambda^{k})\}\) defined by the proximal ADMM (1.3), we use the notation \[\Delta x^{k}:=x^{k}-x^{k-1},\ \ \Delta y^{k}:=y^{k}-y^{k-1},\ \ \Delta\lambda^{k}:=\lambda^{k}-\lambda^{k-1},\ \ \Delta u^{k}:=u^{k}-u^{k-1}.\] We start from the first order optimality conditions on \(x^{k+1}\) and \(y^{k+1}\) which by definition can be stated as \[\begin{split}-A^{*}\lambda^{k}-\rho A^{*}(Ax^{k+1}+By^{k}-c)-P(x ^{k+1}-x^{k})&\in\partial f(x^{k+1}),\\ -B^{*}\lambda^{k}-\rho B^{*}(Ax^{k+1}+By^{k+1}-c)-Q(y^{k+1}-y^{k} )&\in\partial g(y^{k+1}).\end{split} \tag{2.2}\] By using \(\lambda^{k+1}=\lambda^{k}+\rho(Ax^{k+1}+By^{k+1}-c)\) we may rewrite (2.2) as \[-A^{*}(\lambda^{k+1}-\rho B\Delta y^{k+1})-P\Delta x^{k+1} \in\partial f(x^{k+1}), \tag{2.3}\] \[-B^{*}\lambda^{k+1}-Q\Delta y^{k+1} \in\partial g(y^{k+1})\] which will be frequently used in the following analysis. We first prove the following important result which is inspired by [19, Lemma 3.1] and [4, Theorem 15.4]. **Proposition 2.1**.: _Let Assumption 2.1 hold. Then for the proximal ADMM (1.3) there holds_ \[\sigma_{f}\|x^{k+1}-x\|^{2}+\sigma_{g}\|y^{k+1}-y\|^{2}\] \[\leq H(x,y)-H(x^{k+1},y^{k+1})+\langle\lambda^{k+1}-\rho B\Delta y ^{k+1},Ax+By-c\rangle\] \[\quad-\langle\lambda,Ax^{k+1}+By^{k+1}-c\rangle+\frac{1}{2}\left( \|u^{k}-u\|_{G}^{2}-\|u^{k+1}-u\|_{G}^{2}\right)\] \[\quad-\frac{1}{2\rho}\|\Delta\lambda^{k+1}-\rho B\Delta y^{k+1}\| ^{2}-\frac{1}{2}\|\Delta x^{k+1}\|_{P}^{2}-\frac{1}{2}\|\Delta y^{k+1}\|_{Q}^ {2}\] _for all \(u:=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\), where \(\sigma_{f}\) and \(\sigma_{g}\) denote the modulus of convexity of \(f\) and \(g\) respectively._ Proof.: Let \(\tilde{\lambda}^{k+1}:=\lambda^{k+1}-\rho B\Delta y^{k+1}\). 
By using (2.3) and the convexity of \(f\) and \(g\) we have for any \((x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) that \[\sigma_{f}\|x^{k+1}-x\|^{2}+\sigma_{g}\|y^{k+1}-y\|^{2}\] \[\leq f(x)-f(x^{k+1})+\langle\lambda^{k+1}-\rho B\Delta y^{k+1},A(x-x^{k+1})\rangle+\langle P\Delta x^{k+1},x-x^{k+1}\rangle\] \[\quad+g(y)-g(y^{k+1})+\langle\lambda^{k+1},B(y-y^{k+1})\rangle+\langle Q\Delta y^{k+1},y-y^{k+1}\rangle\] \[=H(x,y)-H(x^{k+1},y^{k+1})+\langle\tilde{\lambda}^{k+1},A(x-x^{k+1})+B(y-y^{k+1})\rangle\] \[\quad+\langle P\Delta x^{k+1},x-x^{k+1}\rangle+\langle\widehat{Q}\Delta y^{k+1},y-y^{k+1}\rangle\] \[=H(x,y)-H(x^{k+1},y^{k+1})+\langle\tilde{\lambda}^{k+1},Ax+By-c\rangle\] \[\quad-\langle\lambda,Ax^{k+1}+By^{k+1}-c\rangle+\langle\lambda-\tilde{\lambda}^{k+1},Ax^{k+1}+By^{k+1}-c\rangle\] \[\quad+\langle P\Delta x^{k+1},x-x^{k+1}\rangle+\langle\widehat{Q}\Delta y^{k+1},y-y^{k+1}\rangle.\] Since \(\rho(Ax^{k+1}+By^{k+1}-c)=\Delta\lambda^{k+1}\) we then obtain \[\sigma_{f}\|x^{k+1}-x\|^{2}+\sigma_{g}\|y^{k+1}-y\|^{2}\] \[\leq H(x,y)-H(x^{k+1},y^{k+1})+\langle\tilde{\lambda}^{k+1},Ax+By-c\rangle-\langle\lambda,Ax^{k+1}+By^{k+1}-c\rangle\] \[\quad+\frac{1}{\rho}\langle\lambda-\lambda^{k+1},\Delta\lambda^{k+1}\rangle+\frac{1}{\rho}\langle\lambda^{k+1}-\tilde{\lambda}^{k+1},\Delta\lambda^{k+1}\rangle\] \[\quad+\langle P\Delta x^{k+1},x-x^{k+1}\rangle+\langle\widehat{Q}\Delta y^{k+1},y-y^{k+1}\rangle.\] By using the polarization identity and the definition of \(G\), it follows that \[\sigma_{f}\|x^{k+1}-x\|^{2}+\sigma_{g}\|y^{k+1}-y\|^{2}\] \[\leq H(x,y)-H(x^{k+1},y^{k+1})+\langle\tilde{\lambda}^{k+1},Ax+By-c\rangle-\langle\lambda,Ax^{k+1}+By^{k+1}-c\rangle\] \[\quad+\frac{1}{2\rho}\left(\|\lambda^{k}-\lambda\|^{2}-\|\lambda^{k+1}-\lambda\|^{2}-\|\Delta\lambda^{k+1}\|^{2}\right)\] \[\quad-\frac{1}{2\rho}\left(\|\lambda^{k}-\tilde{\lambda}^{k+1}\|^{2}-\|\lambda^{k+1}-\tilde{\lambda}^{k+1}\|^{2}-\|\Delta\lambda^{k+1}\|^{2}\right)\] \[\quad+\frac{1}{2}\left(\|x^{k}-x\|_{P}^{2}-\|x^{k+1}-x\|_{P}^{2}-\|\Delta x^{k+1}\|_{P}^{2}\right)\] \[\quad+\frac{1}{2}\left(\|y^{k}-y\|_{\widehat{Q}}^{2}-\|y^{k+1}-y\|_{\widehat{Q}}^{2}-\|\Delta y^{k+1}\|_{\widehat{Q}}^{2}\right).\] Since \(\lambda^{k}-\tilde{\lambda}^{k+1}=-(\Delta\lambda^{k+1}-\rho B\Delta y^{k+1})\) and \(\lambda^{k+1}-\tilde{\lambda}^{k+1}=\rho B\Delta y^{k+1}\), collecting the terms according to the definition of \(G\) yields the stated inequality and completes the proof. **Corollary 2.2**.: _Let Assumption 2.1 and Assumption 2.2 hold and let \(\bar{u}:=(\bar{x},\bar{y},\bar{\lambda})\) be a KKT point of (1.1). Then for the proximal ADMM (1.3) there holds_ \[\sigma_{f}\|x^{k+1}-\bar{x}\|^{2}+\sigma_{g}\|y^{k+1}-\bar{y}\|^{2}+H(x^{k+1},y^{k+1})-H_{*}+\langle\bar{\lambda},Ax^{k+1}+By^{k+1}-c\rangle\leq\frac{1}{2}\left(\|u^{k}-\bar{u}\|_{G}^{2}-\|u^{k+1}-\bar{u}\|_{G}^{2}\right) \tag{2.4}\] _for all \(k\geq 0\). Moreover, the sequence \(\{\|u^{k}-\bar{u}\|_{G}^{2}\}\) is monotonically decreasing._ Proof.: By taking \(u=\bar{u}\) in Proposition 2.1 and using \(A\bar{x}+B\bar{y}-c=0\) we immediately obtain (2.4). According to (2.1) we have \[H(x^{k+1},y^{k+1})-H_{*}+\langle\bar{\lambda},Ax^{k+1}+By^{k+1}-c\rangle\geq 0.\] Thus, from (2.4) we can obtain \[\sigma_{f}\|x^{k+1}-\bar{x}\|^{2}+\sigma_{g}\|y^{k+1}-\bar{y}\|^{2}\leq\frac{1}{2}\left(\|u^{k}-\bar{u}\|_{G}^{2}-\|u^{k+1}-\bar{u}\|_{G}^{2}\right) \tag{2.5}\] which implies the monotonicity of the sequence \(\{\|u^{k}-\bar{u}\|_{G}^{2}\}\). We next show that \(\|\Delta u^{k}\|_{G}^{2}=o(1/k)\) as \(k\to\infty\). This result for the proximal ADMM (1.3) with \(Q=0\) has been established in [20] based on a variational inequality approach. We will establish this result for the proximal ADMM (1.3) with general bounded linear positive semi-definite self-adjoint operators \(P\) and \(Q\) by a simpler argument. **Lemma 2.3**.: _Let Assumption 2.1 hold.
For the proximal ADMM (1.3), the sequence \(\{\|\Delta u^{k}\|_{G}^{2}\}\) is monotonically decreasing._ Proof.: By using (2.3) and the monotonicity of \(\partial f\) and \(\partial g\), we can obtain \[0\leq \left\langle-A^{*}(\Delta\lambda^{k+1}-\rho B\Delta y^{k+1}+\rho B \Delta y^{k})-P\Delta x^{k+1}+P\Delta x^{k},\Delta x^{k+1}\right\rangle\] \[+\left\langle-B^{*}\Delta\lambda^{k+1}-Q\Delta y^{k+1}+Q\Delta y ^{k},\Delta y^{k+1}\right\rangle\] \[= -\langle\Delta\lambda^{k+1},A\Delta x^{k+1}+B\Delta y^{k+1} \rangle+\rho\langle B(\Delta y^{k+1}-\Delta y^{k}),A\Delta x^{k+1}\rangle\] \[-\langle P(\Delta x^{k+1}-\Delta x^{k}),\Delta x^{k+1}\rangle- \langle Q(\Delta y^{k+1}-\Delta y^{k}),\Delta y^{k+1}\rangle.\] Note that \[A\Delta x^{k+1}+B\Delta y^{k+1}=\frac{1}{\rho}(\Delta\lambda^{k+1}-\Delta \lambda^{k}).\] We therefore have \[0\leq -\frac{1}{\rho}\langle\Delta\lambda^{k+1},\Delta\lambda^{k+1}- \Delta\lambda^{k}\rangle-\rho\langle B(\Delta y^{k+1}-\Delta y^{k}),B\Delta y ^{k+1}\rangle\] \[+\langle B(\Delta y^{k+1}-\Delta y^{k}),\Delta\lambda^{k+1}- \Delta\lambda^{k}\rangle\] \[-\langle P(\Delta x^{k+1}-\Delta x^{k}),\Delta x^{k+1}\rangle- \langle Q(\Delta y^{k+1}-\Delta y^{k}),\Delta y^{k+1}\rangle.\] By the polarization identity we then have \[0\leq \ \frac{1}{2\rho}\left(\|\Delta\lambda^{k}\|^{2}-\|\Delta\lambda^{ k+1}\|^{2}-\|\Delta\lambda^{k}-\Delta\lambda^{k+1}\|^{2}\right)\] \[+\frac{\rho}{2}\left(\|B\Delta y^{k}\|^{2}-\|B\Delta y^{k+1}\|^{2 }-\|B(\Delta y^{k}-\Delta y^{k+1})\|^{2}\right)\] \[+\frac{1}{2}\left(\|\Delta x^{k}\|^{2}_{P}-\|\Delta x^{k+1}\|^{2 }_{P}-\|\Delta x^{k}-\Delta x^{k+1}\|^{2}_{P}\right)\] \[+\frac{1}{2}\left(\|\Delta y^{k}\|^{2}_{Q}-\|\Delta y^{k+1}\|^{2 }_{Q}-\|\Delta y^{k}-\Delta y^{k+1}\|^{2}_{Q}\right)\] \[+\langle B(\Delta y^{k+1}-\Delta y^{k}),\Delta\lambda^{k+1}- \Delta\lambda^{k}\rangle.\] With the help of the definition of \(G\), we obtain \[0\leq \ \|\Delta u^{k}\|^{2}_{G}-\|\Delta u^{k+1}\|^{2}_{G}-\|\Delta x^{ k}-\Delta x^{k+1}\|^{2}_{P}-\|\Delta y^{k}-\Delta y^{k+1}\|^{2}_{Q}\] \[-\frac{\rho}{2}\left\|B(\Delta y^{k+1}-\Delta y^{k})-\frac{1}{ \rho}(\Delta\lambda^{k+1}-\Delta\lambda^{k})\right\|^{2}\] which completes the proof. **Lemma 2.4**.: _Let Assumptions 2.1 and 2.2 hold and let \(\bar{u}:=(\bar{x},\bar{y},\bar{\lambda})\) be any KKT point of (1.1). For the proximal ADMM (1.3) there holds_ \[\|\Delta u^{k+1}\|^{2}_{G}\leq\left(\|u^{k}-\bar{u}\|^{2}_{G}+\|\Delta y^{k}\| ^{2}_{Q}\right)-\left(\|u^{k+1}-\bar{u}\|^{2}_{G}+\|\Delta y^{k+1}\|^{2}_{Q}\right)\] _for all \(k\geq 1\)._ Proof.: We will use (2.3) together with \(-A^{*}\bar{\lambda}\in\partial f(\bar{x})\) and \(-B^{*}\bar{\lambda}\in\partial g(\bar{y})\). 
By using the monotonicity of \(\partial f\) and \(\partial g\) we have \[0\leq \ \left\langle-A^{*}(\lambda^{k+1}-\bar{\lambda}-\rho B\Delta y^{k+1})-P\Delta x^{k+1},x^{k+1}-\bar{x}\right\rangle\] \[+\left\langle-B^{*}(\lambda^{k+1}-\bar{\lambda})-Q\Delta y^{k+1},y^{k+1}-\bar{y}\right\rangle\] \[= \ \langle\bar{\lambda}-\lambda^{k+1},Ax^{k+1}+By^{k+1}-c\rangle+\rho\langle B\Delta y^{k+1},A(x^{k+1}-\bar{x})\rangle\] \[-\langle P\Delta x^{k+1},x^{k+1}-\bar{x}\rangle-\langle Q\Delta y^{k+1},y^{k+1}-\bar{y}\rangle.\] By virtue of \(\rho(Ax^{k+1}+By^{k+1}-c)=\Delta\lambda^{k+1}\) we further have \[0\leq\frac{1}{\rho}\langle\bar{\lambda}-\lambda^{k+1},\Delta\lambda^{k+1}\rangle-\rho\langle B\Delta y^{k+1},B(y^{k+1}-\bar{y})\rangle+\langle B\Delta y^{k+1},\Delta\lambda^{k+1}\rangle\] \[\quad-\langle P\Delta x^{k+1},x^{k+1}-\bar{x}\rangle-\langle Q\Delta y^{k+1},y^{k+1}-\bar{y}\rangle.\] By using the second equation in (2.3) and the monotonicity of \(\partial g\) we have \[0\leq\left\langle-B^{*}\Delta\lambda^{k+1}-Q\Delta y^{k+1}+Q\Delta y^{k},\Delta y^{k+1}\right\rangle\] \[\quad=-\langle\Delta\lambda^{k+1},B\Delta y^{k+1}\rangle-\langle Q(\Delta y^{k+1}-\Delta y^{k}),\Delta y^{k+1}\rangle\] which shows that \[\langle\Delta\lambda^{k+1},B\Delta y^{k+1}\rangle\leq-\langle Q(\Delta y^{k+1}-\Delta y^{k}),\Delta y^{k+1}\rangle.\] Therefore \[0\leq \frac{1}{\rho}\langle\bar{\lambda}-\lambda^{k+1},\Delta\lambda^{k+1}\rangle-\langle\widehat{Q}\Delta y^{k+1},y^{k+1}-\bar{y}\rangle-\langle P\Delta x^{k+1},x^{k+1}-\bar{x}\rangle\] \[\quad-\langle Q(\Delta y^{k+1}-\Delta y^{k}),\Delta y^{k+1}\rangle.\] By using the polarization identity we then obtain \[0\leq \frac{1}{2\rho}\left(\|\lambda^{k}-\bar{\lambda}\|^{2}-\|\lambda^{k+1}-\bar{\lambda}\|^{2}-\|\Delta\lambda^{k+1}\|^{2}\right)\] \[\quad+\frac{1}{2}\left(\|y^{k}-\bar{y}\|_{\widehat{Q}}^{2}-\|y^{k+1}-\bar{y}\|_{\widehat{Q}}^{2}-\|\Delta y^{k+1}\|_{\widehat{Q}}^{2}\right)\] \[\quad+\frac{1}{2}\left(\|x^{k}-\bar{x}\|_{P}^{2}-\|x^{k+1}-\bar{x}\|_{P}^{2}-\|\Delta x^{k+1}\|_{P}^{2}\right)\] \[\quad+\frac{1}{2}\left(\|\Delta y^{k}\|_{Q}^{2}-\|\Delta y^{k+1}\|_{Q}^{2}-\|\Delta y^{k+1}-\Delta y^{k}\|_{Q}^{2}\right).\] Recalling the definition of \(G\) we then complete the proof. **Proposition 2.5**.: _Let Assumption 2.1 and Assumption 2.2 hold. Then for the proximal ADMM (1.3) there holds \(\|\Delta u^{k}\|_{G}^{2}=o(1/k)\) as \(k\to\infty\)._ Proof.: Let \(\bar{u}\) be a KKT point of (1.1). From Lemma 2.4 it follows that \[\sum_{j=1}^{k}\|\Delta u^{j+1}\|_{G}^{2} \leq\sum_{j=1}^{k}\left(\left(\|u^{j}-\bar{u}\|_{G}^{2}+\|\Delta y^{j}\|_{Q}^{2}\right)-\left(\|u^{j+1}-\bar{u}\|_{G}^{2}+\|\Delta y^{j+1}\|_{Q}^{2}\right)\right)\] \[\leq\|u^{1}-\bar{u}\|_{G}^{2}+\|\Delta y^{1}\|_{Q}^{2} \tag{2.6}\] for all \(k\geq 1\). By Lemma 2.3, \(\{\|\Delta u^{j+1}\|_{G}^{2}\}\) is monotonically decreasing. Thus \[\left(\frac{k}{2}+1\right)\|\Delta u^{k+1}\|_{G}^{2}\leq\sum_{j=[k/2]}^{k}\|\Delta u^{j+1}\|_{G}^{2}, \tag{2.7}\] where \([k/2]\) denotes the largest integer \(\leq k/2\). Since (2.6) shows that \[\sum_{j=1}^{\infty}\|\Delta u^{j+1}\|_{G}^{2}<\infty,\] the right hand side of (2.7) must converge to \(0\) as \(k\to\infty\). Thus \((k+1)\|\Delta u^{k+1}\|_{G}^{2}=o(1)\) and hence \(\|\Delta u^{k}\|_{G}^{2}=o(1/k)\) as \(k\to\infty\). As a byproduct of Proposition 2.5 and Corollary 2.2, we can prove the following non-ergodic convergence rate result for the proximal ADMM (1.3) in terms of the objective error and the constraint error.
**Theorem 2.6**.: _Let Assumption 2.1 and Assumption 2.2 hold. Consider the proximal ADMM (1.3) for solving (1.1). Then_ \[|H(x^{k},y^{k})-H_{*}|=o\left(\frac{1}{\sqrt{k}}\right)\quad\text{and}\quad\|Ax^{k}+By^{k}-c\|=o\left(\frac{1}{\sqrt{k}}\right) \tag{2.8}\] _as \(k\to\infty\)._ Proof.: Since \[\rho(Ax^{k}+By^{k}-c)=\Delta\lambda^{k}\quad\text{and}\quad\|\Delta\lambda^{k}\|^{2}\leq\rho\|\Delta u^{k}\|_{G}^{2} \tag{2.9}\] we may use Proposition 2.5 to obtain the estimate \(\|Ax^{k}+By^{k}-c\|=o(1/\sqrt{k})\) as \(k\to\infty\). In the following we will focus on deriving the estimate of \(|H(x^{k},y^{k})-H_{*}|\). Let \(\bar{u}:=(\bar{x},\bar{y},\bar{\lambda})\) be a KKT point of (1.1). By using (2.4) we have \[H(x^{k},y^{k})-H_{*} \leq-\langle\bar{\lambda},Ax^{k}+By^{k}-c\rangle+\frac{1}{2}\left(\|u^{k-1}-\bar{u}\|_{G}^{2}-\|u^{k}-\bar{u}\|_{G}^{2}\right)\] \[=-\frac{1}{\rho}\langle\bar{\lambda},\Delta\lambda^{k}\rangle-\langle u^{k-1}-\bar{u},G\Delta u^{k}\rangle-\frac{1}{2}\|\Delta u^{k}\|_{G}^{2}\] \[\leq\frac{\|\bar{\lambda}\|}{\rho}\|\Delta\lambda^{k}\|+\|u^{k-1}-\bar{u}\|_{G}\|\Delta u^{k}\|_{G}. \tag{2.10}\] By virtue of the monotonicity of \(\{\|u^{k}-\bar{u}\|_{G}^{2}\}\) given in Corollary 2.2 we then obtain \[H(x^{k},y^{k})-H_{*} \leq\frac{\|\bar{\lambda}\|}{\rho}\|\Delta\lambda^{k}\|+\|u^{0}-\bar{u}\|_{G}\|\Delta u^{k}\|_{G}\] \[\leq\left(\|u^{0}-\bar{u}\|_{G}+\frac{\|\bar{\lambda}\|}{\sqrt{\rho}}\right)\|\Delta u^{k}\|_{G}.\] On the other hand, by using (2.1) we have \[H(x^{k},y^{k})-H_{*} \geq-\langle\bar{\lambda},Ax^{k}+By^{k}-c\rangle=-\frac{1}{\rho}\langle\bar{\lambda},\Delta\lambda^{k}\rangle\] \[\geq-\frac{\|\bar{\lambda}\|}{\rho}\|\Delta\lambda^{k}\|\geq-\frac{\|\bar{\lambda}\|}{\sqrt{\rho}}\|\Delta u^{k}\|_{G}.\] Therefore \[\left|H(x^{k},y^{k})-H_{*}\right|\leq\left(\|u^{0}-\bar{u}\|_{G}+\frac{\|\bar{\lambda}\|}{\sqrt{\rho}}\right)\|\Delta u^{k}\|_{G}. \tag{2.11}\] Now we can use Proposition 2.5 to conclude the proof. _Remark 2.1_.: By exploiting the connection between the Douglas-Rachford splitting algorithm and the classical ADMM (1.2), the non-ergodic convergence rate (2.8) has been established in [8] for the classical ADMM (1.2) under the conditions that \[\text{zero}(\partial d_{f}+\partial d_{g})\neq\emptyset \tag{2.12}\] and \[\partial d_{f}=A\circ\partial f^{*}\circ A^{*},\qquad\partial d_{g}=B\circ\partial g^{*}\circ B^{*}-c, \tag{2.13}\] where \(d_{f}(\lambda):=f^{*}(A^{*}\lambda)\) and \(d_{g}(\lambda):=g^{*}(B^{*}\lambda)-\langle\lambda,c\rangle\) with \(f^{*}\) and \(g^{*}\) denoting the convex conjugates of \(f\) and \(g\) respectively. The conditions (2.12) and (2.13) seem strong and unnatural because they are posed on the convex conjugates \(f^{*}\) and \(g^{*}\) instead of \(f\) and \(g\) themselves. In Theorem 2.6 we establish the non-ergodic convergence rate (2.8) for the proximal ADMM (1.3) with any positive semi-definite \(P\) and \(Q\) without requiring the conditions (2.12) and (2.13) and therefore our result extends and improves the one in [8]. Next we will consider establishing faster convergence rates under suitable regularity conditions. As a basis, we first prove the following result which tells that any weak cluster point of \(\{u^{k}\}\) is a KKT point of (1.1).
This result can be easily established for ADMM in finite-dimensional spaces; however, it is nontrivial for the proximal ADMM (1.3) in infinite-dimensional Hilbert spaces due to the required treatment of weak convergence; Proposition 2.1 plays a crucial role in our proof. **Theorem 2.7**.: _Let Assumption 2.1 and Assumption 2.2 hold. Consider the sequence \(\{u^{k}:=(x^{k},y^{k},\lambda^{k})\}\) generated by the proximal ADMM (1.3). Assume \(\{u^{k}\}\) is bounded and let \(u^{\dagger}:=(x^{\dagger},y^{\dagger},\lambda^{\dagger})\) be a weak cluster point of \(\{u^{k}\}\). Then \(u^{\dagger}\) is a KKT point of (1.1). Moreover, for any weak cluster point \(u^{*}\) of \(\{u^{k}\}\) there holds \(\|u^{*}-u^{\dagger}\|_{G}=0\)._ Proof.: We first show that \(u^{\dagger}\) is a KKT point of (1.1). According to Proposition 2.5 we have \(\|\Delta u^{k}\|_{G}^{2}\to 0\) which means \[\Delta\lambda^{k}\to 0,\quad P\Delta x^{k}\to 0,\quad B\Delta y^{k}\to 0,\quad Q\Delta y^{k}\to 0 \tag{2.14}\] as \(k\to\infty\). According to Theorem 2.6 we also have \[Ax^{k}+By^{k}-c\to 0\quad\text{and}\quad H(x^{k},y^{k})\to H_{*}\quad\text{as }k\to\infty. \tag{2.15}\] Since \(u^{\dagger}\) is a weak cluster point of the sequence \(\{u^{k}\}\), there exists a subsequence \(\{u^{k_{j}}:=(x^{k_{j}},y^{k_{j}},\lambda^{k_{j}})\}\) of \(\{u^{k}\}\) such that \(u^{k_{j}}\rightharpoonup u^{\dagger}\) as \(j\to\infty\). By using the first equation in (2.15) we immediately obtain \[Ax^{\dagger}+By^{\dagger}-c=0. \tag{2.16}\] By using Proposition 2.1 with \(k=k_{j}-1\) we have for any \(u:=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) that \[0 \leq H(x,y)-H(x^{k_{j}},y^{k_{j}})+\langle\lambda^{k_{j}}-\rho B\Delta y^{k_{j}},Ax+By-c\rangle\] \[\quad-\langle\lambda,Ax^{k_{j}}+By^{k_{j}}-c\rangle+\frac{1}{2}\left(\|u^{k_{j}-1}-u\|_{G}^{2}-\|u^{k_{j}}-u\|_{G}^{2}\right). \tag{2.17}\] According to Corollary 2.2, \(\{\|u^{k}\|_{G}\}\) is bounded. Thus we may use Proposition 2.5 to conclude \[\left|\|u^{k_{j}-1}-u\|_{G}^{2}-\|u^{k_{j}}-u\|_{G}^{2}\right|\leq\left(\|u^{k_{j}-1}-u\|_{G}+\|u^{k_{j}}-u\|_{G}\right)\|\Delta u^{k_{j}}\|_{G}\to 0\] as \(j\to\infty\). Therefore, by taking \(j\to\infty\) in (2.17) and using (2.14), (2.15) and \(\lambda^{k_{j}}\rightharpoonup\lambda^{\dagger}\) we can obtain \[0\leq H(x,y)-H_{*}+\langle\lambda^{\dagger},Ax+By-c\rangle \tag{2.18}\] for all \((x,y)\in\mathcal{X}\times\mathcal{Y}\). Since \(f\) and \(g\) are convex and lower semi-continuous, they are also weakly lower semi-continuous (see [11, Chapter 1, Corollary 2.2]). Thus, by using \(x^{k_{j}}\rightharpoonup x^{\dagger}\) and \(y^{k_{j}}\rightharpoonup y^{\dagger}\) we obtain \[H(x^{\dagger},y^{\dagger}) =f(x^{\dagger})+g(y^{\dagger})\leq\liminf_{j\to\infty}f(x^{k_{j}})+\liminf_{j\to\infty}g(y^{k_{j}})\] \[\leq\liminf_{j\to\infty}\big{(}f(x^{k_{j}})+g(y^{k_{j}})\big{)}\] \[=\liminf_{j\to\infty}H(x^{k_{j}},y^{k_{j}})=H_{*}.\] Since \((x^{\dagger},y^{\dagger})\) satisfies (2.16), we also have \(H(x^{\dagger},y^{\dagger})\geq H_{*}\). Therefore \(H(x^{\dagger},y^{\dagger})=H_{*}\) and then it follows from (2.18) and (2.16) that \[0\leq H(x,y)-H(x^{\dagger},y^{\dagger})+\langle\lambda^{\dagger},A(x-x^{\dagger})+B(y-y^{\dagger})\rangle\] for all \((x,y)\in\mathcal{X}\times\mathcal{Y}\). Using the definition of \(H\) we can immediately see that \(-A^{*}\lambda^{\dagger}\in\partial f(x^{\dagger})\) and \(-B^{*}\lambda^{\dagger}\in\partial g(y^{\dagger})\). Therefore \(u^{\dagger}\) is a KKT point of (1.1).
Let \(u^{*}\) be another weak cluster point of \(\{u^{k}\}\). Then there exists a subsequence \(\{u^{l_{j}}\}\) of \(\{u^{k}\}\) such that \(u^{l_{j}}\rightharpoonup u^{*}\) as \(j\to\infty\). Note the identity \[2\langle u^{k},G(u^{*}-u^{\dagger})\rangle=\|u^{k}-u^{\dagger}\|_{G}^{2}-\|u^{k}-u^{*}\|_{G}^{2}-\|u^{\dagger}\|_{G}^{2}+\|u^{*}\|_{G}^{2}. \tag{2.19}\] Since both \(u^{*}\) and \(u^{\dagger}\) are KKT points of (1.1) as shown above, it follows from Corollary 2.2 that both \(\{\|u^{k}-u^{\dagger}\|_{G}^{2}\}\) and \(\{\|u^{k}-u^{*}\|_{G}^{2}\}\) are monotonically decreasing and thus converge as \(k\to\infty\). By taking \(k=k_{j}\) and \(k=l_{j}\) in (2.19) respectively and letting \(j\to\infty\) we can see that, in both cases, the right hand side tends to the same limit. Therefore \[\langle u^{*},G(u^{*}-u^{\dagger})\rangle =\lim_{j\to\infty}\langle u^{l_{j}},G(u^{*}-u^{\dagger})\rangle\] \[=\lim_{j\to\infty}\langle u^{k_{j}},G(u^{*}-u^{\dagger})\rangle\] \[=\langle u^{\dagger},G(u^{*}-u^{\dagger})\rangle\] which implies \(\|u^{*}-u^{\dagger}\|_{G}^{2}=0\). _Remark 2.2_.: Theorem 2.7 requires \(\{u^{k}\}\) to be bounded. According to Corollary 2.2, \(\{\|u^{k}\|_{G}^{2}\}\) is bounded, which implies the boundedness of \(\{\lambda^{k}\}\). In the following we will provide sufficient conditions to guarantee the boundedness of \(\{(x^{k},y^{k})\}\). 1. From (2.5) it follows that \(\{\sigma_{f}\|x^{k}\|^{2}+\sigma_{g}\|y^{k}\|^{2}+\|u^{k}\|_{G}^{2}\}\) is bounded. By the definition of \(G\), this in particular implies the boundedness of \(\{\lambda^{k}\}\) and \(\{By^{k}\}\). Consequently, it follows from \(\Delta\lambda^{k}=\rho(Ax^{k}+By^{k}-c)\) that \(\{Ax^{k}\}\) is bounded. Putting the above together we can conclude that both \(\{(\sigma_{f}I+P+A^{*}A)x^{k}\}\) and \(\{(\sigma_{g}I+Q+B^{*}B)y^{k}\}\) are bounded. Therefore, if both the bounded linear self-adjoint operators \[\sigma_{f}I+P+A^{*}A\quad\text{and}\quad\sigma_{g}I+Q+B^{*}B\] are coercive, we can conclude the boundedness of \(\{x^{k}\}\) and \(\{y^{k}\}\). Here a linear operator \(L:\mathcal{V}\to\mathcal{H}\) between two Hilbert spaces \(\mathcal{V}\) and \(\mathcal{H}\) is called coercive if \(\|Lv\|\to\infty\) whenever \(\|v\|\to\infty\). It is easy to see that \(L\) is coercive if and only if there is a constant \(c>0\) such that \(c\|v\|\leq\|Lv\|\) for all \(v\in\mathcal{V}\). 2. If there exist \(\beta>H_{*}\) and \(\sigma>0\) such that the set \[\{(x,y)\in\mathcal{X}\times\mathcal{Y}:H(x,y)\leq\beta\text{ and }\|Ax+By-c\|\leq\sigma\}\] is bounded, then \(\{(x^{k},y^{k})\}\) is bounded. In fact, since \(H(x^{k},y^{k})\to H_{*}\) and \(Ax^{k}+By^{k}-c\to 0\) as shown in Theorem 2.6, the sequence \(\{(x^{k},y^{k})\}\) is contained in the above set except for finitely many terms. Thus \(\{(x^{k},y^{k})\}\) is bounded. _Remark 2.3_.: It is interesting to investigate under what conditions \(\{u^{k}\}\) has a unique weak cluster point.
According to Theorem 2.7, for any two weak cluster points \(u^{*}:=(x^{*},y^{*},\lambda^{*})\) and \(u^{!}:=(x^{!},y^{!},\lambda^{!})\) of \(\{u^{k}\}\) there hold \[\|u^{*}-u^{!}\|_{G}^{2}=0,\quad Ax^{*}+By^{*}=c,\quad-A^{*}\lambda^{*}\in\partial f(x^{*}),\quad-B^{*}\lambda^{*}\in\partial g(y^{*}),\] \[Ax^{!}+By^{!}=c,\quad-A^{*}\lambda^{!}\in\partial f(x^{!}),\quad-B^{*}\lambda^{!}\in\partial g(y^{!}).\] By using the definition of \(G\) and the monotonicity of \(\partial f\) and \(\partial g\) we can deduce that \[\lambda^{*}=\lambda^{!},\quad P(x^{*}-x^{!})=0,\quad Q(y^{*}-y^{!})=0,\quad B(y^{*}-y^{!})=0,\] \[A(x^{*}-x^{!})=0,\quad\sigma_{f}\|x^{*}-x^{!}\|^{2}=0,\quad\sigma_{g}\|y^{*}-y^{!}\|^{2}=0.\] Consequently \[(\sigma_{f}I+P+A^{*}A)(x^{*}-x^{!})=0\quad\text{ and }\quad(\sigma_{g}I+Q+B^{*}B)(y^{*}-y^{!})=0.\] Therefore, if both \(\sigma_{f}I+P+A^{*}A\) and \(\sigma_{g}I+Q+B^{*}B\) are injective, then \(x^{*}=x^{!}\) and \(y^{*}=y^{!}\) and hence \(\{u^{k}\}\) has a unique weak cluster point, say \(u^{!}\); consequently \(u^{k}\rightharpoonup u^{!}\) as \(k\to\infty\). _Remark 2.4_.: In [31] the proximal ADMM (with relaxation) has been considered under the condition that \[P+\rho A^{*}A+\partial f\text{ and }Q+\rho B^{*}B+\partial g\text{ are strongly maximal monotone,} \tag{2.20}\] which requires both \((P+\rho A^{*}A+\partial f)^{-1}\) and \((Q+\rho B^{*}B+\partial g)^{-1}\) to exist as single-valued mappings and to be Lipschitz continuous. It has been shown that the iterative sequence converges weakly to a KKT point which is its unique weak cluster point. The argument in [31] used the facts that the KKT mapping \(F(u)\), defined in (2.21) below, is maximal monotone and maximal monotone operators are closed under the weak-strong topology ([2, 3]). Our argument is essentially based on Proposition 2.1; it is elementary and does not rely on any machinery from the maximal monotone operator theory. Based on Theorem 2.7, we now turn to deriving convergence rates of the proximal ADMM (1.3) under certain regularity conditions. To this end, we introduce the multifunction \(F:\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\rightrightarrows\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) defined by \[F(u):=\left(\begin{array}{c}\partial f(x)+A^{*}\lambda\\ \partial g(y)+B^{*}\lambda\\ Ax+By-c\end{array}\right),\quad\forall u=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}. \tag{2.21}\] Then \(\bar{u}\) being a KKT point of (1.1) means \(0\in F(\bar{u})\) or, equivalently, \(\bar{u}\in F^{-1}(0)\), where \(F^{-1}\) denotes the inverse multifunction of \(F\). We will achieve our goal under certain bounded (Hölder) metric subregularity conditions of \(F\). We need the following calculus lemma. **Lemma 2.8**.: _Let \(\{\Delta_{k}\}\) be a sequence of nonnegative numbers satisfying_ \[\Delta_{k}^{\theta}\leq C(\Delta_{k-1}-\Delta_{k}) \tag{2.22}\] _for all \(k\geq 1\), where \(C>0\) and \(\theta>1\) are constants. Then there is a constant \(\tilde{C}>0\) such that_ \[\Delta_{k}\leq\tilde{C}(1+k)^{-\frac{1}{\theta-1}}\] _for all \(k\geq 0\)._ Proof.: Please refer to the proof of [1, Theorem 2]. **Theorem 2.9**.: _Let Assumption 2.1 and Assumption 2.2 hold. Consider the sequence \(\{u^{k}:=(x^{k},y^{k},\lambda^{k})\}\) generated by the proximal ADMM (1.3). Assume \(\{u^{k}\}\) is bounded and let \(u^{\dagger}:=(x^{\dagger},y^{\dagger},\lambda^{\dagger})\) be a weak cluster point of \(\{u^{k}\}\).
Let \(R\) be a number such that \(\|u^{k}-u^{\dagger}\|\leq R\) for all \(k\) and assume that there exist \(\kappa>0\) and \(\alpha\in(0,1]\) such that_ \[d(u,F^{-1}(0))\leq\kappa[d(0,F(u))]^{\alpha},\quad\forall u\in B_{R}(u^{ \dagger}). \tag{2.23}\] 1. _If_ \(\alpha=1\)_, then there exists a constant_ \(0<q<1\) _such that_ \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2}\leq q^{2}\left(\|u ^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}\right)\] (2.24) _for all_ \(k\geq 0\) _and consequently there exist_ \(C>0\) _and_ \(0<q<1\) _such that_ \[\|u^{k}-u^{\dagger}\|_{G},\,\|\Delta u^{k}\|_{G} \leq Cq^{k},\] (2.25) \[\|Ax^{k}+By^{k}-c\| \leq Cq^{k},\] \[|H(x^{k},y^{k})-H_{*}| \leq Cq^{k}\] _for all_ \(k\geq 0\)_._ 2. _If_ \(\alpha\in(0,1)\) _then there is a constant_ \(C\) _such that_ \[\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}\leq C(k+1)^{-\frac{ \alpha}{1-\alpha}}\] (2.26) _and consequently_ \[\|u^{k}-u^{\dagger}\|_{G},\,\|\Delta u^{k}\|_{G} \leq C(k+1)^{-\frac{1}{2(1-\alpha)}},\] \[\|Ax^{k}+By^{k}-c\| \leq C(k+1)^{-\frac{1}{2(1-\alpha)}},\] (2.27) \[|H(x^{k},y^{k})-H_{*}| \leq C(k+1)^{-\frac{1}{2(1-\alpha)}}\] _for all_ \(k\geq 0\)_._ Proof.: According to Theorem 2.7, \(u^{\dagger}\) is a KKT point of (1.1). Therefore we may use Lemma 2.4 with \(\bar{u}=u^{\dagger}\) to obtain \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\| \Delta u^{k+1}\|_{G}^{2}\] \[=\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\eta\| \Delta u^{k+1}\|_{G}^{2}\] \[\quad-(1-\eta)\|\Delta u^{k+1}\|_{G}^{2}, \tag{2.28}\] where \(\eta\in(0,1)\) is any number. According to (2.3), \[\begin{pmatrix}\rho A^{*}B\Delta y^{k+1}-P\Delta x^{k+1}\\ -Q\Delta y^{k+1}\\ Ax^{k+1}+By^{k+1}-c\end{pmatrix}\in F(u^{k+1}).\] Thus, by using \(\Delta\lambda^{k+1}=\rho(Ax^{k+1}+By^{k+1}-c)\) we can obtain \[d^{2}(0,F(u^{k+1})) \leq\|\rho A^{*}B\Delta y^{k+1}-P\Delta x^{k+1}\|^{2}+\|-Q\Delta y^{ k+1}\|^{2}\] \[\quad+\|Ax^{k+1}+By^{k+1}-c\|^{2}\] \[\leq 2\|P\Delta x^{k+1}\|^{2}+2\rho^{2}\|A\|^{2}\|B\Delta y^{k+1} \|^{2}\] \[\quad+\|Q\Delta y^{k+1}\|^{2}+\frac{1}{\rho^{2}}\|\Delta\lambda^{ k+1}\|^{2}\] \[\leq\gamma\|\Delta u^{k+1}\|_{G}^{2}, \tag{2.29}\] where \[\gamma:=\max\left\{2\|P\|,2\rho\|A\|^{2},\|Q\|,\frac{1}{\rho}\right\}.\] Combining this with (2.28) gives \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\eta \|\Delta u^{k+1}\|_{G}^{2}\] \[\quad-\frac{1-\eta}{\gamma}d^{2}(0,F(u^{k+1})).\] Since \(\|u^{k}-u^{\dagger}\|\leq R\) for all \(k\) and \(F\) satisfies (2.23), one can see that \[d(u^{k+1},F^{-1}(0))\leq\kappa[d(0,F(u^{k+1}))]^{\alpha},\quad\forall k\geq 0.\] Consequently \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\eta \|\Delta u^{k+1}\|_{G}^{2}\] \[\quad-\frac{1-\eta}{\gamma\kappa^{2/\alpha}}[d(u^{k+1},F^{-1}(0) )]^{2/\alpha}.\] For any \(u=(x,y,\lambda)\in\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\) let \[d_{G}(u,F^{-1}(0)):=\inf_{\bar{u}\in F^{-1}(0)}\|u-\bar{u}\|_{G}\] which measures the "distance" from \(u\) to \(F^{-1}(0)\) under the semi-norm \(\|\cdot\|_{G}\). It is easy to see that \[d_{G}^{2}(u,F^{-1}(0))\leq\|G\|d^{2}(u,F^{-1}(0)),\] where \(\|G\|\) denotes the norm of the operator \(G\). 
Then we have \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\eta\|\Delta u^{k+1}\|_{G}^{2}\] \[\quad-\frac{1-\eta}{\gamma(\kappa^{2}\|G\|)^{1/\alpha}}[d_{G}(u^{k+1},F^{-1}(0))]^{2/\alpha}.\] Now let \(\bar{u}\in F^{-1}(0)\) be any point. Then \[\|u^{k+1}-u^{\dagger}\|_{G}\leq\|u^{k+1}-\bar{u}\|_{G}+\|u^{\dagger}-\bar{u}\|_{G}.\] Since \(u^{\dagger}\) is a weak cluster point of \(\{u^{k}\}\), there is a subsequence \(\{u^{k_{j}}\}\) of \(\{u^{k}\}\) such that \(u^{k_{j}}\rightharpoonup u^{\dagger}\). Thus \[\|u^{\dagger}-\bar{u}\|_{G}^{2}=\lim_{j\to\infty}\langle u^{k_{j}}-\bar{u},G(u^{\dagger}-\bar{u})\rangle\leq\liminf_{j\to\infty}\|u^{k_{j}}-\bar{u}\|_{G}\|u^{\dagger}-\bar{u}\|_{G},\] which implies \(\|u^{\dagger}-\bar{u}\|_{G}\leq\liminf_{j\to\infty}\|u^{k_{j}}-\bar{u}\|_{G}\). From Corollary 2.2 we know that \(\{\|u^{k}-\bar{u}\|_{G}^{2}\}\) is monotonically decreasing. Thus \[\|u^{k+1}-u^{\dagger}\|_{G}\leq\|u^{k+1}-\bar{u}\|_{G}+\liminf_{j\to\infty}\|u^{k_{j}}-\bar{u}\|_{G}\leq 2\|u^{k+1}-\bar{u}\|_{G}.\] Since \(\bar{u}\in F^{-1}(0)\) is arbitrary, we thus have \[\|u^{k+1}-u^{\dagger}\|_{G}\leq 2d_{G}(u^{k+1},F^{-1}(0)).\] Therefore \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-\eta\|\Delta u^{k+1}\|_{G}^{2}\] \[\quad-\frac{1-\eta}{\gamma(4\kappa^{2}\|G\|)^{1/\alpha}}\|u^{k+1}-u^{\dagger}\|_{G}^{2/\alpha}.\] By using the fact \(\|\Delta u^{k}\|_{G}\to 0\) established in Proposition 2.5, we can find a constant \(C>0\) such that \[\|\Delta u^{k+1}\|_{G}^{2}\geq C\|\Delta u^{k+1}\|_{G}^{2/\alpha}.\] Note that \(\|\Delta u^{k+1}\|_{G}^{2}\geq\|\Delta y^{k+1}\|_{Q}^{2}\). Thus \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2} \leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-C\eta\|\Delta y^{k+1}\|_{Q}^{2/\alpha}\] \[\quad-\frac{1-\eta}{\gamma(4\kappa^{2}\|G\|)^{1/\alpha}}\|u^{k+1}-u^{\dagger}\|_{G}^{2/\alpha}.\] Choose \(\eta\) such that \[\eta=\frac{1}{1+C\gamma(4\kappa^{2}\|G\|)^{1/\alpha}}.\] Then \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2}\leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-C\eta\left(\|\Delta y^{k+1}\|_{Q}^{2/\alpha}+\|u^{k+1}-u^{\dagger}\|_{G}^{2/\alpha}\right).\] Using the inequality \((a+b)^{p}\leq 2^{p-1}(a^{p}+b^{p})\) for \(a,b\geq 0\) and \(p\geq 1\), we then obtain \[\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2}\leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2}-2^{1-1/\alpha}C\eta\left(\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2}\right)^{1/\alpha}. \tag{2.30}\] (i) If \(\alpha=1\), then we obtain the linear convergence \[(1+C\eta)\left(\|u^{k+1}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k+1}\|_{Q}^{2}\right)\leq\|u^{k}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{k}\|_{Q}^{2},\] which is (2.24) with \(q^{2}=1/(1+C\eta)\). By using Lemma 2.4 and (2.24) we immediately obtain the first estimate in (2.25). By using (2.9) and (2.11) we then obtain the last two estimates in (2.25). (ii) If \(\alpha\in(0,1)\), we may use (2.30) and Lemma 2.8 to obtain (2.26). To derive the first estimate in (2.27), we may use Lemma 2.4 to obtain \[\sum_{j=l}^{k}\|\Delta u^{j}\|_{G}^{2}\leq\|u^{l}-u^{\dagger}\|_{G}^{2}+\|\Delta y^{l}\|_{Q}^{2}\] for all integers \(1\leq l<k\).
By using the monotonicity of \(\{\|\Delta u^{j}\|_{G}^{2}\}\) shown in Lemma 2.3 and the estimate (2.26) we have \[(k-l+1)\|\Delta u^{k}\|_{G}^{2}\leq C(l+1)^{-\frac{\alpha}{1-\alpha}}.\] Taking \(l=[k/2]\), the largest integer not exceeding \(k/2\), gives \[\|\Delta u^{k}\|_{G}^{2}\leq C(k+1)^{-\frac{\alpha}{1-\alpha}-1}=C(k+1)^{-\frac{1}{1-\alpha}}\] with a possibly different generic constant \(C\). This shows the first estimate in (2.27). Based on this, we can use (2.9) and (2.11) to obtain the last two estimates in (2.27). The proof is therefore complete. _Remark 2.5_.: Let us give some comments on the condition (2.23). In finite dimensional Euclidean spaces, it has been proved in [30] that for every polyhedral multifunction \(\Psi:\mathbb{R}^{m}\rightrightarrows\mathbb{R}^{n}\) there is a constant \(\kappa>0\) such that for any \(y\in\mathbb{R}^{n}\) there is a number \(\varepsilon>0\) such that \[d(x,\Psi^{-1}(y))\leq\kappa d(y,\Psi(x)),\quad\forall x\text{ satisfying }d(y,\Psi(x))<\varepsilon.\] This result in particular implies the bounded metric subregularity of \(\Psi\), i.e. for any \(r>0\) and any \(y\in\mathbb{R}^{n}\) there is a number \(C>0\) such that \[d(x,\Psi^{-1}(y))\leq Cd(y,\Psi(x)),\quad\forall x\in B_{r}(0).\] Therefore, if \(\partial f\) and \(\partial g\) are polyhedral multifunctions, then the multifunction \(F\) defined by (2.21) is also polyhedral and thus (2.23) with \(\alpha=1\) holds. The bounded metric subregularity of polyhedral multifunctions in arbitrary Banach spaces has been established in [34]. On the other hand, if \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{Z}\) are finite dimensional Euclidean spaces, and if \(f\) and \(g\) are semi-algebraic convex functions, then the multifunction \(F\) satisfies (2.23) for some \(\alpha\in(0,1]\). Indeed, the semi-algebraicity of \(f\) and \(g\) implies that their subdifferentials \(\partial f\) and \(\partial g\) are semi-algebraic multifunctions with closed graph; consequently \(F\) is semi-algebraic with closed graph. According to [24, Proposition 3.1], \(F\) is bounded Hölder metrically subregular at any point \((\bar{u},\bar{\xi})\) on its graph, i.e. for any \(r>0\) there exist \(\kappa>0\) and \(\alpha\in(0,1]\) such that \[d(u,F^{-1}(\bar{\xi}))\leq\kappa[d(\bar{\xi},F(u))]^{\alpha},\quad\forall u\in B_{r}(\bar{u}),\] which in particular implies (2.23). By inspecting the proof of Theorem 2.9, it is easy to see that the same convergence rate results can be derived with the condition (2.23) replaced by the weaker condition: there exist \(\kappa>0\) and \(\alpha\in(0,1]\) such that \[d_{G}(u^{k},F^{-1}(0))\leq\kappa\left\|\Delta u^{k}\right\|_{G}^{\alpha},\quad\forall k\geq 1. \tag{2.31}\] Therefore we have the following result. **Theorem 2.10**.: _Let Assumption 2.1 and Assumption 2.2 hold. Consider the sequence \(\{u^{k}:=(x^{k},y^{k},\lambda^{k})\}\) generated by the proximal ADMM (1.3). Assume \(\{u^{k}\}\) is bounded. If there exist \(\kappa>0\) and \(\alpha\in(0,1]\) such that (2.31) holds, then, for any weak cluster point \(u^{\dagger}\) of \(\{u^{k}\}\), the same convergence rate results in Theorem 2.9 hold._ _Remark 2.6_.: Note that the condition (2.31) is based on the iterative sequence itself. Therefore, it makes it possible to check the condition by exploring not only the properties of the multifunction \(F\) but also the structure of the algorithm.
The condition (2.31) with \(\alpha=1\) has been introduced in [27] as an iteration-based error bound condition to study the linear convergence of the proximal ADMM (1.3) with \(Q=0\) in finite dimensions. _Remark 2.7_.: The condition (2.31) is strongly motivated by the proof of Theorem 2.9. We would like to provide here an alternative motivation. Consider the proximal ADMM (1.3). We can show that if \(\|\Delta u^{k}\|_{G}=0\) then \(u^{k}\) must be a KKT point of (1.1). Indeed, \(\|\Delta u^{k}\|_{G}^{2}=0\) implies \(P\Delta x^{k}=0\), \(\widehat{Q}\Delta y^{k}=0\) and \(\Delta\lambda^{k}=0\). Since \(\widehat{Q}=Q+\rho B^{*}B\) with \(Q\) positive semi-definite and \(\Delta\lambda^{k}=\rho(Ax^{k}+By^{k}-c)\), we also have \(B\Delta y^{k}=0\), \(Q\Delta y^{k}=0\) and \(Ax^{k}+By^{k}-c=0\). Thus, it follows from (2.3) that \[-A^{*}\lambda^{k}\in\partial f(x^{k}),\quad-B^{*}\lambda^{k}\in\partial g(y^{k}),\quad Ax^{k}+By^{k}=c,\] which shows that \(u^{k}=(x^{k},y^{k},\lambda^{k})\) is a KKT point, i.e., \(u^{k}\in F^{-1}(0)\). Therefore, it is natural to ask: if \(\|\Delta u^{k}\|_{G}\) is small, can we guarantee \(d_{G}(u^{k},F^{-1}(0))\) to be small as well? This motivates us to propose a condition like \[d_{G}(u^{k},F^{-1}(0))\leq\varphi(\|\Delta u^{k}\|_{G}),\quad\forall k\geq 1\] for some function \(\varphi:[0,\infty)\to[0,\infty)\) with \(\varphi(0)=0\). The condition (2.31) corresponds to \(\varphi(s)=\kappa s^{\alpha}\) for some \(\kappa>0\) and \(\alpha\in(0,1]\). In finite dimensional Euclidean spaces some linear convergence results on the proximal ADMM (1.3) have been established in [9] under various scenarios involving strong convexity of \(f\) and/or \(g\), Lipschitz continuity of \(\nabla f\) and/or \(\nabla g\), together with further conditions on \(A\) and/or \(B\), see [9, Theorem 3.1 and Table 1]. In the following theorem we will show that (2.31) with \(\alpha=1\) holds under any one of these scenarios and thus the linear convergence in [9, Theorem 3.1 and Theorem 3.4] can be established by using Theorem 2.10. Therefore, the linear convergence results based on the bounded metric subregularity of \(F\) or the scenarios in [9] can be treated in a unified manner. In fact, our next theorem improves the results in [9] by establishing the linear convergence of \(\{u^{k}\}\) and \(\{H(x^{k},y^{k})\}\) and relaxing the Lipschitz continuity of the gradient(s) to local Lipschitz continuity. Furthermore, our result is established in general Hilbert spaces. To formulate the scenarios from [9] in this general setting, we need to replace the full row/column rank of matrices by the coercivity of linear operators. We also need the linear operator \(M:\mathcal{X}\times\mathcal{Y}\to\mathcal{Z}\) defined by \[M(x,y):=Ax+By,\quad\forall(x,y)\in\mathcal{X}\times\mathcal{Y},\] which is constructed from \(A\) and \(B\). It is easy to see that the adjoint of \(M\) is \(M^{*}z=(A^{*}z,B^{*}z)\) for any \(z\in\mathcal{Z}\). **Theorem 2.11**.: _Let Assumption 2.1 and Assumption 2.2 hold. Let \(\{u^{k}\}\) be the sequence generated by the proximal ADMM (1.3). Then \(\{u^{k}\}\) is bounded and there exists a constant \(C>0\) such that_ \[d_{G}(u^{k},F^{-1}(0))\leq C\|\Delta u^{k}\|_{G} \tag{2.32}\] _for all \(k\geq 1\), provided any one of the following conditions holds:_ 1. \(\sigma_{g}>0\)_,_ \(A\) _and_ \(B^{*}\) _are coercive,_ \(g\) _is differentiable and its gradient is Lipschitz continuous over bounded sets;_ 2.
\(\sigma_{f}>0\)_,_ \(\sigma_{g}>0\)_,_ \(B^{*}\) _is coercive,_ \(g\) _is differentiable and its gradient is Lipschitz continuous over bounded sets;_ 3. \(\lambda^{0}=0\)_,_ \(\sigma_{f}>0\)_,_ \(\sigma_{g}>0\)_,_ \(M^{*}\) _restricted to_ \(\mathcal{N}(M^{*})^{\perp}\) _is coercive, both_ \(f\) _and_ \(g\) _are differentiable and their gradients are Lipschitz continuous over bounded sets;_ 4. \(\lambda^{0}=0\)_,_ \(\sigma_{g}>0\)_,_ \(A\) _is coercive,_ \(M^{*}\) _restricted to_ \(\mathcal{N}(M^{*})^{\perp}\) _is coercive, both_ \(f\) _and_ \(g\) _are differentiable and their gradients are Lipschitz continuous over bounded sets;_ _where \(\mathcal{N}(M^{*})\) denotes the null space of \(M^{*}\). Consequently, there exist \(C>0\) and \(0<q<1\) such that_ \[\|u^{k}-u^{\dagger}\|\leq Cq^{k}\quad\text{ and }\quad|H(x^{k},y^{k})-H_{*}|\leq Cq^{k}\] _for all \(k\geq 0\), where \(u^{\dagger}:=(x^{\dagger},y^{\dagger},\lambda^{\dagger})\) is a KKT point of (1.1)._ Proof.: We will only consider scenario (i) since the proofs for the other scenarios are similar. In the following we will use \(C\) to denote a generic constant which may change from line to line but is independent of \(k\). We first show the boundedness of \(\{u^{k}\}\). According to Corollary 2.2, \(\{\|u^{k}\|_{G}^{2}\}\) is bounded, which implies the boundedness of \(\{\lambda^{k}\}\). Since \(\sigma_{g}>0\), it follows from (2.5) that \(\{y^{k}\}\) is bounded. Consequently, it follows from \(\Delta\lambda^{k}=\rho(Ax^{k}+By^{k}-c)\) that \(\{Ax^{k}\}\) is bounded. Since \(A\) is coercive, \(\{x^{k}\}\) must be bounded. Next we show (2.32). Let \(u^{\dagger}:=(x^{\dagger},y^{\dagger},\lambda^{\dagger})\) be a weak cluster point of \(\{u^{k}\}\), whose existence is guaranteed by the boundedness of \(\{u^{k}\}\). According to Theorem 2.7, \(u^{\dagger}\) is a KKT point of (1.1). Let \((\xi,\eta,\tau)\in F(u^{k})\) be any element. Then \[\xi-A^{*}\lambda^{k}\in\partial f(x^{k}),\quad\eta-B^{*}\lambda^{k}\in\partial g(y^{k}),\quad\tau=Ax^{k}+By^{k}-c.\] By using the monotonicity of \(\partial f\) and \(\partial g\) we have \[\sigma_{f}\|x^{k}-x^{\dagger}\|^{2}+\sigma_{g}\|y^{k}-y^{\dagger}\|^{2}\] \[\leq\langle\xi-A^{*}\lambda^{k}+A^{*}\lambda^{\dagger},x^{k}-x^{\dagger}\rangle+\langle\eta-B^{*}\lambda^{k}+B^{*}\lambda^{\dagger},y^{k}-y^{\dagger}\rangle\] \[=\langle\xi,x^{k}-x^{\dagger}\rangle+\langle\eta,y^{k}-y^{\dagger}\rangle+\langle\lambda^{\dagger}-\lambda^{k},A(x^{k}-x^{\dagger})+B(y^{k}-y^{\dagger})\rangle\] \[=\langle\xi,x^{k}-x^{\dagger}\rangle+\langle\eta,y^{k}-y^{\dagger}\rangle+\langle\lambda^{\dagger}-\lambda^{k},\tau\rangle. \tag{2.33}\] Since \(\sigma_{g}>0\), it follows from (2.33) and the Cauchy-Schwarz inequality that \[\|y^{k}-y^{\dagger}\|^{2}\leq C\left(\|\eta\|^{2}+\|\xi\|\|x^{k}-x^{\dagger}\|+\|\tau\|\|\lambda^{k}-\lambda^{\dagger}\|\right). \tag{2.34}\] Note that \(A(x^{k}-x^{\dagger})=-B(y^{k}-y^{\dagger})+\frac{1}{\rho}\Delta\lambda^{k}\). Since \(A\) is coercive, we have \[\|x^{k}-x^{\dagger}\|^{2}\leq C\|A(x^{k}-x^{\dagger})\|^{2}\leq C\left(\|y^{k}-y^{\dagger}\|^{2}+\|\Delta\lambda^{k}\|^{2}\right). \tag{2.35}\] By the differentiability of \(g\) we have \(-B^{*}\lambda^{\dagger}=\nabla g(y^{\dagger})\) and \(-B^{*}\lambda^{k}-Q\Delta y^{k}=\nabla g(y^{k})\).
Since \(B^{*}\) is coercive and \(\nabla g\) is Lipschitz continuous over bounded sets, we thus obtain \[\|\lambda^{k}-\lambda^{\dagger}\|^{2} \leq C\|B^{*}(\lambda^{k}-\lambda^{\dagger})\|^{2}=C\|Q\Delta y^{k}+\nabla g(y^{k})-\nabla g(y^{\dagger})\|^{2}\] \[\leq C\left(\|\Delta y^{k}\|_{Q}^{2}+\|y^{k}-y^{\dagger}\|^{2}\right). \tag{2.36}\] Adding (2.35) and (2.36) and then using (2.34), it follows that \[\|x^{k}-x^{\dagger}\|^{2}+\|\lambda^{k}-\lambda^{\dagger}\|^{2}\leq C\left(\|\eta\|^{2}+\|\Delta u^{k}\|_{G}^{2}+\|\xi\|\|x^{k}-x^{\dagger}\|+\|\tau\|\|\lambda^{k}-\lambda^{\dagger}\|\right),\] which together with the Cauchy-Schwarz inequality then implies \[\|x^{k}-x^{\dagger}\|^{2}+\|\lambda^{k}-\lambda^{\dagger}\|^{2}\leq C\left(\|\xi\|^{2}+\|\eta\|^{2}+\|\tau\|^{2}+\|\Delta u^{k}\|_{G}^{2}\right). \tag{2.37}\] Combining (2.34) and (2.37) we can obtain \[\|x^{k}-x^{\dagger}\|^{2}+\|y^{k}-y^{\dagger}\|^{2}+\|\lambda^{k}-\lambda^{\dagger}\|^{2}\leq C\left(\|\xi\|^{2}+\|\eta\|^{2}+\|\tau\|^{2}+\|\Delta u^{k}\|_{G}^{2}\right).\] Since \((\xi,\eta,\tau)\in F(u^{k})\) is arbitrary, we therefore have \[\|u^{k}-u^{\dagger}\|^{2}\leq C\left(\left[d(0,F(u^{k}))\right]^{2}+\|\Delta u^{k}\|_{G}^{2}\right).\] With the help of (2.29), we then obtain \[\|u^{k}-u^{\dagger}\|^{2}\leq C\|\Delta u^{k}\|_{G}^{2}. \tag{2.38}\] Thus \[[d_{G}(u^{k},F^{-1}(0))]^{2}\leq C[d(u^{k},F^{-1}(0))]^{2}\leq C\|u^{k}-u^{\dagger}\|^{2}\leq C\|\Delta u^{k}\|_{G}^{2},\] which shows (2.32). Because \(\{u^{k}\}\) is bounded and (2.32) holds, we may use Theorem 2.10 to conclude the existence of a constant \(q\in(0,1)\) such that \[\|\Delta u^{k}\|_{G}\leq Cq^{k}\quad\text{and}\quad|H(x^{k},y^{k})-H_{*}|\leq Cq^{k}.\] Finally we may use (2.38) to obtain \(\|u^{k}-u^{\dagger}\|\leq Cq^{k}\). _Remark 2.8_.: If \(\mathcal{Z}\) is finite-dimensional, the coercivity of \(M^{*}\) restricted to \(\mathcal{N}(M^{*})^{\perp}\) required in scenarios (iii) and (iv) holds automatically. Indeed, if this coercivity failed, there would exist a sequence \(\{z^{k}\}\subset\mathcal{N}(M^{*})^{\perp}\setminus\{0\}\) such that \[\|z^{k}\|\geq k\|M^{*}z^{k}\|,\quad k=1,2,\cdots.\] By rescaling we may assume \(\|z^{k}\|=1\) for all \(k\). Since \(\mathcal{Z}\) is finite-dimensional, by taking a subsequence if necessary, we may assume \(z^{k}\to z\) for some \(z\in\mathcal{Z}\). Clearly \(z\in\mathcal{N}(M^{*})^{\perp}\) and \(\|z\|=1\). Since \(\|M^{*}z^{k}\|\leq 1/k\) for all \(k\), we have \(\|M^{*}z\|=\lim_{k\to\infty}\|M^{*}z^{k}\|=0\), which means \(z\in\mathcal{N}(M^{*})\). Thus \(z\in\mathcal{N}(M^{*})\cap\mathcal{N}(M^{*})^{\perp}=\{0\}\), which is a contradiction. ## 3. **Proximal ADMM for linear inverse problems** In this section we consider the method (1.11) as a regularization method for solving (1.9) and establish a convergence rate result under a benchmark source condition on the sought solution. Throughout this section we make the following assumptions on the operators \(Q\), \(L\), \(A\), the constraint set \(\mathcal{C}\) and the function \(f\): **Assumption 3.1**.: (i) \(A:\mathcal{X}\to\mathcal{H}\) is a bounded linear operator, \(Q:\mathcal{X}\to\mathcal{X}\) is a bounded linear positive semi-definite self-adjoint operator, and \(\mathcal{C}\subset\mathcal{X}\) is a closed convex subset. (ii) \(L\) is a densely defined, closed, linear operator from \(\mathcal{X}\) to \(\mathcal{Y}\) with domain \(\operatorname{dom}(L)\).
(iii) There is a constant \(c_{0}>0\) such that \[\|Ax\|^{2}+\|Lx\|^{2}\geq c_{0}\|x\|^{2},\qquad\forall x\in\operatorname{dom}(L).\] (iv) \(f:\mathcal{Y}\to(-\infty,\infty]\) is proper, lower semi-continuous, and strongly convex. This assumption is standard in the literature on regularization methods and has been used in [21, 22]. Based on (ii), the adjoint \(L^{*}\) of \(L\) is well defined, and it is also closed and densely defined; moreover, \(z\in\operatorname{dom}(L^{*})\) if and only if there exists \(w\in\mathcal{X}\) such that \(\langle w,x\rangle=\langle z,Lx\rangle\) for all \(x\in\operatorname{dom}(L)\), in which case \(L^{*}z=w\). Under Assumption 3.1, it has been shown in [21, 22] that the proximal ADMM (1.11) is well-defined and, if the exact data \(b\) is consistent in the sense that there exists \(\hat{x}\in\mathcal{X}\) such that \[\hat{x}\in\operatorname{dom}(L)\cap\mathcal{C},\quad L\hat{x}\in\operatorname{dom}(f)\quad\text{ and }\quad A\hat{x}=b,\] then the problem (1.9) has a unique solution, denoted by \(x^{\dagger}\). Furthermore, there holds the following monotonicity result, see [22, Lemma 2.3]; alternatively, it can also be derived from Lemma 2.3. **Lemma 3.1**.: _Let \(\{z^{k},y^{k},x^{k},\lambda^{k},\mu^{k},\nu^{k}\}\) be defined by the proximal ADMM (1.11) with noisy data and let_ \[E_{k}:=\frac{1}{2\rho_{1}}\|\Delta\lambda^{k}\|^{2}+\frac{1}{2\rho_{2}}\|\Delta\mu^{k}\|^{2}+\frac{1}{2\rho_{3}}\|\Delta\nu^{k}\|^{2}+\frac{\rho_{2}}{2}\|\Delta y^{k}\|^{2}+\frac{\rho_{3}}{2}\|\Delta x^{k}\|^{2}+\frac{1}{2}\|\Delta z^{k}\|_{Q}^{2}. \tag{3.1}\] _Then \(\{E_{k}\}\) is monotonically decreasing with respect to \(k\)._ In the following we will always assume that the exact data \(b\) is consistent. We will derive a convergence rate of \(x^{k}\) to the unique solution \(x^{\dagger}\) of (1.9) under the source condition \[\exists\mu^{\dagger}\in\partial f(Lx^{\dagger})\cap\operatorname{dom}(L^{*})\text{ and }\nu^{\dagger}\in\partial\iota_{\mathcal{C}}(x^{\dagger})\text{ such that }L^{*}\mu^{\dagger}+\nu^{\dagger}\in\operatorname{Ran}(A^{*}). \tag{3.2}\] Note that when \(L=I\) and \(\mathcal{C}=\mathcal{X}\), (3.2) becomes the benchmark source condition \[\partial f(x^{\dagger})\cap\operatorname{Ran}(A^{*})\neq\emptyset,\] which has been widely used to derive convergence rates for regularization methods, see [7, 13, 23, 29] for instance. We have the following convergence rate result. **Theorem 3.2**.: _Let Assumption 3.1 hold, let the exact data \(b\) be consistent, and let the sequence \(\{z^{k},y^{k},x^{k},\lambda^{k},\mu^{k},\nu^{k}\}\) be defined by the proximal ADMM (1.11) with noisy data \(b^{\delta}\) satisfying \(\|b^{\delta}-b\|\leq\delta\). Assume the unique solution \(x^{\dagger}\) of (1.9) satisfies the source condition (3.2). Then for the integer \(k_{\delta}\) chosen such that \(k_{\delta}\sim\delta^{-1}\) there hold_ \[\|x^{k_{\delta}}-x^{\dagger}\|=O(\delta^{1/4}),\quad\|y^{k_{\delta}}-Lx^{\dagger}\|=O(\delta^{1/4})\quad\text{and}\quad\|z^{k_{\delta}}-x^{\dagger}\|=O(\delta^{1/4})\] _as \(\delta\to 0\)._ In order to prove this result, let us start from the formulation of the algorithm (1.11) and derive some useful estimates.
For simplicity of exposition, we set \[\Delta x^{k+1}:=x^{k+1}-x^{k},\quad\Delta y^{k+1}:=y^{k+1}-y^{k},\quad\Delta z^{k+1}:=z^{k+1}-z^{k},\] \[\Delta\lambda^{k+1}:=\lambda^{k+1}-\lambda^{k},\quad\Delta\mu^{k+1}:=\mu^{k+1}-\mu^{k},\quad\Delta\nu^{k+1}:=\nu^{k+1}-\nu^{k}.\] According to the definition of \(z^{k+1}\), \(y^{k+1}\) and \(x^{k+1}\) in (1.11), we have the optimality conditions \[0=A^{*}\lambda^{k}+\nu^{k}+\rho_{1}A^{*}(Az^{k+1}-b^{\delta})+L^{*}(\mu^{k}+\rho_{2}(Lz^{k+1}-y^{k}))+\rho_{3}(z^{k+1}-x^{k})+Q(z^{k+1}-z^{k}), \tag{3.3}\] \[0\in\partial f(y^{k+1})-\mu^{k}-\rho_{2}(Lz^{k+1}-y^{k+1}), \tag{3.4}\] \[0\in\partial\iota_{\mathcal{C}}(x^{k+1})-\nu^{k}-\rho_{3}(z^{k+1}-x^{k+1}). \tag{3.5}\] By using the last two equations in (1.11), we have from (3.4) and (3.5) that \[\mu^{k+1}\in\partial f(y^{k+1})\quad\text{and}\quad\nu^{k+1}\in\partial\iota_{\mathcal{C}}(x^{k+1}). \tag{3.6}\] Let \(y^{\dagger}:=Lx^{\dagger}\). From the strong convexity of \(f\), the convexity of \(\iota_{\mathcal{C}}\), and (3.6) it follows that \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\leq f(y^{\dagger})-f(y^{k+1})-\langle\mu^{k+1},y^{\dagger}-y^{k+1}\rangle+\langle\nu^{k+1},x^{k+1}-x^{\dagger}\rangle, \tag{3.7}\] where \(\sigma_{f}\) denotes the modulus of convexity of \(f\); we have \(\sigma_{f}>0\) as \(f\) is strongly convex. By taking the inner product of (3.3) with \(z^{k+1}-x^{\dagger}\) we have \[0 =\langle\lambda^{k}+\rho_{1}(Az^{k+1}-b^{\delta}),A(z^{k+1}-x^{\dagger})\rangle\] \[\quad+\langle\mu^{k}+\rho_{2}(Lz^{k+1}-y^{k}),L(z^{k+1}-x^{\dagger})\rangle\] \[\quad+\langle\nu^{k}+\rho_{3}(z^{k+1}-x^{k}),z^{k+1}-x^{\dagger}\rangle\] \[\quad+\langle Q(z^{k+1}-z^{k}),z^{k+1}-x^{\dagger}\rangle.\] Therefore we may use the definition of \(\lambda^{k+1},\mu^{k+1},\nu^{k+1}\) in (1.11) and the fact \(Ax^{\dagger}=b\) to further obtain \[0 =\langle\lambda^{k+1},Az^{k+1}-b\rangle+\langle\mu^{k+1}+\rho_{2}\Delta y^{k+1},Lz^{k+1}-y^{\dagger}\rangle\] \[\quad+\langle\nu^{k+1}+\rho_{3}\Delta x^{k+1},z^{k+1}-x^{\dagger}\rangle+\langle Q\Delta z^{k+1},z^{k+1}-x^{\dagger}\rangle. \tag{3.8}\] Subtracting (3.8) from (3.7) gives \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2} \leq f(y^{\dagger})-f(y^{k+1})-\langle\lambda^{k+1},Az^{k+1}-b\rangle+\langle\mu^{k+1},y^{k+1}-Lz^{k+1}\rangle\] \[\quad-\rho_{2}\langle\Delta y^{k+1},Lz^{k+1}-y^{\dagger}\rangle+\langle\nu^{k+1},x^{k+1}-z^{k+1}\rangle\] \[\quad-\rho_{3}\langle\Delta x^{k+1},z^{k+1}-x^{\dagger}\rangle-\langle Q\Delta z^{k+1},z^{k+1}-x^{\dagger}\rangle.\] Note that under the source condition (3.2), there exist \(\mu^{\dagger}\), \(\nu^{\dagger}\) and \(\lambda^{\dagger}\) such that \[\mu^{\dagger}\in\partial f(y^{\dagger}),\quad\nu^{\dagger}\in\partial\iota_{\mathcal{C}}(x^{\dagger})\quad\text{ and }\quad L^{*}\mu^{\dagger}+\nu^{\dagger}+A^{*}\lambda^{\dagger}=0.
\tag{3.9}\] Thus, it follows from the above inequality and the last two equations in (1.11) that \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\] \[\leq f(y^{\dagger})-f(y^{k+1})-\langle\lambda^{\dagger},Az^{k+1}-b\rangle-\langle\mu^{\dagger},Lz^{k+1}-y^{k+1}\rangle-\langle\nu^{\dagger},z^{k+1}-x^{k+1}\rangle\] \[\quad-\langle\lambda^{k+1}-\lambda^{\dagger},Az^{k+1}-b^{\delta}+b^{\delta}-b\rangle\] \[\quad-\frac{1}{\rho_{2}}\langle\mu^{k+1}-\mu^{\dagger},\Delta\mu^{k+1}\rangle-\rho_{2}\langle\Delta y^{k+1},Lz^{k+1}-y^{\dagger}\rangle\] \[\quad-\frac{1}{\rho_{3}}\langle\nu^{k+1}-\nu^{\dagger},\Delta\nu^{k+1}\rangle-\rho_{3}\langle\Delta x^{k+1},z^{k+1}-x^{\dagger}\rangle\] \[\quad-\langle Q\Delta z^{k+1},z^{k+1}-x^{\dagger}\rangle.\] By using (3.9), \(b=Ax^{\dagger}\) and the convexity of \(f\), we can see that \[f(y^{\dagger})-f(y^{k+1})-\langle\lambda^{\dagger},Az^{k+1}-b\rangle-\langle\mu^{\dagger},Lz^{k+1}-y^{k+1}\rangle-\langle\nu^{\dagger},z^{k+1}-x^{k+1}\rangle\] \[=f(y^{\dagger})-f(y^{k+1})+\langle\lambda^{\dagger},b\rangle+\langle\mu^{\dagger},y^{k+1}\rangle+\langle\nu^{\dagger},x^{k+1}\rangle\] \[=f(y^{\dagger})-f(y^{k+1})+\langle A^{*}\lambda^{\dagger},x^{\dagger}\rangle+\langle\mu^{\dagger},y^{k+1}\rangle+\langle\nu^{\dagger},x^{k+1}\rangle\] \[=f(y^{\dagger})-f(y^{k+1})-\langle L^{*}\mu^{\dagger},x^{\dagger}\rangle+\langle\mu^{\dagger},y^{k+1}\rangle+\langle\nu^{\dagger},x^{k+1}-x^{\dagger}\rangle\] \[=f(y^{\dagger})-f(y^{k+1})+\langle\mu^{\dagger},y^{k+1}-y^{\dagger}\rangle+\langle\nu^{\dagger},x^{k+1}-x^{\dagger}\rangle\leq 0.\] Consequently, by using the fourth equation in (1.11), we have \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\leq -\langle\lambda^{k+1}-\lambda^{\dagger},b^{\delta}-b\rangle-\frac{1}{\rho_{1}}\langle\lambda^{k+1}-\lambda^{\dagger},\Delta\lambda^{k+1}\rangle\] \[-\frac{1}{\rho_{2}}\langle\mu^{k+1}-\mu^{\dagger},\Delta\mu^{k+1}\rangle-\frac{1}{\rho_{3}}\langle\nu^{k+1}-\nu^{\dagger},\Delta\nu^{k+1}\rangle\] \[-\rho_{2}\langle\Delta y^{k+1},y^{k+1}-y^{\dagger}+Lz^{k+1}-y^{k+1}\rangle\] \[-\rho_{3}\langle\Delta x^{k+1},x^{k+1}-x^{\dagger}+z^{k+1}-x^{k+1}\rangle\] \[-\langle Q\Delta z^{k+1},z^{k+1}-x^{\dagger}\rangle.\] By using the polarization identity and the last two equations in (1.11) we further have \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\leq -\langle\lambda^{k+1}-\lambda^{\dagger},b^{\delta}-b\rangle\] \[+\frac{1}{2\rho_{1}}\left(\|\lambda^{k}-\lambda^{\dagger}\|^{2}-\|\lambda^{k+1}-\lambda^{\dagger}\|^{2}-\|\Delta\lambda^{k+1}\|^{2}\right)\] \[+\frac{1}{2\rho_{2}}\left(\|\mu^{k}-\mu^{\dagger}\|^{2}-\|\mu^{k+1}-\mu^{\dagger}\|^{2}-\|\Delta\mu^{k+1}\|^{2}\right)\] \[+\frac{1}{2\rho_{3}}\left(\|\nu^{k}-\nu^{\dagger}\|^{2}-\|\nu^{k+1}-\nu^{\dagger}\|^{2}-\|\Delta\nu^{k+1}\|^{2}\right)\] \[+\frac{1}{2}\left(\|z^{k}-x^{\dagger}\|_{Q}^{2}-\|z^{k+1}-x^{\dagger}\|_{Q}^{2}-\|\Delta z^{k+1}\|_{Q}^{2}\right)\] \[+\frac{\rho_{2}}{2}\left(\|y^{k}-y^{\dagger}\|^{2}-\|y^{k+1}-y^{\dagger}\|^{2}-\|\Delta y^{k+1}\|^{2}\right)\] \[+\frac{\rho_{3}}{2}\left(\|x^{k}-x^{\dagger}\|^{2}-\|x^{k+1}-x^{\dagger}\|^{2}-\|\Delta x^{k+1}\|^{2}\right)\] \[-\langle\Delta y^{k+1},\Delta\mu^{k+1}\rangle-\langle\Delta x^{k+1},\Delta\nu^{k+1}\rangle.\] Let \[\Phi_{k}:=\frac{1}{2\rho_{1}}\|\lambda^{k}-\lambda^{\dagger}\|^{2}+\frac{1}{2\rho_{2}}\|\mu^{k}-\mu^{\dagger}\|^{2}+\frac{1}{2\rho_{3}}\|\nu^{k}-\nu^{\dagger}\|^{2}+\frac{1}{2}\|z^{k}-x^{\dagger}\|_{Q}^{2}+\frac{\rho_{2}}{2}\|y^{k}-y^{\dagger}\|^{2}+\frac{\rho_{3}}{2}\|x^{k}-x^{\dagger}\|^{2}.\] Then
\[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\leq \Phi_{k}-\Phi_{k+1}-\langle\lambda^{k+1}-\lambda^{\dagger},b^{\delta}-b\rangle-E_{k+1}\] \[-\langle\Delta y^{k+1},\Delta\mu^{k+1}\rangle-\langle\Delta x^{k+1},\Delta\nu^{k+1}\rangle, \tag{3.10}\] where \(E_{k}\) is defined by (3.1). **Lemma 3.3**.: _For all \(k=0,1,\cdots\) there hold_ \[\sigma_{f}\|y^{k+1}-y^{\dagger}\|^{2}\leq\Phi_{k}-\Phi_{k+1}-\langle\lambda^{k+1}-\lambda^{\dagger},b^{\delta}-b\rangle-E_{k+1}, \tag{3.11}\] \[E_{k+1}\leq\Phi_{k}-\Phi_{k+1}+\sqrt{2\rho_{1}\Phi_{k+1}}\,\delta \tag{3.12}\] _and_ \[\Phi_{k+1}\leq\Phi_{0}+\left(\sum_{j=1}^{k+1}\sqrt{2\rho_{1}\Phi_{j}}\right)\delta. \tag{3.13}\] Proof.: By using (3.6) and the monotonicity of the subdifferentials \(\partial f\) and \(\partial\iota_{\mathcal{C}}\) we have \[0\leq\sigma_{f}\|\Delta y^{k+1}\|^{2}\leq\langle\Delta\mu^{k+1},\Delta y^{k+1}\rangle+\langle\Delta\nu^{k+1},\Delta x^{k+1}\rangle.\] This together with (3.10) implies (3.11). From (3.11) it follows immediately that \[E_{k+1} \leq\Phi_{k}-\Phi_{k+1}-\langle\lambda^{k+1}-\lambda^{\dagger},b^{\delta}-b\rangle\] \[\leq\Phi_{k}-\Phi_{k+1}+\|\lambda^{k+1}-\lambda^{\dagger}\|\delta\] \[\leq\Phi_{k}-\Phi_{k+1}+\sqrt{2\rho_{1}\Phi_{k+1}}\,\delta,\] which shows (3.12). By the non-negativity of \(E_{k+1}\) we then obtain from (3.12) that \[\Phi_{k+1}\leq\Phi_{k}+\sqrt{2\rho_{1}\Phi_{k+1}}\,\delta,\quad\forall k\geq 0,\] which clearly implies (3.13). In order to derive the estimate on \(\Phi_{k}\) from (3.13), we need the following elementary result. **Lemma 3.4**.: _Let \(\{a_{k}\}\) and \(\{b_{k}\}\) be two sequences of nonnegative numbers such that_ \[a_{k}^{2}\leq b_{k}^{2}+c\sum_{j=1}^{k}a_{j},\quad k=0,1,\cdots,\] _where \(c\geq 0\) is a constant. If \(\{b_{k}\}\) is non-decreasing, then_ \[a_{k}\leq b_{k}+ck,\quad k=0,1,\cdots.\] Proof.: We show the result by induction on \(k\). The result is trivial for \(k=0\) since the given condition with \(k=0\) gives \(a_{0}\leq b_{0}\). Assume that the result is valid for all \(0\leq k\leq l\) for some \(l\geq 0\). We show it is also true for \(k=l+1\). If \(a_{l+1}\leq\max\{a_{0},\cdots,a_{l}\}\), then \(a_{l+1}\leq a_{j}\) for some \(0\leq j\leq l\). Thus, by the induction hypothesis and the monotonicity of \(\{b_{k}\}\) we have \[a_{l+1}\leq a_{j}\leq b_{j}+cj\leq b_{l+1}+c(l+1).\] If \(a_{l+1}>\max\{a_{0},\cdots,a_{l}\}\), then \[a_{l+1}^{2}\leq b_{l+1}^{2}+c\sum_{j=1}^{l+1}a_{j}\leq b_{l+1}^{2}+c(l+1)a_{l+1},\] which implies that \[\left(a_{l+1}-\frac{1}{2}c(l+1)\right)^{2} =a_{l+1}^{2}-c(l+1)a_{l+1}+\frac{1}{4}c^{2}(l+1)^{2}\] \[\leq b_{l+1}^{2}+\frac{1}{4}c^{2}(l+1)^{2}\] \[\leq\left(b_{l+1}+\frac{1}{2}c(l+1)\right)^{2}.\] Taking square roots shows \(a_{l+1}\leq b_{l+1}+c(l+1)\) again. **Lemma 3.5**.: _There hold_ \[\Phi_{k}^{1/2}\leq\Phi_{0}^{1/2}+\sqrt{2\rho_{1}}\,k\delta,\quad\forall k\geq 0 \tag{3.14}\] _and_ \[E_{k}\leq\frac{2\Phi_{0}}{k}+\frac{5}{2}\rho_{1}k\delta^{2},\quad\forall k\geq 1. \tag{3.15}\] Proof.: Based on (3.13), we may use Lemma 3.4 with \(a_{k}=\Phi_{k}^{1/2}\), \(b_{k}=\Phi_{0}^{1/2}\) and \(c=(2\rho_{1})^{1/2}\delta\) to obtain (3.14) directly.
Next, by using the monotonicity of \(\{E_{k}\}\), (3.12) and (3.14) we have \[kE_{k} \leq\sum_{j=1}^{k}E_{j}\leq\sum_{j=1}^{k}\left(\Phi_{j-1}-\Phi_{j }+\sqrt{2\rho_{1}\Phi_{j}}\delta\right)\] \[\leq\Phi_{0}-\Phi_{k}+\sum_{j=1}^{k}\sqrt{2\rho_{1}\Phi_{j}}\delta\] \[\leq\Phi_{0}+\sum_{j=1}^{k}\sqrt{2\rho_{1}}\left(\sqrt{\Phi_{0}} +\sqrt{2\rho_{1}}j\delta\right)\delta\] \[=\Phi_{0}+\sqrt{2\rho_{1}\Phi_{0}}k\delta+\rho_{1}k(k+1)\delta^{2}\] \[\leq 2\Phi_{0}+\frac{5}{2}\rho_{1}k^{2}\delta^{2}\] which shows (3.15). Now we are ready to complete the proof of Theorem 3.2. Proof of Theorem 3.2.: Let \(k_{\delta}\) be an integer such that \(k_{\delta}\sim\delta^{-1}\). From (3.14) and (3.15) in Lemma 3.5 it follows that \[E_{k_{\delta}}\leq C_{0}\delta\quad\text{and}\quad\Phi_{k}\leq C_{1}\text{ for all }k\leq k_{\delta}, \tag{3.16}\] where \(C_{0}\) and \(C_{1}\) are constants independent of \(k\) and \(\delta\). In order to use (3.11) in Lemma 3.3 to estimate \(\|y^{k_{\delta}}-y^{\dagger}\|\), we first consider \(\Phi_{k}-\Phi_{k+1}\) for all \(k\geq 0\). By using the definition of \(\Phi_{k}\) and the inequality \(\|u\|^{2}-\|v\|^{2}\leq(\|u\|+\|v\|)\|u-v\|\), we have for \(k\geq 0\) that \[\Phi_{k}-\Phi_{k+1} \leq\frac{1}{2\rho_{1}}\left(\|\lambda^{k}-\lambda^{\dagger}\|+ \|\lambda^{k+1}-\lambda^{\dagger}\|\right)\|\Delta\lambda^{k+1}\|\] \[\quad+\frac{1}{2\rho_{2}}\left(\|\mu^{k}-\mu^{\dagger}\|+\|\mu^{k +1}-\mu^{\dagger}\|\right)\|\Delta\mu^{k+1}\|\] \[\quad+\frac{1}{2\rho_{3}}\left(\|\nu^{k}-\nu^{\dagger}\|+\|\nu^{ k+1}-\nu^{\dagger}\|\right)\|\Delta\nu^{k+1}\|\] \[\quad+\frac{1}{2}\left(\|z^{k}-x^{\dagger}\|_{Q}+\|z^{k+1}-x^{ \dagger}\|_{Q}\right)\|\Delta z^{k+1}\|_{Q}\] \[\quad+\frac{\rho_{2}}{2}\left(\|y^{k}-y^{\dagger}\|+\|y^{k+1}-y^{ \dagger}\|\right)\|\Delta y^{k+1}\|\] \[\quad+\frac{\rho_{3}}{2}\left(\|x^{k}-x^{\dagger}\|+\|x^{k+1}-x^{ \dagger}\|\right)\|\Delta x^{k+1}\|.\] By virtue of the Cauchy-Schwarz inequality and the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\) for any numbers \(a,b\in\mathbb{R}\) we can further obtain \[\Phi_{k}-\Phi_{k+1}\leq\sqrt{2(\Phi_{k}+\Phi_{k+1})E_{k+1}},\quad\forall k\geq 0.\] This together with (3.16) in particular implies \[\Phi_{k_{\delta}-1}-\Phi_{k_{\delta}}\leq\sqrt{4C_{0}C_{1}\delta}.\] Therefore, it follows from (3.11) that \[\sigma_{f}\|y^{k_{\delta}}-y^{\dagger}\|^{2} \leq\Phi_{k_{\delta}-1}-\Phi_{k_{\delta}}+\|\lambda^{k_{\delta}}- \lambda^{\dagger}\|\delta\] \[\leq\sqrt{4C_{0}C_{1}\delta}+\sqrt{2\rho_{1}\Phi_{k_{\delta}}}\delta\] \[\leq\sqrt{4C_{0}C_{1}\delta}+\sqrt{2\rho_{1}C_{1}}\delta.\] Thus \[\|y^{k_{\delta}}-y^{\dagger}\|^{2}\leq C_{2}\delta^{1/2},\] where \(C_{2}\) is a constant independent of \(\delta\) and \(k\). 
By using the estimate \(E_{k_{\delta}}\leq C_{0}\delta\) in (3.16), the definition of \(E_{k_{\delta}}\), and the last three equations in (1.11), we can see that \[\|Az^{k_{\delta}}-b^{\delta}\|^{2}=\frac{1}{\rho_{1}^{2}}\|\Delta\lambda^{k_{\delta}}\|^{2}\leq\frac{2}{\rho_{1}}E_{k_{\delta}}\leq\frac{2C_{0}}{\rho_{1}}\delta,\] \[\|Lz^{k_{\delta}}-y^{k_{\delta}}\|^{2}=\frac{1}{\rho_{2}^{2}}\|\Delta\mu^{k_{\delta}}\|^{2}\leq\frac{2}{\rho_{2}}E_{k_{\delta}}\leq\frac{2C_{0}}{\rho_{2}}\delta,\] \[\|z^{k_{\delta}}-x^{k_{\delta}}\|^{2}=\frac{1}{\rho_{3}^{2}}\|\Delta\nu^{k_{\delta}}\|^{2}\leq\frac{2}{\rho_{3}}E_{k_{\delta}}\leq\frac{2C_{0}}{\rho_{3}}\delta.\] Therefore \[\|L(z^{k_{\delta}}-x^{\dagger})\|^{2} \leq 2\left(\|Lz^{k_{\delta}}-y^{k_{\delta}}\|^{2}+\|y^{k_{\delta}}-y^{\dagger}\|^{2}\right)\leq\frac{4C_{0}}{\rho_{2}}\delta+2C_{2}\delta^{1/2},\] \[\|A(z^{k_{\delta}}-x^{\dagger})\|^{2} \leq 2\left(\|Az^{k_{\delta}}-b^{\delta}\|^{2}+\|b^{\delta}-b\|^{2}\right)\leq 2\left(\frac{2C_{0}}{\rho_{1}}+1\right)\delta.\] By virtue of (iii) in Assumption 3.1 on \(A\) and \(L\) we thus obtain \[c_{0}\|z^{k_{\delta}}-x^{\dagger}\|^{2} \leq\|A(z^{k_{\delta}}-x^{\dagger})\|^{2}+\|L(z^{k_{\delta}}-x^{\dagger})\|^{2}\] \[\leq 2\left(\frac{2C_{0}}{\rho_{1}}+\frac{2C_{0}}{\rho_{2}}+1\right)\delta+2C_{2}\delta^{1/2}.\] This means there is a constant \(C_{3}\) independent of \(\delta\) and \(k\) such that \[\|z^{k_{\delta}}-x^{\dagger}\|^{2}\leq C_{3}\delta^{1/2}.\] Finally we obtain \[\|x^{k_{\delta}}-x^{\dagger}\|^{2}\leq 2\left(\|x^{k_{\delta}}-z^{k_{\delta}}\|^{2}+\|z^{k_{\delta}}-x^{\dagger}\|^{2}\right)\leq\frac{4C_{0}}{\rho_{3}}\delta+2C_{3}\delta^{1/2}.\] The proof is thus complete. _Remark 3.1_.: Under the benchmark source condition (3.2), we have obtained in Theorem 3.2 the convergence rate \(O(\delta^{1/4})\) for the proximal ADMM (1.11). This rate is not order optimal. It is not yet clear if the order optimal rate \(O(\delta^{1/2})\) can be achieved. _Remark 3.2_.: When using the proximal ADMM to solve (1.9) with \(L=I\), i.e. \[\min\left\{f(x):Ax=b\text{ and }x\in\mathcal{C}\right\}, \tag{3.17}\] it is not necessary to introduce the \(y\)-variable as is done in (1.11), and thus (1.11) can be simplified to the scheme \[\begin{split} z^{k+1}&=\arg\min_{z\in\mathcal{X}}\left\{\mathscr{L}_{\rho_{1},\rho_{2}}(z,x^{k},\lambda^{k},\nu^{k})+\frac{1}{2}\|z-z^{k}\|_{Q}^{2}\right\},\\ x^{k+1}&=\arg\min_{x\in\mathcal{X}}\left\{\mathscr{L}_{\rho_{1},\rho_{2}}(z^{k+1},x,\lambda^{k},\nu^{k})\right\},\\ \lambda^{k+1}&=\lambda^{k}+\rho_{1}(Az^{k+1}-b^{\delta}),\\ \nu^{k+1}&=\nu^{k}+\rho_{2}(z^{k+1}-x^{k+1}),\end{split} \tag{3.18}\] where \[\mathscr{L}_{\rho_{1},\rho_{2}}(z,x,\lambda,\nu):=f(z)+\iota_{\mathcal{C}}(x)+\langle\lambda,Az-b^{\delta}\rangle+\langle\nu,z-x\rangle+\frac{\rho_{1}}{2}\|Az-b^{\delta}\|^{2}+\frac{\rho_{2}}{2}\|z-x\|^{2}.\] The source condition (3.2) then reduces to the form \[\exists\mu^{\dagger}\in\partial f(x^{\dagger})\text{ and }\nu^{\dagger}\in\partial\iota_{\mathcal{C}}(x^{\dagger})\text{ such that }\mu^{\dagger}+\nu^{\dagger}\in\operatorname{Ran}(A^{*}). \tag{3.19}\] If the unique solution \(x^{\dagger}\) of (3.17) satisfies the source condition (3.19), one may follow the proof of Theorem 3.2 with minor modifications to deduce for the method (3.18) that \[\|x^{k_{\delta}}-x^{\dagger}\|=O(\delta^{1/4})\quad\text{ and }\quad\|z^{k_{\delta}}-x^{\dagger}\|=O(\delta^{1/4})\] whenever the integer \(k_{\delta}\) is chosen such that \(k_{\delta}\sim\delta^{-1}\).
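To make the structure of (3.18) more transparent, note that the \(x\)-subproblem involves only the indicator \(\iota_{\mathcal{C}}\) and terms quadratic or linear in \(x\); dropping the \(x\)-independent terms and completing the square shows that it reduces to a metric projection onto \(\mathcal{C}\): \[x^{k+1}=\arg\min_{x\in\mathcal{X}}\left\{\iota_{\mathcal{C}}(x)-\langle\nu^{k},x\rangle+\frac{\rho_{2}}{2}\|z^{k+1}-x\|^{2}\right\}=\arg\min_{x\in\mathcal{C}}\left\|x-\left(z^{k+1}+\nu^{k}/\rho_{2}\right)\right\|^{2}=P_{\mathcal{C}}\left(z^{k+1}+\nu^{k}/\rho_{2}\right),\] where \(P_{\mathcal{C}}\) denotes the orthogonal projection of \(\mathcal{X}\) onto \(\mathcal{C}\). This is precisely the form of the \(x\)-update used in (3.20) below.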
We conclude this section by presenting a numerical result to illustrate the semi-convergence property of the proximal ADMM and the convergence rate. We consider finding a solution of (1.8) with minimal norm. This is equivalent to solving (3.17) with \(f(x)=\frac{1}{2}\|x\|^{2}\). With noisy data \(b^{\delta}\) satisfying \(\|b^{\delta}-b\|\leq\delta\), the corresponding proximal ADMM (3.18) takes the form \[\begin{split} z^{k+1}&=\left((1+\rho_{2})I+Q+\rho_{1}A^{*}A\right)^{-1}\left(\rho_{1}A^{*}b^{\delta}+\rho_{2}x^{k}+Qz^{k}-A^{*}\lambda^{k}-\nu^{k}\right),\\ x^{k+1}&=P_{\mathcal{C}}\left(z^{k+1}+\nu^{k}/\rho_{2}\right),\\ \lambda^{k+1}&=\lambda^{k}+\rho_{1}(Az^{k+1}-b^{\delta}),\\ \nu^{k+1}&=\nu^{k}+\rho_{2}(z^{k+1}-x^{k+1}),\end{split} \tag{3.20}\] where \(P_{\mathcal{C}}\) denotes the orthogonal projection of \(\mathcal{X}\) onto \(\mathcal{C}\). The source condition (3.19) now takes the form \[\exists\nu^{\dagger}\in\partial\iota_{\mathcal{C}}(x^{\dagger})\text{ such that }x^{\dagger}+\nu^{\dagger}\in\operatorname{Ran}(A^{*}), \tag{3.21}\] which is equivalent to the projected source condition \(x^{\dagger}\in P_{\mathcal{C}}(\operatorname{Ran}(A^{*}))\). **Example 3.1**.: In our numerical simulation we consider the integral equation of the first kind \[(Ax)(t):=\int_{0}^{1}\kappa(s,t)x(s)ds=b(t),\quad t\in[0,1] \tag{3.22}\] on \(L^{2}[0,1]\), where the kernel \(\kappa\) is continuous on \([0,1]\times[0,1]\). Such equations arise naturally in many linear ill-posed inverse problems, see [12, 18]. Clearly \(A\) is a compact linear operator from \(L^{2}[0,1]\) to itself. We will use \[\kappa(s,t)=d\left(d^{2}+(s-t)^{2}\right)^{-3/2}\] with \(d=0.1\). The corresponding equation is a 1-D model problem in gravity surveying. Assume the equation (3.22) has a nonnegative solution. We will employ the method (3.20) to determine the unique nonnegative solution of (3.22) with minimal norm in case the data is corrupted by noise. Here \(\mathcal{C}:=\{x\in L^{2}[0,1]:x\geq 0\text{ a.e.}\}\) and thus \(P_{\mathcal{C}}(x)=\max\{x,0\}\). In order to investigate the convergence rate of the method, we generate our data as follows. First take \(\omega^{\dagger}\in L^{2}[0,1]\), set \(x^{\dagger}:=\max\{A^{*}\omega^{\dagger},0\}\) and define \(b:=Ax^{\dagger}\). Thus \(x^{\dagger}\) is a nonnegative solution of \(Ax=b\) satisfying \(x^{\dagger}=P_{\mathcal{C}}(A^{*}\omega^{\dagger})\), i.e. the source condition (3.21) holds. We use \(\omega^{\dagger}=t^{3}(0.9-t)(t-0.35)\); the corresponding \(x^{\dagger}\) is plotted in Figure 1 (a). We then pick random data \(\xi\) with \(\|\xi\|_{L^{2}[0,1]}=1\) and generate the noisy data \(b^{\delta}\) by \(b^{\delta}:=b+\delta\xi\). Clearly \(\|b^{\delta}-b\|_{L^{2}[0,1]}=\delta\). For the numerical implementation, we discretize the equation by the trapezoidal rule based on partitioning \([0,1]\) into \(N-1\) subintervals of equal length with \(N=600\). We then execute the method (3.20) with \(Q=0\), \(\rho_{1}=10\), \(\rho_{2}=1\) and the initial guess \(x^{0}=\lambda^{0}=\nu^{0}=0\) using the noisy data \(b^{\delta}\) for several distinct values of \(\delta\). In Figure 1 (b) and (c) we plot the relative error \(\|x^{k}-x^{\dagger}\|_{L^{2}}/\|x^{\dagger}\|_{L^{2}}\) versus \(k\), the number of iterations, for \(\delta=10^{-2}\) and \(\delta=10^{-4}\) respectively.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(\delta\) & \(\mathtt{err}_{\min}\) & \(\mathtt{iter}_{\min}\) & \(\mathtt{err}_{\min}/\delta^{1/2}\) & \(\mathtt{err}_{\min}/\delta^{1/4}\) \\ \hline 1e-1 & 4.9307e-2 & 1 & 0.155922 & 0.087681 \\ 1e-2 & 1.3255e-2 & 2 & 0.132553 & 0.041917 \\ 1e-3 & 5.2985e-3 & 19 & 0.167552 & 0.029796 \\ 1e-4 & 2.1196e-3 & 501 & 0.211957 & 0.021196 \\ 1e-5 & 7.2638e-4 & 2512 & 0.229702 & 0.012917 \\ 1e-6 & 2.7450e-4 & 31447 & 0.274496 & 0.008680 \\ 1e-7 & 7.4693e-5 & 329542 & 0.236199 & 0.004200 \\ \hline \end{tabular} \end{table} Table 1. Numerical results for the method (3.20) using noisy data with diverse noise levels, where \(\mathtt{err}_{\min}\) and \(\mathtt{iter}_{\min}\) denote respectively the smallest relative error and the required number of iterations.

Figure 1. (a) plots the true solution \(x^{\dagger}\); (b) and (c) plot the relative errors versus the number of iterations for the method (3.20) using noisy data with noise levels \(\delta=10^{-2}\) and \(10^{-4}\) respectively.

These plots demonstrate that the proximal ADMM always exhibits the semi-convergence phenomenon when used to solve ill-posed problems, no matter how small the noise level is. Therefore, properly terminating the iteration is important to produce useful approximate solutions; this has been done in [21, 22]. In Table 1 we report further numerical results. For the noisy data \(b^{\delta}\) with each noise level \(\delta=10^{-i}\), \(i=1,\cdots,7\), we execute the method and determine the smallest relative error, denoted by \(\mathtt{err}_{\min}\), and the required number of iterations, denoted by \(\mathtt{iter}_{\min}\). The ratios \(\mathtt{err}_{\min}/\delta^{1/2}\) and \(\mathtt{err}_{\min}/\delta^{1/4}\) are then calculated. Since \(x^{\dagger}\) satisfies the source condition (3.21), our theoretical result predicts the convergence rate \(O(\delta^{1/4})\). However, Table 1 illustrates that the value of \(\mathtt{err}_{\min}/\delta^{1/2}\) does not change much, while the value of \(\mathtt{err}_{\min}/\delta^{1/4}\) tends to decrease to \(0\) as \(\delta\to 0\). This strongly suggests that the proximal ADMM admits the order optimal convergence rate \(O(\delta^{1/2})\) if the source condition (3.21) holds. However, how to derive this order optimal rate remains open.
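For readers who wish to experiment with Example 3.1, the following is a minimal NumPy sketch of the iteration (3.20) under the same setup (trapezoidal discretization with \(N=600\), \(d=0.1\), \(Q=0\), \(\rho_{1}=10\), \(\rho_{2}=1\), zero initial guess). The iteration count and the random noise realization are illustrative choices, not part of the original experiments; in view of the semi-convergence discussed above, the loop should in practice be terminated by a stopping rule rather than run to a fixed count.

```python
import numpy as np

# Sketch of iteration (3.20) for Example 3.1 (1-D gravity surveying model).
N, d = 600, 0.1
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights
A = (d / (d**2 + (t[:, None] - t[None, :])**2) ** 1.5) * w  # discretized operator

omega = t**3 * (0.9 - t) * (t - 0.35)                       # omega^dagger from the text
x_true = np.maximum(A.T @ omega, 0.0)                       # x^dagger = max{A* omega, 0}
b = A @ x_true
delta = 1e-4
xi = np.random.default_rng(0).standard_normal(N)
b_delta = b + delta * xi / np.linalg.norm(xi)               # ||b_delta - b|| = delta

rho1, rho2 = 10.0, 1.0
x, lam, nu = np.zeros(N), np.zeros(N), np.zeros(N)          # x^0 = lambda^0 = nu^0 = 0
M = (1.0 + rho2) * np.eye(N) + rho1 * A.T @ A               # (1+rho2)I + Q + rho1 A*A, Q = 0
for k in range(2000):                                       # illustrative count; stop early
    z = np.linalg.solve(M, rho1 * A.T @ b_delta + rho2 * x - A.T @ lam - nu)
    x = np.maximum(z + nu / rho2, 0.0)                      # P_C: projection onto {x >= 0}
    lam += rho1 * (A @ z - b_delta)
    nu += rho2 * (z - x)
    # track np.linalg.norm(x - x_true) here to observe the semi-convergence
```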
2301.07896
Supercharging Distributed Computing Environments For High Performance Data Engineering
The data engineering and data science community has embraced the idea of using Python & R dataframes for regular applications. Driven by the big data revolution and artificial intelligence, these applications are now essential in order to process terabytes of data. They can easily exceed the capabilities of a single machine, but also demand significant developer time & effort. Therefore it is essential to design scalable dataframe solutions. There have been multiple attempts to tackle this problem, the most notable being the dataframe systems developed using distributed computing environments such as Dask and Ray. Even though Dask/Ray distributed computing features look very promising, we perceive that the Dask Dataframes/Ray Datasets still have room for optimization. In this paper, we present CylonFlow, an alternative distributed dataframe execution methodology that enables state-of-the-art performance and scalability on the same Dask/Ray infrastructure (thereby supercharging them!). To achieve this, we integrate a high performance dataframe system Cylon, which was originally based on an entirely different execution paradigm, into Dask and Ray. Our experiments show that on a pipeline of dataframe operators, CylonFlow achieves 30x more distributed performance than Dask Dataframes. Interestingly, it also enables superior sequential performance due to the native C++ execution of Cylon. We believe the success of Cylon & CylonFlow extends beyond the data engineering domain, and can be used to consolidate high performance computing and distributed computing ecosystems.
Niranda Perera, Kaiying Shan, Supun Kamburugamuve, Thejaka Amila Kanewala, Chathura Widanage, Arup Sarker, Mills Staylor, Tianle Zhong, Vibhatha Abeykoon, Geoffrey Fox
2023-01-19T05:50:44Z
http://arxiv.org/abs/2301.07896v1
# Supercharging Distributed Computing Environments For High Performance Data Engineering

###### Abstract

The data engineering and data science community has embraced the idea of using Python & R dataframes for regular applications. Driven by the big data revolution and artificial intelligence, these applications are now essential in order to process terabytes of data. They can easily exceed the capabilities of a single machine, but also demand significant developer time & effort. Therefore it is essential to design scalable dataframe solutions. There have been multiple attempts to tackle this problem, the most notable being the dataframe systems developed using distributed computing environments such as Dask and Ray. Even though Dask/Ray distributed computing features look very promising, we perceive that the Dask Dataframes/Ray Datasets still have room for optimization. In this paper, we present _CylonFlow_, an alternative distributed dataframe execution methodology that enables state-of-the-art performance and scalability on the same Dask/Ray infrastructure (thereby _supercharging_ them!). To achieve this, we integrate a _high performance dataframe_ system _Cylon_, which was originally based on an entirely different execution paradigm, into Dask and Ray. Our experiments show that on a pipeline of dataframe operators, _CylonFlow_ achieves \(30\times\) more distributed performance than Dask Dataframes. Interestingly, it also enables superior sequential performance due to the native C++ execution of _Cylon_. We believe the success of _Cylon_ & _CylonFlow_ extends beyond the data engineering domain, and can be used to consolidate high performance computing and distributed computing ecosystems.

data engineering, data science, high performance computing, distributed computing, dataframes

## I Introduction

The data engineering domain has expanded at a staggering pace over the past few decades, predominantly owing to the emergence of the _Big Data revolution_, machine learning (ML), and artificial intelligence (AI). In today's information age, data is no longer referred to in megabytes, files or spreadsheets, but in giga/terabytes and object stores. This overabundance of data consumes a significant amount of developer time on data preprocessing, time that would be better spent building data engineering models. Therefore, it is crucial to improve the performance of these data preprocessing stages in order to build efficient data engineering pipelines. Data preprocessing has traditionally been done on database systems using a structured query language (SQL), but more recently the Python and R programming languages have taken over these SQL workloads. The functional interface, interactive programming environment, and interpreted execution of these languages provide a more user-friendly development ecosystem for modern-day engineers. The Python library Pandas has been at the forefront of this transformation, and has played a vital role in popularizing Python for data exploration. In this paper, we focus mainly on the _Dataframe (DF)_ API, which is at the heart of the Pandas ecosystem. The concept of a DF is not unique to Pandas; in fact, it originated from the S language in the 1990s, and was subsequently popularized by the R language. However, Pandas dominates the field with over 100 million monthly downloads consistently, according to the _PyPI_ package index stats [1].
Despite this popularity, both Pandas & R DF run into performance limitations even on moderately large datasets [2, 3, 4]. For example, on an Intel(r) Xeon(r) Platinum 8160 high-end workstation with 240GB memory, Pandas takes around 700s to join two DFs with 1 billion rows each, whereas traversing each dataframe only takes about 4s. On the other hand, today's computer hardware carries plenty of computing power with a large amount of memory. On-demand elastic cloud computing services enable work to be done on thousands of such nodes with the touch of a button. As such, there are plenty of resources at our disposal to develop more efficient distributed data engineering solutions. Hadoop YARN, Dask, and Ray are just a few distributed execution runtimes capable of managing thousands of computing resources under their purview. These engines were predominantly developed by the distributed and cloud computing communities, and provide application program interfaces (API) to conveniently submit user logic across many nodes. They employ several execution models such as asynchronous many-tasks (AMT), actors, etc. In the data engineering community, we have seen several frameworks attempting to leverage these distributed runtimes to develop distributed dataframe (DDF) solutions. Spark SQL RDDs & Datasets was a breakthrough framework on this front, significantly improving the traditional map-reduce paradigm [5]. Dask developed its own take on DDFs, Dask DDF, closely followed by Ray with Ray Datasets. Modin is the latest attempt to develop scalable DF systems [4], which is also built on top of Dask & Ray. However, in practice we have encountered several performance limitations with these systems [2, 3], as discussed in Section V. Traditionally, the high performance computing (HPC) community has been developing solutions based on the bulk synchronous parallel (BSP) execution model using the message passing interface (MPI) specification. They have been able to achieve commendable scalability & performance on thousands of CPU cores (and on supercomputers). In a previous publication we developed an alternative to the existing DDFs named _Cylon_[2], which looks at the problem from the HPC point of view. _Cylon_ employs the BSP model for DDF operator execution, and works on top of MPI runtimes (OpenMPI, MPICH, IBM Spectrum MPI, etc). Due to its superior scalability and HPC descent, we differentiate _Cylon_ as a _high performance DDF (HP-DDF)_ implementation. Apart from running on BSP, another notable feature of HP-DDFs is the use of an optimized communication library. Even though _Cylon_ has been able to achieve above-par scalability compared to the popular DDF systems, it is tightly coupled to the MPI ecosystem. As we discuss in Section IV, this limits us from extending the HP-DDF concept to distributed computing environments such as Dask or Ray. In this paper, we propose an alternative execution methodology to resolve this limitation. Our objective is to integrate _Cylon_ with other execution runtimes without compromising its scalability and performance. It is a bipartite solution: 1. creating a stateful pseudo-BSP environment within the execution runtime resources (a conceptual sketch follows below); 2. using a modularized _communicator_ that enables plugging in optimized communication libraries. We named it _CylonFlow_ because the idea carries parallels to workflow management.
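To illustrate the first component of this idea, the sketch below shows how a stateful pseudo-BSP environment can be set up on top of an existing distributed runtime using its actor abstraction. Ray is used here purely as an example, and the class and method names are our own; this illustrates the concept rather than _CylonFlow_'s actual implementation.

```python
import ray

ray.init()

@ray.remote
class Worker:
    """A long-lived actor that holds SPMD state and owns a data partition."""
    def __init__(self, rank, world_size):
        self.rank, self.world_size = rank, world_size  # persistent SPMD state
        self.partition = None

    def load(self, partition):
        self.partition = partition      # the worker owns its partition thereafter

    def run(self, fn):
        # every worker executes the same function on its own partition (SPMD)
        return fn(self.rank, self.world_size, self.partition)

world_size = 4
workers = [Worker.remote(r, world_size) for r in range(world_size)]
ray.get([w.load.remote(list(range(r, 100, world_size)))
         for r, w in enumerate(workers)])
# run one "superstep" on all workers
sums = ray.get([w.run.remote(lambda rank, ws, part: sum(part)) for w in workers])
print(sums)
```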
We demonstrate the robustness of this idea by implementing _Cylon_ HP-DDF runtimes on top of Dask (_CylonFlow_-on-Dask) and Ray (_CylonFlow_-on-Ray) that outperform their own DDF implementations. We also confirm that the idea gives comparable or better results than the MPI-based _Cylon_ DDF on the same hardware. With _CylonFlow_, we have now enabled HP-DDFs everywhere, from personal laptops to exascale supercomputers. As depicted in Figure 1, it consolidates disparate execution models and communities under a single application runtime. To the best of our knowledge, this is the first attempt to adapt high performance data engineering constructs to distributed computing environments. We believe that the methodology behind _CylonFlow_ extends beyond the data engineering domain, and it could be used to execute many HPC applications on distributed computing environments.

Fig. 1: Current Status Quo & _CylonFlow_ Contribution

## II Distributed Computing Models & Libraries

In order to understand the design and implementation of both _Cylon_ & _CylonFlow_, it is important to discuss the existing distributed computing models and the prevalent libraries that implement them. A distributed computing model provides an abstract view of how a particular problem can be decomposed and executed from the perspective of a machine. It describes how a distributed application expresses and manages parallelism. _Data parallelism_ executes the same computation on different parts (partitions) of the data using many compute units. We see this at the instruction level, _single-instruction multiple-data (SIMD)_, as well as at the program level, _single-program multiple-data (SPMD)_. On the other hand, _task parallelism_ involves executing multiple tasks in parallel over many compute units. This is a form of _multiple-program multiple-data (MPMD)_ at the program level.

### _Bulk Synchronous Parallel (BSP)_

The _BSP_ or Communicating Sequential Processes (CSP) model [6, 7] is the most common model that employs SPMD & _data parallelism_ over many compute nodes. The Message Passing Interface (MPI) is a formal specification of the BSP model that has matured over 30+ years. OpenMPI, MPICH, MSMPI, IBM Spectrum MPI, etc. are some notable implementations of this specification. MPI applications display _static parallelism_, since parallelism most often needs to be declared at the initiation of the program. From the point of view of the data, this means that the data partitions are tightly coupled to the parallelism. At the beginning of the application, data partitions would be allocated to executors/workers. Executors then own their data partitions until the end of the application and perform computations on them. When the workers reach a communication operation in the program, they synchronize with each other by passing messages. Many high performance computing (HPC) applications use the BSP model on supercomputing clusters and have shown admirable scalability. However, only a handful of data engineering frameworks have adopted this model, including _Twister2_[8] & _Cylon_.

### _Asynchronous Many-Tasks (AMT)_

The _AMT_ model relaxes the limitations of BSP by decomposing applications into independent transferable sub-programs (many tasks) with associated inputs (data dependencies). AMT runtimes usually manage a distributed queue that accepts these tasks (_Manager/Scheduler_). A separate group of executors/workers would execute tasks from this queue, thus following MPMD & _task parallelism_. Dependencies between tasks are handled by the scheduling order.
### _Asynchronous Many-Tasks (AMT)_

The _AMT_ model relaxes the limitations of BSP by decomposing applications into independent transferable sub-programs (many tasks) with associated inputs (data dependencies). AMT runtimes usually manage a distributed queue that accepts these tasks (_Manager/Scheduler_). A separate group of executors/workers would execute tasks from this queue, thus following MPMD & _task parallelism_. Dependencies between tasks are handled by the scheduling order. This allows the application to set parallelism on-the-fly, and the workers are allowed to scale up or down, leading to _dynamic parallelism_. AMT also enables better resource utilization in multi-tenant/multi-application environments by allowing free workers to pick independent tasks, thereby improving the overall throughput of the system. Furthermore, task parallelism enables task-level fault tolerance, where failed tasks can be rerun conveniently. These benefits may have prompted many distributed dataframe runtimes, including Dask DDF & Ray Datasets, to choose AMT as the preferred execution model.

### _Actors_

The _Actor_ model was popularized by _Erlang_ [9]. An actor is a primitive computation which can receive messages from other actors, upon which it can execute a computation, create more actors, send more messages, and determine how to respond to the next message received. Compared to executors and tasks in AMT, actors manage/maintain their own state, and the state may change based on the computation/communication. Messages are sent asynchronously and placed in a _mailbox_ until the designated actor consumes them. Akka is a popular actor framework which was used as the foundation for the Apache Spark project. Interestingly, the Dask and Ray projects also provide an actor abstraction on top of their distributed execution runtimes, mainly aimed at reducing expensive state initializations.

## III Distributed Dataframes (DDF)

With the exponential growth in dataset sizes, it is fair to conclude that data engineering applications have already exceeded the capabilities of a single workstation node. Modern hardware offers many CPU cores/threads for computation, and the latest cloud infrastructure enables users to spin up many such nodes instantaneously. As a result, there is abundant computing power available at users' disposal, and it is essential that data engineering software make use of it. Furthermore, every AI/ML application requires a pre-processed dataset, and it is no secret that data pre-processing takes significant developer time and effort. Several AI/ML surveys suggest that it could even be more than 60% of total developer time [10]. For these reasons, using a scalable _distributed dataframe (DDF)_ runtime could potentially improve the efficiency of data engineering pipelines immensely. Based on our experiments with some widely used DDF systems (Section V), we believe that the idea of a _high performance scalable DDF runtime_ is still a work in progress.

### _Dataframes (DF)_

Let us first define a dataframe. We borrow definitions from the relational terminology proposed by Abiteboul et al [11]. Similar to SQL tables, DFs contain heterogeneously typed data. These elements originate from a known set of _domains_, \(Dom=\{dom_{1},dom_{2},...\}\). For a DF, these _domains_ represent all the data types it supports. A **Schema** of a DF \(S_{M}\) is a tuple \((D_{M},C_{M})\), where \(D_{M}\) is a vector of \(M\) domains and \(C_{M}\) is a vector of \(M\) corresponding column labels. Column labels usually belong to the _String/Object_ domain. A **Dataframe (DF)** is a tuple \((S_{M},A_{NM},R_{N})\), where \(S_{M}\) is the Schema with \(M\) domains, \(A_{NM}\) is a 2-D array of entries where the actual data is stored, and \(R_{N}\) is a vector of \(N\) row labels belonging to some domain. The _length_ of the dataframe is \(N\), i.e. the number of rows. The heterogeneously typed schema clearly distinguishes DFs from multidimensional arrays or tensors.
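As a minimal illustration of this definition, the tuple \((S_{M},A_{NM},R_{N})\) can be sketched in Python as follows; the class and field names are ours, for illustration only, and are not the API of any of the libraries discussed here.

```
# A sketch of the DF definition above: a Schema (domains + column labels)
# plus a 2-D array of entries and a vector of row labels.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Schema:
    domains: List[type]       # D_M: one domain (data type) per column
    columns: List[str]        # C_M: column labels

@dataclass
class DataFrame:
    schema: Schema
    entries: List[List[Any]]  # A_NM: N rows of M heterogeneously typed values
    row_labels: List[Any]     # R_N: row labels

    def __len__(self) -> int:
        return len(self.entries)  # length of the DF = number of rows N

df = DataFrame(
    schema=Schema(domains=[int, str], columns=["id", "name"]),
    entries=[[1, "a"], [2, "b"]],
    row_labels=[0, 1],
)
assert len(df) == 2
```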
However, data along a column is still homogeneous, so many frameworks have adopted a columnar data format which enables vectorized computations on columns. A collection of NumPy ndarrays would be the simplest form of DF representation. Alternatively, the Apache Arrow columnar format [12] is commonly used by many DF runtimes. Arrow arrays are composed of multiple buffers such as data, validity, and offsets for variable length types (e.g. string). As identified in previous literature, many commonly used DF operators are defined over the vertical axis (row-wise) [3, 4]. Even though columnar representation allows contiguous access along a column, it makes indexing or slicing rows non-trivial. Furthermore, many DF operators are defined on a set of _key columns_, while the rest (i.e. _value columns_) move along with the keys. As a consequence, traditional BLAS (basic linear algebra subprograms) routines cannot be directly used for DF operators.

### _DDF System Design_

The composition of a DF introduces several engineering challenges in designing distributed DF systems. Similar to any distributed/parallel system design, let us first examine the computation and communication aspects broadly.

#### III-B1 Computation

Petersohn et al [4] recognize that many Pandas operators can potentially be implemented by a set of core operators, thereby reducing the burden of implementing a massive DDF API. Correspondingly, in a recent publication we observed that DF operators follow several generic distributed execution patterns [3]. We also identified that a DDF operator consists of three major sub-operators: 1. core local operator; 2. auxiliary local operators; and 3. communication operators. The _pattern_ governs how these sub-operators are arranged in a directed acyclic graph (DAG). Figure 2 depicts a distributed join operation composition, and Figure 3 shows the relationship between the concepts of _Cylon_ and Modin.

Fig. 2: Distributed DDF Sub-operator Composition [3] (Bottom: Join Operator Example)

A framework may choose to create tasks (i.e. the definition for a unit of work) for each of these sub-operators. A typical application would be a pipeline of multiple DDF operators. When using the AMT model, these tasks would be further expanded for each data partition (parallelism). Every task would produce input data for subsequent tasks. This dataflow governs the dependencies between tasks. When there are several operators in a DAG, it is common to see multiple local tasks grouped together. An _execution plan optimizer_ may identify such tasks and coalesce them into a single local task. We see these optimizations in the Apache Spark SQL _Tungsten_ optimizer [13]. As mentioned in Section I, data parallelism is natively supported by the BSP model. Since the executors own data partitions until the end of an application, they have the ability to perform all local compute tasks until they reach a communication boundary. As such, coalescing subsequent local tasks is inherently supported by the model itself, compared to AMT.

#### III-B2 Communication

Implementing DDF operators requires point-to-point (P2P) communication, as well as complex message passing between worker processes. We have identified several such collective communication routines, such as shuffle (all-to-all), scatter, (all)gather, broadcast, (all)reduce, etc., that are essential for DDF operators [3].
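To make the role of these routines concrete, the following is a minimal, framework-agnostic sketch of a hash-partition shuffle over a keyed table. Here `all_to_all` stands in for a collective supplied by the underlying communicator (MPI/Gloo/UCX); the names are illustrative and not _Cylon_'s actual API.

```
# Sketch of a hash-partition shuffle: route each row to a destination worker
# by hashing its key column, then exchange partitions with a collective.
from typing import Any, Callable, Dict, List, Tuple

Row = Tuple[Any, ...]

def hash_partition(rows: List[Row], key_idx: int, world_size: int) -> Dict[int, List[Row]]:
    """Assign each row to a destination worker by hashing its key column."""
    parts: Dict[int, List[Row]] = {r: [] for r in range(world_size)}
    for row in rows:
        parts[hash(row[key_idx]) % world_size].append(row)
    return parts

def shuffle(rows: List[Row], key_idx: int, world_size: int,
            all_to_all: Callable[[Dict[int, List[Row]]], List[List[Row]]]) -> List[Row]:
    """After the exchange, rows with equal keys reside on the same worker."""
    outgoing = hash_partition(rows, key_idx, world_size)
    incoming = all_to_all(outgoing)  # lists of rows received from each peer
    return [row for part in incoming for row in part]
```

In practice, a DDF shuffle operates on serialized column buffers rather than individual rows, as discussed next.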
Typically, communication routines are performed on data buffers (e.g. MPI, TCP), but the DF composition dictates that these routines be extended to data structures such as DFs, arrays, and scalars. Such data structures may be composed of multiple buffers (Section III-A), which could further complicate the implementation. For example, join requires a DF to be shuffled; to do this, we must first AllToAll the buffer sizes of all columns (counts), and then shuffle the column data based on these counts. In most DF applications, communication operations may take up significant wall time, creating critical bottlenecks. This is evident from Section V-A, where we evaluate the distribution of communication and computation time over several DF operator patterns. Moreover, the developer documentation of Spark SQL, Dask DDF, Ray Datasets, etc., provides special guidelines to reduce shuffle routine overheads [14, 15]. While these communication routines can be implemented naively using point-to-point message passing, implementations of specialized algorithms have shown significant performance improvements [16, 17, 18]. For instance, OpenMPI implements several such algorithms for its collective communications, which can be chosen based on the application. Typically in AMT runtimes, communications between tasks are initiated with the help of a _Scheduler_. Another approach is to use a distributed object store or a network file system to share data rather than sending/receiving data explicitly, although this could lead to severe communication overhead.

### _DDF Systems Examined_

Let us examine several of the most commonly used DDF systems to understand their distributed execution models and broad design choices. We will then compare these systems with our novel approach described in Section IV.

#### III-C1 Dask DDF

Dask DDF is a distributed DF composed of many Pandas DFs partitioned along the vertical axis. Operators on Dask DDFs are decomposed into tasks which are then arranged in a DAG (Figure 4 depicts a Join operation). The Dask-Distributed Scheduler then executes these tasks on Dask-Workers. This DDF execution is a good example of the AMT model. Core local operators are offloaded to Pandas. Communication operators (mainly shuffle) support point-to-point TCP message passing using the _Partd_ disk-backed distributed object store.

#### III-C2 Ray Datasets

Ray Datasets is a DDF-like API composed of Apache Arrow tables or Python objects stored in the distributed object store. Similar to Dask, distributed operators (_Transforms_) follow the AMT model. Interestingly, they support a _task strategy_ as well as an _actor strategy_. The latter is recommended when expensive state initialization (e.g. for GPU-based tasks) needs to be cached. As for communication, a map-reduce style shuffle is used, where map tasks partition blocks by value and reduce tasks merge co-partitioned blocks together. Essentially, Ray communication operators are backed by the object store. For larger data, the documentation suggests using a _push-based shuffle_.

#### III-C3 Apache Spark Dataset

It is fair to say that Apache Spark is the most popular actor-based data engineering framework available today, and it has attracted a large developer community since its initial publication, _Resilient Distributed Datasets (RDDs)_ [5]. _PySpark Dataset_ is a DDF-like API, and recently a Pandas-like DDF named _Pandas on Spark_ was also released.
Similar to AMT, Spark decomposes operators into a collection of map-reduce tasks, after which a manager process schedules these tasks in executors allocated to the application. It uses _Akka-Actors_ to manage the driver (i.e. the process that submits applications), the manager, and the executors. Essentially, Spark implements AMT using the actor model for map-reduce tasks. All these processes run on a Java Virtual Machine (JVM), and could face significant (de)serialization overheads when transferring data to and from Python. As an optimization, the latest versions of PySpark enable the Apache Arrow columnar data format.

Fig. 3: _Cylon_ Operator Patterns & Modin DF Algebra

#### III-C4 Modin DDF

Modin [4] is the latest addition to the DDF domain. It introduces the concept of _DF algebra_ (Figure 3), where a DDF operator can be implemented as a combination of core operators. It executes on Dask & Ray backends, which also provide the communication means for the DDF. Modin distinguishes itself by attempting to mirror the Pandas API and by following eager execution.

## IV _Cylon & CylonFlow_: High Performance DDFs in Dask & Ray

Through our research, we have encountered several performance limitations while using the aforementioned DDF systems for large datasets. As discussed in Section V, many of these DDFs show limited scalability, and we believe the limitations of the AMT model could be a major contributor to that. A centralized scheduler might create a scheduling bottleneck. Additionally, the lack of a dedicated optimized communication mechanism further compounds the issues. It is fair to assume that the optimization of communication routines is orthogonal to designing distributed computing libraries such as Dask/Ray, and re-purposing generic distributed data-sharing mechanisms for complex communication routines may lead to suboptimal performance when used in DDF implementations.

In a recent publication we proposed an alternative approach for DDFs that uses the BSP execution model, which we named _Cylon_ [2]. It is built on top of MPI and uses MPI collective communication routines for DDF operator implementations. MPI libraries (OpenMPI, MPICH, IBM-Spectrum) have matured over the past few decades to employ various optimized distributed communication algorithms, and _Cylon_ benefits heavily from these improvements. It also profits from data parallelism and the implicit coalescing of local tasks by employing the BSP model. Experiments show commendable scalability with _Cylon_, fittingly differentiating it as a _high performance DDF (HP-DDF)_.

Even though high performance DDFs seem encouraging, having to depend on an MPI environment introduces several constraints. MPI process bootstrapping is tightly coupled to the underlying MPI implementation, e.g. OpenMPI employs PMIx. As a result, it is not possible to use MPI as a separate communication library on top of distributed computing libraries such as Dask/Ray. Usually these libraries bootstrap their worker processes by themselves, and there is no straightforward way for the MPI runtime to bind to these workers. We strongly believe it is worthwhile to expand the HP-DDF concept beyond MPI-like environments. Current advancements in technology and the high demand for efficient data engineering solutions encourage this position. Our main motivation for this paper is to develop an execution environment where we could strike a balance between the scalability of BSP and the flexibility of AMT. Dask and Ray have proven track records as distributed computing libraries.
So rather than building a new system from scratch, we focused on bridging the gap between BSP and these libraries. We propose a two-pronged solution to this problem. First, we create a stateful pseudo-BSP execution environment using the computing resources of the execution runtime. This lays the foundation for HP-DDF execution. The second step is using a modularized _communicator_ abstraction (i.e. an interface that defines communication routines) that enables plugging-in optimized communication libraries. We named this project _CylonFlow_, as it embraces the idea of _managing a workflow_.

### _Stateful Pseudo-BSP Execution Environment_

Within this pseudo-BSP environment, executors initialize an optimized communication library and attach it to the state of the executor. The state keeps this communication context alive for the duration of a _CylonFlow_ application. This allows the _CylonFlow_ runtime to reuse the communication context without having to reinitialize it, which could be an expensive exercise for larger parallelisms. Once the environment is set up, the executors implicitly coalesce and carry out local operations until a communication boundary is met. The state can also be used to share data between _CylonFlow_ applications, as discussed in Section IV-C.

Fig. 4: Dask DDF join (2 partitions)

This proposition of creating stateful objects matches perfectly with the actor model. Thus we leveraged the actor APIs available in Dask and Ray to implement _CylonFlow_-on-Dask and _CylonFlow_-on-Ray (Figure 5). An actor is a reference to a designated object (_CylonActor_ class) residing in a remote worker. The driver/user code would call methods on this remote object, and during the execution of this call, the _CylonFlow_ runtime passes the communication context as an argument. Inside these methods, users can now express their data engineering applications using _Cylon_ DDFs. This approach enables partitioning of the cluster resources and scheduling _independent applications_. It would be a much more coarsely grained work separation, but we believe the abundance of computing units and storage in modern processor hardware, and their availability through cloud computing, could still sustain it. To the best of our knowledge, this is the first time actors are being used together with a dedicated communication library to develop HP-DDF runtimes. This approach is semantically different from actors in Apache Spark, where they would still be executing _independent tasks_ in an AMT manner. Neither should it be confused with other orthogonal projects like _Dask-MPI_, which is used to deploy a Dask cluster easily from within an existing MPI environment.

Upon the initialization of the application, _CylonFlow_ sends the _Cylon_ actor definition (a class) to a partition of workers in the cluster based on the required parallelism. The workers then initialize these as actor instances (remote objects). At the same time, the actor instances initialize communication channels between each other, which is the entry point for creating _Cylon_ DDFs (i.e. _Cylon_env_). Instantiating a _Cylon_env_ could be an expensive operation, especially with large parallelism, as it opens up P2P communication channels between the remote objects. The _Cylon_ actor class exposes three main endpoints:

1. start_executable: Allows users to submit an executable class that would be instantiated inside the actor instance.
2. execute_Cylon: Executes functions of the executable that accept a _Cylon_env_ object and produce a Future.
3. run_Cylon: Executes a lambda function that accepts a _Cylon_env_ object and produces a Future.
The following is an example code which creates two _Cylon_ DFs using Parquet files and performs a join (merge) on them.

```
def foo(env: CylonEnv = None):
    df1 = read_parquet(..., env=env)
    df2 = read_parquet(..., env=env)
    write_parquet(df1.merge(df2, ..., env=env), ..., env=env)

init()
wait(CylonExecutor(parallelism=4).run_Cylon(foo))
```

#### IV-A1 Spawning Dask Actors

Dask does not have a separate API endpoint to reserve a set of workers for an application. Consequently, _CylonFlow_ uses the Distributed.Client API to collect a list of all available workers. It then uses the Client.map API endpoint with a chosen list of workers (based on the parallelism) to spawn the actor remote objects. Dask actor remote objects open up a direct communication channel to the driver, which they would use to transfer the results back. This avoids an extra network hop through the scheduler and achieves lower latency.

#### IV-A2 Spawning Ray Actors

Ray provides a _Placement Groups_ API that enables reserving groups of resources across multiple nodes (known as gang-scheduling). _CylonFlow_ creates a placement group with the required parallelism and submits the _Cylon_ actor definition to it. In the Ray documentation [15], communicating actors such as this are called out-of-band communication.

### _Modularized Communicator_

Once the pseudo-BSP environment is set up, _Cylon_ HP-DDF communication routines can pass messages amongst the executors. However, we would still not be able to reuse the MPI communications due to the limitations we discussed previously. To address this, we had to look for alternative communication libraries which could allow us to implement _Cylon_ communication routines outside of MPI without compromising scalability & performance. We achieved this by modularizing the _Cylon_ communicator interface and adding abstract implementations of the DDF communication routines discussed in Section III. This allowed us to conveniently integrate the Gloo and UCX/UCC libraries as alternatives to MPI. Communicator performance experiments in Section V-B demonstrate that these libraries perform as well as, if not better than, MPI on the same hardware.

#### IV-B1 Gloo [19]

Gloo is a collective communications library managed by the Meta Inc. incubator [19], predominantly aimed at machine learning applications. PyTorch uses it for distributed all-reduce operations. It currently supports TCP, UV, and ibverbs transports. The Gloo communication runtime can be initialized using an MPI Communicator or an NFS/Redis key-value store (P2P message passing is not affected). Within MPI environments _Cylon_ uses the former, but for the purposes of _CylonFlow_ it uses the latter. As an incubator project, Gloo lacks a comprehensive algorithm implementation, yet our experiments confirmed that it scales admirably. We have extended the Gloo project to suit the _Cylon_ communicator interface.

Fig. 5: _CylonFlow_ Using Actors for HP-DDF

#### IV-B2 Unified Communication X (UCX) [20]

UCX is a collection of libraries and interfaces that provides an efficient and convenient way to construct widely used HPC protocols on high-speed networks, including MPI tag matching, Remote Memory Access (RMA) operations, etc. Unlike MPI runtimes, UCX communication workers are not bound to a process bootstrapping mechanism. As such, it is being used by many frameworks, including Apache Spark and RAPIDS (Dask-CuDF). It provides primitive P2P communication operations.
Unified Collective Communication (UCC) is a collective communication API built on top of UCX which is still being developed. Similar to MPI, UCC implements multiple communication algorithms for collective communications. Based on our experiments, UCX+UCC performance is on par with or better than OpenMPI. _CylonFlow_ uses a Redis key-value store to instantiate communication channels between _Cylon_ actors.

### _Sharing Results With Downstream Applications_

As discussed in Section IV-A, this approach allows partitioning of the cluster resources and scheduling of individual applications. These applications may contain data dependencies, for example, multiple data preprocessing applications feeding data into a distributed deep learning application. However, this typically produces DDFs, and it would not be practical to collect intermediate results to the driver program. We propose a _CylonFlow_ data store (i.e. _Cylon_store_) abstraction to retain these results. In the following example, data_df and aux_data_df will be computed in parallel on two resource partitions, and the main function would continue to execute the deep learning model.

```
def process_aux_data(env: CylonEnv = None, store: CylonStore = None):
    aux_data_df = ...
    ...

def main(env: CylonEnv = None, store: CylonStore = None):
    data_df = ...
    aux_data_df = ...
    ...
```

## V Experiments

In this section, we evaluate the proposed approach over a set of DDF operators and the underlying communication routines from the point of view of DDF operator design. Only operator timings have been considered (without data loading time).
Input data will either be loaded from the driver to the workers or loaded as Parquet files from the workers themselves (Dask & Apache Spark discourage the former). Data is then repartitioned based on parallelism and cached. We admit that in real applications, operator performance alone may not portray a comprehensive idea of the overall performance of a data engineering framework. But we believe it is reasonably sufficient for the purpose of proposing an alternative approach for execution. Dask DDFs, Ray Datasets, Spark Datasets, and Modin DDFs are only used here as baselines. We tried our best to refer to publicly available documentation, user guides, and forums while carrying out these tests to get the optimal configurations.

### _Communication & Computation_

Out of the 3 operators considered, join has the most communication overhead, as it is a binary operator (2 input DFs). We investigated how the communication and computation time varies based on the parallelism. Even at the smallest parallelism (32), there is a significant communication overhead (Gloo 27%, MPI 17%, UCX 17%), and as the parallelism increases, it dominates the wall time (Gloo 76%, MPI 86%, UCX 69%). Unfortunately, we did not have enough expertise in the Spark, Dask, or Ray DDF code bases to run a similar micro-benchmark. But even while using libraries specialized for message passing, _Cylon_ encounters significant communication overhead.

### _OpenMPI vs. Gloo vs. UCX/UCC_

In this experiment, we test the scalability of the _Cylon_ communicator implementations (for the join operation). As discussed in Section IV, we would not be able to use MPI implementations inside distributed computing libraries. Figure 7 confirms that our alternative choices of Gloo and UCX/UCC show equivalent performance and scalability. In fact, UCX/UCC outperforms OpenMPI at higher parallelisms. We have seen this trend in other operator benchmarks as well.

### _CylonFlow-on-Dask & CylonFlow-on-Ray_

In this experiment we showcase the performance of the proposed HP-DDF approach on distributed computing libraries (Dask & Ray) against their own DDF implementations (Dask DDF & Ray Datasets). Unfortunately, we encountered several challenges with Ray Datasets. It only supports unary operators currently, therefore we could not test joins. Moreover, Ray groupby did not complete within 3 hours, and sort was not showing presentable results. We have also included Apache Spark, since the proposed approach leverages the actor model. We enabled the Apache Arrow feature in PySpark to make the comparison fairer. We also added Modin DDFs to the mix. Unfortunately, it only supports broadcast joins, which perform poorly on two similarly sized DFs. We could only get Modin to run on the Ray backend with our datasets, and it would default to Pandas for sort. Pandas serial performance is also added as a baseline comparison.

Looking at the 1 billion row strong scaling timings in Figure 8, we observe that _Cylon_, _CylonFlow_-on-Dask, & _CylonFlow_-on-Ray are nearly indistinguishable (using Gloo communication). Thus it is evident that the proposed _CylonFlow_ actor approach on top of Dask/Ray does not add any unexpected overheads to vanilla _Cylon_ HP-DDF performance. Dask & Spark Datasets show commendable scalability for join and sort; however, the former's groupby displays very limited scalability. We investigated Dask & Spark further by performing a 100 million row test case (bottom row of Figure 8), which constitutes a communication-bound operation.
Under these circumstances, both systems diverge significantly at higher parallelisms, indicating limitations in their communication implementations. We also noticed a consistent anomaly in Spark timings for 8-32 parallelism. We hope to further investigate this with the help of the Spark community. _CylonFlow_ also shows decreasing scalability, but with much smoother gradients, and displays better communication performance. These findings reinforce our suggestion to use a pseudo-BSP environment that employs a modular communicator. In fact, our preliminary tests suggested that using the UCX/UCC communicator could potentially improve the performance further in the same setup (Section V-B).

Fig. 6: Communication & Computation Breakdown of _Cylon_ Join Operation (1B rows)

Fig. 7: OpenMPI, Gloo, vs. UCX/UCC (1B rows, Log-Log) - Processes spawned by mpirun

At 512 parallelism, on average _CylonFlow_ performs \(142\times\), \(123\times\), and \(118\times\) better than Pandas serial performance for join, groupby, and sort respectively. We also observe that the serial performance of _CylonFlow_ consistently outperforms the others, which could be directly related to _Cylon_'s C++ implementation and the use of the Apache Arrow format. At every parallelism, _CylonFlow_ distributed performance is consistently \(2-4\times\) higher than Dask/Spark. These results confirm the efficacy of the proposed approach.

### _Pipeline of Operators_

We also tested the following pipeline on _CylonFlow_, Dask DDF, & Spark Datasets: join - groupby - sort - add_scalar. As depicted in Figure 9, the gains of _CylonFlow_ become more pronounced in composite use cases. The average speed-up over Dask DDFs ranges from \(10-24\times\), while for Spark Datasets it is \(3-5\times\). As mentioned in Section IV, _Cylon_ execution coalesces all local operators that are in-between communication routines in the pipeline, and we believe this is a major reason for this gain.
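For illustration, this pipeline could be expressed in the _CylonFlow_ style of the earlier example as follows. This is only a sketch: the `groupby`/`sort_values` method names and the scalar addition are assumptions mirroring the Pandas API, not confirmed _Cylon_ API.

```
def pipeline(env: CylonEnv = None):
    df1 = read_parquet(..., env=env)
    df2 = read_parquet(..., env=env)
    joined = df1.merge(df2, ..., env=env)        # join: shuffle + local join
    grouped = joined.groupby(..., env=env)       # groupby: shuffle + local aggregation
    ordered = grouped.sort_values(..., env=env)  # sort: distributed exchange + local sort
    result = ordered + 1                         # add_scalar: purely local, no communication
    write_parquet(result, ..., env=env)

init()
wait(CylonExecutor(parallelism=4).run_Cylon(pipeline))
```

The purely local add_scalar step illustrates the kind of operator that gets coalesced with adjacent local work between communication boundaries.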
## VI Limitations & Future Work

From our findings in Section IV, the idea of using BSP execution environments is a very common use case in HPC and supercomputing clusters, and the _CylonFlow_ concept readily fits these environments. We are currently working with the Radical-Cybertools and Parsl teams to extend _CylonFlow_ to leadership class supercomputers based on a workflow management software stack. In addition, we plan to extend _CylonFlow_ on top of pure actor libraries such as Akka. This would enable _Cylon_'s native performance on the JVM using the Java Native Interface (JNI). We are currently adding these JNI bindings to _Cylon_ & _CylonFlow_.

In Section V we saw significant time being spent on communication. On modern CPU hardware, we can perform computation while waiting on communication results. Since an operator consists of sub-operators arranged in a DAG, we can exploit _pipeline parallelism_ by overlapping communication and computation. Furthermore, we can also change the granularity of a computation such that it fits into CPU caches. We have made some preliminary investigations on these ideas, and we were able to see significant performance improvements for _Cylon_.

Fig. 8: Strong Scaling of DDF Operators (Log-Log), Top: 1B rows, Bottom: 100M rows

Fig. 9: Pipeline of Operators (1B rows, Log-Log)

Section IV proposed a _CylonFlow_ data store that allows sharing data with downstream applications. This work is still under active development. Providing fault tolerance in an MPI-like environment is quite challenging, as it operates under the assumption that the communication channels are alive throughout the application. This means providing communication-level fault tolerance would be complicated. However, we are planning to add a checkpointing mechanism that would allow a much coarser-level fault tolerance. Load imbalance (especially with skewed datasets) could starve some processes and might reduce the overall throughput. To avoid such scenarios, we are working on a sample-based repartitioning mechanism.

## VII Related Work

In a previous publication we proposed a formal framework for designing and developing high performance data engineering frameworks that includes data structures, architectures, and program models [22]. Kamburugamuve et al proposed a similar big data toolkit named _Twister2_ [8], which is based on Java. There the authors observed that using a BSP-like environment for data processing improves scalability, and they also introduced a DF-like API in Java named _TSets_. However, _Cylon_, being developed in C++, enables native hardware performance and provides a more robust integration with Python and R. Being an extension built in Python, _CylonFlow_ still manages to achieve the same performance as _Cylon_. In parallel to _Cylon_, Totoni et al also suggested a similar HP-DDF runtime named _HiFrames_ [23]. They primarily attempt to compile native MPI code for DDF operators using numba. While there are several architectural similarities between _HiFrames_ and _Cylon_, the latter is the only open-source HP-DDF available at the moment. The former is still bound to MPI, hence it would be impractical to use it with distributed computing libraries like Dask/Ray. Horovod utilizes Ray actors that use Gloo communication for data parallel deep learning in its _Horovod-on-Ray_ project [24]. From the outset, this has many similarities to _CylonFlow_-on-Ray, but the API only supports communications on tensors. _Cylon/CylonFlow_ is a more generic approach that could support both DFs & tensors. In fact, these could be complementary frameworks, where data preprocessing and deep learning are integrated together in a single pipeline. In addition to the DDF runtimes we discussed in this paper, we would also like to recognize some exciting new projects. Velox is a C++ vectorized database acceleration library managed by the Meta Inc. incubator [25]. Currently it does not provide a DF abstraction, but it still offers most of the operators shown in Figure 3. Photon is another C++ based vectorized query engine, developed by Databricks [26], that enables native performance in the Apache Spark ecosystem. Unfortunately, it has yet to be released to the open source community. Substrait is another interesting model that attempts to produce an independent description of data compute operations [27].

## VIII Conclusion

Scalable dataframe systems are vital for modern data engineering applications, but despite this, many systems available today fail to meet scalability expectations. In this paper, we present an alternative approach for scalable dataframes, _CylonFlow_, which attempts to bring high performance computing into distributed computing runtimes. The proposed stateful pseudo-BSP environment and modularized communicator enable state-of-the-art scalability and performance on Dask and Ray environments, thereby _supercharging them_.
_CylonFlow_ is compared against Dask and Ray's own dataframe systems, as well as Apache Spark, Modin, and Pandas. Using the _Cylon_ HP-DDF C++ backend and the Apache Arrow format gives _CylonFlow_ superior sequential performance to the competition. The modular communicator in _CylonFlow_ allows swapping in Gloo and UCX/UCC for DDF communications, which enables scalable distributed performance on Dask/Ray environments. In essence, _CylonFlow_ creates a ubiquitous data engineering ecosystem that unifies both the HPC and distributed computing communities.
2308.15741
Stability and regularization for ill-posed Cauchy problem of a stochastic parabolic differential equation
In this paper, we investigate an ill-posed Cauchy problem involving a stochastic parabolic equation. We first establish a Carleman estimate for this equation. Leveraging this estimate, we derive the conditional stability and convergence rate of the Tikhonov regularization method for the aforementioned ill-posed Cauchy problem. To complement our theoretical analysis, we employ kernel-based learning theory to implement the completed Tikhonov regularization method for several numerical examples.
Fangfang Dou, Peimin Lü, Yu Wang
2023-08-30T03:39:14Z
http://arxiv.org/abs/2308.15741v2
Stability and regularization for ill-posed Cauchy problem of a stochastic parabolic differential equation

###### Abstract

In this paper, we investigate an ill-posed Cauchy problem involving a stochastic parabolic equation. We first establish a Carleman estimate for this equation. Leveraging this estimate, we are able to derive the conditional stability and convergence rate of the Tikhonov regularization method for the aforementioned ill-posed Cauchy problem. To complement our theoretical analysis, we employ kernel-based learning theory to implement the completed Tikhonov regularization method for several numerical examples.

**2020 Mathematics Subject Classification**. 35R30, 65N21, 60H15.

**Key Words**. Carleman estimates, Cauchy problem of stochastic parabolic differential equation, conditional stability, regularization, numerical approximation.

## 1 Introduction

To begin with, we introduce some notation concerning stochastic analysis. More details on these can be found in [24]. Let \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) with \(\mathbb{F}=\{\mathcal{F}_{t}\}_{t\geq 0}\) be a complete filtered probability space on which a one-dimensional standard Brownian motion \(\{W(t)\}_{t\geq 0}\) is defined. Let \(H\) be a Frechet space. We denote by \(L^{2}_{\mathbb{F}}(0,T;H)\) the Frechet space consisting of all \(H\)-valued \(\mathbb{F}\)-adapted processes \(X(\cdot)\) such that \(\mathbb{E}(|X(\cdot)|^{2}_{L^{2}(0,T;H)})<\infty\); by \(L^{\infty}_{\mathbb{F}}(0,T;H)\) the Frechet space consisting of all \(H\)-valued \(\mathbb{F}\)-adapted bounded processes; and by \(L^{2}_{\mathbb{F}}(\Omega;C([0,T];H))\) the Frechet space consisting of all \(H\)-valued \(\mathbb{F}\)-adapted continuous processes \(X\) such that \(\mathbb{E}(|X|^{2}_{C([0,T];H)})<\infty\). All of the above spaces are equipped with the canonical quasi-norms. Furthermore, all of the above spaces are Banach spaces equipped with the canonical norms, if \(H\) is a Banach space.

For simplicity, we use the notation \(y_{i}\equiv y_{i}(x)=\frac{\partial y(x)}{\partial x_{i}}\), where \(x_{i}\) is the \(i\)-th coordinate of a generic point \(x=(x_{1},\cdots,x_{n})\) in \(\mathbb{R}^{n}\). In a similar manner, we use the notation \(z_{i}\), \(v_{i}\), etc. for the partial derivatives of \(z\) and \(v\) with respect to \(x_{i}\). Also, we denote the scalar product in \(\mathbb{R}^{n}\) by \(\left<\cdot,\cdot\right>\), and use \(C\) to denote a generic positive constant independent of the solution \(y\), which may change from line to line.

Let \(T>0\), \(G\subset\mathbb{R}^{n}\) (\(n\in\mathbb{N}\)) be a given bounded domain with the \(C^{4}\) boundary \(\partial G\), and \(\Gamma\) be a given nonempty open subset of \(\partial G\).
Let \(a_{1}\in L^{2}_{\mathbb{F}}(0,T;L^{\infty}(G;\mathbb{R}^{n}))\), \(a_{2}\in L^{\infty}_{\mathbb{F}}(0,T;L^{\infty}(G))\), \(a_{3}\in L^{\infty}_{\mathbb{F}}(0,T;W^{1,\infty}(G))\), \(g_{1}\in L^{2}_{\mathbb{F}}(0,T;H^{1}(\Gamma))\) and \(g_{2}\in L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma))\), and let the matrix-valued function \((a^{ij})_{i,j=1}^{n}:\;\Omega\times[0,T]\times\overline{G}\to\mathbb{R}^{n\times n}\) satisfy

**(H1)**\(a^{ij}\in L^{2}_{\mathbb{F}}(\Omega;C^{1}([0,T];W^{2,\infty}(G)))\) _and \(a^{ij}=a^{ji}\) \((i,j=1,2,\cdots,n)\);_

**(H2)**\(\sum\limits_{i,j=1}^{n}a^{ij}(\omega,t,x)\xi^{i}\xi^{j}\geq s_{0}|\xi|^{2},\;(\omega,t,x,\xi)\equiv(\omega,t,x,\xi^{1},\cdots,\xi^{n})\in\Omega\times(0,T)\times G\times\mathbb{R}^{n}\) _for some \(s_{0}>0\)._

Now, the Cauchy problem for the forward stochastic parabolic differential equation can be described as follows.

\[\begin{cases}dy-\sum\limits_{i,j=1}^{n}(a^{ij}y_{i})_{j}dt=[\left\langle a_{1},\nabla y\right\rangle+a_{2}y]dt+a_{3}ydW(t)&\text{ in }(0,T)\times G,\\ y=g_{1}&\text{ on }(0,T)\times\Gamma,\\ \frac{\partial y}{\partial\nu}=g_{2}&\text{ on }(0,T)\times\Gamma.\end{cases} \tag{1.1}\]

Let \(\mathbf{G}_{\Gamma}\stackrel{{\triangle}}{{=}}\{G^{\prime}\subset G|\partial G^{\prime}\cap\partial G\subset\Gamma\}\) and

\[H^{2}_{\Gamma}(G)\stackrel{{\triangle}}{{=}}\{\eta\in H^{2}_{loc}(G):\eta|_{G^{\prime}}\in H^{2}(G^{\prime}),\quad\forall G^{\prime}\in\mathbf{G}_{\Gamma},\quad\eta|_{\Gamma}=g_{1},\;\partial_{\nu}\eta|_{\Gamma}=g_{2}\}.\]

The aim of this paper is to study the Cauchy problem with lateral data for the stochastic parabolic differential equation:

**Problem (CP)** Find a function \(y\in L^{2}_{\mathbb{F}}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma}(G))\) that satisfies system (1.1).

In certain applications involving diffusive, thermal, and heat transfer problems, measuring some boundary data can be challenging. For instance, in nuclear reactors and steel furnaces in the steel industry, the interior boundary can be difficult to measure. Similarly, the use of liquid crystal thermography in visualization can pose the same problem. To address this issue, engineers attempt to reconstruct the status of the boundary using measurements from the accessible boundary. This, in turn, gives rise to the Cauchy problem for parabolic equations. Due to the requirements of real applications, numerous researchers have focused on solving the Cauchy problem of parabolic differential equations, particularly on inverse problems for deterministic parabolic equations, as seen in [14, 15, 16, 17, 19, 22, 31, 33] and the references therein. Among these works, some have studied the identification of coefficients in parabolic equations through lateral Cauchy observations, such as uniqueness and stability [14, 19, 31, 33], and numerical reconstruction [15], assuming that the initial value is known. Meanwhile, the determination of the initial value has been considered in [16, 17, 22] under the assumption that all coefficients in the governing equation are known.

Stochastic parabolic equations have a wide range of applications for simulating various behaviors of stochastic models which are utilized in numerous fields such as random growth of bacteria populations, propagation of electric potential in neuron models, and physical systems that are subject to thermal fluctuations (e.g., [18, 24]). In addition, they can be considered as simplified models for complex phenomena such as turbulence, intermittence, and large-scale structure (e.g., [8]).
Given the significant applications of these models, stochastic parabolic equations have been extensively studied in both physics and mathematics. Therefore, it is natural to study the Cauchy problem of stochastic parabolic equations in such situations. However, due to the complexity of stochastic parabolic equations, some tools become ineffective for solving these problems. Thus, research on inverse problems for stochastic parabolic differential equations is relatively scarce. In [1], the author proved backward uniqueness of solutions to stochastic semilinear parabolic equations and tamed Navier-Stokes equations driven by linearly multiplicative Gaussian noises, via the logarithmic convexity property known to hold for solutions to linear evolution equations in Hilbert spaces with self-adjoint principal parts. In [11, 27], the authors studied inverse random source problems for the stochastic time fractional diffusion equation driven by a spatial Gaussian random field, proving the uniqueness and representation for the inverse problems, as well as proposing a numerical method using Fourier methods and Tikhonov regularization. Carleman estimates play an important role in the study of inverse problems for stochastic parabolic differential equations, such as inverse source problems [23, 35], determination of the history for stochastic diffusion processes [4, 23, 30, 34], and unique continuation properties [6, 32]. We refer the reader to [25] for a survey of some recent advances in Carleman estimates and their applications to inverse problems for stochastic partial differential equations.

In this paper, our objective is to solve **Problem (CP)**, i.e., we aim to retrieve the solution of equation (1.1) from observed data on the lateral boundary. To this end, we first prove conditional stability based on a new Carleman estimate for the stochastic parabolic equation (1.1). Then we construct a Tikhonov functional for the Cauchy problem based on the Tikhonov regularization strategy and prove the uniqueness of the minimizer of the Tikhonov functional, as well as the convergence rate for the optimization problem, using variational principles, Riesz theorems, and the Carleman estimate established previously.

Generally, the optimization problem for the Tikhonov functional is difficult to solve in the study of inverse problems in stochastic partial differential equations (SPDEs). This is because it involves solving the adjoint problem of the original problem, which is challenging to handle. In fact, one of the primary differences between stochastic parabolic equations and deterministic parabolic equations is that at least one partial derivative of the solution does not exist, making it impossible to express the solution of that equation explicitly. Fortunately, we can express the mild solution of stochastic parabolic equations using the fundamental solution of the corresponding deterministic equation [7, 9]. This idea suggests that we can use kernel-based learning theory to numerically solve the minimization problem of the Tikhonov functional without computing the adjoint problem. Furthermore, we can solve the problem in one step without iteration, thus reducing the computational cost to some extent. This technique has gained attention in the study of numerical computation for ordinary and partial differential equations, and the use of fundamental solutions as kernels has proven effective for solving inverse problems in deterministic evolution partial differential equations.
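To fix ideas, this representation takes, schematically, the following Duhamel form, where \(\Gamma(t,x;s,\xi)\) denotes the fundamental solution of the deterministic part of the parabolic operator in (1.1); we record it only as a schematic illustration, with boundary contributions omitted:

\[y(t,x)=\int_{G}\Gamma(t,x;0,\xi)y(0,\xi)d\xi+\int_{0}^{t}\int_{G}\Gamma(t,x;s,\xi)a_{3}(s,\xi)y(s,\xi)d\xi\,dW(s),\qquad\mathbb{P}\text{-a.s.}\]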
As far as we know, our work is the first attempt to apply the regularization method, combined with kernel-based learning theory, to solve inverse problems for stochastic parabolic differential equations.

The strong solution of equation (1.1) is useful in the proof of the convergence rate of the regularization method. Thus, we recall it here.

**Definition 1.1**: _We call \(y\in L^{2}_{\mathbb{F}}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma}(G))\) a solution to the equation (1.1) if for any \(t\in[0,T]\) and a.e. \(x\in G^{\prime}\in\mathbf{G}_{\Gamma}\), it holds_

\[\begin{split}& y(t,x)-y(0,x)\\ =&\int_{0}^{t}\Big{\{}-\sum_{i,j=1}^{n}\big{(}a^{ij}(s,x)y_{i}(s,x)\big{)}_{j}+[\big{\langle}a_{1}(s,x),\nabla y(s,x)\big{\rangle}+a_{2}(s,x)y(s,x)]\Big{\}}ds\\ &+\int_{0}^{t}a_{3}(s,x)y(s,x)dW(s),\qquad\mathbb{P}\text{-a.s.}\end{split} \tag{1.2}\]

It should be noted that the assumption regarding the solution, namely that \(y\in L^{2}_{\mathbb{F}}(\Omega;C([0,T];L^{2}_{loc}(G)))\cap L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma}(G))\), implies a higher degree of smoothness than is strictly necessary for establishing Holder stability of the Cauchy problem. However, this additional smoothness is required in order to facilitate the regularization process.

The remainder of this paper is organized as follows. Section 2 presents a proof of a Holder-type conditional stability result, along with the establishment of the Carleman estimate as a particularly useful tool in this proof. The regularization method, using the Tikhonov regularization strategy, is introduced in Section 3, where we showcase both the uniqueness and the convergence rate of the regularized solution. Section 4 provides numerically simulated reconstructions aided by kernel-based learning theory.

## 2 Conditional Stability

In this section, we prove a stability estimate for the Cauchy problem.

**Theorem 2.1**: _(Holder stability estimate). For any given \(G^{\prime}\subset\subset G\) and \(\varepsilon>0\), there exist \(\delta_{0}\in(0,1)\), \(\beta\in(0,1)\) and a constant \(C>0\) such that if for \(\delta\in(0,\delta_{0})\),_

\[\max\{|g_{1}|_{L^{2}_{\mathbb{F}}(0,T;H^{1}(\Gamma))},\;|g_{2}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma))}\}\leq\delta, \tag{2.1}\]

_then_

\[\mathbb{E}\int_{\varepsilon}^{T-\varepsilon}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\leq C|y|^{2}_{L^{2}_{\mathbb{F}}((0,T);H^{1}(G))}\delta^{\beta},\quad\forall\delta\in(0,\delta_{0})\,. \tag{2.2}\]

We first recall the following exponentially weighted energy identity, which will play a key role in the sequel.

**Lemma 2.1**: _[_29_, Theorem 3.1]_ _Let \(n\) be a positive integer,_

\[b^{ij}=b^{ji}\in L^{2}_{\mathbb{F}}(\Omega;C^{1}([0,T];W^{2,\infty}(\mathbb{R}^{n}))),\qquad i,j=1,2,\cdots,n, \tag{2.3}\]

_and \(\ell\in C^{1,4}((0,T)\times\mathbb{R}^{n}),\;\Psi\in C^{1,2}((0,T)\times\mathbb{R}^{n})\). Assume \(u\) is an \(H^{2}(\mathbb{R}^{n})\)-valued continuous semi-martingale. Set \(\theta=e^{\ell}\) and \(v=\theta u\). Then for a.e. \(x\in\mathbb{R}^{n}\) and \(\mathbb{P}\)-a.s.
\(\omega\in\Omega\),_

\[2\int_{0}^{T}\theta\Big{[}-\sum_{i,j=1}^{n}(b^{ij}v_{i})_{j}+\mathcal{A}v\Big{]}\Big{[}du-\sum_{i,j=1}^{n}(b^{ij}u_{i})_{j}dt\Big{]}+2\int_{0}^{T}\sum_{i,j=1}^{n}(b^{ij}v_{i}dv)_{j}\]
\[\quad+2\int_{0}^{T}\sum_{i,j=1}^{n}\Big{[}\sum_{i^{\prime},j^{\prime}}\Big{(}2b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big{)}+\Psi b^{ij}v_{i}v-b^{ij}\Big{(}\mathcal{A}\ell_{i}+\frac{\Psi_{i}}{2}\Big{)}v^{2}\Big{]}_{j}\,dt\]
\[=2\int_{0}^{T}\sum_{i,j=1}^{n}\Big{\{}\sum_{i^{\prime},j^{\prime}}\Big{[}2b^{ij^{\prime}}\Big{(}b^{i^{\prime}j}\ell_{i^{\prime}}\Big{)}_{j^{\prime}}-\Big{(}b^{ij}b^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\Big{)}_{j^{\prime}}\Big{]}-\frac{b^{ij}_{t}}{2}+\Psi b^{ij}\Big{\}}v_{i}v_{j}dt \tag{2.4}\]
\[\quad+\int_{0}^{T}\mathcal{B}v^{2}dt+2\int_{0}^{T}\Big{[}-\sum_{i,j=1}^{n}(b^{ij}v_{i})_{j}+\mathcal{A}v\Big{]}\Big{[}-\sum_{i,j=1}^{n}(b^{ij}v_{i})_{j}+(\mathcal{A}-\ell_{t})v\Big{]}dt\]
\[\quad+\Big{(}\sum_{i,j=1}^{n}b^{ij}v_{i}v_{j}+\mathcal{A}v^{2}\Big{)}\Big{|}_{0}^{T}-\int_{0}^{T}\theta^{2}\sum_{i,j=1}^{n}b^{ij}[(du_{i}+\ell_{i}du)(du_{j}+\ell_{j}du)]-\int_{0}^{T}\theta^{2}\mathcal{A}(du)^{2},\]

_where_

\[\left\{\begin{array}{l}{\cal A}\stackrel{{\triangle}}{{=}}-\sum_{i,j=1}^{n}(b^{ij}\ell_{i}\ell_{j}-b^{ij}_{j}\ell_{i}-b^{ij}\ell_{ij})-\Psi,\\ {\cal B}\stackrel{{\triangle}}{{=}}2\Big{[}{\cal A}\Psi-\sum_{i,j=1}^{n}({\cal A}b^{ij}\ell_{i})_{j}\Big{]}-{\cal A}_{t}-\sum_{i,j=1}^{n}(b^{ij}\Psi_{j})_{i}.\end{array}\right. \tag{2.5}\]

In the sequel, for a positive integer \(p\), denote by \(O(\mu^{p})\) a function of order \(\mu^{p}\) for large \(\mu\) (which is independent of \(\lambda\)); by \(O_{\mu}(\lambda^{p})\) a function of order \(\lambda^{p}\) for fixed \(\mu\) and for large \(\lambda\).

_Proof of Theorem 2.1._ We borrow some ideas from [21]. Take a bounded domain \(J\subset\mathbb{R}^{n}\) such that \(\partial J\cap\overline{G}=\Gamma\) and that \(\widetilde{G}=J\cup G\cup\Gamma\) enjoys a \(C^{4}\) boundary \(\partial\widetilde{G}\). Then we have

\[G\subset\widetilde{G},\quad\overline{\partial G\cap\widetilde{G}}\subset\Gamma,\quad\partial G\setminus\Gamma\subset\partial\widetilde{G}\quad\mbox{ and }\widetilde{G}\setminus G\mbox{ contains some nonempty open subset}. \tag{2.6}\]

Let \(G_{0}\subset\subset\widetilde{G}\setminus G\) be an open subdomain. We know that there is a \(\psi\in C^{4}(\widetilde{G})\) satisfying (see [29, Lemma 5.1] for example)

\[\left\{\begin{array}{ll}\psi>0&\mbox{ in }\widetilde{G},\\ \psi=0&\mbox{ on }\partial\widetilde{G},\\ |\nabla\psi|>0&\mbox{ in }G\subset\widetilde{G}\setminus G_{0}.\end{array}\right. \tag{2.7}\]

Since \(G^{\prime}\subset\subset G\), we can choose a sufficiently large \(N>0\) such that

\[G^{\prime}\subset\Big{\{}x:\,x\in\widetilde{G},\;\psi(x)>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}\Big{\}}\cap G. \tag{2.8}\]

Further, let \(\rho=\frac{1}{\sqrt{2}}\Big{(}\frac{1}{2}-\kappa\Big{)}T>0\); then there exists a positive number \(c\) such that

\[c\rho^{2}<|\psi|_{L^{\infty}(\widetilde{G})}<2c\rho^{2}. \tag{2.9}\]

Then, define

\[\phi(t,x)=\psi(x)-c(t-t_{0})^{2},\quad\alpha(t,x)=e^{\mu\phi(t,x)} \tag{2.10}\]

for a fixed \(t_{0}\in[\sqrt{2}\rho,T-\sqrt{2}\rho]\), and denote \(\beta_{k}=\beta_{k}(\mu)=e^{\mu\big{[}\frac{k}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}\big{]}},k=1,2,3,4\). Set

\[Q_{k}=\{(t,x):\,x\in\overline{G},\;\alpha(t,x)>\beta_{k}\},\quad k=1,2,3,4.
\tag{2.11}\]

Clearly, \(Q_{k}\) is independent of \(\mu\). Moreover, \(\psi(x)>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}\) for any \((t,x)\in\Big{(}t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\Big{)}\times G^{\prime}\), and thus

\[\psi(x)-c(t-t_{0})^{2}>\frac{4}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.\]

Hence, we see \(\alpha(t,x)>\beta_{4}\) for \((t,x)\in\Big{(}t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\Big{)}\times G^{\prime}\), and thus \(Q_{4}\supset\Big{(}t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\Big{)}\times G^{\prime}\). On the other hand, for any \((t,x)\in Q_{1}\),

\[\psi(x)-c(t-t_{0})^{2}>\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.\]

This yields

\[|\psi|_{L^{\infty}(\widetilde{G})}-\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}+\frac{c\rho^{2}}{N}>c(t-t_{0})^{2}.\]

Together with (2.9) we have

\[2\Big{(}1-\frac{1}{N}\Big{)}c\rho^{2}+\frac{c\rho^{2}}{N}>c(t-t_{0})^{2}.\]

Therefore, we conclude

\[\Big{(}t_{0}-\frac{\rho}{\sqrt{N}},t_{0}+\frac{\rho}{\sqrt{N}}\Big{)}\times G^{\prime}\subset Q_{1}\subset(t_{0}-\sqrt{2}\rho,t_{0}+\sqrt{2}\rho)\times\overline{G}. \tag{2.12}\]

Next, for any \((t,x)\in\partial Q_{1}\), we know \(x\in\overline{G}\) and \(\alpha(t,x)\geq\beta_{1}\). In the above set, if \(x\in G\), then \(\alpha(t,x)=\beta_{1}\), and if \(x\in\partial G\), it must hold that \(x\in\Gamma\). Indeed, if \(x\in\partial G\setminus\Gamma\), from \(\partial G\setminus\Gamma\subset\partial\widetilde{G}\) we get \(\psi(x)=0\). On the other hand, since \(\alpha(t,x)\geq\beta_{1}\), we know

\[\psi(x)-c(t-t_{0})^{2}=-c(t-t_{0})^{2}\geq\frac{1}{N}|\psi|_{L^{\infty}(\widetilde{G})}-\frac{c\rho^{2}}{N}.\]

Thus

\[0\leq c(t-t_{0})^{2}\leq\frac{1}{N}(c\rho^{2}-|\psi|_{L^{\infty}(\widetilde{G})}),\]

which contradicts (2.9). Therefore, we have

\[\partial Q_{1}=\Sigma_{1}\cup\Sigma_{2}, \tag{2.13}\]

where

\[\Sigma_{1}\subset(0,T)\times\Gamma,\qquad\Sigma_{2}=\{(t,x):\,x\in G,\,\alpha(t,x)=\beta_{1}\}. \tag{2.14}\]

Let \(\eta\in C_{0}^{\infty}(Q_{2})\) be such that \(0\leq\eta\leq 1\) and \(\eta=1\) in \(Q_{3}\). For any \(y\) solving (1.1), let \(z=\eta y\); then \(z\) solves

\[\left\{\begin{array}{ll}dz-\sum_{i,j=1}^{n}(a^{ij}z_{i})_{j}dt=\big{(}\big{\langle}a_{1},\nabla z\big{\rangle}+a_{2}z+f\big{)}dt+a_{3}zdW(t)&\mbox{ in }Q_{1},\\ \\ z=\frac{\partial z}{\partial\nu}=0&\mbox{ on }\Sigma_{2}.\end{array}\right. \tag{2.15}\]

Here \(f=-\sum_{i,j=1}^{n}(a_{j}^{ij}\eta_{i}y+a^{ij}\eta_{ij}y+2a^{ij}\eta_{i}y_{j})-\big{\langle}a_{1},\nabla\eta\big{\rangle}y+\eta_{t}y\in L^{2}_{\mathcal{F}}(0,T;L^{2}(G))\) and \(f\) is supported in \(Q_{2}\setminus Q_{3}\).
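For the reader's convenience, we record how \(f\) arises; this is a routine product-rule computation, stated here only as an illustrative step. Since \(dz=\eta\,dy+\eta_{t}y\,dt\) and \(z_{i}=\eta y_{i}+\eta_{i}y\), the symmetry \(a^{ij}=a^{ji}\) gives

\[\sum_{i,j=1}^{n}(a^{ij}z_{i})_{j}=\eta\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}+\sum_{i,j=1}^{n}\big{(}a_{j}^{ij}\eta_{i}y+a^{ij}\eta_{ij}y+2a^{ij}\eta_{i}y_{j}\big{)},\]

and, together with \(\big{\langle}a_{1},\nabla z\big{\rangle}=\eta\big{\langle}a_{1},\nabla y\big{\rangle}+\big{\langle}a_{1},\nabla\eta\big{\rangle}y\), substituting \(dy\) from (1.1) yields (2.15) with the above \(f\).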
Applying Lemma 2.1 to (2.15) with \(u=z\), \(b^{ij}=a^{ij}\), \(\ell=\lambda\alpha\) and \(\Psi=2\sum_{i,j=1}^{n}a^{ij}\ell_{ij}\), integrating it over \(G\), taking expectation, and noting that \(z\) is supported in \(Q_{1}\), we see \[2\mathbb{E}\int_{Q_{1}}\theta\Big{[}-\sum_{i,j=1}^{n}(a^{ij}v_{i})_{j}+\mathcal{A}v\Big{]}\Big{[}dz-\sum_{i,j=1}^{n}(a^{ij}z_{i})_{j}dt\Big{]}dx+2\mathbb{E}\int_{Q_{1}}\sum_{i,j=1}^{n}(a^{ij}v_{i}dv)_{j}dx\] \[+2\mathbb{E}\int_{Q_{1}}\sum_{i,j=1}^{n}\Big{[}\sum_{i^{\prime},j^{\prime}}\Big{(}2a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big{)}+\Psi a^{ij}v_{i}v-a^{ij}\Big{(}\mathcal{A}\ell_{i}+\frac{\Psi_{i}}{2}\Big{)}v^{2}\Big{]}_{j}dxdt\] \[\geq 2\sum_{i,j=1}^{n}\mathbb{E}\int_{Q_{1}}c^{ij}v_{i}v_{j}dxdt+\mathbb{E}\int_{Q_{1}}\mathcal{B}v^{2}dxdt+\mathbb{E}\int_{Q_{1}}\Big{[}-\sum_{i,j=1}^{n}(a^{ij}v_{i})_{j}+\mathcal{A}v\Big{]}^{2}dxdt\] \[-\mathbb{E}\int_{Q_{1}}\theta^{2}\sum_{i,j=1}^{n}a^{ij}(dz_{i}+\ell_{i}dz)(dz_{j}+\ell_{j}dz)dx-\mathbb{E}\int_{Q_{1}}\theta^{2}\mathcal{A}(dz)^{2}dx, \tag{2.16}\] where \[\left\{\begin{array}{l}\mathcal{A}=-\sum_{i,j=1}^{n}\big{(}a^{ij}\ell_{i}\ell_{j}-a^{ij}_{j}\ell_{i}-a^{ij}\ell_{ij}\big{)}-\Psi,\\ \\ \mathcal{B}=2\Big{[}\mathcal{A}\Psi-\sum_{i,j=1}^{n}\big{(}\mathcal{A}a^{ij}\ell_{i}\big{)}_{j}\Big{]}-\mathcal{A}_{t}-\sum_{i,j=1}^{n}\big{(}a^{ij}\Psi_{j}\big{)}_{i}-\ell_{t}^{2},\\ \\ c^{ij}=\sum_{i^{\prime},j^{\prime}}\Big{[}2a^{ij^{\prime}}\big{(}a^{i^{\prime}j}\ell_{i^{\prime}}\big{)}_{j^{\prime}}-\big{(}a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\big{)}_{j^{\prime}}\Big{]}-\frac{a^{ij}_{t}}{2}+\Psi a^{ij}.\end{array}\right. \tag{2.17}\] Now, we estimate \(\mathcal{A}\), \(\mathcal{B}\) and \(c^{ij}\). First, \[\begin{split}\mathcal{A}&=-\sum_{i,j=1}^{n}\big{(}a^{ij}\ell_{i}\ell_{j}-a^{ij}_{j}\ell_{i}-a^{ij}\ell_{ij}\big{)}-\Psi\\ &=-\lambda^{2}\mu^{2}\alpha^{2}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}+\lambda\mu\alpha\sum_{i,j=1}^{n}a^{ij}_{j}\psi_{i}-\lambda\mu^{2}\alpha\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}-\lambda\mu\alpha\sum_{i,j=1}^{n}a^{ij}\psi_{ij}\\ &=-\lambda^{2}\mu^{2}\alpha^{2}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{2}).\end{split} \tag{2.18}\] In order to estimate \(\mathcal{B}\), we first do some computations. For \(\Psi\), we have \[\Psi=2\sum_{i,j=1}^{n}a^{ij}\big{(}\lambda\mu^{2}\alpha\psi_{i}\psi_{j}+\lambda\mu\alpha\psi_{ij}\big{)}=2\lambda\mu^{2}\alpha\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}+\lambda\alpha O(\mu).
\tag{2.19}\] Next, we have \[\ell_{i^{\prime}j^{\prime}j}=\lambda\mu^{3}\alpha\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+\lambda\alpha O(\mu^{2}),\quad\ell_{i^{\prime}j^{\prime}ij}=\lambda\mu^{4}\alpha\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{3}).\] Therefore, we get \[\Psi_{j}=2\sum_{i^{\prime},j^{\prime}=1}^{n}\big{(}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}\big{)}_{j}=2\sum_{i^{\prime},j^{\prime}}\big{(}a^{i^{\prime}j^{\prime}}_{j}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big{)}=2\lambda\mu^{3}\alpha\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+\lambda\alpha O(\mu^{2}),\] \[\Psi_{ij}=2\sum_{i^{\prime},j^{\prime}=1}^{n}\left(a_{ij}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}ij}+2a_{j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}i}\right)=2\lambda\mu^{4}\alpha\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{i}\psi_{j}+\lambda\alpha O(\mu^{3}).\] Hence, we find \[-\sum_{i,j=1}^{n}\left(a^{ij}\Psi_{j}\right)_{i}=-\sum_{i,j=1}^{n}\left(a_{i}^{ij}\Psi_{j}+a^{ij}\Psi_{ji}\right)=-2\lambda\mu^{4}\alpha\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+\lambda\alpha O(\mu^{3}). \tag{2.20}\] Further, from (2.18) and (2.19), we have \[\mathcal{A}\Psi=-2\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+\lambda^{3}\alpha^{3}O(\mu^{3})+\lambda^{2}\alpha^{2}O(\mu^{4}). \tag{2.21}\] From the definition of \(\mathcal{A}\), we find \[\begin{split}\mathcal{A}_{j}&=-\sum_{i^{\prime},j^{\prime}=1}^{n}\big{(}a_{j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}}+2a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}j}-a_{j^{\prime}j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}-a_{j^{\prime}}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j}+a_{j}^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big{)}\\ &=-\sum_{i^{\prime},j^{\prime}=1}^{n}\big{(}2a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}\ell_{j^{\prime}j}+a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}j}\big{)}+\big{(}\lambda\alpha+\lambda^{2}\alpha^{2}\big{)}O(\mu^{2})\\ &=-2\lambda^{2}\mu^{3}\alpha^{2}\sum_{i^{\prime},j^{\prime}=1}^{n}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}\psi_{j}+O_{\mu}(\lambda)+\lambda^{2}\alpha^{2}O(\mu^{2}).\end{split}\] Hence, we see \[\sum_{i,j=1}^{n}\mathcal{A}_{j}a^{ij}\ell_{i}=-2\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+O_{\mu}(\lambda^{2})+\lambda^{3}\alpha^{3}O(\mu^{3}),\] which leads to \[\begin{split}\sum_{i,j=1}^{n}\big{(}\mathcal{A}a^{ij}\ell_{i}\big{)}_{j}&=\sum_{i,j=1}^{n}\mathcal{A}_{j}a^{ij}\ell_{i}+\mathcal{A}\sum_{i,j=1}^{n}\big{(}a_{j}^{ij}\ell_{i}+a^{ij}\ell_{ij}\big{)}\\ &=-3\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+O_{\mu}(\lambda^{2})+\lambda^{3}\alpha^{3}O(\mu^{3}).\end{split} \tag{2.22}\] Further, we have \[\begin{split}\mathcal{A}_{t}&=-\sum_{i,j=1}^{n}\left(a^{ij}\ell_{i}\ell_{j}-a_{j}^{ij}\ell_{i}+a^{ij}\ell_{ij}\right)_{t}\\ &=-\sum_{i,j=1}^{n}\left[a^{ij}(\ell_{i}\ell_{j})_{t}-a_{j}^{ij}\ell_{it}+a^{ij}\ell_{ijt}\right]+\lambda^{2}\alpha^{2}O(\mu^{2})+\lambda\alpha O(\mu^{2})+\lambda\alpha TO(\mu^{3})\\ &=\lambda^{2}\alpha^{2}TO(\mu^{3})+\lambda^{2}\alpha^{2}O(\mu^{2})+\lambda\alpha O(\mu^{2})+\lambda\alpha TO(\mu^{3}).\end{split} \tag{2.23}\] Finally, it holds that
\[\ell_{t}^{2}=\lambda^{2}\mu^{2}\alpha^{2}\phi_{t}^{2}=O_{\mu}(\lambda^{2}). \tag{2.24}\] From the definition of \(\mathcal{B}\) (see (2.17)), combining (2.20)-(2.24), we have \[\begin{array}{rl}{\cal B}&=-4\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+6\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}-2\lambda\mu^{4}\alpha\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}\\ &\quad+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2})\\ &=2\lambda^{3}\mu^{4}\alpha^{3}\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}^{2}+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2}).\end{array}\] Hence, we know \[{\cal B}\geq 2s_{0}^{2}\lambda^{3}\mu^{4}\alpha^{3}|\nabla\psi|^{4}+\lambda^{3}\alpha^{3}O(\mu^{3})+O_{\mu}(\lambda^{2}). \tag{2.25}\] Now we estimate \(c^{ij}\). By direct computation, we have \[\begin{array}{rl}\sum_{i,j=1}^{n}c^{ij}v_{i}v_{j}&=\sum_{i,j=1}^{n}\Big{\{}\sum_{i^{\prime},j^{\prime}=1}^{n}\Big{[}2a^{ij^{\prime}}a^{i^{\prime}j}\ell_{i^{\prime}j^{\prime}}+a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}j^{\prime}}+2a^{ij^{\prime}}a^{i^{\prime}j}_{j^{\prime}}\ell_{i^{\prime}}-(a^{ij}a^{i^{\prime}j^{\prime}})_{j^{\prime}}\ell_{i^{\prime}}\Big{]}-\frac{a_{t}^{ij}}{2}\Big{\}}v_{i}v_{j}\\ &=\sum_{i,j=1}^{n}\Big{\{}\sum_{i^{\prime},j^{\prime}=1}^{n}\Big{[}2\lambda\mu^{2}\alpha a^{ij^{\prime}}a^{i^{\prime}j}\psi_{i^{\prime}}\psi_{j^{\prime}}+\lambda\mu^{2}\alpha a^{ij}a^{i^{\prime}j^{\prime}}\psi_{i^{\prime}}\psi_{j^{\prime}}+\lambda\alpha O(\mu)\Big{]}+O(1)\Big{\}}v_{i}v_{j}\\ &=2\lambda\mu^{2}\alpha\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}v_{j}\Big{)}^{2}+\lambda\mu^{2}\alpha\Big{(}\sum_{i,j=1}^{n}a^{ij}\psi_{i}\psi_{j}\Big{)}\Big{(}\sum_{i,j=1}^{n}a^{ij}v_{i}v_{j}\Big{)}\\ &\quad+\lambda\alpha|\nabla v|^{2}O(\mu)+O(1)|\nabla v|^{2}\\ &\geq[s_{0}^{2}\lambda\mu^{2}\alpha|\nabla\psi|^{2}+\lambda\alpha O(\mu)+O(1)]|\nabla v|^{2}.\end{array} \tag{2.26}\] Since \(z\) is a solution to equation (2.15), we find \[\begin{array}{rl}&\ 2\mathbb{E}\int_{Q_{1}}\theta\Big{[}-\sum_{i,j=1}^{n}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}\Big{[}dz-\sum_{i,j=1}^{n}\big{(}a^{ij}z_{i}\big{)}_{j}dt\Big{]}dx\\ &\leq\mathbb{E}\int_{Q_{1}}\Big{[}-\sum_{i,j=1}^{n}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}^{2}dxdt+\mathbb{E}\int_{Q_{1}}\theta^{2}\Big{(}\big{\langle}a_{1},\nabla z\big{\rangle}+a_{2}z+f\Big{)}^{2}dxdt\\ &\leq\mathbb{E}\int_{Q_{1}}\Big{[}-\sum_{i,j=1}^{n}\big{(}a^{ij}v_{i}\big{)}_{j}+{\cal A}v\Big{]}^{2}dxdt+3|a_{1}|_{L^{\infty}(0,T;L^{\infty}(G;\mathbb{R}^{n}))}^{2}\mathbb{E}\int_{Q_{1}}\theta^{2}|\nabla z|^{2}dxdt\\ &\quad+3|a_{2}|_{L^{\infty}(0,T;L^{\infty}(G))}^{2}\mathbb{E}\int_{Q_{1}}\theta^{2}z^{2}dxdt+3\mathbb{E}\int_{Q_{1}}\theta^{2}f^{2}dxdt.\end{array} \tag{2.27}\] Since \(z=0\) on \(\partial Q_{1}\), we have \[\mathbb{E}\int_{Q_{1}}\sum_{i,j=1}^{n}\big{(}a^{ij}v_{i}dv\big{)}_{j}dx=0. \tag{2.28}\] By means of \(z=\frac{\partial z}{\partial\nu}=0\) on \(\Sigma_{2}\), we find \[\begin{array}{rl}&\Big{|}\mathbb{E}\int_{Q_{1}}\sum_{i,j=1}^{n}\Big{[}\sum_{i^{\prime},j^{\prime}}\Big{(}2a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i^{\prime}}v_{i}v_{j^{\prime}}-a^{ij}a^{i^{\prime}j^{\prime}}\ell_{i}v_{i^{\prime}}v_{j^{\prime}}\Big{)}+\Psi a^{ij}v_{i}v-a^{ij}\Big{(}{\cal A}\ell_{i}+\frac{\Psi_{i}}{2}\Big{)}v^{2}\Big{]}_{j}dxdt\Big{|}\\ &\leq C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\theta^{2}\Big{(}\alpha|g_{2}|^{2}+\lambda^{2}\mu^{2}\alpha^{3}|g_{1}|^{2}\Big{)}d\Gamma dt.
\end{array} \tag{2.29}\] From (2.26) and (2.25), we know that there is a \(\mu_{0}>0\) such that for every \(\mu\geq\mu_{0}\), one can find a \(\lambda_{0}(\mu)>0\) so that for all \(\lambda\geq\lambda_{0}(\mu)\), it holds that \[\mathbb{E}\sum_{i,j=1}^{n}\int_{Q_{1}}c^{ij}v_{i}v_{j}dxdt\geq C\lambda\mu^{2}\mathbb{E}\int_{Q_{1}}\alpha|\nabla v|^{2}dxdt, \tag{2.30}\] and \[\mathbb{E}\int_{Q_{1}}\mathcal{B}v^{2}dxdt\geq C\lambda^{3}\mu^{4}\mathbb{E}\int_{Q_{1}}\alpha^{3}v^{2}dxdt. \tag{2.31}\] Utilizing the fact that \(z\) solves (2.15) again, we get \[\begin{split}&\mathbb{E}\int_{Q_{1}}\theta^{2}\sum_{i,j=1}^{n}a^{ij}(dz_{i}+\ell_{i}dz)(dz_{j}+\ell_{j}dz)dx\\ &=\mathbb{E}\int_{Q_{1}}\theta^{2}\sum_{i,j=1}^{n}a^{ij}\Big{[}(a_{3}z)_{i}(a_{3}z)_{j}+2\lambda\mu\alpha\psi_{i}(a_{3}z)_{j}a_{3}z+\lambda^{2}\mu^{2}\alpha^{2}\psi_{i}\psi_{j}a_{3}^{2}z^{2}\Big{]}dxdt\\ &\leq C|a_{3}|^{2}_{L^{\infty}_{F}(0,T;W^{1,\infty}(G))}\Big{(}\mathbb{E}\int_{Q_{1}}\theta^{2}\big{(}z^{2}+|\nabla z|^{2}\big{)}dxdt+\lambda^{2}\mu^{2}\mathbb{E}\int_{Q_{1}}\theta^{2}\alpha^{2}|z|^{2}dxdt\Big{)}.\end{split} \tag{2.32}\] By \(z_{i}=\theta^{-1}(v_{i}-\ell_{i}v)=\theta^{-1}(v_{i}-\lambda\mu\alpha\psi_{i}v)\) and \(v_{i}=\theta(z_{i}+\ell_{i}z)=\theta(z_{i}+\lambda\mu\alpha\psi_{i}z)\), we get \[\frac{1}{C}\theta^{2}\big{(}|\nabla z|^{2}+\lambda^{2}\mu^{2}\alpha^{2}z^{2}\big{)}\leq|\nabla v|^{2}+\lambda^{2}\mu^{2}\alpha^{2}v^{2}\leq C\theta^{2}\big{(}|\nabla z|^{2}+\lambda^{2}\mu^{2}\alpha^{2}z^{2}\big{)}. \tag{2.33}\] Combining (2.16) and (2.27)-(2.33), we know that there is a \(\mu_{1}\geq\mu_{0}\) such that for all \(\mu\geq\mu_{1}\), we can find a \(\lambda_{1}(\mu_{1})\geq\lambda_{0}(\mu_{0})\) so that for every \(\lambda\geq\lambda_{1}(\mu_{1})\), it holds that \[\begin{split}&\lambda^{3}\mu^{4}\mathbb{E}\int_{Q_{1}}\alpha^{3}\theta^{2}z^{2}dxdt+\lambda\mu^{2}\mathbb{E}\int_{Q_{1}}\alpha\theta^{2}|\nabla z|^{2}dxdt\\ &\leq C\mathbb{E}\int_{Q_{1}}\theta^{2}f^{2}dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\theta^{2}\big{(}\alpha|g_{2}|^{2}+\lambda^{2}\mu^{2}\alpha^{3}|g_{1}|^{2}\big{)}d\Gamma dt.\end{split} \tag{2.34}\] Recalling that \(f=-\sum_{i,j=1}^{n}(a_{j}^{ij}\eta_{i}y+a^{ij}\eta_{ij}y+2a^{ij}\eta_{i}y_{j})-\big{\langle}a_{1},\nabla\eta\big{\rangle}y+\eta_{t}y\), from (2.34), we find that \[\begin{split}&\lambda^{3}\mu^{4}\mathbb{E}\int_{Q_{1}}\alpha^{3}\theta^{2}z^{2}dxdt+\lambda\mu^{2}\mathbb{E}\int_{Q_{1}}\alpha\theta^{2}|\nabla z|^{2}dxdt\\ &\leq C\mathbb{E}\int_{Q_{2}\backslash Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\theta^{2}\big{(}\alpha|g_{2}|^{2}+\lambda^{2}\mu^{2}\alpha^{3}|g_{1}|^{2}\big{)}d\Gamma dt,\end{split} \tag{2.35}\] which, together with \(z=y\) in \(Q_{3}\), implies that \[\begin{split}&\lambda^{3}\mu^{4}\mathbb{E}\int_{Q_{3}}\alpha^{3}\theta^{2}y^{2}dxdt+\lambda\mu^{2}\mathbb{E}\int_{Q_{3}}\alpha\theta^{2}|\nabla y|^{2}dxdt\\ &\leq C\mathbb{E}\int_{Q_{2}\backslash Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+C\lambda\mu\mathbb{E}\int_{0}^{T}\int_{\Gamma}\theta^{2}\big{(}\alpha|g_{2}|^{2}+\lambda^{2}\mu^{2}\alpha^{3}|g_{1}|^{2}\big{)}d\Gamma dt.\end{split} \tag{2.36}\] From the choice of \(Q_{3}\) and (2.8), we know that \[\begin{array}{l}\lambda^{3}\mu^{4}\mathbb{E}\int_{Q_{3}}\alpha^{3}\theta^{2}y^{2}dxdt+\lambda\mu^{2}\mathbb{E}\int_{Q_{3}}\alpha\theta^{2}|\nabla y|^{2}dxdt\\ \geq e^{2\lambda\beta_{4}}\mathbb{E}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+
\frac{\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}\lambda^{3}y^{2}+\lambda|\nabla y|^{2}\big{)}dxdt.\end{array} \tag{2.37}\] From the definition of \(Q_{2}\) and \(Q_{3}\), we find \[\mathbb{E}\int_{Q_{2}\setminus Q_{3}}\theta^{2}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt\leq e^{2\lambda\beta_{3}}\mathbb{E}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt. \tag{2.38}\] Now, we fix \(\mu=\mu_{1}\) in \(\beta_{3}\), \(\beta_{4}\) and \(\theta\). Let \(\gamma=\max_{(t,x)\in\Gamma\times(0,T)}\alpha(t,x)\). From (2.36)-(2.38), we obtain that \[\begin{array}{l}e^{2\lambda\beta_{4}}\mathbb{E}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+\frac{\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}\lambda^{3}y^{2}+\lambda|\nabla y|^{2}\big{)}dxdt\\ \leq Ce^{2\lambda\beta_{3}}\mathbb{E}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\mathbb{E}\int_{0}^{T}\int_{\Gamma}\theta^{2}\big{(}|g_{1}|^{2}+|g_{2}|^{2}\big{)}d\Gamma dt,\end{array}\] which implies that for all \(\lambda\geq\lambda_{1}\), it holds that \[\begin{array}{l}\mathbb{E}\int_{t_{0}-\frac{\rho}{\sqrt{N}}}^{t_{0}+\frac{\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\\ \leq Ce^{-2\lambda(\beta_{4}-\beta_{3})}\mathbb{E}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\mathbb{E}\int_{0}^{T}\int_{\Gamma}\big{(}|g_{1}|^{2}+|g_{2}|^{2}\big{)}d\Gamma dt.\end{array} \tag{2.39}\] By taking \(t_{0}=\sqrt{2}\rho+\frac{2i\rho}{\sqrt{N}}\), \(i=0,1,\cdots,m\), with \(m\) such that \[\sqrt{2}\rho+\frac{2m\rho}{\sqrt{N}}\leq T-\sqrt{2}\rho\leq\sqrt{2}\rho+\frac{(2m+1)\rho}{\sqrt{N}},\] and \(\varepsilon=\sqrt{2}\rho\), we get \[\begin{array}{l}\mathbb{E}\int_{\varepsilon}^{T-\varepsilon}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\\ =\mathbb{E}\int_{\sqrt{2}\rho}^{T-\sqrt{2}\rho}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\\ \leq\sum_{i=0}^{m}\mathbb{E}\int_{\sqrt{2}\rho+\frac{(2i-1)\rho}{\sqrt{N}}}^{\sqrt{2}\rho+\frac{(2i+1)\rho}{\sqrt{N}}}\int_{G^{\prime}}\big{(}y^{2}+|\nabla y|^{2}\big{)}dxdt\\ \leq Ce^{-2\lambda(\beta_{4}-\beta_{3})}\mathbb{E}\int_{Q_{1}}\big{(}|y|^{2}+|\nabla y|^{2}\big{)}dxdt+Ce^{2\lambda\gamma}\mathbb{E}\int_{0}^{T}\int_{\Gamma}\big{(}|g_{1}|^{2}+|g_{2}|^{2}\big{)}d\Gamma dt.\end{array} \tag{2.40}\] We now balance the two terms on the right-hand side of (2.40) by choosing \(\lambda=\lambda(\delta)\) such that \(e^{\lambda\gamma}\delta=Ce^{-\lambda(\beta_{4}-\beta_{3})}\). This implies that \[\lambda=\frac{-\ln\delta}{\gamma+\beta_{4}-\beta_{3}}.\] Hence, for \(\delta\in(0,\delta_{0})\), where the number \(\delta_{0}\) is so small that \(\frac{-\ln\delta}{\gamma+\beta_{4}-\beta_{3}}\geq\lambda_{1}\), we have (2.2) with \(\beta=\frac{\beta_{4}-\beta_{3}}{\gamma+\beta_{4}-\beta_{3}}\).
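Continuing the hypothetical one-dimensional instance sketched earlier, the final balancing step is elementary arithmetic: given a noise level \(\delta\), the choice \(\lambda(\delta)=-\ln\delta/(\gamma+\beta_{4}-\beta_{3})\) equalizes the two right-hand terms of (2.40) and yields the Hölder exponent \(\beta\). In the sketch below, \(\gamma\) is replaced by the crude upper bound \(e^{\mu_{1}|\psi|_{L^{\infty}}}\) for \(\max\alpha\) on \((0,T)\times\Gamma\), which is our own simplification.

```python
import numpy as np

# continuing the hypothetical instance: mu1 = 2, psi_max = 0.5625, N = 16, rho = 0.25/sqrt(2)
mu1, psi_max, N = 2.0, 0.5625, 16
rho = 0.25 / np.sqrt(2)
c = 0.75 * psi_max / rho ** 2
beta_k = lambda k: np.exp(mu1 * (k / N * psi_max - c * rho ** 2 / N))
beta3, beta4 = beta_k(3), beta_k(4)
gamma = np.exp(mu1 * psi_max)          # crude upper bound for max alpha on (0,T) x Gamma
holder = (beta4 - beta3) / (gamma + beta4 - beta3)   # the Hoelder exponent beta
for delta in (1e-2, 1e-4, 1e-8):
    lam = -np.log(delta) / (gamma + beta4 - beta3)
    print(f"delta={delta:.0e}: lambda(delta)={lam:7.3f}, rate delta^beta={delta**holder:.3e}")
```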
**Remark 2.1**: _The inequality (2.34) is called the Carleman estimate for (1.1)._

## 3 Regularization Method

Let \[L_{\mathbb{F}}^{2}(0,T;H_{\Gamma}^{2}(G))\stackrel{{\triangle}}{{=}}\left\{v\in L_{\mathbb{F}}^{2}(0,T;H^{2}(G)):v\mid_{(0,T)\times\Gamma}=g_{1},\;\partial_{\nu}v\mid_{(0,T)\times\Gamma}=g_{2}\right\}.\] For \(y\in L_{\mathbb{F}}^{2}(\Omega;C([0,T];L^{2}(G)))\cap L_{\mathbb{F}}^{2}(0,T;H_{\Gamma}^{2}(G))\), set \[(Py)(t,x)=y(t,x)-y(0,x)-\int_{0}^{t}\Big{\{}-\sum_{i,j=1}^{n}\big{(}a^{ij}(s,x)y_{i}(s,x)\big{)}_{j}+\big{[}\big{\langle}a_{1}(s,x),\nabla y(s,x)\big{\rangle}+a_{2}(s,x)y(s,x)\big{]}\Big{\}}ds-\int_{0}^{t}a_{3}(s,x)y(s,x)dW(s),\qquad\mathbb{P}\text{-a.s.}\] Given a function \(F\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma}^{2}(G))\), the Tikhonov functional is constructed as \[J_{\gamma}\left(u\right)=|Pu|^{2}_{L_{\mathbb{F}}^{2}(0,T;L^{2}(G))}+\gamma|u-F|^{2}_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))},\quad u\in L_{\mathbb{F}}^{2}(0,T;H^{2}(G)), \tag{3.1}\] \[\text{where }u|_{(0,T)\times\Gamma}=g_{1},\;\partial_{\nu}u|_{(0,T)\times\Gamma}=g_{2}. \tag{3.2}\] We have the following result. **Theorem 3.1**: _For every \(\gamma\in(0,1)\) there exists a unique minimizer \(u_{\gamma}\in L_{\mathbb{F}}^{2}(0,T;H^{2}(G))\) of the functional \(J_{\gamma}\left(u\right)\) in (3.1)-(3.2), and with a constant \(C>0\) the following estimate holds_ \[|u_{\gamma}|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}\leq\frac{C}{\sqrt{\gamma}}|F|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}. \tag{3.3}\] **Proof**. Let \[L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G))\stackrel{{\triangle}}{{=}}\left\{v\in L_{\mathbb{F}}^{2}(0,T;H^{2}(G)):v\mid_{(0,T)\times\Gamma}=\partial_{\nu}v\mid_{(0,T)\times\Gamma}=0\right\}.\] Let \(v=y-F\). Then \(v\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G))\). By (3.1) and (3.2), we should minimize the following functional \[\overline{J}_{\gamma}\left(v\right)=|Pv+PF|^{2}_{L_{\mathbb{F}}^{2}(0,T;L^{2}(G))}+\gamma|v|^{2}_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))},\quad v\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G)). \tag{3.4}\] If \(v_{\gamma}\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G))\) is a minimizer of the functional (3.4), then \(u_{\gamma}=v_{\gamma}+F\) is a minimizer of the functional (3.1). On the other hand, if \(u_{\gamma}\) is a minimizer of the functional (3.1), then \(v_{\gamma}=u_{\gamma}-F\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G))\) is a minimizer of the functional (3.4). By the variational principle, any minimizer \(v_{\gamma}\) of the functional (3.4) satisfies \[\begin{array}{l}\big{\langle}Pv_{\gamma},Ph\big{\rangle}_{L_{\mathbb{F}}^{2}(0,T;L^{2}(G))}+\gamma\big{\langle}v_{\gamma},h\big{\rangle}_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}\\ =\big{\langle}Ph,-PF\big{\rangle}_{L_{\mathbb{F}}^{2}(0,T;L^{2}(G))},\quad\forall h\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G)).\end{array} \tag{3.5}\] Denote \[\big{\langle}v,h\big{\rangle}_{\gamma}=\big{\langle}Pv,Ph\big{\rangle}_{L_{\mathbb{F}}^{2}(0,T;L^{2}(G))}+\gamma\big{\langle}v,h\big{\rangle}_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))},\qquad\forall v,h\in L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G)). \tag{3.6}\] Hence, \(\big{\langle}v,h\big{\rangle}_{\gamma}\) defines a new scalar product on the Hilbert space \(L_{\mathbb{F}}^{2}(0,T;H_{\Gamma,0}^{2}(G))\), and the corresponding norm \(|v|_{\gamma}\) satisfies \[\sqrt{\gamma}|v|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}\leq|v|_{\gamma}\leq C|v|_{L_{\mathbb{F}}^{2}(0,T;H^{2}(G))}.
\tag{3.7}\] Thus, the scalar product (3.6) generates the new norm \(|v|_{\gamma}\), which is equivalent to the norm \(|v|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))}\). Hence, (3.5) can be rewritten as \[\left\langle v_{\gamma},h\right\rangle_{\gamma}=\left\langle Ph,-PF\right\rangle_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))},\quad\forall h\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G)). \tag{3.8}\] It follows from (3.7) that \[\left|\left\langle Ph,-PF\right\rangle_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}\right|\leq C|F|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))}|h|_{\gamma}. \tag{3.9}\] Hence, the right-hand side of (3.8) is a bounded linear functional on \(L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G))\). By the Riesz representation theorem, there exists an element \(w_{\gamma}\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G))\) such that \[\left\langle Ph,-PF\right\rangle_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G))}=\left\langle w_{\gamma},h\right\rangle_{\gamma},\qquad\forall h\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G)).\] This and (3.8) imply that \[\left\langle v_{\gamma},h\right\rangle_{\gamma}=\left\langle w_{\gamma},h\right\rangle_{\gamma},\qquad\forall h\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G)).\] Hence, the minimizer \(v_{\gamma}=w_{\gamma}.\) Also, by the Riesz theorem and (3.9), \[|v_{\gamma}|_{\gamma}\leq C|F|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G))}.\] Hence, the minimizer \(v_{\gamma}\) is unique, and the left inequality in (3.7) implies (3.3). **Remark 3.1**: _In the proof of Theorem 3.1, we utilized solely the variational principle and the Riesz theorem, without invoking the Carleman estimate. However, we make use of this estimate in Theorem 3.2, where we establish the rate of convergence of the minimizers \(u_{\gamma}\) to the exact solution, provided that certain conditions are met._ Assume that there exists an exact solution \(y^{*}\) of the problem (1.1) with the exact data \[y^{*}|_{(0,T)\times\Gamma}=g_{1}^{*}\in L^{2}_{\mathbb{F}}(0,T;H^{1}(\Gamma)),\quad\partial_{\nu}y^{*}|_{(0,T)\times\Gamma}=g_{2}^{*}\in L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma)).\] By Theorem 2.1, the exact solution \(y^{*}\) is unique. Because of the existence of \(y^{*}\), there also exists an exact function \(F^{*}\in L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))\) such that \[F^{*}|_{(0,T)\times\Gamma}=g_{1}^{*},\qquad\partial_{\nu}F^{*}|_{(0,T)\times\Gamma}=g_{2}^{*}.\] Here is an example of such a function \(F^{*}\). Let the function \(\rho\in C^{2}\left(\overline{Q}\right)\) be equal to \(1\) in a small neighborhood of \((0,T)\times\Gamma\), namely, \[\rho\left(t,x\right)=\begin{cases}1,&\left(t,x\right)\in N_{\sigma}\left((0,T)\times\Gamma\right)=\left\{\left(t,x\right)\in Q:\,dist\left(\left(t,x\right),\left(0,T\right)\times\Gamma\right)<\sigma\right\},\\ 0,&\left(t,x\right)\in Q\setminus N_{2\sigma}\left(\left(0,T\right)\times\Gamma\right),\end{cases}\] where \(\sigma>0\) is a sufficiently small number. Then \(F^{*}\) can be constructed as \(F^{*}\left(t,x\right)=\rho\left(t,x\right)y^{*}\left(t,x\right)\). Let \(\delta>0\) be a sufficiently small number characterizing the error in the data. We assume that \[|g_{1}^{*}-g_{1}|_{L^{2}_{\mathbb{F}}(0,T;H^{1}(\Gamma))}\leq\delta,\quad|g_{2}^{*}-g_{2}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(\Gamma))}\leq\delta,\quad|F^{*}-F|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\leq\delta. \tag{3.10}\] **Theorem 3.2**: _(Convergence rate). Assume (3.10) and let the regularization parameter \(\gamma=\delta^{2\alpha}\), where \(\alpha\in(0,1]\) is a constant.
Let \(\beta\) be the same as in Theorem 2.1. Then there exists a sufficiently small number \(\delta_{0}\in(0,1)\) and a constant \(C>0\) such that for all \(\delta\in(0,\delta_{0}^{1/\alpha})\),_ \[|y_{\gamma}-y^{*}|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\big{(}1+|y^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\big{)}\delta^{\alpha\beta}, \tag{3.11}\] _where \(y_{\gamma}=y_{\gamma(\delta)}\) is the minimizer of the functional (3.1)._ **Proof**. Let \(v^{*}=y^{*}-F^{*}\). Then \(v^{*}\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G^{\prime}))\) and \(Pv^{*}=-PF^{*}\). Hence, \[\begin{split}&\big{\langle}Pv^{*},Ph\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}+\gamma\big{\langle}v^{*},h\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\\ &=\big{\langle}Ph,-PF^{*}\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}+\gamma\big{\langle}v^{*},h\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))},\quad\forall h\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G^{\prime})).\end{split} \tag{3.12}\] Subtracting identity (3.5) from identity (3.12) and denoting \(\widetilde{v}_{\gamma}=v^{*}-v_{\gamma}\) and \(\widetilde{F}=F^{*}-F\), we obtain \[\begin{split}&\big{\langle}P\widetilde{v}_{\gamma},Ph\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}+\gamma\big{\langle}\widetilde{v}_{\gamma},h\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\\ &=\big{\langle}Ph,-P\widetilde{F}\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}+\gamma\big{\langle}v^{*},h\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))},\quad\forall h\in L^{2}_{\mathbb{F}}(0,T;H^{2}_{\Gamma,0}(G^{\prime})).\end{split}\] Setting here \(h\stackrel{{\triangle}}{{=}}\widetilde{v}_{\gamma}\), we obtain \[\begin{split}&\|P\widetilde{v}_{\gamma}\|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}+\gamma\,\|\widetilde{v}_{\gamma}\|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}\\ &=\big{\langle}P\widetilde{v}_{\gamma},-P\widetilde{F}\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}+\gamma\big{\langle}v^{*},\widetilde{v}_{\gamma}\big{\rangle}_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}.\end{split} \tag{3.13}\] Applying the Cauchy-Schwarz inequality to (3.13), we obtain \[\begin{split}&|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}+\gamma|\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}\\ &\leq\frac{1}{2}|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}+\frac{1}{2}\big{|}P\widetilde{F}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}+\frac{\gamma}{2}|v^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}+\frac{\gamma}{2}|\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}.\end{split} \tag{3.14}\] Hence, by (3.10), \[|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}+\gamma|\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}\leq C\delta^{2}+\gamma|v^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}. \tag{3.15}\] Since \(\gamma=\delta^{2\alpha}\) with \(\alpha\in(0,1]\) and \(\delta\in(0,1)\), we have \(\delta^{2}\leq\gamma\). Hence, (3.15) implies that \[|\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\leq C\big{(}1+|v^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\big{)}, \tag{3.16}\] \[|P\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(0,T;L^{2}(G^{\prime}))}^{2}\leq C\big{(}1+|v^{*}|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}^{2}\big{)}\delta^{2\alpha}.
\tag{3.17}\] Let \(w_{\gamma}=\widetilde{v}_{\gamma}\big{(}1+\|v^{*}\|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\,\big{)}^{-1}\). Then (3.16), (3.17) and Theorem 2.1 imply that \[\|w_{\gamma}\|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\delta^{\alpha\beta},\quad\forall\delta\in(0,\delta_{0})\,.\] Therefore, \[\|\widetilde{v}_{\gamma}\|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq C\left(1+\|v^{*}\|_{L^{2}_{\mathbb{F}}(0,T;H^{2}(G^{\prime}))}\right)\delta^{\alpha\beta},\quad\forall\delta\in(0,\delta_{0})\,. \tag{3.18}\] Next, since \(\widetilde{v}_{\gamma}=(y^{*}-y_{\gamma})-(F^{*}-F)\) and since, by (3.10), \[\|F^{*}-F\|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\leq\delta,\] we have \[\begin{split}&|\widetilde{v}_{\gamma}|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\geq|y_{\gamma}-y^{*}|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}-|F^{*}-F|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}\\ &\geq|y_{\gamma}-y^{*}|_{L^{2}_{\mathbb{F}}(\varepsilon,T-\varepsilon;H^{1}(G^{\prime}))}-\delta.\end{split} \tag{3.19}\] Since \(\beta,\delta\in(0,1)\) and \(\alpha\in(0,1]\), we have \(\delta^{\alpha\beta}>\delta\). Thus, using (3.18) and (3.19), we obtain (3.11).

## 4 Numerical Approximations

In this section, we aim to numerically solve the ill-posed Cauchy problem of the stochastic parabolic differential equation given by (1.1). For the sake of simplicity, we set \(a_{1}=0\), \(a_{2}=0\), \(a_{3}=1\) and \(T=1\) in the system for all numerical tests to follow. Since an explicit expression for the exact solution is unavailable, we resort to numerically solving the initial-boundary value problem \[\left\{\begin{array}{l}dy-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}dt=[\langle a_{1},\nabla y\rangle+a_{2}y]dt+a_{3}ydW(t)\quad\mbox{in}\quad(0,1)\times G,\\ y(0,x)=f(x)\quad\mbox{in}\quad G,\\ y(t,x)=g_{1}(t,x)\quad\mbox{on}\quad(0,1)\times\partial G,\end{array}\right. \tag{4.1}\] by employing the finite difference method, with time discretized via the Euler-Maruyama method [10], to obtain the Cauchy data on \((0,1)\times\Gamma\). We then construct numerical approximations for the Cauchy problem (1.1) via Tikhonov regularization, with the aid of kernel-based learning theory, for which the convergence rate has been established in Section 3. We verify the proposed method using the following three examples. **Example 4.1**: _Let \(G=(0,1)\) and \(\Gamma=\{1\}\). Suppose that_ * \(f(x)=x(1-x),\quad g_{1}(t,x)=0.\) * \(f(x)=\begin{cases}4x,&x\in[0,0.25)\\ -\frac{4}{3}x+\frac{4}{3},&x\in[0.25,1]\end{cases},\quad g_{1}(t,x)=0.\) The simplest time-discrete approximation is the stochastic version of the Euler approximation, also known as the Euler-Maruyama method [10]. We briefly describe the process of solving the initial-boundary value problem (4.1) in Example 4.1 by the Euler-Maruyama method in the following. Let \(y_{j,k}=y(x_{j},t_{k})\) with \(x_{j}=(j-1)h\), \(t_{k}=(k-1)\tau\), \(j=1,\cdots,m+1\), \(k=1,\cdots,n+1\), where \(h=1/m\) and \(\tau=1/n\) denote the spatial and temporal step sizes, respectively. It was shown that not all heuristic time-discrete approximations of stochastic differential equations converge to the solution process in a useful sense as the time step size \(\tau\) tends to zero [10].
Since a numerical approximation of the observations obtained by a direct application of the backward finite difference scheme would not be adapted to the filtration \(\mathbb{F}\), we can only solve the initial-boundary value problem (4.1) by the explicit finite difference scheme, i.e., \[\frac{y_{j,k+1}-y_{j,k}}{\tau}=\frac{y_{j-1,k}-2y_{j,k}+y_{j+1,k}}{h^{2}}+\frac{W(t_{k+1})-W(t_{k})}{\tau}y_{j,k},\ 2\leq j\leq m,\ 1\leq k\leq n,\] where the initial and boundary values are given by \[y_{j,1}=f(x_{j}),\ 1\leq j\leq m+1,\quad y_{1,k}=g_{1}(0,t_{k}),\ y_{m+1,k}=g_{1}(1,t_{k}),\ 2\leq k\leq n+1.\] To ensure that the numerical scheme is stable, we choose \(m=15\) and \(n=450\) in the computation. By solving the above algebraic system, we obtain the values of \(y\) at the mesh grid points of the initial-boundary value problem (4.1). In the process of solving the Cauchy problem (1.1) numerically, we use \(y_{m,k}\) and \(y_{m+1,k}\) in place of the Cauchy data at \(x=1\). In the following, we numerically solve the optimization problem given in (3.1)-(3.2) by kernel-based learning theory. Suppose that \(\varphi(x,t)\) is the fundamental solution of the parabolic equation \[y_{t}-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}-\langle a_{1},\nabla y\rangle-a_{2}y=0\quad\text{in}\quad(0,T)\times\mathbb{R}^{n};\] then the mild solution \(y(x,t)\) of \[\left\{\begin{array}{l}dy-\sum_{i,j=1}^{n}(a^{ij}y_{i})_{j}dt=[\langle a_{1},\nabla y\rangle+a_{2}y]dt+a_{3}ydW(t)\quad\text{in}\quad(0,T)\times\mathbb{R}^{n},\\ y(x,0)=f(x),\quad x\in\mathbb{R}^{n},\end{array}\right. \tag{4.2}\] can be written as \[y(x,t)=\int_{0}^{t}\int_{\mathbb{R}^{n}}\varphi(x,t,z,s)a_{3}(z)y(z,s)\,dzdW(s)+\int_{\mathbb{R}^{n}}\varphi(x,t,z,0)f(z)\,dz. \tag{4.3}\] Since the fundamental solution \(\varphi\) is deterministic, the mild solution \(y(x,t)\) given by (4.3) is well defined. Assume that \[\tilde{y}(x,t)=\sum_{l=1}^{N}\lambda_{l}\Phi_{l}(x,t). \tag{4.4}\] Here \(\Phi_{l}\) are the basis functions, \(N\) is the number of source points, and \(\lambda_{l}\) are unknown coefficients to be determined. From (4.3), we can let \(\Phi_{l}(x,t)=\varphi(x-\xi_{l},t-\tau_{l}),l=1,\cdots,N\), where \(\varphi\) is the fundamental solution of the deterministic parabolic equation and \((\xi_{l},\tau_{l})\) are source points. A suitable choice of the source points ensures that \(\tilde{y}\) is analytic in the domain \((0,T)\times G\) and makes the algorithm effective and accurate. However, the optimal rule for the location of source points is still open. In this work, we take uniformly distributed points on \(\{t=DT\}\times[-R,R+1]\) below \(t=0\), where \(DT<0\) and \(R\) are determined a posteriori, as the source points. This choice is given in [5] and related works, and performs better in comparison with other schemes. Since \(\tilde{y}\) given in (4.4) already satisfies the equation in system (4.1), the coefficients \(\lambda_{l},l=1,\cdots,N\), should be determined by the Cauchy conditions.
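As a concrete companion to the forward solver above, the following is a minimal sketch of the explicit Euler-Maruyama finite-difference scheme for (4.1) with \(a^{ij}=\delta^{ij}\), \(a_{1}=a_{2}=0\), \(a_{3}=1\), homogeneous Dirichlet data, and the initial value of Example 4.1(a). The grid sizes \(m=15\), \(n=450\) reproduce the stable ratio \(\tau/h^{2}=1/2\); the function names and the random seed are our own choices.

```python
import numpy as np

def solve_forward(f, m=15, n=450, seed=0):
    """Explicit Euler-Maruyama finite differences for dy = y_xx dt + y dW
    on (0,1) x (0,1) with y(0,x) = f(x) and y(t,0) = y(t,1) = 0."""
    rng = np.random.default_rng(seed)
    h, tau = 1.0 / m, 1.0 / n               # tau / h^2 = 0.5 keeps the scheme stable
    x = np.linspace(0.0, 1.0, m + 1)
    y = np.zeros((m + 1, n + 1))            # y[j, k] approximates y(x_j, t_k)
    y[:, 0] = f(x)
    dW = rng.normal(0.0, np.sqrt(tau), n)   # Brownian increments W(t_{k+1}) - W(t_k)
    for k in range(n):
        # discrete Laplacian at the interior nodes (2 <= j <= m in 1-based indexing)
        lap = (y[:-2, k] - 2.0 * y[1:-1, k] + y[2:, k]) / h ** 2
        y[1:-1, k + 1] = y[1:-1, k] + tau * lap + dW[k] * y[1:-1, k]
        # rows 0 and m stay at the boundary value g_1 = 0
    return y

# Example 4.1(a): f(x) = x(1-x); the two rows next to x = 1 play the role of the Cauchy data
y = solve_forward(lambda x: x * (1.0 - x))
cauchy_rows = y[-2:, :]                     # y_{m,k} and y_{m+1,k}
```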
From the process of solving the initial-boundary value problem, we know that \((x_{m+1},t_{k})\) and \((x_{m},t_{k}),k=1,\cdots,n\), can be set as the collocation points, and the problem converts to solving for the unknowns \(\lambda=[\lambda_{1},\cdots,\lambda_{N}]^{\mathsf{T}}\) from the linear system \[A\lambda=b, \tag{4.5}\] where \[A=\left[\begin{array}{c}\varphi(x_{m+1}-\xi_{l},t_{k}-\tau_{l})\\ \varphi(x_{m}-\xi_{l},t_{k}-\tau_{l})\end{array}\right]_{2n\times N},\] and \[b=[y(x_{m+1},t_{k}),y(x_{m},t_{k})]_{2n}^{\mathsf{T}}.\] Comparing with the Tikhonov functional given in (3.1)-(3.2), \(Pu\) is computed as \(A\lambda-b\). We choose the regularization parameter \(\gamma\) by the L-curve method, that is, we pick a regularized solution near the "corner" of the L-curve [12] \[L=\{(\log{(\|\lambda_{\gamma}\|^{2})},\log{(\|A\lambda_{\gamma}-b\|^{2})}),\gamma>0\}.\] Denoting the regularized solution of the linear system (4.5) by \(\lambda_{\gamma}^{*}\) leads to the approximate solution \[\tilde{y}_{\gamma}^{*}(x,t)=\sum_{l=1}^{N}\lambda_{\gamma,l}^{*}\Phi_{l}(x,t) \tag{4.6}\] of problem (1.1). To compare the exact solution with its approximation \(\tilde{y}_{\gamma}^{*}\), and to avoid an "inverse crime", we choose \(M=m+n\) and compute \(y(x,t)\) by the finite difference method again, so that \(y(x,t)\) and \(\tilde{y}_{\gamma}^{*}\) are defined on the same grid. Figure 4.1 shows the numerical solution of the initial-boundary value problem and the approximate solutions of the Cauchy problem of Example 4.1(a) with different noise levels. Since the data are given at \(x=1\), the numerical solution deteriorates as \(x\) tends to \(0\); this is consistent with the convergence estimate in Section 3, because no information is given on the boundary portion \(\{x=0\}\). Furthermore, the convergence estimate in Section 3 only holds in the temporal interval \([\varepsilon,T-\varepsilon],\varepsilon>0\). However, from this figure, we find that the proposed method also works well at \(t=0\). Denote the relative error by \[E(x)=\frac{\|\tilde{y}_{\gamma}^{*}(x,\cdot)-y(x,\cdot)\|_{L_{2}(0,T)}}{\|y(x,\cdot)\|_{L_{2}(0,T)}}. \tag{4.7}\] As in other studies of stochastic equations, a large number of sample paths should be used in the numerical experiments to approximate the expectation of the solution. Thus, we run the tests with different numbers of sample paths. Interestingly, Figure 4.2 shows that when the number of sample paths is greater than \(10\), the results do not improve further. Thus, in the following, we only run the experiments with the number of sample paths \(\#=10\). Now we consider the problem in Example 4.1(b). In this case, \(f\) is a piecewise smooth function. Figure 4.3 shows the approximations of the boundary value \(y(t,0)\) in (a) and the approximations of the initial value \(y(0,x)\) with different noise levels in (b). Moreover, to illustrate the effectiveness of the proposed method, the changes of the relative error along the \(x\) and \(t\) directions are also given in Figure 4.3(c) and (d), respectively. We mention that the Cauchy problem for the deterministic parabolic equation has been considered in [20], where the proposed numerical algorithm based on a Carleman weight function can stably recover the solution near the known boundary data; more precisely, the solution for \(t\in[0.5,1]\) in a 1-D example. In comparison, in Example 4.1 our method works well for recovering the solution for \(t\in[0.1,1]\) for the stochastic parabolic equation.
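A sketch of the reconstruction step (4.4)-(4.6) is given below. The heat kernel, the source points on \(\{t=DT\}\times[-R,R+1]\) with \(R=3.5\), \(DT=-0.1\), and the L-curve selection follow the description above; the particular corner rule (the point of the log-log curve closest to its lower-left corner), the value \(N=200\), and all function names are our own assumptions.

```python
import numpy as np

def heat_kernel(x, t):
    # fundamental solution of u_t = u_xx on R, valid for t > 0
    return np.exp(-x ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def reconstruct(t_k, b, x_pair, R=3.5, DT=-0.1, N=200, gammas=np.logspace(-12, 0, 60)):
    """Sketch of (4.5)-(4.6): collocate at (x_m, t_k) and (x_{m+1}, t_k),
    solve the Tikhonov-regularized system A lam = b along a path of gamma,
    and pick gamma near the 'corner' of the L-curve."""
    xi = np.linspace(-R, R + 1.0, N)                   # source points at time level DT < 0
    A = np.vstack([heat_kernel(x - xi[None, :], t_k[:, None] - DT) for x in x_pair])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # one SVD reused for every gamma
    Utb = U.T @ b
    sols = [Vt.T @ (s * Utb / (s ** 2 + g)) for g in gammas]
    ln_sol = np.log([np.sum(l ** 2) for l in sols])
    ln_res = np.log([np.sum((A @ l - b) ** 2) for l in sols])
    # crude corner heuristic: closest point to the lower-left of the log-log curve
    i = np.argmin(np.hypot(ln_sol - ln_sol.min(), ln_res - ln_res.min()))
    return xi, sols[i], gammas[i]

# usage with the forward solver above: b stacks the two measured rows over the time grid,
# e.g. t_k = np.linspace(0, 1, 451); xi, lam, gam = reconstruct(t_k, np.concatenate([y[-2], y[-1]]), (14/15, 1.0))
```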
We believe this algorithm also works well for the deterministic parabolic equation.

Figure 4.1: Approximated solutions with different noise levels for Example 4.1(a).

Figure 4.2: Numerical results by expectation of the solutions over different numbers of sample paths.

Figure 4.3: Numerical results of Example 4.1(b) with data of different noise levels \(\delta\).

Furthermore, the proposed method also works well for 2-D examples, in both rectangular and disk domains. We explore the effectiveness of the proposed numerical method in the following two examples on bounded domains in \(\mathbb{R}^{2}\), as well as the influence of the parameters in the kernel-based learning method. We always let \(\delta=3\%\) unless stated otherwise. **Example 4.2**: _Suppose that \(G=[-1,1]\times[-1,1]\), and \(\Gamma_{1}=\{x_{1}=1,x_{2}\in[-1,1]\},\Gamma_{2}=\{x_{1}\in[-1,1]\},\Gamma_{3}=\partial G\backslash\{x_{1}=-1,x_{2}\in[-1,1]\}\). Let_ * \(f(x_{1},x_{2})=\sin\left(\pi x_{1}\right)\sin\left(\pi x_{2}\right)+2,\quad g_{1}(t,x_{1},x_{2})=2.\) * \(f(x_{1},x_{2})=\begin{cases}3,\quad(x_{1}-0.5)^{2}+(x_{2}-0.5)^{2}\leq 0.15^{2}\\ 3,\quad(x_{1}-0.5)^{2}+(x_{2}+0.5)^{2}\leq 0.15^{2}\\ 3,\quad(x_{1}+0.5)^{2}+(x_{2}-0.5)^{2}\leq 0.15^{2}\\ 3,\quad(x_{1}+0.5)^{2}+(x_{2}+0.5)^{2}\leq 0.15^{2}\\ 1,\quad otherwise,\end{cases}\quad g_{1}(t,x_{1},x_{2})=1.\) We first consider the optimal choices of the parameters \(R\) and \(DT\) in the case where the Cauchy data are given on \(\Gamma_{2}\) in the kernel-based approximation process. We set \(DT=-0.1\) and observe the relative error \(E\) as \(R\) changes. It can be seen from Figure 4.4 that for \(R\in[2.5,3.5]\) the method performs well. The numerical results in Figure 4.5 illustrate that any \(DT\in[-0.2,-0.1]\) is a reasonable choice. Thus we fix \(R=3.5,DT=-0.1\) in the following computations. We show the numerical approximations and relative errors for different noise levels in Figure 4.6.

Figure 4.4: The relative error \(E(x_{1},x_{2})\) for different \(R\).

Figure 4.5: The relative error \(E(x_{1},x_{2})\) for different \(DT\).

Figure 4.6: The approximations for \(y(0,x_{1},x_{2})\) and relative errors \(E(x_{1},x_{2})\) for different noise levels for Example 4.2(a).

According to the conditional stability and convergence estimates analyzed in Sections 2 and 3, the length of the boundary portion on which the Cauchy data are given also affects the approximation. Thus, we test the proposed method for this example with the Cauchy data given on \(\Gamma_{j},j=1,2,3\), in Figure 4.7. In Example 4.2, \(G\) is a rectangular domain, so the initial-boundary value problem can be computed well by the finite difference method. However, when \(G\) is a general bounded domain, the finite difference method would have to be adapted to nonuniform grids. To avoid a complicated analysis of the numerical solution of the initial-boundary value problem, we solve it by kernel-based learning, treating the fundamental solution as the kernel. **Example 4.3**: _Suppose that \(G=\{r^{2}\leq 1\}\), where \(r\triangleq\sqrt{x_{1}^{2}+x_{2}^{2}}\), \(\tan\theta=x_{2}/x_{1}\), and \(\Gamma_{\Theta}=\{\theta\in[0,\Theta],r=1\}\).
Let \(f(x_{1},x_{2})=\begin{cases}3,&0.2\leq x_{1}\leq 0.6,0.2\leq x_{2}\leq 0.6\\ 1,&otherwise,\end{cases}\quad g_{1}(t,x_{1},x_{2})=1.\)_ We show the numerical results for \(\Theta=\frac{\pi}{3}\) with various \(\delta\), and the results for different boundary portions \(\Theta=\frac{\pi}{6},\frac{\pi}{4},\frac{\pi}{2}\) with \(\delta=3\%\), in Figures 4.8 and 4.9, respectively. In conclusion, we have carried out numerical experiments for the Cauchy problem of the stochastic parabolic equation on both 1-D and 2-D bounded domains, with smooth, piecewise smooth and discontinuous initial conditions; the numerical results illustrate that the proposed method works well for all these examples. Although we have not solved examples on other irregular 2-D domains or on 3-D domains, it can be seen from the kernel-based learning theory that the method should still apply, because it imposes no limitation on the dimension \(n\) or the shape of the domain; only the distances between the source points and the collocation points are involved. Moreover, since no iterative method is used in the numerical computation, the computational cost is low. However, the convergence rate for the kernel-based learning method is still open.

Figure 4.7: Approximations and relative errors for Example 4.2(b) with the Cauchy data given on different portions of the boundary.

Figure 4.8: Numerical results with different noise levels for Example 4.3.

Figure 4.9: Numerical results with the Cauchy data on different portions of the boundary for Example 4.3.

## Acknowledgement

The authors would like to thank Professor Kui Ren for providing valuable suggestions. This work is partially supported by the NSFC (No. 12071061), the Science Fund for Distinguished Young Scholars of Sichuan Province (No. 2022JDJQ0035).
2306.12582
Adversarial Training with Generated Data in High-Dimensional Regression: An Asymptotic Study
In recent years, studies such as \cite{carmon2019unlabeled,gowal2021improving,xing2022artificial} have demonstrated that incorporating additional real or generated data with pseudo-labels can enhance adversarial training through a two-stage training approach. In this paper, we perform a theoretical analysis of the asymptotic behavior of this method in high-dimensional linear regression. While a double-descent phenomenon can be observed in ridgeless training, with an appropriate $\mathcal{L}_2$ regularization, the two-stage adversarial training achieves a better performance. Finally, we derive a shortcut cross-validation formula specifically tailored for the two-stage training method.
Yue Xing
2023-06-21T21:35:36Z
http://arxiv.org/abs/2306.12582v1
# Adversarial Training with Generated Data in High-Dimensional Regression: An Asymptotic Study ###### Abstract In recent years, studies such as (Carmon et al., 2019; Gowal et al., 2021; Xing et al., 2022) have demonstrated that incorporating additional real or generated data with pseudo-labels can enhance adversarial training through a two-stage training approach. In this paper, we perform a theoretical analysis of the asymptotic behavior of this method in high-dimensional linear regression. While a double-descent phenomenon can be observed in ridgeless training, with an appropriate \(\mathcal{L}_{2}\) regularization, the two-stage adversarial training achieves a better performance. Finally, we derive a shortcut cross-validation formula specifically tailored for the two-stage training method. ## 1 Introduction The development of machine learning and deep learning methods has led to breakthrough performance in various applications. However, recent studies, e.g., (Goodfellow et al., 2014), observe that these models are vulnerable when the data are perturbed by adversaries. Attacked inputs can be imperceptibly different from clean inputs to humans but can cause the model to make incorrect predictions. To defend against adversarial attacks, adversarial training is a popular and promising way to improve the adversarial robustness of modern machine learning models. Adversarial training first generates attacked samples, then calculates the gradient of the model based on these augmented data. Such a procedure can make the model less susceptible to adversarial attacks in real-world situations. There are fruitful results in the theoretical justification and methodology development of adversarial training. Among various research directions, one interesting aspect is to improve adversarial training with extra unlabeled data. Recent works successfully demonstrate great improvements in adversarial robustness with additional unlabeled data. For example, (Xing et al., 2021) show that additional external real data help improve adversarial robustness; (Gowal et al., 2021; Wang et al., 2023) use synthetic data to improve the adversarial robustness and achieve the highest 65% to 70% adversarial testing accuracy for the CIFAR-10 dataset under AutoAttack (AA) in (Croce et al., 2020)1. Footnote 1: [https://robustbench.github.io](https://robustbench.github.io) A recent study (Xing et al., 2022) reveals that adversarial training gains greater benefits from unlabeled data than clean (natural) training. The key observation is that adversarially robust models rely on the conditional distribution of the response given the features (\(Y|X\)) and the marginal distribution of the features (\(X\)). In contrast, clean training only depends on \(Y|X\) in their study. As a result, adversarial training can benefit more than clean training from unlabeled data. Besides adversarial training, high-dimensional statistics is another important field of machine learning, with real-world applications ranging from genomics and neuroscience to image processing. While many studies focus on obtaining a better performance via regularization, one surprising phenomenon in this field is the double descent phenomenon (Belkin et al., 2019; Hastie et al., 2019), which refers to a U-shaped curve in the test error as a function of the model complexity, together with a second descent phase occurring in the over-parameterized regime.
This phenomenon challenges the conventional wisdom that increasing model complexity always leads to over-fitting. It provides significant implications for designing and analyzing machine learning algorithms in high-dimensional settings. Given the substantial achievements in high-dimensional statistics, this paper aims to extend the analysis of (Xing et al., 2022) to a high-dimensional regression setup, in which both the data dimension \(d\) and the sample size of the labeled data \(n_{1}\) increase and \(d/n_{1}\rightarrow\gamma\) asymptotically. Although (Xing et al., 2022) provides a theoretical explanation for the benefits of unlabeled data in the large-sample regime (\(n_{1}\gg d\)), the asymptotic behavior of the two-stage method in other scenarios remains unclear. Our contributions are summarized as follows: * We derive the asymptotic convergence of the two-stage adversarial training when \(d/n_{1}\to\gamma\) for some constant \(\gamma>0\) (Section 3.1). * It is observed that a proper ridge penalty in the clean training stage benefits the two-stage method. However, the optimal ridge penalty for the clean estimate in the first stage of (Xing et al., 2022) differs from the one yielding the best clean performance. We conjecture that this discrepancy arises from the change in the error decomposition from clean training to two-stage adversarial training. To facilitate more efficient hyperparameter tuning, we propose adaptations to existing cross validation (CV) methods, improving on the time-consuming vanilla CV approach (Sections 3.2 and 3.3). ### Related Works Below is a summary of related works in adversarial training, high-dimensional statistics, and cross validation. Adversarial Training. There are many studies in the area of adversarial training. Some studies, e.g., (Goodfellow et al., 2014; Zhang et al., 2019; Wang et al., 2019; Cai et al., 2018; Zhang et al., 2020; Carmon et al., 2019; Gowal et al., 2021), work on methodology. Theoretical investigations have also been conducted from different perspectives. For instance, Chen et al. (2020); Javanmard et al. (2020); Taheri et al. (2021); Yin et al. (2018); Raghunathan et al. (2019); Najafi et al. (2019); Min et al. (2020); Hendrycks et al. (2019); Dan et al. (2020); Wu et al. (2020); Deng et al. (2021) study the statistical properties of adversarial training; Sinha et al. (2018); Wang et al. (2019); Xiao et al. (2022) study the optimization perspective; Gao et al. (2019); Zhang et al. (2020); Zhang and Li (2023); Mianjy and Arora (2022); Lv and Zhu (2021); Xiao et al. (2021) work on deep learning. Double Descent and High-Dimensional Statistics. The double descent phenomenon is an observation about the learning curves of machine learning models. It describes the behavior of the generalization gap, i.e., the difference between the model performance on the training data and the testing data. In a typical learning curve, the generalization error decreases and then increases with larger model complexity. However, in the double descent phenomenon, after the first decrease-increase pattern, the error decreases again when further enlarging the model complexity in the over-fitting regime. This non-monotonic behavior of the learning curve has been observed in various machine learning settings. Comprehensive investigations into the double descent phenomenon can be found in (Belkin et al., 2019; Hastie et al., 2019; Ba et al., 2020; d'Ascoli et al., 2020; Adam and Pennington, 2020; Liu et al., 2021; Rocks and Mehta, 2022).
Cross Validation. Cross validation (CV) is a resampling procedure used to evaluate the performance of machine learning models. This paper mainly considers leave-one-out CV, which trains the model using all but one of the samples and repeats this process so that every sample is left out exactly once. The final model performance is then averaged across all the resulting models. The model can generalize better to new data when the hyperparameters, e.g., the regularization, are optimized through CV. However, although leave-one-out CV is an effective method for selecting hyperparameters, it is time-consuming by design. Consequently, some studies propose shortcut formulas for leave-one-out CV that reuse computations across the different leave-one-out fits. Studies related to CV can be found in (Stone, 1978; Picard and Cook, 1984; Shao, 1993; Browne, 2000; Berrar, 2019). ## 2 Model Setup In this section, we present the data generation model and the two-stage adversarial training framework. Data generation model. We assume that the attributes \(X\sim N(\boldsymbol{0},\boldsymbol{\Sigma})\) with covariance matrix \(\boldsymbol{\Sigma}=\boldsymbol{I}_{d}\), and the response \(Y\) satisfies \(Y=X^{\top}\boldsymbol{\theta}_{0}+\varepsilon\) for \(\|\boldsymbol{\theta}_{0}\|=r=O(1)\) and a Gaussian noise \(\varepsilon\) with \(Var(\varepsilon)=\sigma^{2}\). Two-stage adversarial training. There are two stages in this training framework. In the first stage, we utilize \(n_{1}\) i.i.d. labeled samples, i.e., \((\boldsymbol{x}_{i},y_{i})\) for \(i=1,\ldots,n_{1}\). We consider the scenario where \(d\asymp n_{1}\). The first stage solves the following clean training problem \[\frac{1}{n_{1}}\sum_{i=1}^{n_{1}}(\boldsymbol{x}_{i}^{\top}\boldsymbol{\theta}-y_{i})^{2}+\lambda\|\boldsymbol{\theta}\|^{2} \tag{2.1}\] and obtains the clean estimate \(\widehat{\boldsymbol{\theta}}_{0}(\lambda)\). In the second stage, we use the trained model \(\widehat{\boldsymbol{\theta}}_{0}(\lambda)\) to generate a pseudo response for a set of unlabeled data, i.e., \[\widehat{y}_{i}=\boldsymbol{x}_{i}^{\top}\widehat{\boldsymbol{\theta}}_{0}(\lambda)+\varepsilon_{i}\] for \(i=n_{1}+1,\ldots,n_{1}+n_{2}\). In this paper, we consider the scenario where \(n_{2}=\infty\). We also assume \(\sigma^{2}\) is known and the \(\varepsilon_{i}\) are generated from \(N(0,\sigma^{2})\). Finally, we use the extra data with pseudo responses to perform adversarial training and minimize the following loss w.r.t. \(\boldsymbol{\theta}\): \[\frac{1}{n_{2}}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}\sup_{\boldsymbol{z}\in\mathcal{B}_{2}(\boldsymbol{x}_{i},\epsilon)}(\boldsymbol{z}^{\top}\boldsymbol{\theta}-\widehat{y}_{i})^{2}. \tag{2.2}\]
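For concreteness, here is a small sketch of the two stages (2.1)-(2.2) with a large but finite \(n_{2}\). It relies on the standard closed form \(\sup_{\|\boldsymbol{z}-\boldsymbol{x}\|\leq\epsilon}(\boldsymbol{z}^{\top}\boldsymbol{\theta}-y)^{2}=(|\boldsymbol{x}^{\top}\boldsymbol{\theta}-y|+\epsilon\|\boldsymbol{\theta}\|)^{2}\) for the inner maximization and plain gradient descent for the second stage; all numerical choices (sizes, step size, iteration count) are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, d, sigma, eps, lam = 100, 5000, 200, 1.0, 0.3, 0.5
theta0 = rng.normal(size=d) / np.sqrt(d)                 # ||theta_0|| is roughly r = 1
X1 = rng.normal(size=(n1, d))
y1 = X1 @ theta0 + sigma * rng.normal(size=n1)

# Stage 1: ridge (clean) estimate from (2.1)
theta_hat = np.linalg.solve(X1.T @ X1 / n1 + lam * np.eye(d), X1.T @ y1 / n1)

# Stage 2: pseudo-responses on unlabeled data, then adversarial training (2.2);
# for an L2 ball, sup_{||z - x|| <= eps} (z'theta - y)^2 = (|x'theta - y| + eps ||theta||)^2
X2 = rng.normal(size=(n2, d))
y2 = X2 @ theta_hat + sigma * rng.normal(size=n2)
theta = theta_hat.copy()
for _ in range(500):
    r = X2 @ theta - y2
    nrm = np.linalg.norm(theta) + 1e-12
    margin = np.abs(r) + eps * nrm
    grad = 2.0 * (X2.T @ (np.sign(r) * margin) + eps * margin.sum() * theta / nrm) / n2
    theta -= 0.1 * grad                                   # theta approximates the stage-two minimizer
```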
Thus, we only inject a penalty in the clean training stage._ **Expected Adversarial Risk** Under the model assumption of \((X,Y)\), the population adversarial risk for any given estimate \(\mathbf{\theta}\) becomes \[R_{\epsilon}(\mathbf{\theta},\mathbf{\theta}_{0}) = \|\mathbf{\theta}-\mathbf{\theta}_{0}\|_{\Sigma}^{2}\] \[+2c_{0}\epsilon\|\mathbf{\theta}\|\sqrt{\|\mathbf{\theta}-\mathbf{\theta}_{ 0}\|_{\Sigma}^{2}+\sigma^{2}}+\epsilon^{2}\|\mathbf{\theta}\|^{2},\] where \(\|\cdot\|\) is the \(\mathcal{L}_{2}\) norm, and \(c_{0}=\sqrt{2/\pi}\) is derived from the exact distribution of \((X,Y)\). We rewrite \(R_{\epsilon}(\mathbf{\theta},\mathbf{\theta}_{0})\) as \(R_{\epsilon}(\mathbf{\theta})\) for simplicity when no confusion arises. **Remark 3**.: _One can denote \(\mathbf{\theta}_{\epsilon}=\arg\min_{\mathbf{\theta}}R_{\epsilon}(\mathbf{\theta},\mathbf{ \theta}_{0})\) as the best robust model. However, from \(R_{\epsilon}(\mathbf{\theta},\mathbf{\theta}_{0})\), we are interested in \(\|\mathbf{\theta}-\mathbf{\theta}_{0}\|_{\Sigma}\) and \(\|\mathbf{\theta}\|\) rather than \(\|\mathbf{\theta}-\mathbf{\theta}_{\epsilon}\|\)._ _Based on (Xing et al., 2021), when an estimate \(\mathbf{\theta}\rightarrow\mathbf{\theta}_{\epsilon}\), the excess adversarial risk \(R_{\epsilon}(\mathbf{\theta},\mathbf{\theta}_{0})-R_{\epsilon}(\mathbf{\theta}_{\epsilon},\mathbf{\theta}_{0})\) can be approximated by a function of \(\mathbf{\theta}-\mathbf{\theta}_{\epsilon}\). However, when \(\mathbf{\theta}-\mathbf{\theta}_{\epsilon}\) diverges in the high-dimensional setup, such an approximation leads to a large error._ ## 3 Analyzing the Two-Stage Adversarial Training Framework This section presents the main theoretical results and simulation studies. We first demonstrate the main theory of the convergence of the two-stage method in Section 3.1, take different \(\lambda\) under different attack strength \(\epsilon\) in Section 3.2, and finally introduce a CV method in Section 3.3. ### Convergence Result For the two-stage adversarial framework, to study \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\), we denote the following function \[m_{\gamma}(-\lambda)=\frac{-(1-\gamma+\lambda)+\sqrt{(1-\gamma+\lambda)^{2}+4 \lambda\gamma}}{2\gamma\lambda},\] which is used to describe the asymptotic behavior of \(tr\left((\sum_{i=1}^{n_{1}}\mathbf{x}_{i}\mathbf{x}_{i}^{\top}+\lambda\mathbf{I}_{d})^{- 1}\right)\) as in (Hastie et al., 2019). After defining \(m_{\gamma}\), one can obtain the convergence of \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), and further figure out the asymptotic behavior of \(\widehat{\mathbf{\theta}}_{\epsilon}(\lambda)\). 
The convergence of the two-stage adversarial training framework is as follows: **Theorem 1** (Convergence of Two-Stage Adversarial Training). _With probability tending to 1, \(\widehat{\mathbf{\theta}}_{0}(\lambda)\) satisfies_ \[\|\widehat{\mathbf{\theta}}_{0}(\lambda)-\mathbf{\theta}_{0}\|^{2}\rightarrow\lambda^{2}r^{2}m_{\gamma}^{\prime}(-\lambda)+\sigma^{2}\gamma\left(m_{\gamma}(-\lambda)-\lambda m_{\gamma}^{\prime}(-\lambda)\right),\] \[\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2}\rightarrow r^{2}[1-2\lambda m_{\gamma}(-\lambda)+\lambda^{2}m_{\gamma}^{\prime}(-\lambda)]+\sigma^{2}\gamma[m_{\gamma}(-\lambda)-\lambda m_{\gamma}^{\prime}(-\lambda)].\] _For the two-stage adversarial estimate \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\), assuming \(n_{2}=\infty\), \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\) satisfies_ \[\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)-\mathbf{\theta}_{0}\|^{2}\rightarrow\frac{1}{(1+\alpha_{\epsilon}(\lambda))^{2}}\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2}+r^{2}-\frac{2}{(1+\alpha_{\epsilon}(\lambda))}\widehat{\mathbf{\theta}}_{0}(\lambda)^{\top}\mathbf{\theta}_{0},\] \[\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\|^{2}\rightarrow\frac{1}{(1+\alpha_{\epsilon}(\lambda))^{2}}\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2},\] _where \(2\widehat{\mathbf{\theta}}_{0}(\lambda)^{\top}\mathbf{\theta}_{0}\) can be calculated via_ \[2\widehat{\mathbf{\theta}}_{0}(\lambda)^{\top}\mathbf{\theta}_{0}=\|\mathbf{\theta}_{0}\|^{2}+\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2}-\|\widehat{\mathbf{\theta}}_{0}(\lambda)-\mathbf{\theta}_{0}\|^{2},\] _and \(\alpha_{\epsilon}(\lambda)\) is the solution of \(\alpha\) in_ \[\alpha+\epsilon c_{0}\frac{\alpha\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|}{\sqrt{\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2}\alpha^{2}+\sigma^{2}(1+\alpha)^{2}}}=\epsilon c_{0}\frac{\sqrt{\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|^{2}\alpha^{2}+\sigma^{2}(1+\alpha)^{2}}}{\|\widehat{\mathbf{\theta}}_{0}(\lambda)\|}+\epsilon^{2}.\] The proof of Theorem 1 is given in the appendix; we first study the convergence of \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), and then evaluate \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\). From Theorem 1, similarly to \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), one can see that \(\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)-\mathbf{\theta}_{0}\|^{2}\) and \(\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\|^{2}\) converge asymptotically to limits that are functions of \((\gamma,\lambda,\epsilon,\sigma^{2})\). We conduct a simulation to verify Theorem 1 and study the risk of the two-stage adversarial training. In the experiment, we take \(n_{1}=100\) and \(n_{2}=\infty\), i.e., we directly use the population adversarial risk in the second stage. We change the data dimension \(d\) to obtain different \(\gamma=d/n_{1}\). The data follow \(X\sim N(\mathbf{0},\mathbf{I}_{d})\), \(Y=X^{\top}\mathbf{\theta}_{0}+\varepsilon\) with \(\mathbf{\theta}_{0}\sim N(0,\mathbf{I}_{d}/d)\) and \(\varepsilon\sim N(0,1)\). The adversarial attack strength is taken as \(\epsilon=0.3\). We repeat the experiment 100 times to obtain the average performance. We use the excess adversarial risk, i.e., \(R_{\epsilon}(\mathbf{\theta})-R_{\epsilon}(\mathbf{\theta}_{\epsilon})\) for \(\mathbf{\theta}\in\{\widehat{\mathbf{\theta}}_{0}(\lambda),\widehat{\mathbf{\theta}}_{\epsilon}(\lambda),\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\}\), to evaluate the performance of the three methods.
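The limits in Theorem 1 are straightforward to evaluate numerically, which is how theoretical curves of this kind can be reproduced. In the sketch below, \(m_{\gamma}^{\prime}(-\lambda)\) is approximated by a central difference and the scalar equation for \(\alpha_{\epsilon}(\lambda)\) is solved by bracketing; the bracket \([0,10^{6}]\) and the step size \(h\) are our own choices.

```python
import numpy as np
from scipy.optimize import brentq

c0 = np.sqrt(2.0 / np.pi)

def m_g(gamma, lam):
    return (-(1 - gamma + lam) + np.sqrt((1 - gamma + lam) ** 2 + 4 * lam * gamma)) / (2 * gamma * lam)

def theorem1_limits(gamma, lam, r=1.0, sigma=1.0, eps=0.3, h=1e-7):
    m = m_g(gamma, lam)
    mp = (m_g(gamma, lam - h) - m_g(gamma, lam + h)) / (2 * h)   # m'_gamma(-lam), central difference
    err0 = lam ** 2 * r ** 2 * mp + sigma ** 2 * gamma * (m - lam * mp)          # ||hat - theta_0||^2
    nrm0 = r ** 2 * (1 - 2 * lam * m + lam ** 2 * mp) + sigma ** 2 * gamma * (m - lam * mp)  # ||hat||^2
    b = np.sqrt(nrm0)
    def fixed_point(a):                                           # the scalar equation for alpha
        s = np.sqrt(b ** 2 * a ** 2 + sigma ** 2 * (1 + a) ** 2)
        return a + eps * c0 * a * b / s - eps * c0 * s / b - eps ** 2
    alpha = brentq(fixed_point, 0.0, 1e6)
    cross = r ** 2 + nrm0 - err0                                  # = 2 * hat{theta}_0' theta_0
    err_eps = nrm0 / (1 + alpha) ** 2 + r ** 2 - cross / (1 + alpha)   # ||tilde - theta_0||^2
    nrm_eps = nrm0 / (1 + alpha) ** 2                              # ||tilde||^2
    return err_eps, nrm_eps
```

Plugging the two returned quantities into the expression for \(R_{\epsilon}\) above gives the theoretical excess-risk curves.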
The model \(\widehat{\mathbf{\theta}}_{\epsilon}(\lambda)\) refers to the vanilla adversarial training as an additional benchmark, i.e., we conduct adversarial training using the \(n_{1}\) labeled samples. The simulation results are summarized in Figures 1-4. In Figure 1, we take \(\lambda\to 0\) to align with the experiments in the double descent literature. Several observations follow from Figure 1. First, comparing the two-stage adversarial training with clean training, the two-stage adversarial training performs better. Second, when \(d/n_{1}\) gets larger, the two-stage adversarial training also outperforms the vanilla adversarial training, indicating that the information in the extra data matters. Finally, all three training methods exhibit a double-descent phenomenon. In addition, we plot the theoretical curves for the excess adversarial risk associated with the two-stage adversarial training. From Figure 2, the theoretical curve and the simulation result match. Finally, we examine how the ridge penalty affects the performance. In the simulation in Figure 3, we take \(\epsilon=0,0.3\) and compare the performance when \(\lambda=0\) and when \(\lambda\) is taken to minimize the risk. In Figure 3, the y-axis is the excess adversarial risk at the corresponding attack strength, i.e., \(\epsilon=0\) or \(0.3\) for the respective groups. The corresponding theoretical curves can be found in Figure 4. From Figure 3, one can see that the excess risks of the two ridgeless regressions are similar, while the two-stage adversarial training (\(\epsilon=0.3\)) benefits more from a proper ridge penalty than clean training (\(\epsilon=0\)), which motivates us to further investigate the penalty term in the following sections. In addition, the theoretical curves in Figure 4 align with the simulation results in Figure 3 as well. Figure 1: Simulation: Excess adversarial risk of clean training, vanilla adversarial training, and the two-stage adversarial training, without ridge penalty. Figure 2: Theoretical value corresponding to Figure 1. Figure 3: Simulation: Ridgeless regression and ridge regression with the best penalty in clean training and the two-stage adversarial training respectively. Adversarial training benefits more from a proper penalty. Figure 4: Theoretical value corresponding to Figure 3. ### A Better Clean Estimate May Not Be Preferred Different from ridgeless regression in the large-sample regime, with high-dimensional data, it is essential to utilize a ridge penalty or other regularization to improve the testing performance. While one can adjust the penalty to control the performance of the clean estimate, we would like to ask: _Is a better clean estimate (measured by clean testing performance) always preferred in the two-stage method?_ To answer the above question, it is essential to investigate the role of the clean estimate in the two-stage method.
Recall that the population adversarial risk is written as \[R_{\epsilon}(\mathbf{\theta},\mathbf{\theta}_{0}) = \|\mathbf{\theta}-\mathbf{\theta}_{0}\|_{\Sigma}^{2}+2c_{0}\epsilon\|\mathbf{\theta}\|\sqrt{\|\mathbf{\theta}-\mathbf{\theta}_{0}\|_{\Sigma}^{2}+\sigma^{2}}+\epsilon^{2}\|\mathbf{\theta}\|^{2},\] where taking expectation on the training data we have \[\mathbb{E}\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)-\mathbf{\theta}_{0}\|_{\Sigma}^{2}=\|\mathbb{E}\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)-\mathbf{\theta}_{0}\|_{\Sigma}^{2}+tr(Var(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda))),\] and \[\mathbb{E}\|\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\|^{2}=\|\mathbb{E}\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\|^{2}+tr(Var(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda))).\] The above decompositions imply that, while ridge regression balances the bias and variance of \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), the relative importance of bias and variance changes in \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\). As a result, the optimal \(\lambda\) for the clean estimate may not be the best when applied in the two-stage adversarial training. To investigate how the optimal \(\lambda\) changes in the two-stage method, a simulation study is conducted in Figure 5. We take \(n_{1}=50\). The data \(X\sim N(\mathbf{0},\mathbf{I}_{d})\) and \(d=200\). The response \(Y=\mathbf{\theta}_{0}^{\top}X+\varepsilon\) with \(\mathbf{\theta}_{0}=\mathbf{1}/\sqrt{d}\) and \(\varepsilon\sim N(0,0.1^{2})\). Besides the \(n_{1}\) labeled data, we take \(n_{2}=\infty\). We repeat 30 times to get the average result and check the best \(\lambda\) under different attack strengths \(\epsilon\). From Figure 5, one can see that the optimal \(\lambda\) gets larger when the attack strength gets larger. When \(\epsilon=0\), the optimal \(\lambda\) is close to zero. When \(\epsilon=0.3\), the best \(\lambda\) is around 1, and around 3 when \(\epsilon=0.5\), both of which are much larger than in the case \(\epsilon=0\). ### Cross Validation Observing that the optimal \(\lambda\) for clean training is not the best for the two-stage adversarial training, we next investigate how to select a proper \(\lambda\). While one can always use the leave-one-out procedure for any estimate, it is time-consuming. As a result, the existing literature, e.g. (Hastie et al., 2019), utilizes approximations of the leave-one-out CV procedure. Recall that when \(n_{2}=\infty\), the second stage of the two-stage method minimizes \[R_{\epsilon}(\mathbf{\theta},\widehat{\mathbf{\theta}}_{0}(\lambda)) = \|\mathbf{\theta}-\widehat{\mathbf{\theta}}_{0}(\lambda)\|_{\Sigma}^{2}+\sigma^{2}+\epsilon^{2}\|\mathbf{\theta}\|^{2}+2c_{0}\epsilon\|\mathbf{\theta}\|\sqrt{\|\mathbf{\theta}-\widehat{\mathbf{\theta}}_{0}(\lambda)\|_{\Sigma}^{2}+\sigma^{2}},\] and the solution is \[\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)=(\mathbf{\Sigma}+\alpha_{\epsilon}(\lambda)\mathbf{I}_{d})^{-1}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0}(\lambda),\] for some \(\alpha_{\epsilon}(\lambda)\geq 0\). One needs to rerun the CV procedure \(n_{1}\) times to obtain the different \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)^{-j}\), the leave-one-out estimates of \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\) obtained by leaving out the \(j\)th labeled sample.
Given that the above formula \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\) is a transformation of \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), one can carry over the idea of approximating CV in clean training to the two-stage adversarial training. To be specific, since both \(\alpha_{\epsilon}(\lambda)\) and \(\widehat{\mathbf{\theta}}_{0}(\lambda)\) depend on each labeled sample, assuming the \(j\)th sample is discarded, the estimate of the two-stage method will be \[(\mathbf{\Sigma}+\alpha^{-j}\mathbf{I}_{d})^{-1}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda), \tag{3.1}\] and we approximate both \(\alpha^{-j}\) and \(\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\). The following lemma shows how to approximate \(\alpha_{\epsilon}(\lambda)\) in the leave-one-out CV: **Lemma 1**.: _Rewrite \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\) as \(\widetilde{\mathbf{\theta}}\), \(\widehat{\mathbf{\theta}}_{0}(\lambda)\) as \(\widehat{\mathbf{\theta}}_{0}\), and \(\alpha=\alpha_{\epsilon}(\lambda)\) for simplicity. Denote \(\Delta_{j}=\widehat{\mathbf{\theta}}_{0}^{-j}-\widehat{\mathbf{\theta}}_{0},\) and_ \[A_{1} = \frac{1}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\Sigma}\|\widetilde{\mathbf{\theta}}\|}\widetilde{\mathbf{\theta}}^{\top}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-2}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0}-\frac{\|\widetilde{\mathbf{\theta}}\|}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\Sigma}^{3}}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})^{\top}\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-2}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0},\] \[A_{2} = \frac{1}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\Sigma}\|\widetilde{\mathbf{\theta}}\|}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})^{\top}\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-2}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0}-\frac{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\Sigma}}{\|\widetilde{\mathbf{\theta}}\|^{3}}\widetilde{\mathbf{\theta}}^{\top}\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-2}\widehat{\mathbf{\theta}}_{0},\] Figure 5: Simulation: How the tuning parameter \(\lambda\) in clean ridge regression affects the final adversarial robustness when using extra unlabeled data in training. While a small \(\lambda\) minimizes the population clean risk, this choice of \(\lambda\) is sub-optimal when using \(\widehat{\mathbf{\theta}}_{0}(\lambda)\) to create pseudo responses. Besides the cases of \(\epsilon\in\{0,0.3,0.5\}\), when \(\epsilon=0.7\), the best penalty \(\lambda\) is extremely large and is not included in the figure.
\[A_{3} = \left(I_{d}+\epsilon c_{0}\frac{\|\widetilde{\mathbf{\theta}}\|}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}}\right)\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1}\widetilde{\mathbf{\theta}}+\left(\epsilon c_{0}\frac{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}}{\|\widetilde{\mathbf{\theta}}\|}+\epsilon^{2}\right)(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1}\widetilde{\mathbf{\theta}},\] \[A_{4} = \frac{1}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}\|\widetilde{\mathbf{\theta}}\|}\widetilde{\mathbf{\theta}}^{\top}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1}\mathbf{\Sigma}+\frac{\|\widetilde{\mathbf{\theta}}\|}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}^{3}}\alpha(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})^{\top}\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1},\] \[A_{5} = \frac{1}{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}\|\widetilde{\mathbf{\theta}}\|}\alpha(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})^{\top}\mathbf{\Sigma}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1}-\frac{\|\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0}\|_{\mathbf{\Sigma}}}{\|\widetilde{\mathbf{\theta}}\|^{3}}\widetilde{\mathbf{\theta}}^{\top}(\mathbf{\Sigma}+\alpha\mathbf{I}_{d})^{-1}\mathbf{\Sigma},\] _then when \(\|\widehat{\mathbf{\theta}}_{0}-\widehat{\mathbf{\theta}}_{0}^{-j}\|=o(1)\), the leave-one-out estimate of \(\alpha\) satisfies_ \[\alpha^{-j}-\alpha = \frac{\left(\epsilon c_{0}A_{1}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{2}\widetilde{\mathbf{\theta}}+A_{3}\right)^{\top}}{\|\epsilon c_{0}A_{1}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{2}\widetilde{\mathbf{\theta}}+A_{3}\|^{2}}\times\left(\epsilon c_{0}A_{4}\Delta_{j}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{5}\Delta_{j}\widetilde{\mathbf{\theta}}\right)+o,\] _where \(o\) represents negligible terms._ The proof of Lemma 1 can be found in the appendix. Based on the result in Lemma 1, we can use \[\widehat{\alpha}^{-j}-\alpha = \frac{\left(\epsilon c_{0}A_{1}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{2}\widetilde{\mathbf{\theta}}+A_{3}\right)^{\top}}{\|\epsilon c_{0}A_{1}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{2}\widetilde{\mathbf{\theta}}+A_{3}\|^{2}}\times\left(\epsilon c_{0}A_{4}\Delta_{j}\mathbf{\Sigma}(\widetilde{\mathbf{\theta}}-\widehat{\mathbf{\theta}}_{0})+\epsilon c_{0}A_{5}\Delta_{j}\widetilde{\mathbf{\theta}}\right)\] to approximate \(\alpha^{-j}\). In terms of the leave-one-out estimate of \(\widehat{\mathbf{\theta}}_{0}(\lambda)\), i.e., \(\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\), one can use the Kailath variant formula (Section 3.1.2 of Petersen and Pedersen, 2008) and obtain \[\widehat{\mathbf{\theta}}_{0}(\lambda)-\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)=\frac{y_{j}-\widehat{y}_{j}(\lambda)}{1-S_{j}(\lambda)}(\mathbf{X}^{\top}\mathbf{X}+n_{1}\lambda\mathbf{I}_{d})^{-1}\mathbf{x}_{j},\] where \(\mathbf{X}\in\mathbb{R}^{n_{1}\times d}\) denotes the labeled data matrix, and \(\widehat{y}_{j}(\lambda)=\widehat{\mathbf{\theta}}_{0}(\lambda)^{\top}\mathbf{x}_{j}\) is the fitted value of the \(j\)th observation.
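Both ingredients of the approximation are cheap to compute. A minimal sketch of the shortcut for \(\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\) (our code; we read \(S_{j}(\lambda)=\mathbf{x}_{j}^{\top}(\mathbf{X}^{\top}\mathbf{X}+n_{1}\lambda\mathbf{I}_{d})^{-1}\mathbf{x}_{j}\) as the usual ridge leverage, which is the standard interpretation of this formula):

```python
import numpy as np

def loo_ridge_deltas(X, y, lam):
    # Ridge estimate theta_hat_0(lambda) and the differences
    # theta_hat_0 - theta_hat_0^{-j} via the rank-one (Kailath variant)
    # shortcut, so no refitting is needed for any j.
    n1, d = X.shape
    A_inv = np.linalg.inv(X.T @ X + n1 * lam * np.eye(d))
    theta = A_inv @ X.T @ y
    deltas = []
    for j in range(n1):
        xj = X[j]
        Sj = xj @ A_inv @ xj                     # leverage S_j(lambda)
        deltas.append((y[j] - xj @ theta) / (1.0 - Sj) * (A_inv @ xj))
    return theta, deltas
```

Each \(\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\) is then \(\widehat{\mathbf{\theta}}_{0}(\lambda)\) minus the corresponding difference, which yields \(\Delta_{j}\) and feeds into the approximation \(\widehat{\alpha}^{-j}\) above.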
After obtaining the estimates \(\widehat{\alpha}^{-j}\) and \(\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\), one can put them into (3.1) to obtain the leave-one-out estimate of \(\widetilde{\mathbf{\theta}}_{\epsilon}(\lambda)\). The following theorem justifies the correctness of the above procedure: **Theorem 2**.: _Denote_ \[CV(\lambda,\epsilon)=\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}\left(|\mathbf{x}_{j}^{\top}\widetilde{\mathbf{\theta}}_{\epsilon}^{-j}(\lambda)-y_{j}|+\epsilon\|\widetilde{\mathbf{\theta}}_{\epsilon}^{-j}(\lambda)\|\right)^{2},\] _and let \(\widetilde{\mathbf{\theta}}_{\epsilon,\text{app}}^{-j}(\lambda)=(\mathbf{\Sigma}+\widehat{\alpha}^{-j}\mathbf{I}_{d})^{-1}\mathbf{\Sigma}\widehat{\mathbf{\theta}}_{0}^{-j}(\lambda)\) be the approximation of the leave-one-out estimate obtained via Lemma 1. Then under the Gaussian model assumption of \((X,Y)\), the approximated CV loss converges in probability to the actual one, i.e.,_ \[\frac{1}{n_{1}}\sum_{j=1}^{n_{1}}\left(|\mathbf{x}_{j}^{\top}\widetilde{\mathbf{\theta}}_{\epsilon,\text{app}}^{-j}(\lambda)-y_{j}|+\epsilon\|\widetilde{\mathbf{\theta}}_{\epsilon,\text{app}}^{-j}(\lambda)\|\right)^{2}\xrightarrow{P}CV(\lambda,\epsilon).\] We use the simulation setting in Figure 5 to examine the performance of the above cross validation method. The results are summarized in Table 1. From Table 1, there are two observations. First, using cross validation, the CV loss in training is close to the corresponding population risk. In addition, the performance of the proposed algorithm is close to that of the optimal \(\lambda\), while using clean regression in cross validation leads to worse performance. ## 4 Conclusion and Future Directions This paper studies the asymptotics of the two-stage adversarial training in a high-dimensional linear regression setup. Double descent is observed for the ridgeless regression case, and a better performance can be achieved via \(\mathcal{L}_{2}\) regularization. We also derive a shortcut cross validation formula for this two-stage method to simplify the computation of cross validation. The results in this paper can be extended in some directions. First, in the literature, e.g., (Ba et al., 2020), the double descent phenomenon is also related to two-layer neural networks. An interesting future direction is to extend the analysis in this paper to the neural network setup. Second, since the shortcut formula for cross validation is distribution specific and assumes \(n_{2}=\infty\), one may investigate a more general cross validation procedure or relax to the scenario with a finite \(n_{2}\). \begin{table} \begin{tabular}{c|c c c} \hline \hline \(\epsilon\) & 0.3 & 0.5 & 0.7 \\ \hline Cross validation (CV loss in training) & 0.8750 & 0.9663 & 1.0300 \\ Cross validation (corresponding population risk) & 0.8871 & 0.9751 & 1.0270 \\ Cross validation for clean regression (corresponding population risk) & 0.8873 & 1.0076 & 1.1140 \\ \hline Best \(\lambda\) (corresponding population risk) & 0.8741 & 0.9648 & 1.0185 \\ \hline \hline \end{tabular} \end{table} Table 1: Adversarial risks using cross validation and the best \(\lambda\).
2301.06550
Statistical Topology -- Distribution and Density Correlations of Winding Numbers in Chiral Systems
Statistical Topology emerged since topological aspects continue to gain importance in many areas of physics. It is most desirable to study topological invariants and their statistics in schematic models that facilitate the identification of universalities. Here, the statistics of winding numbers and of winding number densities are addressed. An introduction is given for readers with little background knowledge. Results that my collaborators and I obtained in two recent works on proper random matrix models are reviewed, avoiding a technically detailed discussion. A special focus is on the mapping of topological problems to spectral ones as well as on the first glimpse on universality.
Thomas Guhr
2023-01-16T18:28:45Z
http://arxiv.org/abs/2301.06550v1
# Statistical Topology -- Distribution and Density Correlations of Winding Numbers in Chiral Systems ###### Abstract Statistical Topology emerged since topological aspects continue to gain importance in many areas of physics. It is most desirable to study topological invariants and their statistics in schematic models that facilitate the identification of universalities. Here, the statistics of winding numbers and of winding number densities are addressed. An introduction is given for readers with little background knowledge. Results that my collaborators and I obtained in two recent works on proper random matrix models are reviewed, avoiding a technically detailed discussion. A special focus is on the mapping of topological problems to spectral ones as well as on the first glimpse on universality. _Dedicated to Giulio Casati on the Occasion of His 80th Birthday._ _Keywords:_ Statistical Topology, Random Matrices, Chirality, Winding Numbers ## 1 Introductory Remarks Statistical Topology aims at combining, in a generalizing form, topological questions appearing in physics with the powerful concepts of Random Matrix Theory (RMT) which is capable of describing spectral statistics in a huge number of systems, stemming from different areas of physics and beyond. The focus in this work is exclusively on winding numbers and associated statistical quantities studied in the framework of a random matrix model; other topological invariants which are also of considerable interest are not discussed. The long-term aim is to study the emergence of universalities whose identification and usage is always, in all branches of statistical physics, the most rewarding enterprise. I have two goals: First, I want to present an introduction to Statistical Topology, restricted to statistical problems which are related to winding numbers, for readers without pertinent background. Neither physics expert jargon nor heavy mathematics and mathematical physics terminology is used. Second, I want to review and summarize results that my collaborators and I obtained in two recent studies [1, 2]. We calculated for a chiral unitary random matrix model correlators of winding number densities and the winding number distribution. We also computed generators for these correlators in a chiral unitary and a chiral symplectic random matrix model. Furthermore, we made first steps to find universalities. The paper is organized as follows: in Section 2 the salient features of winding numbers and chiral symmetry are presented. In Section 3 a schematic model with the necessary mathematical setup is formulated. Results are reviewed in Section 4; discussion and conclusions are given in Section 5. ## 2 Winding Number and Chirality After briefly revisiting the occurrence of winding numbers in complex analysis in Section 2.1, the Kitaev chain is discussed in Section 2.2 and the statistics ansatz is motivated in Section 2.3. The research is put in the framework of Quantum Chromodynamics (QCD) and Condensed Matter Physics in Section 2.4, summarizing the corresponding remarks in Refs. [1, 2]. ### A Simple Topological Invariant in Complex Analysis The winding number is a topological concept encountered in complex analysis. Before discussing applications in physics, we briefly sketch the mathematical background.
The winding number \(W=W(z_{i})\) counts how many times a point \(z_{i}\) in the complex plane \(\mathbb{C}\) is encircled by a closed contour \(\gamma\), where counterclockwise or clockwise give a positive or a negative contribution, respectively. An example is shown in Figure 1; we have \(W(z_{1})=0\), \(W(z_{2})=1\) and \(W(z_{3})=2\). Obviously, the winding number \(W(z_{i})\) is a topological constant or, in physics terminology, a quantum number. It is invariant under all deformations of \(\gamma\) that do not cross the point \(z_{i}\) in question. In particular, the winding number is always a positive or negative integer, \(W\in\mathbb{Z}\). Figure 1: Left: Three points \(z_{i},\ i=1,2,3\) in the complex plane \(\mathbb{C}\) and a closed contour \(\gamma\). Right: A closed contour \(\Gamma\) encircling zeros and poles of a meromorphic function \(f(z)\). It may be written as the contour integral \[W(z_{i})=\frac{1}{2\pi i}\oint_{\gamma}\frac{d\zeta}{\zeta-z_{i}}. \tag{1}\] One easily establishes the link to Cauchy's argument principle: Consider a meromorphic function \(f(z)\) and a closed contour \(\Gamma\), encircling some zeros and poles of \(f(z)\) in the complex plane \(\mathbb{C}\) as shown in the example in Figure 1. The integral along this contour \(\Gamma\) over the logarithmic derivative of \(f(z)\) yields the difference of the number \(N_{Z}\) of zeros and the number \(N_{P}\) of poles, hence \[\frac{1}{2\pi i}\oint_{\Gamma}\frac{f^{\prime}(z)}{f(z)}dz=N_{Z}-N_{P}. \tag{2}\] The close relation to the winding number is found by making the change of variable \(\zeta=f(z)\) and accordingly of the contour, \(\Gamma\to f(\Gamma)\), \[N_{Z}-N_{P}=\frac{1}{2\pi i}\oint_{\Gamma}\frac{f^{\prime}(z)}{f(z)}dz=\frac{1}{2\pi i}\oint_{f(\Gamma)}\frac{d\zeta}{\zeta}=W(0). \tag{3}\] We conclude that \(N_{Z}-N_{P}\) is the winding number \(W(0)\) of the closed contour \(f(\Gamma)\) around the origin \(z=0\). As, from now on, all winding numbers will refer to the origin, we drop the argument and simply write \(W\). ### Kitaev Chain and Winding Number To illustrate the occurrence of topological invariants in physics, we look at the Kitaev chain [3, 4] as a prominent example. It consists of spinless electrons with nearest-neighbour hopping and superconductive pairing. The Hamiltonian reads, in a slightly simplified form sufficient for the present discussion, \[\hat{H}=\sum_{n}\biggl{(}t\left(\hat{c}_{n}^{\dagger}\hat{c}_{n+1}+\hat{c}_{n+1}^{\dagger}\hat{c}_{n}\right)+\mu\hat{c}_{n}^{\dagger}\hat{c}_{n}+\frac{\Delta}{2}\left(\hat{c}_{n+1}^{\dagger}\hat{c}_{n}^{\dagger}+\hat{c}_{n}\hat{c}_{n+1}\right)\biggr{)}\, \tag{4}\] where \(\hat{c}_{n}\) and \(\hat{c}_{n}^{\dagger}\) are annihilation and creation operators, respectively, at position \(n\) on the chain. Moreover, \(\mu\) and \(\Delta\) are chemical and pairing potentials and \(t\) the hopping strength. The Hamiltonian may be reformulated in terms of Majorana fermions whose number is twice that of the electrons. Remarkably, depending on the parameters, there are two possibilities, as schematically depicted in Figure 2. Figure 2: Kitaev chain, electrons as larger open circles (red), Majorana fermions as small dots (green) with the pairing indicated by connecting lines (green). Top: all Majorana fermions are paired, normal or trivial superconducting phase. Bottom: unpaired Majorana fermions at the ends of the chain, topological superconducting phase.
Either all Majorana fermions are paired or, at the ends of the chain, two of them are unpaired [5]. In the former case the chain is in a normal or trivial superconducting phase, in the latter in a topological one. This aspect deserves further discussion. In Fourier space, the Kitaev chain corresponds to the Bloch-Bogolyubov-de Gennes Hamiltonian matrix \(H(k)\). It is crucial that this \(2\times 2\) matrix satisfies chiral symmetry, \[\{H(k),{\cal C}\}=0\qquad\mbox{with}\qquad{\cal C}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}. \tag{5}\] The matrix \({\cal C}\) is the chiral operator in its proper basis and \(\{\,\ \}\) is the anticommutator. It is then possible to write the Hamiltonian matrix in the form \[H(k)=\vec{d}(k)\cdot\vec{\sigma}\qquad\mbox{with}\qquad\vec{d}(k)=(0,\Delta\sin k,\mu+2t\cos k). \tag{6}\] Hence, using the three-component vector \(\vec{\sigma}\) of the \(2\times 2\) Pauli matrices, \(H(k)\) is found to be a scalar product with all physics encoded in the vector \(\vec{d}(k)\) that depends on the wave number \(k\) and the three parameters \(\mu\), \(\Delta\) and \(t\). Importantly, the first component is zero, \(d_{x}=0\). This restriction to effectively only two dimensions can be shown to be a consequence of chiral symmetry (5). To see how topology enters, we notice that the vector \(\vec{d}(k)\) describes an ellipse with parameter \(k\) on the curve; \(\mu\) determines the position of its center, while \(\Delta\) and \(t\) determine its shape. In Figure 3 we depict \(\vec{d}(k)\) for fixed values of \(\mu=1\), \(\Delta=1\) and three different values \(t=0.25,0.5,1\) with the corresponding energy dispersion relation \(E(k)\). If the origin of the \((y,z)\) plane is included in the closed contour that the ellipse describes, its winding number is one, \(W=1\). If not, the winding number is zero, \(W=0\). These are two topologically separated scenarios, reflecting the distinctly different role of the Majorana fermions in the top and bottom parts of Figure 2. For \(W=0\), the superconducting phase is the normal or trivial one, while it is topological for \(W=1\). A special situation occurs if the ellipse just touches the origin: the band gap disappears, marking the phase transition point. ### Chirality, Random Winding Numbers and Modeling Aspects When studying such topological invariants in statistical physics, the closed contour might be a random quantity, for example, generated by a proper ensemble of Hamiltonians. In the case of the Kitaev chain, this ensemble may be realized by choosing the parameters \(\mu\), \(\Delta\) and \(t\) from probability distributions. Hence, the contour can be different for a particular choice, i.e. it becomes random, and the winding number \(W\) will be random as well. In general, the dynamics of a system under consideration, described by the Hamiltonian, and the distributions of its parameters will determine the distribution \(P(W)\). Are there universalities when comparing different systems? -- If yes, in which quantities do these universalities manifest? -- In the distributions \(P(W)\) on their original scales, or on some scales which make these systems comparable? -- These are the guiding questions for our research. Figure 3: Ellipses described by \(\vec{d}(k)\) (left) and corresponding dispersion relations \(E(k)\) (right). Top: \(t=0.25\), normal superconducting phase, \(W=0\). Center: \(t=0.5\), phase transition point. Bottom: \(t=1\), topological superconducting phase, \(W=1\). Courtesy of Nico Hahn.
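These questions can already be explored numerically for the Kitaev chain itself. The following minimal sketch (our code; the contour discretization and the parameter distributions are chosen purely for illustration) evaluates \(W\) as the accumulated phase of \(d_{z}(k)+i\,d_{y}(k)\) along one period and then samples it for random couplings:

```python
import numpy as np

def winding_number(mu, Delta, t, nk=2000):
    # Phase winding of the closed curve (d_y(k), d_z(k)) around the origin.
    k = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    h = (mu + 2.0 * t * np.cos(k)) + 1j * Delta * np.sin(k)   # d_z + i d_y
    dphi = np.angle(h / np.roll(h, 1))   # phase increments, closed by np.roll
    return int(round(dphi.sum() / (2.0 * np.pi)))

print(winding_number(1.0, 1.0, 0.25))   # 0: trivial phase (Figure 3, top)
print(winding_number(1.0, 1.0, 1.0))    # 1: topological phase (Figure 3, bottom)

# Random couplings make W itself random; the histogram is a toy P(W).
rng = np.random.default_rng(1)
Ws = [winding_number(rng.normal(1.0, 0.5), 1.0, rng.normal(0.5, 0.5))
      for _ in range(2000)]
print({w: Ws.count(w) for w in sorted(set(Ws))})
```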
Universalities are best identified in random schematic systems that only contain the most basic ingredients needed for the relevant physics, in the present case for the occurrence of winding numbers. Random Matrix Theory (RMT) [6, 7] is known to be a powerful concept in this spirit when studying universalities in spectral correlations as well as in the correlations of parametric level motion [8, 9]. The chiral symmetry (5) and thus the restriction to two dimensions are essential for the interpretation of the two superconducting phases in the Kitaev chain in terms of the winding number. Hence, when setting up a schematic random matrix model, we need to employ chirality. ### Connections to Quantum Chromodynamics and Condensed Matter Physics In Quantum Chromodynamics the chiral symmetry of the Dirac operator is broken spontaneously as well as explicitly by the quark masses. The chiral condensate is the order parameter of the phase transition that occurs at a high temperature and that restores chiral symmetry, which is related to the confinement-deconfinement transition. To investigate statistical properties of lattice gauge calculations, chiral RMT [10, 11, 12, 13, 14, 15, 16, 17] is remarkably successful. As in original RMT, presence or absence of time-reversal invariance combined with spin-rotation symmetries results in three classes of chiral random matrices: orthogonal, unitary, and symplectic. It was then shown that altogether ten RMT symmetry classes [18, 19, 20, 21, 22] exist, referred to as the tenfold way. The three original and the three chiral ones comprise six of these ten classes, the remaining four emerge when particle-hole symmetry is also considered, see Refs. [23, 24]. In condensed matter physics chiral symmetry is realized by sublattice symmetry (see early work in Ref. [25]) or as a combination of time reversal and particle-hole symmetry [24]. In the terminology of condensed matter physics, the winding number comes in as a characterization of translationally invariant one-dimensional chiral systems that are gapped at the center of the spectrum. The winding number is the integer topological index with respect to the bundle of negative-energy bands. A non-zero winding number \(W\) indicates the topologically nontrivial situation with \(|W|\) modes at each boundary [26, 27, 28, 29]. The winding number differs for different realizations of the disorder, i.e. it becomes random. Our research on the winding number was inspired by studies of systems with energy bands in two dimensions, allowing for a topological classification by the (first) Chern number. A random matrix model [30, 31] revealed a Gaussian distribution of Chern numbers with a universal covariance. ## 3 Formulation of the Problem and Mathematical Setup After introducing chiral random matrix ensembles with a parameter dependence in Section 3.1, the statistical quantities of interest are defined in Section 3.2. In Section 3.3 a crucial step for all of our mathematical investigations is presented, namely the mapping of the topological problem addressed to a spectral one, which greatly facilitates the computations. ### Chiral Random Matrix Ensembles with Parametric Dependence We derived results [1, 2] for the chiral unitary and the chiral symplectic symmetry classes labeled AIII and CII, respectively, see Ref. [18]. The latter case is mathematically much more demanding than the former, but not as involved as the orthogonal case, labeled BDI.
The cases BDI and CII describe time-reversal invariant systems, while this invariance does not exist in the case AIII. We refer to the matrices as Hamiltonians \(H\), as most of the present applications of winding numbers seem to stem from Condensed Matter Physics. The matrices \(H\) are complex Hermitean or quaternion real, i.e. self-adjoint, with even dimension \(\beta N\times\beta N\) where we employ the Dyson indices \(\beta=2\) and \(\beta=4\) for AIII and CII. Chiral symmetry manifests itself in the relation \[\{\mathcal{C},H\}=0 \tag{7}\] where in the chiral basis \[\mathcal{C}=\begin{bmatrix}\mathds{1}_{\beta N/2}&0\\ 0&-\mathds{1}_{\beta N/2}\end{bmatrix}. \tag{8}\] The Hamiltonians thus take the block off-diagonal form \[H=\begin{bmatrix}0&K\\ K^{\dagger}&0\end{bmatrix}\, \tag{9}\] where the \(\beta N/2\times\beta N/2\) matrices \(K\) have no further symmetries. We draw the matrices \(H\) from the chiral Gaussian Unitary and Symplectic Ensembles (chGUE, chGSE), respectively. To study questions of topology, we give these random matrices a parametric dependence \(K=K(p)\) and thus \(H=H(p)\), where the real variable \(p\) lies on the unit circle. The winding number corresponding to these Hamiltonians is then [32, 33] \[W=\frac{1}{2\pi i}\int\limits_{0}^{2\pi}w(p)\,dp\, \tag{10}\] with the winding number density \[w(p)=\frac{d}{dp}\ln\det K(p)=\frac{1}{\det K(p)}\frac{d}{dp}\det K(p). \tag{11}\] Cauchy's argument principle applies to the integral (10), provided \(\det K\) is a nonzero analytic function of \(p\), see Section 2.1 and particularly Eq. (3). To produce explicit results, we choose a particular realization of the parameter dependence. With two smooth and \(2\pi\)-periodic scalar functions \(a(p)\) and \(b(p)\), we set \[K(p)=a(p)K_{1}+b(p)K_{2}\, \tag{12}\] where the matrices \(K_{1}\) and \(K_{2}\) have dimensions \(\beta N/2\times\beta N/2\). The associated Hamiltonians \[H(p)=a(p)H_{1}+b(p)H_{2}\qquad\mbox{with}\qquad H_{m}=\begin{bmatrix}0&K_{m}\\ K_{m}^{\dagger}&0\end{bmatrix}\,\ m=1,2\, \tag{13}\] define parametric combinations of either two chGUE's or two chGSE's. Averages over these combined ensembles have to be performed. It is convenient to introduce the vector \[v(p)=(a(p),b(p))\ \in\mathbb{C}^{2}. \tag{14}\] Time-reversal invariance imposes the condition \(v^{*}(p)=v(-p)\) in the chiral symplectic case CII. ### Statistical Quantities Considered Considering \(k\) different points \(p_{i},\ i=1,\ldots,k\), on the unit circle, we are interested in the \(k\)-point correlators of winding number densities \[C_{k}^{(\beta,N)}(p_{1},\ldots,p_{k})=\langle w(p_{1})\cdots w(p_{k})\rangle. \tag{15}\] The precise meaning of the angular brackets indicating the ensemble average will be given later on. In the chiral unitary case AIII, we computed these correlators directly [1], see Section 4.1. As, first, this approach becomes forbiddingly complicated in the chiral symplectic case CII, and, second, results in cumbersome expressions for larger \(k\), we calculated the generators \[Z_{k|l}^{(\beta,N)}(q,p)=\left\langle\frac{\prod_{j=1}^{l}\det K(p_{j})}{\prod_{j=1}^{k}\det K(q_{j})}\right\rangle \tag{16}\] for two sets of variables \(p_{1},\ldots,p_{l}\) and \(q_{1},\ldots,q_{k}\) in Ref. [2], see Section 4.4. Only the case \(k=l\) is needed, but the more general definition (16) for \(k\) and \(l\) being different has technical advantages. We notice that \(k\) and \(l\) are the numbers of determinants in denominator and numerator, respectively.
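Before computing such averages analytically, the definitions (10)-(12) are easy to check by Monte Carlo. A minimal sketch for \(\beta=2\) (our code; the integral (10) is replaced by a discrete phase sum, and the simple choice \(a(p)=\cos p\), \(b(p)=\sin p\), used again in Section 4.1, serves as an example):

```python
import numpy as np

def sample_winding_number(N, rng, nk=2000):
    # One realization of W for K(p) = cos(p) K1 + sin(p) K2 with complex
    # Gaussian K1, K2 (beta = 2); W is the phase winding of det K(p)
    # around the origin, cf. Eqs. (10) and (11).
    K1 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    K2 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    p = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    dets = np.array([np.linalg.det(np.cos(q) * K1 + np.sin(q) * K2) for q in p])
    dphi = np.angle(dets / np.roll(dets, 1))
    return int(round(dphi.sum() / (2.0 * np.pi)))

rng = np.random.default_rng(2)
samples = [sample_winding_number(6, rng) for _ in range(500)]
print(np.mean(samples), np.var(samples))   # integer-valued W, mean close to zero
```

The integer-valued samples of \(W\) obtained this way can be compared with the exact distribution derived in Section 4.2 below.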
The \(k\)-fold derivative \[C_{k}^{(\beta,N)}(p_{1},\ldots,p_{k})=\frac{\partial^{k}}{\prod_{j=1}^{k}\partial p_{j}}Z_{k|k}^{(\beta,N)}(q,p)\Bigg{|}_{q=p} \tag{17}\] of the generator (16) for \(k=l\) at \(q=p\) yields the correlator (15). Anticipating the later discussion, we emphasize that the generators for both Dyson indices \(\beta=2,4\) will exhibit a remarkably clear structure [2], which is an important reason to address them here. It is worth mentioning that the correlators (15) and the generators (16) are very different from those for the parametric level motion considered in Refs. [8, 9]. Furthermore, we also computed the distribution of winding numbers \(P(W)\) in the chiral unitary case AIII [1], see Section 4.2. ### Mapping a Topological to a Spectral Problem At first sight, the computation of the correlators (15) and the generators (16) appears to be a formidable task, requiring the development of completely new techniques in RMT. Luckily, one can establish a link between the topological problem set up above and spectral problems in RMT for which a wealth of literature exists. This amounts to a tremendous simplification, even though the calculations to be carried out are still involved and quite demanding, particularly in the chiral symplectic case. The key observation is that a combination of the two matrices \(K_{1}\) and \(K_{2}\) in Eq. (12) encodes all the statistical information needed. Pulling out \(K_{1}\), say, one has \[K(p)=a(p)K_{1}+b(p)K_{2}=b(p)K_{1}\left(\kappa(p)\mathds{1}_{\beta N/2}+K_{1}^{-1}K_{2}\right) \tag{18}\] with the ratio \[\kappa(p)=\frac{a(p)}{b(p)}. \tag{19}\] Since the winding number density (11) is the derivative of the logarithm \[\ln\det K(p)=\ln\det K_{1}+\beta N\ln b(p)+\ln\det\left(\kappa(p)\mathds{1}_{\beta N/2}+K_{1}^{-1}K_{2}\right)\, \tag{20}\] the first term \(\ln\det K_{1}\) does not contribute and, remarkably, only the combination \(Y=K_{1}^{-1}K_{2}\) is relevant. Using Eq. (18) the generators acquire the form \[Z_{k|k}^{(\beta,N)}(q,p)=\left(\prod_{j=1}^{k}\frac{b(p_{j})}{b(q_{j})}\right)^{\beta N}\left\langle\prod_{j=1}^{k}\frac{\det(\kappa(p_{j})\mathds{1}_{\beta N/2}+Y)}{\det(\kappa(q_{j})\mathds{1}_{\beta N/2}+Y)}\right\rangle\, \tag{21}\] which as well only contains the matrix \(Y\). The task to be solved is the derivation of the probability density for the random matrices \(Y=K_{1}^{-1}K_{2}\) from the independent Gaussian distributions for the random matrices \(K_{1}\) and \(K_{2}\). Once again luckily, the results are known as spherical [34, 35] ensembles and their probability densities read explicitly \[\widetilde{G}^{(\beta)}(Y)=\frac{1}{\pi^{\beta N^{2}/2}}\prod_{j=1}^{N}\frac{(\beta(N+j)/2-1)!}{(\beta j/2-1)!}\ \frac{1}{\det^{2N}\left(\mathds{1}_{\beta N/2}+YY^{\dagger}\right)}. \tag{22}\] These ensembles are referred to as complex spherical and quaternion spherical for \(\beta=2,4\). In the complex case, the probability density (22) can be reduced to a joint probability density of the \(N\) complex eigenvalues \(z=\mbox{diag}\,(z_{1},\ldots,z_{N})\) of \(Y\) and reads \[G^{(2)}(z)=\frac{1}{c^{(2)}}|\Delta_{N}(z)|^{2}\prod_{j=1}^{N}\frac{1}{(1+|z_{j}|^{2})^{N+1}} \tag{23}\] with the Vandermonde determinant \[\Delta_{N}(z)=\prod_{j<l}(z_{j}-z_{l}).
\tag{24}\] In the quaternion case, however, each eigenvalue \(z_{j}\) of \(Y\) has a complex conjugate \(z_{j}^{*}\) which is also an eigenvalue. The corresponding joint probability density of the eigenvalues \(z=\mbox{diag}\,(z_{1},z_{1}^{*},z_{2},z_{2}^{*},\ldots,z_{N},z_{N}^{*})\) is given by \[G^{(4)}(z)=\frac{1}{c^{(4)}}\Delta_{2N}(z)\prod_{j=1}^{N}\frac{z_{j}-z_{j}^{*}}{(1+|z_{j}|^{2})^{2N+2}}. \tag{25}\] The normalization constants are \[c^{(\beta)}=(\beta\pi/2)^{N}N!\prod_{j=1}^{N}\mbox{B}(\beta j/2,\beta(N+1-j)/2)\, \tag{26}\] where \(\mbox{B}(x,y)\) is Euler's Beta function. The question arises whether the integrals to be carried out are well-defined for \(\beta=4\); the answer is affirmative [2]. Hence, the ensemble average over a function \(f(z)\) to be performed amounts to carrying out the integral \[\langle f(z)\rangle=\int\limits_{\mathbb{C}}d[z_{1}]\cdots\int\limits_{\mathbb{C}}d[z_{N}]\,G^{(\beta)}(z)\,f(z) \tag{27}\] over all complex eigenvalues. Hence, by reducing the two chiral ensembles to a single spherical one for either \(\beta\), all information of the topological problem is contained in the determinants \(\det(\kappa(p)\mathds{1}_{\beta N/2}+Y)\) or their derivatives. Most advantageously, this is equivalent to a spectral problem where \(Y\) and \(\kappa(p)\) formally play the roles of a (complex or quaternion) "Hamiltonian" and of the corresponding "energy", respectively. ## 4 Results The correlators for the unitary case are addressed in Section 4.1, the distribution is given in Section 4.2. Aspects of universality are discussed in Section 4.3. The generators in the chiral unitary and symplectic cases are presented in Section 4.4. ### Winding Number Correlators in the Chiral Unitary Case In Ref. [1], we calculated the winding number correlators \(C_{k}^{(2,N)}(p_{1},\ldots,p_{k})\) as defined in Eq. (15) in the unitary case directly. We chose \[a(p)=\cos p\qquad\mbox{and}\qquad b(p)=\sin p. \tag{28}\] Using Eqs. (11) and (20) as well as the complex eigenvalues of \(Y\), one has \[w(p)=N\cot p+y(p)\qquad\mbox{with}\qquad y(p)=-\frac{1}{\sin^{2}p}\sum_{n=1}^{N}\frac{1}{\cot p+z_{n}}. \tag{29}\] Only the \(k\)-fold products of \(y(p)\) have to be ensemble averaged with the joint probability density (23); the presence of the inconvenient term \(N\cot p\) implies that the correlator \(C_{k}^{(2,N)}(p_{1},\ldots,p_{k})\) of the \(k\) winding number densities \(w(p_{j})\) becomes a combinatorial sum of the \(y(p_{j})\) correlators. Furthermore, the latter themselves turn out to be rather involved combinatorial expressions. Eventually, \(C_{k}^{(2,N)}(p_{1},\ldots,p_{k})\) is found to be a combinatorial sum of determinants with the entries \[L_{nml}(q_{l})=\frac{(-1)^{m-n}\pi}{q_{l}^{m-n+1}}\mathrm{B}(m,N-m+1)\begin{cases}u_{m}(N,q_{l}^{2})&m\geq n\\ -v_{m}(N,q_{l}^{2})&m<n\end{cases}\, \tag{30}\] with the properly normalized incomplete Beta functions \[u_{m}(N,q_{l}^{2}) = \frac{2}{\mathrm{B}(m,N-m+1)}\int\limits_{0}^{q}d\rho\frac{\rho^{2m-1}}{(1+\rho^{2})^{N+1}}\] \[v_{m}(N,q_{l}^{2}) = \frac{2}{\mathrm{B}(m,N-m+1)}\int\limits_{q}^{\infty}d\rho\frac{\rho^{2m-1}}{(1+\rho^{2})^{N+1}} \tag{31}\] that satisfy \(u_{m}(N,q_{l}^{2})+v_{m}(N,q_{l}^{2})=1\). Even though \(\mathrm{B}(m,N-m+1)\) drops out in the \(L_{nml}(q_{l})\), this normalization has advantages, see Ref. [1]. The first two correlators read \[C_{1}^{(2,N)}(p_{1}) = 0\] \[C_{2}^{(2,N)}(p_{1},p_{2}) = -\frac{1-\cos^{2N}\left(p_{1}-p_{2}\right)}{1-\cos^{2}\left(p_{1}-p_{2}\right)}.
\tag{32}\] The vanishing of the averaged winding number density, surprising at first sight, is actually quite natural, as the winding number \(W\) must have a symmetric distribution with vanishing first moment. The integral of \(C_{1}^{(2,N)}(p_{1})\) over \(p_{1}\) is this first moment. ### Winding Number Distribution In Ref. [1], we also computed the winding number distribution \(P(W)\) in the unitary case for the choice (28). Using Cauchy's argument principle, we derive the discrete probability distribution \[P(W)=r\left(\frac{W+N}{2}\right)\binom{N}{(W+N)/2} \tag{33}\] on the integers \(W\) between \(-N\) and \(N\) for arbitrary, finite matrix dimension \(N\). Here, \(r(m)\) is the probability that \(m\) eigenvalues are inside the unit circle and the remaining ones outside, which may be written as \[r(m)=\int\limits_{|z_{1}|<1}d[z_{1}]\cdots\int\limits_{|z_{m}|<1}d[z_{m}]\int\limits_{|z_{m+1}|>1}d[z_{m+1}]\cdots\int\limits_{|z_{N}|>1}d[z_{N}]\,G^{(2)}(z). \tag{34}\] Doing the integrals yields \[r(m)=\frac{1}{N!}\sum\limits_{\omega\in\mathbb{S}_{N}}\left(\prod\limits_{i=1}^{m}u_{\omega(i)}(N,1)\right)\left(\prod\limits_{i=m+1}^{N}v_{\omega(i)}(N,1)\right), \tag{35}\] in terms of the functions (31). The combinatorial factor in formula (33) takes into account the permutation invariance of the eigenvalues inside, respectively outside, the unit circle. The sum runs over all permutations; \(\mathbb{S}_{N}\) is the permutation group. ### Aspects of Universality The quest for universality is twofold: first, there is the question whether the same statistical effects (distributions, scalings, etc.) can be identified in empirical or experimental data of different physical systems. Second, there is the theoretical and mathematical side concerned with often schematic models and their ability to describe or even predict the results from data analysis. In the case of spectral correlations, universal statistics is found on the local scale of the mean level spacing, i.e. universalities are revealed after a rescaling of the energies, referred to as unfolding. The unfolded correlators of, on the one hand, RMT for infinite level number and of, on the other hand, numerous physical systems of very different nature with a large number of levels coincide, see the discussion in Refs. [7, 6]. The theoretical and mathematical challenge is non-trivial as it amounts to showing that a most general class of probability densities for the random matrices yields after unfolding the same statistical quantities. Put differently, it suffices to consider Gaussians, because the resulting statistics is, always after unfolding, universal. In the case of statistical topology, universality is of equally high importance, but appears to be considerably more complicated. Already on the theoretical and mathematical side there are several natural questions to be posed: First, is there an unfolding scale comparable to the local mean level spacing and how is it related to the scale of the level velocity as in the parametric correlations [8, 9, 36]? -- Second, which probability densities for the random matrices yield in the model set up in Section 3.1 the same statistics? -- Third, what are the conditions on the functions \(a(p)\) and \(b(p)\) or, more precisely, the combined conditions on these functions and the probability densities that yield in the model universal statistics? -- Fourth, is it possible to find universal statistics for models more general than the one of Section 3.1? In Ref.
[1], we started addressing these issues in the unitary case for the choice (28). Guided by unfolding in spectral statistics, we rescaled the arguments \(p_{i}\) in the correlation functions \(C_{k}^{(2,N)}(p_{1},\ldots,p_{k})\) according to \[\psi_{i}=N^{\alpha}p_{i}. \tag{36}\] The power \(\alpha\) should be positive, because we want to zoom into the parametric dependence in the limit \(N\to\infty\). Consider the two-point function (32) and the limit \[\lim_{N\to\infty}C_{2}^{(2,N)}\left(\frac{\psi_{1}}{N^{\alpha}},\frac{\psi_{2}}{N^{\alpha}}\right)\frac{d\psi_{1}}{N^{\alpha}}\frac{d\psi_{2}}{N^{\alpha}}=f_{2}^{(\alpha)}(\psi_{1},\psi_{2})d\psi_{1}d\psi_{2} \tag{37}\] defining the function \(f_{2}^{(\alpha)}\), if it exists. A straightforward calculation yields \[f_{2}^{(\alpha)}(\psi_{1},\psi_{2})=\begin{cases}-\frac{1}{\left(\psi_{1}-\psi_{2}\right)^{2}}&\alpha<\frac{1}{2}\\ -\frac{1-\exp(-(\psi_{1}-\psi_{2})^{2})}{\left(\psi_{1}-\psi_{2}\right)^{2}}&\alpha=\frac{1}{2}\\ 0&\alpha>\frac{1}{2}\end{cases}. \tag{38}\] Figure 4: Unfolded two-point function after the rescaling (36) for different values of \(N\) (blue). In (a) we used \(N=5,10,20,50,100,150,200,300,1000\) and \(\alpha=1/6\), in (b) \(N=2,5,7,10,15,20,50,100\) and \(\alpha=1/2\). For comparison the limit (37) (red). Taken from Ref. [1]. We notice \(C_{2}^{(2,N)}(p_{1},p_{1})=-1\), see Eq. (32), implying that \(\psi_{1}\neq\psi_{2}\) when taking the limit for arbitrary \(\alpha\). The result (38) reveals different regimes; the one for \(\alpha=1/2\) involves the same scale as in Refs. [8, 9]. Figure 4 shows results for two values of \(\alpha\) and various values of \(N\); the unfolded two-point function approaches the limit (38) when \(N\) increases. We conjectured that the function \(f_{2}^{(\alpha)}(\psi_{1},\psi_{2})\) is universal [1]. In Ref. [1], we also showed that the winding number distribution (33) becomes Gaussian for large \(N\). More precisely, its second moment behaves like \(\langle W^{2}\rangle\sim\sqrt{N}\), suggesting an unfolding of the form \(W/N^{1/4}\), i.e. different from the rescaling above. It then follows that \(P(W)\) approaches a Gaussian with variance \(2\sqrt{N/\pi}\) for large \(N\). ### Generators in the Chiral Unitary and Symplectic Cases We computed the generators (16), respectively (21), exactly for \(\beta=2\) and \(\beta=4\) in Ref. [2]. To this end, we used the method put forward some years ago in Refs. [37, 38]. It identifies and employs, in ordinary space, supersymmetric structures deeply rooted in the ensemble averages. As there is no mapping performed of the ensemble averages to superspace, the method is often referred to, jokingly, but not deceptively, as "supersymmetry without supersymmetry". In the chiral unitary case \(\beta=2\), we found a ratio of two determinants, \[Z_{k|k}^{(2,N)}(q,p)=\frac{\det\left[\frac{1}{v^{T}(q_{m})\sigma_{2}v(p_{n})}\left(\frac{v^{\dagger}(q_{m})v(p_{n})}{v^{\dagger}(q_{m})v(q_{m})}\right)^{N}\right]_{1\leq m,n\leq k}}{\det\left[\frac{1}{v^{T}(q_{m})\sigma_{2}v(p_{n})}\right]_{1\leq m,n\leq k}}\, \tag{39}\] where \(\sigma_{2}\) is the second \(2\times 2\) Pauli matrix and \(v(p_{n})\) the vector defined in Eq. (14).
In the chiral symplectic case \(\beta=4\), we arrived at a ratio of a Pfaffian and a determinant, \[Z_{k|k}^{(4,N)}(q,p)=\frac{\mathrm{Pf}\begin{bmatrix}\widehat{\mathrm{K}}_{1}(p_{m},p_{n})&\widehat{\mathrm{K}}_{2}(p_{m},q_{n})\\ -\widehat{\mathrm{K}}_{2}(p_{n},q_{m})&\widehat{\mathrm{K}}_{3}(q_{m},q_{n})\end{bmatrix}_{1\leq m,n\leq k}}{\det\left[\frac{1}{iv^{T}(q_{m})\sigma_{2}v(p_{n})}\right]_{1\leq m,n\leq k}}. \tag{40}\] The three kernel functions \(\widehat{\mathrm{K}}_{l}(p_{m},p_{n})\), \(l=1,2,3\), are quite complicated and can be found explicitly in Ref. [2]. Considering the complexity of the problem and of its mathematical structure, these are remarkably compact results, even in the chiral symplectic case. This compactness is the reason why we give these results here. Their form is intimately connected with the mapping of the topological to a spectral problem discussed in Section 3.3, because such determinant and Pfaffian expressions are ubiquitous for the generators in spectral statistics. Importantly, this carries over, at least for the model considered, to the generators for the correlators of winding number densities. ## 5 Discussion and Conclusions Statistical Topology is an emerging branch in statistical physics, with connections to various branches of mathematics. It is triggered by the identification of topological questions in many areas of physics, ranging from quantum mechanics, quantum field theory and semiclassics to QCD and Condensed Matter Physics. First, I tried to give an introduction to winding number statistics for newcomers who do not have any background, avoiding expert jargon and not burying the key ideas under the advanced terminology developed in mathematics and mathematical physics. Second, I reviewed results that my collaborators and I obtained in two recent works. We studied winding numbers and associated statistical quantities in a random matrix model. There are, of course, also other topological invariants of considerable interest in physics, most notably the Chern numbers. I presented our first, probably awkward, steps to look at universal behavior. In my opinion, the most fascinating challenge for the future is the further study of universality in statistical topology, more precisely, of both of its aspects, the experimental-empirical as well as the theoretical-mathematical one. ## Acknowledgements I thank Omri Gat, Nico Hahn, Mario Kieburg and Daniel Waltner, my collaborators of Refs. [1, 2]. I deeply regret that I can no longer thank Petr Braun, who passed away in late 2020. I am grateful to Nico Hahn for Figure 3. This work was funded by the German-Israeli Foundation within the project _Statistical Topology of Complex Quantum Systems_, grant number GIF I-1499-303.7/2019.
2310.10473
Spin Splitting and Disorder in HgTe-Based Massless Dirac Fermion Landau Levels
An experimental study of Landau levels (LLs) in a system of two-dimensional massless Dirac fermions based on a critical thickness HgTe quantum well has been carried out. The magnetotransport and the capacitive response have been investigated simultaneously. It is shown that the formation of Shubnikov-de Haas (SdH) oscillations associated with odd v filling factors occurs in a magnetic field whose strength grows monotonically with v. This behavior is consistent with calculations of the electron spectrum, which predicts a decrease in cyclotron gaps with increasing v. Oscillations with even filling factors, corresponding to spin gaps, behave less trivially. First, the SdH oscillations with filling factors of 4 and higher are resolved in a magnetic field that is 2-2.5 times smaller than the field required to resolve neighboring SdH oscillations with odd filling factors of 3 and higher. This indicates a significant increase in the size of the spin gap caused by an interface inversion asymmetry (IIA) leading to Dirac cone splitting in a zero magnetic field. Using the spin splitting value gamma as a fitting parameter, we obtained the best agreement between experimental data and calculations at gamma=1.5 meV. Next, spin splitting for the zeroth and first LLs is observed in 2-3 times stronger magnetic fields than for the other levels, indicating an increase in disorder near the Dirac point, due to the lack of screening.
D. A. Kozlov, J. Ziegler, N. N. Mikhailov, Z. D. Kvon, D. Weiss
2023-10-16T14:53:02Z
http://arxiv.org/abs/2310.10473v1
# Spin Splitting and Disorder in HgTe-Based Massless Dirac Fermion Landau Levels ###### Abstract An experimental study of Landau levels (LLs) in a system of two-dimensional massless Dirac fermions based on a critical thickness HgTe quantum well has been carried out. The magneto-transport and the capacitive response have been investigated simultaneously. It is shown that the formation of Shubnikov-de Haas (SdH) oscillations associated with odd \(\nu\) filling factors occurs in a magnetic field whose strength grows monotonically with \(\nu\). This behavior is consistent with calculations of the electron spectrum, which predict a decrease in cyclotron gaps with increasing \(\nu\). Oscillations with even filling factors, corresponding to spin gaps, behave less trivially. First, the SdH oscillations with filling factors of 4 and higher are resolved in a magnetic field that is 2-2.5 times smaller than the field required to resolve neighboring SdH oscillations with odd filling factors of 3 and higher. This indicates a significant increase in the size of the spin gap caused by an interface inversion asymmetry (IIA) leading to Dirac cone splitting in a zero magnetic field. Using the spin splitting value \(\gamma\) as a fitting parameter, we obtained the best agreement between experimental data and calculations at \(\gamma=1.5\) meV. Next, spin splitting for the zeroth and first LLs is observed in 2-3 times stronger magnetic fields than for the other levels, indicating an increase in disorder near the Dirac point, due to the lack of screening. Massless Dirac fermion systems, similar to graphene, can be realized in HgTe quantum wells with a critical thickness of 6.3...6.6 nm. At this thickness the energy spectrum changes from direct to inverted [1]. Such systems are of interest because of their linear dispersion law and their strong spin-orbit interaction. In these systems classical transport [2; 3; 4; 5; 6], quantum transport [1; 3; 7; 8; 9; 10; 11; 12], and cyclotron resonance [13; 14; 15; 16; 17] were investigated; the density of states (DoS) was measured [18]; and the quantization of the Faraday rotation was discovered [19]. Although numerous methods have been applied and diverse results have been obtained, with a considerable body of work dedicated to band structure and Landau level (LL) calculations [1; 14; 15; 20; 21; 22; 23; 24; 25], the experimental data currently available on the characteristics of these levels remains remarkably incomplete. On the one hand, the study of cyclotron resonance has made it possible to confirm the existence of the Dirac spectrum and the non-equidistance of LLs [13; 14; 16]. On the other hand, the cyclotron resonance obeys selection rules prohibiting spin flips when the orbital number changes by one. It is, therefore, insensitive to Zeeman splitting. Magnetotransport measurements are not subject to these limitations. However, previous research has primarily focused on the quantum Hall effect (QHE) in high magnetic fields [8; 10] or on the hole side, where Fermi level pinning and ultra-long QHE plateaus for light holes are observed due to the presence of side valleys with heavy holes [9; 12]. At the same time, the region of weak magnetic fields, where the features of the Dirac spectrum should be most evident, is virtually unexplored experimentally. The charge neutrality point is of particular interest. In quantizing magnetic fields, the zero LL is a characteristic fingerprint of Dirac fermion systems [26; 27].
In graphene, due to the presence of two valleys and spin, it is 4-fold degenerate, but because of interaction effects the degeneracy can be lifted [28]. However, due to the small magnitude of the interaction effects and the small value of the g-factor, the degeneracy lifting is observed only at rather high magnetic fields. In the QHE regime, the conductivity of graphene at the Dirac point can be due to both the weakly conducting bulk and the counterpropagating dissipative edge states [29]. A different behavior can be expected in HgTe Dirac fermion quantum wells because of the large Zeeman splitting, which could open a gap at the Dirac point. The QHE near the charge neutrality point has been studied in detail in semi-metallic HgTe QWs [30; 31] and, to a lesser extent, in QWs of critical thickness [11; 32] or close to it [33], but with a focus on strong magnetic fields or mesoscopic samples. Thus, the behavior of the system of massless Dirac fermions at the charge neutrality point at the transition from weak to quantizing magnetic fields remains poorly understood. The present work focuses on the study of Landau levels in a (013)-oriented 6.6 nm thick HgTe quantum well (see Fig. 1(a)) in quantizing but relatively weak (less than 3 Tesla) magnetic fields at temperatures of 1.5-10 K. We performed combined magnetotransport and capacitance measurements. Due to the high sensitivity of the transport measurements and the possibility to measure the DoS directly by the capacitive technique, we were able to study the zero LL in detail and to demonstrate its splitting in a strong field. The samples investigated were 10-terminal gated Hall bars with a size of 450\(\times\)50 \(\mu\)m and a total capacitance of 36 pF. Magnetotransport measurements were performed using a 4-terminal scheme with lock-in detection at a frequency of 4-12 Hz and a drive current of 10-100 nA, which prevents heating effects. The capacitive measurements were performed according to a scheme similar to that described in [18; 34]: a small oscillating voltage \(V_{\rm ac}\) was applied to the gate at frequency \(f\) against a constant bias \(V_{g}\), while the quantum well was at zero potential. The magnitude of the AC current, phase-shifted by 90 degrees with respect to the AC voltage, reflected the capacitance of the structure. The parameters \(V_{\rm ac}\), \(f\) and \(T\) were varied as a function of the magnetic field in order to eliminate both resistive effects and the effect of DoS smearing by temperature and drive voltage, while achieving the highest possible signal-to-noise ratio. The parameters used were \(V_{\rm ac}=20\ldots 100\) mV, \(f=83\ldots 4\) Hz, \(T=1.5\ldots 10\) K, where the first number of the respective range corresponds to zero magnetic field and the second one to \(B=3\) T. The measured dependencies of \(\rho_{\rm xx}(V_{g})\) and \(C(V_{g})\) are shown in Fig. 1(c). The Dirac point (DP) is located at \(V_{g}=-0.18\) V near the maximum of \(\rho_{\rm xx}\) and the minimum of \(C\). As one moves away from the DP to the right, the electron density in the system increases and a smooth decrease in resistance and increase in capacitance is observed. From the measured capacitance and following the previously developed approach [18], we extracted the dependencies of the Fermi energy \(E_{\rm F}\) (Fig. 1(d)) and the DoS (Fig. 1(e)) on the gate voltage. On the electron side, we can clearly see that the DoS depends almost linearly on the Fermi energy. The Fermi energy reaches a value of 80 meV at \(V_{g}=2\) V.
On the left side of the DP, Dirac holes coexist with heavy holes [18; 9; 12]. The presence of heavy holes leads to a rapid increase of the DoS, and thus to the pinning of the Fermi level at about \(-25\) meV below the DP (Fig. 1(d)), and causes the measured capacitance to saturate at the value of the geometric capacitance (Fig. 1(c)). Figure 2 shows the dependencies of the conductivity tensor components \(\sigma_{\rm xx}(V_{g})\) and \(\sigma_{\rm xy}(V_{g})\) in magnetic fields up to 1.5 T, calculated by tensor inversion from the measured \(\rho_{xx}\) and \(\rho_{xy}\) data.

Figure 1: (a) Cross section of the investigated structure. (b) Schematic band structure of the system. The side valleys in the valence band are located \(\approx 25\) meV below the Dirac point. (c) Gate voltage dependence of the longitudinal resistance \(\rho_{\rm xx}\) (red line, left axis) and the capacitance (blue line, right axis) at \(B=0\) and \(T=1.5\) K. (d) and (e) Fermi energy vs. gate voltage (d) and the DoS vs. Fermi energy (e) extracted from the capacitance data.

Figure 2: (a) Gate voltage dependence of the longitudinal conductivity \(\sigma_{\rm xx}\), measured for \(B=0\), \(0.3\,\ldots 1.5\) T and \(T=1.5\) K. For clarity, curves measured at non-zero magnetic fields are multiplied by the factor shown in parentheses. The minima and the corresponding filling factors are indicated by the numbered vertical arrows. (b) Gate voltage dependence of the Hall conductivity \(\sigma_{\rm xy}\), measured for \(B=0.1\ldots 1.5\) T in steps of 0.1 T. In both panels "DP" denotes the Dirac point.

These dependencies show a transition from classical transport to the QHE regime, accompanied by the formation of a series of distinct minima in \(\sigma_{\rm xx}\) and plateaus in \(\sigma_{\rm xy}\). The most striking effect is the strong asymmetry between electrons and holes: first, the QHE on the hole side is formed in a magnetic field of only 0.4 T (and even at 0.15 T when measured at lower temperatures [9]), while on the electron side it requires twice that field. Second, the plateaus on the hole side are anomalously long, and from 0.7 to 1.5 T only one plateau, with fixed \(\sigma_{\rm xy}=-e^{2}/h\), is observed. Both features are explained by the coexistence of light and heavy holes [9; 12; 35]. The anomalous length of the plateau is related to the exchange between light and heavy holes, implementing the QHE reservoir model. A similar effect was later discovered in graphene on some substrates and is known as the "giant QHE" [36]. The second feature, i.e., the formation of the plateau at ultra-low fields, is associated with an effective screening of the random potential by heavy holes, leading to a significant increase of the quantum lifetime. The QHE on the electron side in Fig. 2(b) shows up to five electronic plateaus of \(\sigma_{\rm xy}\), which become blurred at higher filling factors. This behavior reflects a peculiarity of Dirac fermion systems, where the distance between neighboring LLs decreases with increasing \(\nu\).
It can be explained by the magnetic field dependence of the LLs in the simplest approximation, fitted to the 4-zone Bernevig-Hughes-Zhang (BHZ) Hamiltonian [1; 22] \[E_{n}^{\pm}(B)=\alpha\sqrt{nB}\pm g\mu_{B}B/2, \tag{1}\] where \(n=0,1,2\ldots\) is the Landau quantum number, \(\alpha=25\) meV\(\cdot\)T\({}^{-1/2}\) is a numerical coefficient, \(g=50\) is the effective g-factor, \(\mu_{B}\) is the Bohr magneton, with \(g\mu_{B}=3.5\) meV\(\cdot\)T\({}^{-1}\), and \(\pm\) denotes the two spin orientations. From formula (1) it can be seen that in weak magnetic fields the cyclotron gaps, characterized by odd filling factors, are larger than the spin-split gaps (even filling factors), with the largest gap at filling factor \(\nu=1\). These calculations agree with the experimental \(\sigma_{\rm xx}(V_{g})\) data at \(B=1.5\) T (Fig. 2(a)): the deepest conductivity minima are observed at \(\nu=1\) and \(\nu=3\), while the minima at \(\nu=2,4\), and 6 are significantly less deep. At higher filling factors, the amplitudes of the conductivity oscillations associated with even and odd filling factors become equal. For the most accurate calculation it is necessary to use the more complicated 6- or even 8-band Kane Hamiltonian [20; 23]. However, on the electron side and in weak magnetic fields (below 2 T), all three approaches give very similar results. Let us analyze the behavior of the measured conductivity oscillations in the limit of weak magnetic fields. Fig. 3(b) shows a two-dimensional map of the second derivative of the conductivity with respect to the gate voltage, \(d^{2}\sigma_{\rm xx}(B,V_{g})/dV_{g}^{2}\). From the depth of the minima at different filling factors, one can estimate the size of the corresponding energy gap. The red and magenta dots in Fig. 3 indicate the points \(B_{\nu}\) where minima of the conductivity with filling factor \(\nu\) first occur. For odd filling factors (red dots), \(B_{\nu}\) clearly increases with increasing \(\nu\). This is consistent with formula (1) and reflects the decreasing energy gap for larger \(\nu\). The behavior of \(B_{\nu}\) for even filling factors (magenta dots), reflecting the spin gaps, turns out to be less trivial. For \(\nu=2\) the gap opens at a field of 0.8 T, which exceeds both \(B_{1}=0.25\) T and \(B_{3}=0.65\) T and thus qualitatively agrees with the theoretical expectation described by Eq. (1). However, all subsequent spin gaps open in a magnetic field smaller than 0.5 T, i.e., 2-2.5 times smaller than the field for the neighboring odd filling factors.

Figure 3: (a) LLs calculated on the basis of the BHZ model with interface inversion asymmetry taken into account, with cone splitting \(\gamma=1.5\) meV [24; 25]. Numbers denote filling factors, counting the number of occupied Landau levels. The magnetic field axis is the same as in panel (b). (b) Two-dimensional map of the second derivative \(d^{2}\sigma_{\rm xx}(B,V_{g})/dV_{g}^{2}\). The blue color corresponds to minima of \(\sigma_{\rm xx}\), yellow to maxima. Numbers, as in panel (a), denote the corresponding filling factors. Green dashed lines mark the behavior of the Dirac hole LLs [12]. In both panels the dots indicate the points \(B_{\nu}\) where the minima of the conductivity oscillations with the filling factor \(\nu\) first appear: red dots correspond to odd filling factors, magenta dots to even ones. The position \(B_{1}\) of the first red dot was determined from the \(\sigma_{\rm xx}(V_{g})\) data in Fig. 2.
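To make the gap hierarchy implied by Eq. (1) concrete, the following minimal numeric sketch (our own illustration; only the values of \(\alpha\) and \(g\mu_{B}\) quoted above enter) tabulates the spin-resolved LL ladder and the gap at each filling factor:

```python
import numpy as np

# Numeric sketch of Eq. (1): E_n^(+/-) = alpha*sqrt(n*B) +/- g*mu_B*B/2
# (our illustration; alpha and g*mu_B are the values quoted in the text).
alpha = 25.0   # meV * T^(-1/2)
g_muB = 3.5    # meV * T^(-1)

def landau_levels(B, n_max=6):
    """Spin-split electron-side Landau levels of Eq. (1), sorted."""
    E = [alpha * np.sqrt(n * B) + s * g_muB * B / 2
         for n in range(n_max) for s in (-1, +1)]
    return np.sort(np.array(E))

B = 1.5  # Tesla
E = landau_levels(B)
# Counting filled spin-resolved levels from the lower zeroth sublevel,
# the gap above the nu-th level is the gap at filling factor nu:
# even nu -> spin gap, odd nu -> cyclotron gap.
for nu, gap in enumerate(np.diff(E)):
    kind = "spin" if nu % 2 == 0 else "cyclotron"
    print(f"nu = {nu}: gap = {gap:5.2f} meV ({kind})")
```

At \(B=1.5\) T this reproduces the hierarchy seen in Fig. 2(a): the largest gaps occur at \(\nu=1\) and \(\nu=3\), while around \(\nu=5\) the cyclotron gaps already drop below the constant \(g\mu_{B}B=5.25\) meV spin gap, consistent with the even and odd oscillation amplitudes becoming comparable at higher \(\nu\).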
In order to explain the observed behavior of the spin gaps, we performed LL calculations based on the 4-zone BHZ Hamiltonian with an additional factor taken into account, namely the \(B=0\) Dirac cone splitting. The splitting arises naturally from the IIA [24; 25], and its magnitude \(\gamma\) was used as a fitting parameter. The best fit was obtained for \(\gamma=1.5\) meV, and the result of the calculations is shown in Fig. 3(a). Note that the optimal value of \(\gamma\) varies from 1.3 to 1.7 meV for different energy ranges, giving an average value of 1.5 meV. When the IIA is taken into account, the calculation agrees well with the experimental data. This is illustrated by the red and magenta dots in Fig. 3(a), which are taken from panel (b). It can be seen that the distances between neighboring LLs for odd (red) and even (magenta) filling factors are quite similar if compared at the same Fermi energy (i.e., the gap for \(\nu=5\) should be compared with that for \(\nu=8\)). The comparison at the same energy is essential, because the efficiency of impurity screening, and thus the broadening of the LLs, depends on energy. The magnitude of the disorder has its maximum near the DP and can reach 10-15 meV [18]. To check the last assumption, we studied the zeroth LL. The nature of the QHE state with \(\nu=0\) differs from that of all other QHE states with integer filling factors. In Dirac fermion systems, the \(\nu=0\) state may be formed either by a spin-degenerate, half-occupied zeroth LL, or its degeneracy may be lifted because of the spin splitting [29]. However, in both cases the values of \(\sigma_{\rm xx}\) and \(\sigma_{\rm xy}\) tend to zero due to bulk localization. Alternatively, the spin splitting of the zeroth LL could be accompanied by the formation of two counterpropagating (electron and hole) edge channels, which could enhance the conductivity. However, these channels are not protected from backscattering, and therefore the conductivity remains low. Note that the double-peak structure of \(\sigma_{\rm xx}\) shown in Fig. 2(a) stems from the resistivity tensor inversion and is not a signature of spin splitting. Thus, the local transport response turns out to be only weakly sensitive to the splitting of the zeroth Landau level. To solve this problem, we used capacitive magnetospectroscopy, which directly probes the value of the DoS at arbitrary filling factors. The results of the magnetocapacitance measurements \(C(V_{g})\) are shown in Fig. 4(a). Pronounced SdH oscillations in \(C(V_{g})\) can only be observed from \(B=1.5\) T on, due to the lower sensitivity compared to the transport measurements. To improve the signal, the calculated differential signal \(\delta C=C(V_{g},B)-C(V_{g},B=0)\) is shown in Fig. 4(b). In the differential signal, the SdH oscillations start to appear around 0.8 T. However, even at \(B=1\) T not all gaps are resolved: only the largest Landau gaps, with corresponding filling factors \(\nu=1\) and 3, are observed, while the oscillations at higher gate voltages reflect the spin gaps with filling factors 6, 8, 10, and so on. The observed behavior is consistent with the magnetotransport data and the LL calculations. The absence of some minima can be explained by their smaller energy gaps. The zeroth LL, a characteristic and the most remarkable footprint of Dirac fermions, appears in the capacitance signal already at \(B=0.2\) T and is the dominant maximum in Fig. 4(b).
However, at fields below 1 T there is not even a hint of Zeeman splitting. The zeroth LL becomes clearly visible as a separate maximum in the non-differential capacitance signal only from \(B=2\) T on. At this field all SdH oscillation minima except \(\nu=0\) and 2 are already resolved. Finally, the splitting of the zeroth LL appears at \(B\sim 3\) T, where a small and broad minimum can be observed close to the Dirac point. The fact that the spin splitting of the zeroth LL occurs at a significantly larger magnetic field than for any other level supports the hypothesis of stronger disorder and broadening near the DP. According to our calculations, the value of the spin gap for \(\nu=0\) at \(B=3\) T is around 10 meV, which is twice as big as the gap for \(\nu=1\) that is clearly seen in the capacitance at \(B=1\) T. Note that a small deviation of the QW thickness from the critical value could open a small gap between the valence and conduction bands, also affecting the splitting of the zeroth LL; however, our previous measurements at low temperatures down to 0.2 K [3; 9] and zero magnetic field proved that this energy gap is absent or insignificant.

Figure 4: (a) Gate voltage dependence of the capacitance \(C\), measured at \(B=0\), \(0.5\)...\(3\) T and \(T=1.5-10\) K. For clarity, the curves are shifted vertically. Dashed lines show the zero-field traces as a guide. The Dirac point "DP" is marked by a vertical arrow; "\(\nu=0\)" marks the local capacitance minima associated with the splitting of the zeroth Landau level at \(B=3\) T. (b) The differential capacitance \(\delta C(V_{g},B)=C(V_{g},B)-C(V_{g},B=0)\), measured at \(B=0\), \(0.1\)...\(1\) T and \(T=1.5\) K. The curves for finite \(B\) are shifted along both the \(x\) and \(y\) axes to enhance visibility. The numbers denote the filling factors \(\nu\).

In summary, the experiments show that for odd filling factors \(\nu\) the SdH oscillations are resolved at magnetic field values that increase monotonically with increasing \(\nu\). This is consistent with the simplest model (Eq. 1) describing LLs in Dirac fermion systems. The behavior of the SdH oscillations with even filling factors, i.e., those associated with spin gaps, differs significantly from the simplest model. First, SdH oscillations with even filling factors of 6 and higher are formed in a magnetic field that is 2-2.5 times smaller than the field required to form the neighboring SdH oscillations with odd filling factors. Second, the oscillations for small even \(\nu\) appear at much stronger fields, with the required field reaching its maximum for \(\nu=0\) at \(\sim 3\) T. The observed spin gaps are explained by the presence of an interface inversion asymmetry [24; 25] with a magnitude of \(\gamma=1.5\) meV and by enhanced disorder at the Dirac point. The obtained value of \(\gamma\) is almost an order of magnitude smaller than expected from an interfacial atomistic calculation [24], but it qualitatively agrees with a recent THz spectroscopy study [15], which reported an even smaller value of 0.6 meV. We are grateful to E. L. Golub for useful discussions. The work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 787515, "ProMotion").
2303.08304
Universal Law of Coiling for a Short Elastic Strip Contacting Within a Tube
We find that there exists a universal law of coiling not only for a long elastic strip contacting within a tube but also for a short one. Here the elastic strip we consider has the ratio of $2 < L/R \le 2\pi$ for its length $L$ to the tube radius $R$. By varying the ratio of $L/R$, we identify four types of deformation for such a short elastic strip, namely, two point-contact, three point-contact, continuous-contact, and self-contact. With theoretical formulas in closed forms and experimental demonstration, these four types are verified for any elastic strips contacting within a tube, irrespective of elastic properties, strip lengths, and tube radius. Our results on coiling can be readily applied to a variety of physical systems, including thin flexible electronic devices, van der Waals materials in scroll shape, and DNA packaging into viral capsids.
Jeng Yi Lee, Hao-Yu Lu, Ray-Kuang Lee
2023-03-15T01:31:54Z
http://arxiv.org/abs/2303.08304v1
# Universal Law of Coiling for a Short Elastic Strip Contacting Within a Tube ###### Abstract We find that there exists a universal law of coiling not only for a long elastic strip contacting within a tube but also for a short one. Here the elastic strip we consider has the ratio of \(2<L/R\leq 2\pi\) for its length \(L\) to the tube radius \(R\). By varying the ratio of \(L/R\), we identify four types of deformation for such a short elastic strip, namely, two point-contact, three point-contact, continuous-contact, and self-contact. With theoretical formulas in closed forms and experimental demonstration, these four types are verified for any elastic strips contacting within a tube, irrespective of elastic properties, strip lengths, and tube radius. Our results on coiling can be readily applied to a variety of physical systems, including thin flexible electronic devices, van der Waals materials in scroll shape, and DNA packaging into viral capsids. _Introduction.--_Packing a long wire, fiber, or strip inside a container happens in many different systems, such as folding an elastic wire in a spherical cavity [1; 2; 3; 4; 5; 6; 7; 8], bending graphene sheets or van der Waals materials into scroll shapes [9; 10; 11; 12], packing DNA into viral capsids [13; 14; 15; 16; 17; 18; 19; 20; 21], and curling sheets in confined structures [22; 23; 24]. By considering a long elastic strip inside a smooth and solid cylindrical tube, a universal law of coiling was discovered [25; 26]. Irrespective of the tube size, the total strip length, and the elastic bending stiffness, when an intrinsically flat elastic strip is coiled inside a tube, the innermost strip is detached from the tube wall at multiple-layered curls. With friction-free contact forces at the strip-strip and strip-tube interfaces, the tangential angle of the detached strip at the free end with respect to the tube's tangent is \(24.1^{\circ}\), while the opening angle, subtended by the detached region, is \(125.2^{\circ}\) [25; 26]. This universal phenomenon was derived in theory first, and has since been observed on a variety of surprisingly different length scales and in unexpected systems, including mechanical, biological, and condensed matter ones. Although experimental measurements and theoretical analyses suggest that the classical elastic plate model fails to describe the bending deformation of monolayer graphene, due to the absence of in-plane \(\sigma\)-bonding, the continuum plate phenomenology can still be well employed for glued multilayers owing to the mediation of the van der Waals force [9; 10; 11; 12]. Instead of long strips, in this _Letter_, we reveal that there also exists a universal law of coiling for a short elastic strip contacting within a tube. Here, we refer to a short strip as one with a ratio of the strip length \(L\) to the tube radius \(R\) between \(2<L/R\leq 2\pi\). With the help of Kirchhoff's equations for an elastica, we first theoretically identify four types of deformation for such a short elastic strip in Fig. 1, labelled (a) two point-contact, (b) three point-contact, (c) continuous-contact, and (d) self-contact, which are characterized for arbitrary elastic materials. In experiments, see the illustrations in Figs. 1(e)-(h), we fabricate samples of polyvinyl chloride (PVC) and polyethylene terephthalate (PET) with different lengths and thicknesses to verify our theoretical prediction, resulting in good agreement.
As similar scenarios of such a short strip can easily be found in a wide range of physical systems, our results on coiling a short elastic strip can be readily applied to thin flexible electronic devices, photovoltaic solar cells, van der Waals materials, and DNA packaging. _Theory: Kirchhoff's equations.--_As illustrated in Figs. 1(a-d), we model an intrinsically flat strip as a quasi-one-dimensional elastica. Based on Kirchhoff's theory, the force and moment equations in static equilibrium read [27; 28], \[\frac{d\vec{F}(s)}{ds}+\vec{K}(s)=0, \tag{1}\] \[\frac{d\vec{M}(s)}{ds}+\hat{t}(s)\times\vec{F}(s)=0. \tag{2}\] Here, \(s\) is the arc length along the elastic strip, \(\hat{t}(s)\) is the unit tangent vector, \(\vec{K}(s)\) is the external force per unit length, and \(\vec{F}(s)\) and \(\vec{M}(s)\) denote the resultant stress force and moment at \(s\), respectively. In our theoretical analysis, we assume the elastica to be free of external bending moment, i.e., without friction between the strip-strip and strip-tube interfaces, and free of gravity. In the case of a planar deformation, we have \(\vec{M}=B\,\hat{t}\times d\hat{t}/ds\). Here \(B\) is the bending stiffness, given by \(B=Y\,I\), with \(Y\) being Young's modulus and \(I\) the moment of inertia; this is equivalent to the quadratic form of the curvature in the bending free energy [28]. Based on Eqs. (1-2), we start our analysis by increasing the strip length ratio \(L/R\) from slightly larger than 2 to \(2\pi\), and reveal the emergence of four different types of deformation. _Two point-contact.--_First of all, we consider a strip length slightly larger than \(2R\), resulting in only two point-contacts on the strip, at \(s=0\) and \(s=L\), as shown by the numerical calculation in Fig. 1(a) and the corresponding experimental illustration in Fig. 1(e). The associated point-contact forces at \(s=0\) and \(s=L\) are \(\vec{P}_{0}\) and \(\vec{P}_{1}\), respectively, shown as blue arrows in Fig. 1(a). Due to the absence of friction, the external forces are directed normal to the tube wall. Moreover, in static equilibrium, we have \(\vec{P}_{0}=-\vec{P}_{1}\). With \(\vec{K}=0\), by Eq. (1), the stress force \(\vec{F}(s)\) is constant throughout the strip, i.e., \(\vec{F}(s)=-\vec{P}_{0}\). By resorting to the moment balance of Eq. (2), one can obtain a curvature equation, \[\frac{d\phi(s)}{ds}=\sqrt{\frac{2P_{0}}{B}(\sin\phi-\sin\phi_{0})}. \tag{3}\] Here, \(P_{0}=|\vec{P}_{0}|\) and \(\phi(s)\) is the tangential angle with respect to the X-axis. Moreover, we denote \(\phi_{0}\equiv\phi(s=0)\), and use a zero moment condition at \(s=0\), corresponding to \(d\phi/ds|_{s=0}=0\). As the length of the strip projected onto the Y-axis is always \(2R\) and the strip length \(L\) is conserved, we have two geometric constraints: \[R=\int_{\phi_{0}}^{\frac{\pi}{2}}\frac{\sin\phi}{\sqrt{\frac{2P_{0}}{B}(\sin\phi-\sin\phi_{0})}}\,d\phi, \tag{4}\] \[\frac{L}{2}=\int_{\phi_{0}}^{\frac{\pi}{2}}\frac{1}{\sqrt{\frac{2P_{0}}{B}(\sin\phi-\sin\phi_{0})}}\,d\phi, \tag{5}\] in which the two unknowns \(\phi_{0}\) and \(P_{0}\) are involved. Then, by eliminating \(P_{0}\), \(\phi_{0}\) can be calculated for a given value of \(L/R\), irrespective of the bending stiffness \(B\), but under the crucial geometric constraint for the strip at \(s=L/2\), i.e., \(0\leq X(s=L/2)/R<1\).
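Since the prefactor \(\sqrt{2P_{0}/B}\) cancels in the ratio of Eq. (5) to Eq. (4), inverting \(L/R\mapsto\phi_{0}\) reduces to a one-dimensional root search. The following minimal numeric sketch (our own, assuming NumPy/SciPy; not the authors' code) implements this elimination, together with the dimensionless force of Eq. (7) below:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Two point-contact relations: dividing Eq. (5) by Eq. (4) cancels
# sqrt(2*P0/B), so L/R = 2 * I(1) / I(sin), where I(g) integrates
# g(phi)/sqrt(sin(phi) - sin(phi0)) over [phi0, pi/2].
# The substitution phi = phi0 + t**2 removes the endpoint singularity.

def I(phi0, numerator):
    f = lambda t: numerator(phi0 + t**2) * 2.0 * t / np.sqrt(
        np.sin(phi0 + t**2) - np.sin(phi0))
    val, _ = quad(f, 1e-12, np.sqrt(np.pi / 2 - phi0))  # offset avoids 0/0
    return val

def length_ratio(phi0):
    return 2.0 * I(phi0, lambda p: 1.0) / I(phi0, np.sin)

for LR in (2.4, 2.6, 2.8, 3.0):
    phi0 = brentq(lambda p: length_ratio(p) - LR, 0.36, np.pi / 2 - 1e-3)
    force = 0.5 * I(phi0, np.sin) ** 2        # Eq. (7): P0 * R^2 / B
    print(f"L/R = {LR}: phi0 = {np.degrees(phi0):5.2f} deg, "
          f"P0 R^2/B = {force:.3f}")
```

The printed \(\phi_{0}\) values can be checked against the theoretical column of Table 1.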
Accordingly, we numerically find the condition to support \(\phi_{0}\) and reach the constraint \(0.358<\phi_{0}\leq\pi/2\), which gives the corresponding strip lengths in this two point-contact region: \[L/R\in[2,3.033);\,\text{two point-contact region}. \tag{6}\] We remark that this result holds for any elastic strip system. As we increase \(L/R\), the strip bends more, reflected in a monotonic decrease of \(\phi_{0}\), as shown in Fig. 2(a). Furthermore, with Eqs. (4-5), one can also introduce a dimensionless force: \[\frac{P_{0}R^{2}}{B}=\frac{1}{2}\Big{[}\int_{\phi_{0}}^{\frac{\pi}{2}}\frac{\sin\phi}{\sqrt{\sin\phi-\sin\phi_{0}}}\,d\phi\Big{]}^{2}, \tag{7}\] which is a function of \(\phi_{0}\). In Fig. 2(b), we display this dimensionless force \(P_{0}R^{2}/B\) as a function of \(L/R\). In this two point-contact region, although the dimensionless force \(P_{0}R^{2}/B\) decreases monotonically as \(L/R\) increases, when the material property and strip length (\(B\) and \(L\)) are fixed, the associated point-contact force \(P_{0}\) increases for a smaller tube radius \(R\). The value of \(P_{0}R^{2}/B=2.47\) at \(L/R=2\) corresponds to the critical force for the classical Euler buckling bifurcation, \(B\pi^{2}/L^{2}\) [28; 29; 30].

Figure 1: Four different types of deformations are identified as the universal law of coiling when the ratio of the strip length \(L\) to the tube radius \(R\) is \(2<L/R\leq 2\pi\). The first row (a)-(d) shows the simulation results for (a) \(L/R=2.37\), (b) 3.2, (c) 5.1, and (d) \(2\pi\), respectively. Here the red curves denote the detached strips, while the green curves denote the parts of the strips in continuous contact. The second row (e)-(h) shows the corresponding experimental measurements as a comparison (see Table 1 for more details). The four types of deformations are (a, e) two point-contact, (b, f) three point-contact, (c, g) continuous-contact, and (d, h) self-contact. Related force analysis diagrams are also depicted in (a-d), in blue.

_Three point-contact._--When the middle segment of the strip (\(s=L/2\)) makes a point contact with the tube, a three point-contact situation occurs, as illustrated by the numerical calculation and the experimental measurement in Figs. 1(b) and 1(f), respectively. Here, as shown in Fig. 1(b), we denote the three external point-contact forces on the strip as \(\vec{P}_{0}\), \(\vec{P}_{1}\), and \(\vec{P}_{2}\). In static equilibrium, their vectorial sum is zero. By applying the mirror symmetry of the strip, we conclude that the magnitudes of the external forces at the two free ends are identical, \(|\vec{P}_{0}|=|\vec{P}_{2}|=P_{0}\), although their directions are different. Again, in the detached region from \(s=0\) to \(s=L/2\), due to the absence of external forces we also have \(\vec{F}(s)=-\vec{P}_{0}\). Then, by employing the geometric constraint for the point contact, the conservation of the strip length \(L\), and the zero moment at the free ends, the two unknown tangential angles with respect to the X-axis at \(s=0\) and \(s=L/2\), i.e., \(\phi_{0}\) and \(\phi_{1}\equiv\phi(s=L/2)\), can be determined through the following conditions, \[R\sin\phi_{1}=\int_{\phi_{0}}^{\phi_{1}}\frac{\cos\phi}{\sqrt{\frac{2P_{0}}{B}(\sin\phi-\sin\phi_{0})}}\,d\phi, \tag{8}\] \[R-R\cos\phi_{1}=\int_{\phi_{0}}^{\phi_{1}}\frac{\sin\phi}{\sqrt{\frac{2P_{0}}{B}(\sin\phi-\sin\phi_{0})}}\,d\phi. \tag{9}\]
Again, by eliminating the unknown \(P_{0}\) for a given value of \(L/R\), we can determine \(\phi_{0}\) and \(\phi_{1}\), independently of \(B\). The detailed derivation is given in the Supplementary Materials. By requiring the radius of curvature at \(s=L/2\) to be \(R\), corresponding to the onset of continuous contact, we find the supported value \(\phi_{0}=0.358=20.51^{\circ}\), which defines the maximum strip length obeying the three point-contact situation: \[L/R\in[3.033,4.176);\,\text{three point-contact region}. \tag{10}\] The corresponding dimensionless force \(P_{0}R^{2}/B\) for the three point-contact case is \[\frac{P_{0}R^{2}}{B}=\frac{2(\sin\phi_{1}-\sin\phi_{0})}{\sin^{2}\phi_{1}}, \tag{11}\] which depends implicitly on \(L/R\). When \(L/R=3.033\), the emergent point contact at the middle of the strip makes \(\vec{P}_{2}\) acquire a horizontal component in order to obey static equilibrium. In this way, the vertical component of \(\vec{P}_{2}\) is decreased, which reduces \(|\vec{P}_{0}|\) in Fig. 2(b). As a result, when the length ratio \(L/R>3.033\), the tangential angle \(\phi_{0}\) increases, as shown in the inset of Fig. 2(a). _Continuous-contact._--When \(d\phi/ds|_{s=L/2}\) reaches \(1/R\), the strip deformation switches to continuous contact, as illustrated in Figs. 1(c) and 1(g) by the numerical and experimental results, respectively. Now, a finite region of the strip makes continuous contact with the tube, as marked by the green curve in Fig. 1(c). The associated pressure distribution \(\vec{K}\) exerted by the tube remains constant in magnitude [31]. We mark the two ends of this continuous contact as \(C_{1}\) and \(C_{2}\). By considering the geometric constraint that the curvature at \(C_{1}\) is \(1/R\), as well as the zero moment at the free end \(s=0\), one again obtains the following two equations involving the two unknown tangential angles at \(s=0\) and \(s=L^{\prime}\), \[2\sin\phi_{0}=\sin\phi_{1}, \tag{12}\] \[\frac{\sin\phi_{1}}{1-\cos\phi_{1}}=\frac{\int_{\phi_{0}}^{\phi_{1}}\frac{\cos\phi}{\sqrt{\sin\phi-\sin\phi_{0}}}\,d\phi}{\int_{\phi_{0}}^{\phi_{1}}\frac{\sin\phi}{\sqrt{\sin\phi-\sin\phi_{0}}}\,d\phi}. \tag{13}\] For details see the Supplementary Materials. By numerical calculation, we obtain that the tangential angle at the free end is \(\phi_{0}=0.421=24.12^{\circ}\), while the opening angle is \(\phi_{1}=2.185=125.2^{\circ}\). Interestingly, these are exactly the same angles as for a long strip [25; 26]. The curvature equation for the detached strip, as well as the boundary conditions, reduce to the same scenario as that of multiple-layered curls. We indicate this result in Fig. 2(a), in which \(\phi_{0}\) is always \(24.12^{\circ}\) in this region. By substituting the dimensionless force at \(s=0\), i.e., \(P_{0}R^{2}/B=1/\sin\phi_{1}=1.22\), into the curvature equation for the detached strip, the detached length is numerically found to be \(L^{\prime}/R=2.088\). This ratio also sets the minimum length permitted in the continuous-contact deformation, i.e., \(2L^{\prime}\).
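These numbers can be cross-checked directly: Eq. (12) fixes the obtuse angle as \(\phi_{1}=\pi-\arcsin(2\sin\phi_{0})\), which turns Eq. (13) into a single equation for \(\phi_{0}\). A minimal numeric sketch (our own, assuming NumPy/SciPy) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Solve Eqs. (12)-(13) for the detached part in the continuous-contact
# regime. The substitution phi = phi0 + t**2 again tames the 1/sqrt
# endpoint singularity of both integrals.

def I(phi0, phi1, numerator):
    f = lambda t: numerator(phi0 + t**2) * 2.0 * t / np.sqrt(
        np.sin(phi0 + t**2) - np.sin(phi0))
    val, _ = quad(f, 1e-12, np.sqrt(phi1 - phi0))
    return val

def residual(phi0):
    phi1 = np.pi - np.arcsin(2.0 * np.sin(phi0))  # Eq. (12), obtuse branch
    lhs = np.sin(phi1) / (1.0 - np.cos(phi1))      # Eq. (13), left side
    rhs = I(phi0, phi1, np.cos) / I(phi0, phi1, np.sin)
    return lhs - rhs

phi0 = brentq(residual, 0.3, 0.5)
phi1 = np.pi - np.arcsin(2.0 * np.sin(phi0))
print(f"phi0 = {np.degrees(phi0):.2f} deg, phi1 = {np.degrees(phi1):.2f} deg")
# Expected from the text: phi0 = 24.12 deg and phi1 = 125.2 deg.
```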
Figure 2: (a) In terms of the length ratio \(L/R\), we identify the different deformation regions through numerical simulations for the tangential angle \(\phi_{0}\) (black curve), and three sets of experimental measurements: \(\circ\) (0.5 mm thick PVC, denoted as PVC1), \(\diamond\) (0.1 mm thick PVC, denoted as PVC2), and \(\triangle\) (0.2 mm thick PET, denoted as PET). (b) The corresponding dimensionless force \(P_{0}R^{2}/B\), depicted as a function of the length ratio \(L/R\).

However, the continuous-contact deformation is terminated when the two free ends of the strip meet at \((X/R,Y/R)=(0,0)\). Accordingly, we can derive the maximum length for the continuous-contact situation as \(2L^{\prime}+R(2\pi-2\phi_{1})=6.091R\). Therefore, the supported strip length for this continuous-contact region is bounded by \[L/R\in[4.176,6.091];\,\text{continuous-contact region}. \tag{14}\] We want to emphasize that the corresponding dimensionless force \(P_{0}R^{2}/B\) in this continuous-contact region, somewhat counter-intuitively, remains constant, as the numerical results in Fig. 2(b) show. _Self-contact._--Last but not least, we consider strip lengths in the self-contact region: \[L/R>6.091;\,\text{self-contact region}. \tag{15}\] Now the strip makes self-contact, as illustrated in Figs. 1(d) and 1(h) by simulation and experiment, respectively. Here, one free end of the strip makes a point contact with the front side of the tube. At this stage, the point contact is accompanied by internal point forces, denoted by \(\vec{P}_{1}\) and \(\vec{P}_{4}\) in Fig. 1(d). By Newton's third law, we have \(\vec{P}_{1}=-\vec{P}_{4}\). As a result, the interaction among different segments of the strip leads to a nonlocal effect. Moreover, there still exists a finite continuous-contact region, marked by the green curve in Fig. 1(d), in association with the pressure distribution \(\vec{K}\). Interestingly, only two external point forces act on the whole strip: one is the point-contact force at \(s=0\), denoted as \(\vec{P}_{0}\), and the other is the pressure \(\vec{K}\) from the tube. Since the absence of friction is still valid in our system, the direction of the point-contact force at \(s=0\) is vertical. Consequently, in order to maintain static equilibrium, the pressure distribution \(\vec{K}\) needs to be symmetric with respect to the Y-axis. This implies that the positions of \(C_{1}\) and \(C_{2}\) are mirror-symmetric with respect to the Y-axis. The detailed derivation can be found in the Supplementary Materials. In Fig. 2(b), we also calculate the corresponding dimensionless point-contact force at \(s=0\). Since the internal force emerges close to \(s=0\) in our case, the dimensionless point-contact force \(P_{0}R^{2}/B\) can be expected to be larger than that in the continuous-contact region. At \(L/R=2\pi\), we find \(P_{0}R^{2}/B=2.76\). Consequently, the pushing force \(\vec{P}_{1}\) bends the front strip downward, so that \(\phi_{0}\) decreases with increasing \(L/R\), as shown in Fig. 2(a). _Experimental verification._--We design and fabricate a series of strips with different length ratios \(L/R\), from 2.4 to 6.28; see Table 1 for more details. In the experiments, we prepare two different elastic materials, polyvinyl chloride (PVC) and polyethylene terephthalate (PET), with different thicknesses in order to verify our theoretical findings. Three sets of material parameters are investigated: 0.5 mm thick PVC (PVC1), 0.2 mm thick PET (PET), and 0.1 mm thick PVC (PVC2). All the samples are 2 cm wide. The elastic strips are prepared in an initially flat condition, i.e., in the absence of tube confinement, to avoid any plastic deformation. The tube used is acrylic (polymethylmethacrylate, PMMA) and has an inner radius of 3 cm.
Denoting the tangential angle at \(s=0\) as \(\phi_{0}\), the tangential angles measured for the three sets PVC1, PET, and PVC2 are listed in Table 1, along with a comparison to the theoretical values. All the obtained data are also plotted in Fig. 2(a), and selected pictures are shown in Figs. 1(e)-(h). They show good agreement with our simulation curves for the two point-contact, three point-contact, and continuous-contact regions. When the length of the strip satisfies \(4.176\leq L/R\leq 6.091\), the measured \(\phi_{0}\) confirms the theoretical value of \(24.12^{\circ}\) predicted for the continuous-contact case. _Conclusion._--In addition to the universal law of coiling for a long strip, we find theoretically and experimentally that a short elastic strip contacting within a tube, with length ratio \(2<L/R\leq 2\pi\), also follows a universal behavior. Four different types of deformation, namely two point-contact, three point-contact, continuous-contact, and self-contact, are identified in theory and verified in experiments. Theoretically, the boundaries between two adjacent regions of deformation are characterized by Kirchhoff's elastic equations, while experimentally three sets of material parameters, with a series of different lengths, are investigated, resulting in good agreement with our theoretical analysis. Our results show the existence of a universal law even for a short strip, irrespective of elastic properties, strip lengths, and tube radii. The results of this work can be readily applied to many practical applications, ranging, e.g., from flexible electronic devices and medical fibre imaging to DNA packaging. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Length Ratio & PVC1 & PET & PVC2 & Theoretical \\ \(L/R\) & 0.5 mm & 0.2 mm & 0.1 mm & Values, \(\phi_{0}\) \\ \hline 2.4 & \(43.9^{\circ}\) & \(42.8^{\circ}\) & \(42.3^{\circ}\) & \(42.37^{\circ}\) \\ 2.6 & \(33.1^{\circ}\) & \(34.2^{\circ}\) & \(34.1^{\circ}\) & \(33.55^{\circ}\) \\ 2.8 & \(26.7^{\circ}\) & \(28.6^{\circ}\) & \(28.1^{\circ}\) & \(26.78^{\circ}\) \\ 3.03 & \(20.8^{\circ}\) & \(20.2^{\circ}\) & \(20.7^{\circ}\) & \(20.5^{\circ}\) \\ 3.2 & \(21.2^{\circ}\) & \(21.3^{\circ}\) & \(21.3^{\circ}\) & \(21.38^{\circ}\) \\ 3.7 & \(23.3^{\circ}\) & \(23.2^{\circ}\) & \(24^{\circ}\) & \(23.38^{\circ}\) \\ 4.176 & \(23.4^{\circ}\) & \(23.6^{\circ}\) & \(24.3^{\circ}\) & \(24.12^{\circ}\) \\ 4.5 & \(24.2^{\circ}\) & \(24.3^{\circ}\) & \(24.3^{\circ}\) & \(24.12^{\circ}\) \\ 5.1 & \(23.8^{\circ}\) & \(24.2^{\circ}\) & \(24.1^{\circ}\) & \(24.12^{\circ}\) \\ 6.28 & \(17.5^{\circ}\) & \(17.5^{\circ}\) & \(17.9^{\circ}\) & \(15.76^{\circ}\) \\ \hline \end{tabular} \end{table} Table 1: Experimental measurements of the tangential angle \(\phi_{0}\) for different strip length ratios \(L/R\). Here, we have three sets of material parameters: 0.5 mm thick PVC (PVC1), 0.2 mm thick PET (PET), and 0.1 mm thick PVC (PVC2). Theoretical values are also listed for comparison. ## Acknowledgement The authors are indebted to Prof. Ole Stuernagle for useful discussions. This work is partially supported by the Ministry of Science and Technology of Taiwan (Nos. 110-2112-M-259-005, 111-2112-M-259-011, 110-2627-M-008-001, and 110-2123-M-007-002), the International Technology Center Indo-Pacific (ITC IPAC) and Army Research Office, under Contract No. FA5209-21-P-0158, and the Collaborative research program of the Institute for Cosmic Ray Research (ICRR), the University of Tokyo.
2303.07037
Unconditional bases and Daugavet renormings
We introduce a new diametral notion for points of the unit sphere of Banach spaces, that naturally complements the notion of Delta-points, but is weaker than the notion of Daugavet points. We prove that this notion can be used to provide a new geometric characterization of the Daugavet property, as well as to recover -- and even to provide new -- results about Daugavet points in various contexts such as absolute sums of Banach spaces or projective tensor products. Finally, we show that this notion leads to powerful new ideas for renorming questions, and that those ideas can be combined with previous constructions from the literature in order to renorm every infinite dimensional Banach space with an unconditional Schauder basis to have a Daugavet point.
Rainis Haller, Johann Langemets, Yoël Perreau, Triinu Veeorg
2023-03-13T11:55:53Z
http://arxiv.org/abs/2303.07037v1
# Unconditional bases and Daugavet Renormings ###### Abstract. We introduce a new diametral notion for points of the unit sphere of Banach spaces, that naturally complements the notion of \(\Delta\)-points, but is weaker than the notion of Daugavet points. We prove that this notion can be used to provide a new geometric characterization of the Daugavet property, as well as to recover - and even to provide new - results about Daugavet points in various contexts such as absolute sums of Banach spaces or projective tensor products. Finally, we show that this notion leads to powerful new ideas for renorming questions, and that those ideas can be combined with previous constructions from the literature in order to renorm every infinite dimensional Banach space with an unconditional Schauder basis to have a Daugavet point. This work was supported by the Estonian Research Council grants PRG1901, PSG487 and SJD58 ## 1. Introduction In view of those examples, one can wonder whether a Banach space whose unit sphere is entirely composed of \(\nabla\)-points has to contain Daugavet points. Our main result concerning \(\nabla\)-points is that this property is actually equivalent to the DPr. **Theorem 1.6**.: _Let \(X\) be a Banach space of dimension \(\dim X>1\). Then \(X\) has the DPr if and only if every point on its unit sphere is a \(\nabla\)-point._ Let us end the present section by giving a few words about the organization and the content of the paper. In Section 2, we make a general study of the notion of \(\nabla\)-points.
We start by proving that any point that is simultaneously a \(\nabla\)-point and a \(\mathfrak{D}\)-point is a Daugavet point, and deduce that a \(\nabla\)-point is always either a Daugavet point or a strongly exposed point. Building on these results, we prove Theorem 1.6 and obtain a new characterization of the DPr in terms of \(\nabla\)-points. Finally, we state a specific property of the distance to denting points for \(\nabla\)-points, and observe that any strictly convex space which contains a \(\nabla\)-point fails the Radon-Nikodym property. In Section 3, we study \(\nabla\)-points in some classical Banach spaces. In Subsection 3.1, we look at \(\nabla\)-points in absolute sums of Banach spaces. First, we prove that if \(N\) is an absolute normalized norm on \(\mathbb{R}^{2}\) different from the \(\ell_{1}\)-norm and the \(\ell_{\infty}\)-norm, then the notions of \(\nabla\)-points and of Daugavet points coincide in the absolute sum \(X\oplus_{N}Y\) for arbitrary Banach spaces \(X\) and \(Y\). Second, we make a specific study of \(\nabla\)-points in \(\ell_{1}\)-sums and \(\ell_{\infty}\)-sums of Banach spaces, and prove that in order for a point \((x,y)\) in \(X\oplus_{N}Y\) to be a \(\nabla\)-point without being a Daugavet point, we must have either \(N=\left\|\cdot\right\|_{1}\) and \((\left\|x\right\|,\left\|y\right\|)=(1,0)\) or \((0,1)\), or \(N=\left\|\cdot\right\|_{\infty}\) and \((\left\|x\right\|,\left\|y\right\|)=(1,1)\). Last, we provide a complete characterization of those points in this context. In Subsection 3.2, we consider \(\nabla\)-points in some specific function spaces. On the one hand, we show that \(L_{1}(\mu)\)-spaces provide a natural framework for the existence of \(\nabla\)-points that are not Daugavet points. On the other hand, we show that the notion of \(\nabla\)-points coincides with all the other diametral notions in every infinite dimensional \(C(K)\)-space or \(C_{0}(L)\)-space, as well as in general \(L_{1}\)-preduals. In Subsection 3.3, we present a few non-trivial applications of the notion of \(\nabla\)-points in tensor products, where known transfer results for \(\Delta\)-points can be combined with properties of \(\nabla\)-points to get new transfer results for Daugavet points. Finally, we provide partial specific transfer results for \(\nabla\)-points, and start an investigation of the stability of super points. In Section 4, we deal with Daugavet renormings. First, we prove that it naturally follows from some of the transfer results through \(\ell_{1}\)-sums that every Banach space can be renormed with a \(\nabla\)-point. Second, we combine this idea with an adapted version of the renorming from [1, Section 3] to get the main result of the text, Theorem 1.2. Last, we observe that if a Banach space \(X\) contains a complemented subspace \(Y\) that can be renormed with a Daugavet or a super Daugavet point, then \(X\) can also be renormed with a Daugavet or a super Daugavet point. Based on this, we combine Theorem 1.2 with classic results from James about unconditional bases and the \(\ell_{1}\)-isomorphism from [1] to get that every infinite dimensional Banach space with an unconditional basis (and more generally with a complemented unconditional basic sequence) can be renormed with a Daugavet point. Throughout the text, we will use standard notation and will usually follow the textbooks [5] or [10].
In particular, if \(A\) is a non-empty subset of a normed space \(X\), then we will denote respectively by \(\operatorname{span}A\) and \(\operatorname{conv}A\) the linear span and the convex hull of \(A\), and by \(\overline{\operatorname{span}}\,A\) and \(\overline{\operatorname{conv}}\,A\) their respective closures. We will also denote by \(\operatorname{ext}B_{X}\) the set of all extreme points of \(B_{X}\), and by \(\operatorname{dent}B_{X}\) the set of all denting points of \(B_{X}\). ## 2. \(\nabla\)-points and the Daugavet property We start with the following easy observations. **Lemma 2.1**.: _Let \(X\) be a Banach space and let \(x\in S_{X}\). Then \(x\) is a Daugavet point if and only if it is simultaneously a \(\nabla\)-point and a \(\mathfrak{D}\)-point._ Proof.: Clearly, a Daugavet point is simultaneously a \(\nabla\)-point and a \(\Delta\)-point, hence a \(\nabla\)-point and a \(\mathfrak{D}\)-point. Conversely, assume that \(x\) is simultaneously a \(\nabla\)-point and a \(\mathfrak{D}\)-point, and let \(S:=S(x^{*},\alpha)\) with \(x^{*}\in S_{X*}\) and \(\alpha>0\). If \(x^{*}\in D(x)\), then \(\sup_{y\in S}\|x-y\|=2\) because \(x\) is a \(\mathfrak{D}\)-point. On the other hand, if \(x^{*}\notin D(x)\), then there exists \(\beta>0\) such that \(x^{*}(x)\leqslant 1-\beta\). If \(\alpha\leqslant\beta\), then \(x\notin S\), and \(\sup_{y\in S}\|x-y\|=2\) because \(x\) is a \(\nabla\)-point. If \(\alpha>\beta\), then \(S(x^{*},\beta)\subset S\), so \(\sup_{y\in S}\|x-y\|\geqslant\sup_{y\in S(x^{*},\beta)}\|x-y\|=2\) by the previous case. It follows that \(x\) is a Daugavet point. **Proposition 2.2**.: _Let \(X\) be a Banach space. The set of all \(\nabla\)-points in \(X\) is a closed subset of \(X\)._ Proof.: The verification is immediate from the definition of \(\nabla\)-points. Indeed, assume that \((x_{n})\) is a sequence of \(\nabla\)-points in \(X\) which converges to some \(x\in X\). Consider a slice \(S\) of \(B_{X}\) which does not contain \(x\) and let \(\varepsilon>0\). Since \(x_{n}\to x\), we can find \(n\in\mathbb{N}\) such that \(x_{n}\notin S\) and \(\|x-x_{n}\|\leqslant\varepsilon/2\). Then, as \(x_{n}\) is a \(\nabla\)-point, there exists \(y\in S\) such that \(\|x_{n}-y\|\geqslant 2-\varepsilon/2\). Thus \(\|x-y\|\geqslant 2-\varepsilon\), and it follows that \(x\) is also a \(\nabla\)-point. We will now show that a \(\nabla\)-point is always either a Daugavet point or a strongly exposed point of the unit ball. **Theorem 2.3**.: _Let \(X\) be a Banach space, and let \(x\in S_{X}\) be a \(\nabla\)-point. Then either \(x\) is a Daugavet point, or \(x\) is a strongly exposed point of \(B_{X}\)._ Proof.: Assume that \(x\) is not a strongly exposed point of \(B_{X}\). To prove that \(x\) is a Daugavet point, it suffices by Lemma 2.1 to show that \(x\) is a \(\mathfrak{D}\)-point. So fix \(\alpha,\varepsilon>0\) and \(x^{*}\in D(x)\), and let us assume, as we may, that \(\alpha<1/2\). We want to show that there exists \(y\in S(x^{*},\alpha)\) such that \(\|x-y\|\geqslant 2-\varepsilon\). By assumption, there exists \(\beta\in(0,\alpha)\) such that \(\operatorname{diam}\big{(}S(x^{*},\gamma)\big{)}>4\beta\) for every \(\gamma>0\). So let \(\gamma>0\) be such that \[1-\frac{\alpha-\gamma}{2}<\frac{\beta}{\gamma+\beta},\] and pick \(z\in B_{X}\) such that \(x^{*}(z)>1-\gamma\) and \(\|x-z\|\geqslant 2\beta\).
Then let \(y^{*}\in\frac{1}{2}S_{X*}\) be such that \(y^{*}(z)-y^{*}(x)\geqslant\beta\), and fix \(\lambda\in(0,1)\) such that \[\lambda(1+\alpha-\gamma)+(1-\lambda)y^{*}(z)=1.\] Note that \[\lambda=\frac{1-y^{*}(z)}{1+\alpha-\gamma-y^{*}(z)}=1-\frac{\alpha-\gamma}{1+\alpha-\gamma-y^{*}(z)}<1-\frac{\alpha-\gamma}{2}<\frac{\beta}{\gamma+\beta}.\] Set \(z^{*}=\lambda x^{*}+(1-\lambda)y^{*}\). Then \[z^{*}(x)=\lambda x^{*}(x)+(1-\lambda)y^{*}(x)\leqslant\lambda+(1-\lambda)(y^{*}(z)-\beta)=\lambda+1-\lambda(1+\alpha-\gamma)-(1-\lambda)\beta=1-\lambda\alpha+\lambda\gamma+\lambda\beta-\beta<1-\lambda\alpha\] and \[z^{*}(z)=\lambda x^{*}(z)+(1-\lambda)y^{*}(z)>\lambda(1-\gamma)+1-\lambda(1+\alpha-\gamma)=1-\lambda\alpha.\] Since \(x\) is a \(\nabla\)-point, there exists \(y\in B_{X}\) such that \(z^{*}(y)>1-\lambda\alpha\) and \(\|x-y\|\geqslant 2-\varepsilon\). Now \(y\in S(x^{*},\alpha)\) because \[\lambda x^{*}(y)=z^{*}(y)-(1-\lambda)y^{*}(y)>(1-\lambda\alpha)-(1-\lambda)=\lambda(1-\alpha),\] so we are done. Using the previous result, we can now prove the main theorem of the section. **Theorem 2.4**.: _A Banach space \(X\) of dimension \(\dim X>1\) has the DPr if and only if every element on its unit sphere is a \(\nabla\)-point._ Proof.: Since every Daugavet point is a \(\nabla\)-point, one implication is clear. So assume that every element \(x\in S_{X}\) is a \(\nabla\)-point. In order to get that \(X\) has the DPr, it now suffices to show that \(X\) has the LD2P (i.e. that every slice of \(B_{X}\) has diameter two). Indeed, \(B_{X}\) would then have no strongly exposed points, and by Theorem 2.3 all unit sphere elements would be Daugavet points. So let \(S:=S(x^{*},\alpha)\) be a slice of the unit ball. As \(\dim X>1\), there exists \(x\in S\cap S_{X}\) such that \(x^{*}(x)<1\). Let \(\gamma\in(0,\alpha)\) be such that \(x^{*}(x)<1-\gamma\). Then \(x\notin S(x^{*},\gamma)\), and thus \(\sup_{y\in S(x^{*},\gamma)}\|x-y\|=2\) because \(x\) is a \(\nabla\)-point. In particular, since \(S(x^{*},\gamma)\subset S\) and \(x\in S\), we get that \(S\) has diameter \(2\), and \(X\) has the LD2P, hence the DPr. _Remark 2.5_.: Analogously to [3, Lemma 2.2], one can easily prove that a point \(x\) in the unit sphere of a Banach space \(X\) is a \(\nabla\)-point if and only if the operator \(T:=x^{*}\otimes x\) satisfies the Daugavet equation \(\|Id-T\|=2\) for every \(x^{*}\in S_{X^{*}}\) that does not belong to \(D(x)\). So Theorem 2.4 can be rephrased as: a Banach space of dimension greater than or equal to \(2\) has the DPr if and only if every rank-one norm-one operator that is not a projection satisfies the Daugavet equation. We end the section by stating the following straightforward \(\nabla\)-analogue of [14, Proposition 3.1] and by collecting a few applications of this result. **Proposition 2.6**.: _Let \(X\) be a Banach space and let \(x\in S_{X}\) be a \(\nabla\)-point. Then \(\|x-y\|=2\) for every denting point \(y\) of \(B_{X}\) with \(y\neq x\)._ In [16], an example of a strictly convex normed space with the DPr was provided. However, this space is not complete, and it is still an open question whether there exists a strictly convex Banach space with the DPr. It was also asked in [19, Question 7.5] whether one could provide a strictly convex Banach space with a Daugavet point.
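As a concrete finite dimensional illustration of Proposition 2.6 (a worked example of ours, not taken from the text), consider \(x=e_{1}\) in \(\ell_{1}^{2}\), which is a \(\nabla\)-point by Example 1.5 but, as the second computation shows, not a Daugavet point:

```latex
% The denting points of B_{\ell_1^2} are the four extreme points
% \pm e_1, \pm e_2, and each of them other than e_1 itself lies at
% distance exactly 2 from e_1, as Proposition 2.6 requires:
\[
  \|e_1 - (-e_1)\|_1 = 2,
  \qquad
  \|e_1 \mp e_2\|_1 = |1 - 0| + |0 \mp 1| = 2.
\]
% Yet e_1 is not a Daugavet point: for x^* = e_1^* \in D(e_1) and
% 0 < \alpha < 1, every y \in S(x^*, \alpha) satisfies y_1 > 1 - \alpha
% and |y_2| \le 1 - y_1 < \alpha, so that
\[
  \|e_1 - y\|_1 = (1 - y_1) + |y_2| < 2\alpha .
\]
```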
In the following result, we observe that it immediately follows from Proposition 2.6 that in order for a strictly convex space to contain a \(\nabla\)-point, its unit ball must contain no denting point other than the point itself and its opposite. In particular, strictly convex spaces with the Radon-Nikodym property cannot contain \(\nabla\)-points. **Corollary 2.7**.: _Let \(X\) be a strictly convex space and let \(x\in S_{X}\). If \(x\) is a \(\nabla\)-point, then \(\operatorname{dent}B_{X}\subset\{\pm x\}\)._ Proof.: Since \(X\) is strictly convex, we have that \[\{y\in B_{X}:\ \|x-y\|=2\}=\{-x\}\] for every \(x\in S_{X}\). So if \(x\) is a \(\nabla\)-point, then it follows from Proposition 2.6 that the only possible denting points in \(B_{X}\) are \(\pm x\). Recall that a Banach space \(X\) is _weakly midpoint locally uniformly rotund_ (wMLUR) if every point in \(S_{X}\) is weakly strongly extreme, a.k.a. preserved extreme. In [2], MLUR, hence wMLUR, Banach spaces with the DD2P were constructed. We do not know whether any of the spaces \(X_{D}\) from [2, Theorem 2.3] contains a Daugavet point, but let us point out that in this context, it would be enough to show that one of these spaces contains a \(\nabla\)-point. **Proposition 2.8**.: _Let \(X\) be an infinite dimensional wMLUR Banach space. Then every \(\nabla\)-point in \(X\) is a super Daugavet point._ Proof.: By Choquet's lemma, slices form bases of neighborhoods in the relative weak topology for the preserved extreme points of the unit ball of any given Banach space. It immediately follows that every \(\nabla\)-point in a wMLUR space \(X\) is a "super \(\nabla\)-point", hence, as was previously observed, a super Daugavet point. Finally, recall that in a Banach space with the Radon-Nikodym property, every slice of the unit ball contains a denting point. Thus, in those spaces, we immediately get the following characterization of \(\nabla\)-points. **Proposition 2.9**.: _Let \(X\) be a Banach space with the Radon-Nikodym property, and let \(x\in S_{X}\). Then \(x\) is a \(\nabla\)-point in \(X\) if and only if \(\|x-y\|=2\) for every denting point \(y\) of \(B_{X}\) with \(y\neq x\)._ Analogously to Daugavet points (see [24, Theorem 2.1]), this characterization also holds in every Lipschitz-free space. **Proposition 2.10**.: _Let \(M\) be a metric space and let \(\mu\in S_{\mathcal{F}(M)}\). Then \(\mu\) is a \(\nabla\)-point if and only if \(\|\mu-\nu\|=2\) for every denting point \(\nu\) of \(B_{\mathcal{F}(M)}\) with \(\nu\neq\mu\)._ Proof.: One implication is covered by Proposition 2.6. So assume that \(\|\mu-\nu\|=2\) for every denting point \(\nu\) of \(B_{\mathcal{F}(M)}\) with \(\nu\neq\mu\). Fix \(\varepsilon>0\) and a slice \(S(f,\alpha)\) with \(\mu\notin S(f,\alpha)\). If \(f\) is local, then by [14, Theorem 2.6] there exists \(m_{uv}\in S(f,\alpha)\) such that \(\|\mu-m_{uv}\|\geqslant 2-\varepsilon\). If \(f\) is not local, then by [25, Proposition 2.7] there exists a denting point \(m_{uv}\in S(f,\alpha)\), and by assumption we have \(\|\mu-m_{uv}\|=2\). Therefore \(\mu\) is a \(\nabla\)-point. ## 3. \(\nabla\)-points in classical Banach spaces ### Absolute sums Recall that a norm \(N\) on \(\mathbb{R}^{2}\) is said to be _absolute_ if \(N(a,b)=N(\left|a\right|,\left|b\right|)\) for every \((a,b)\in\mathbb{R}^{2}\), and _normalized_ if \(N(0,1)=N(1,0)=1\).
If \(X\) and \(Y\) are Banach spaces, and if \(N\) is an absolute normalized norm on \(\mathbb{R}^{2}\), then we denote by \(X\oplus_{N}Y\) the _absolute sum_ of \(X\) and \(Y\), that is, the Banach space \((X\times Y,\left\|\cdot\right\|)\) where \[\left\|(x,y)\right\|=N(\left\|x\right\|,\left\|y\right\|)\text{ for every }(x,y)\in X\times Y.\] In particular, if \(N:=\left\|\cdot\right\|_{p}\) for some \(p\in\left[1,\infty\right]\), then we simply denote by \(X\oplus_{p}Y\) the \(\ell_{p}\)_-sum_ of \(X\) and \(Y\). The study of Daugavet points in absolute sums of Banach spaces was started in [3] and completed in [11]. In particular, recall that no \(\ell_{p}\)-sum of Banach spaces contains a Daugavet point when \(1<p<\infty\), and that more generally, the absolute sum \(X\oplus_{N}Y\) does not contain a Daugavet point if \(N\) has the property \((\alpha)\) from [3, Definition 4.4], for arbitrary \(X\) and \(Y\). Also recall that Daugavet points transfer very well through \(\ell_{1}\)-sums and \(\ell_{\infty}\)-sums, and that positive transfer results are more generally available for A-octahedral norms, see [11, Section 2 and Section 3]. As was noticed in [3, Section 4], \(\Delta\)-points are much more flexible with respect to this operation than Daugavet points, and thus the \(\Delta\)-condition is not the one that provides obstructions to the existence of Daugavet points in absolute sums of Banach spaces. In fact, it can easily be checked that the \(\nabla\)-condition is the one that prevents the existence of these points whenever such an obstruction exists. But more can be said. Indeed, we will actually show that in absolute sums of Banach spaces, Daugavet points and \(\nabla\)-points can be different only if they are related either to the points \((1,0)\) or \((0,1)\) in \(\ell_{1}^{2}\), or to the point \((1,1)\) in \(\ell_{\infty}^{2}\). The specificity of these points is clearly apparent in the finite dimensional examples. Indeed, we have seen in Example 1.5 that every element of the unit vector basis \((e_{i})\) of \(\ell_{1}^{n}\) is a \(\nabla\)-point, so elements of the form \((x,0)\) or \((0,y)\) with \(x\in S_{X}\) and \(y\in S_{Y}\) can be \(\nabla\)-points without being Daugavet points in an \(\ell_{1}\)-sum. On the contrary, the point \(\frac{1}{2}(e_{1}+e_{2})\) is not a \(\nabla\)-point in \(\ell_{1}^{2}\), so the \(\ell_{1}\)-sum of two \(\nabla\)-points with respect to the point \((\frac{1}{2},\frac{1}{2})\) need not be a \(\nabla\)-point. Similarly, the following example shows that elements of the form \((x,y)\) with \(x\in S_{X}\) and \(y\in S_{Y}\) can be \(\nabla\)-points without being Daugavet points in an \(\ell_{\infty}\)-sum, while the \(\ell_{\infty}\)-sum of \(0\) and a \(\nabla\)-point need not be a \(\nabla\)-point, as \(e_{1}\) is not a \(\nabla\)-point in \(\ell_{\infty}^{2}\). **Example 3.1**.: For every \(n\in\mathbb{N}\) and for every \(\theta:=(\theta_{i})\in\{-1,1\}^{n}\), we have that \(\sum_{i=1}^{n}\theta_{i}e_{i}\) is a \(\nabla\)-point in \(\ell_{\infty}^{n}\). So let us start by proving that the notions of Daugavet points and \(\nabla\)-points coincide whenever the underlying absolute norm is neither the \(\ell_{1}\)-norm nor the \(\ell_{\infty}\)-norm. **Theorem 3.2**.: _Let \(X\) and \(Y\) be Banach spaces, let \(N\) be an absolute normalized norm that is different from the \(\ell_{1}\)-norm and the \(\ell_{\infty}\)-norm, and let \((x,y)\in S_{X\oplus_{N}Y}\)._
Then \((x,y)\) is a \(\nabla\)-point in \(X\oplus_{N}Y\) if and only if it is a Daugavet point._ Proof.: As every Daugavet point is a \(\nabla\)-point, it suffices to show that \((x,y)\in S_{X\oplus_{N}Y}\) is a Daugavet point whenever it is assumed to be a \(\nabla\)-point. So assume that \((x,y)\in S_{X\oplus_{N}Y}\) is a \(\nabla\)-point. We will start by proving that if \(x\neq 0\), then \(x/\|x\|\) is a Daugavet point. First notice that if \(0<\|x\|<1\), then we can essentially copy the proof of [11, Theorem 3.1]. Indeed, the key slices that are involved there are defined by functionals of the form \(f:=(x^{*},0)\) with \(x^{*}\in S_{X^{*}}\). But as we have \(f(x,y)\leq\|x\|<1\) by assumption, taking a small enough parameter \(\delta>0\) will exclude \((x,y)\) from the corresponding slices, and the \(\nabla\)-condition will be available in place of the Daugavet condition. We leave the details to the reader. Now assume that \(\|x\|=1\). Then \(\|y\|<1\) because \(N\) is different from the \(\ell_{\infty}\)-norm. Fix \(x^{*}\in S_{X^{*}}\) and \(\alpha>0\). Then let \(y^{*}\in S_{Y^{*}}\) be such that \(y^{*}(y)\leq 0\), and let \(f:=(x^{*},y^{*})\). Since \(N\) is different from the \(\ell_{1}\)-norm, we have \(\|f\|>1\). Let \(\varepsilon\in(0,1]\) be such that \(\|f\|>1+\varepsilon\). By [11, Lemma 1.4] there exists \(\delta\in\left(0,\varepsilon\alpha\right)\) such that \(\|y\|<1-\delta\), \(\|f\|>1+\varepsilon+\delta\), and for every \(p,q,r\geq 0\), if \[2-\delta\leq N(p,q)\leq N(r,q)\leq 2\quad\text{and}\quad q<2-\delta,\] then \(|p-r|<\varepsilon^{2}\). As \(y^{*}(y)\leq 0\), we have \(f(x,y)\leq x^{*}(x)\leq 1\), and thus, since \((x,y)\) is a \(\nabla\)-point, there exists \((u,v)\in B_{X\oplus_{N}Y}\) such that \(f(u,v)>\|f\|-\delta\) and \(\big{\|}(x,y)-(u,v)\big{\|}\geq 2-\delta\). Then \[\|u\|\geq x^{*}(u)=f(u,v)-y^{*}(v)>\|f\|-\delta-\|v\|\geq\|f\|-\delta-1\geq\varepsilon.\] We also have \[x^{*}(u)>\|f\|-\delta-\|v\|\geq\|x^{*}\|\|u\|+\|y^{*}\|\|v\|-\delta-\|v\|=\|u\|-\delta\geq\|u\|-\varepsilon\alpha\geq\|u\|(1-\alpha),\] and thus \(u/\|u\|\in S(x^{*},\alpha)\). Furthermore, \[2-\delta\leq N\big{(}\|x-u\|,\|y-v\|\big{)}\leq N\big{(}\|x\|+\|u\|,\|y-v\|\big{)}\leq 2\] and \(\|y-v\|\leq\|y\|+\|v\|<2-\delta\). Hence \[\|x\|+\|u\|-\|x-u\|<\varepsilon^{2}<\varepsilon\|u\|,\] and \[\Big{\|}x-\frac{u}{\|u\|}\Big{\|}\geq\frac{1}{\|u\|}\|x-u\|-\Big{(}\frac{1}{\|u\|}-1\Big{)}\|x\|\geq\frac{1}{\|u\|}\big{(}\|x\|+\|u\|-\varepsilon\|u\|\big{)}-\Big{(}\frac{1}{\|u\|}-1\Big{)}\|x\|=2-\varepsilon.\] Thus \(x\) is a Daugavet point. Similarly, we can prove that if \(y\neq 0\), then \(y/\|y\|\) is a Daugavet point. Consequently, we get by the observations from [3, Section 4] that \((x,y)\) is a \(\Delta\)-point in \(X\oplus_{N}Y\), and since it is also a \(\nabla\)-point by assumption, we deduce that \((x,y)\) is a Daugavet point, as we wanted. From this we immediately get, as corollaries to the results from [3] and [11], that \(\ell_{p}\)-sums of Banach spaces with \(p\in(1,\infty)\) (as well as \(N\)-sums of Banach spaces when \(N\) has property \((\alpha)\)) do not admit \(\nabla\)-points, and that characterizations of \(\nabla\)-points are available in the case of A-octahedral norms that are different from the \(\ell_{1}\)-norm and the \(\ell_{\infty}\)-norm, see e.g. [11, Theorems 2.2 and 3.1]. To conclude the section, it now only remains to do a specific study for \(\ell_{1}\)-sums and \(\ell_{\infty}\)-sums. For \(\ell_{1}\)-sums, we have the following statements.
**Proposition 3.3**.: _Let \(X\) and \(Y\) be Banach spaces, and let \(x\in S_{X}\). Then \(x\) is a \(\nabla\)-point in \(X\) if and only if \((x,0)\) is a \(\nabla\)-point in \(X\oplus_{1}Y\)._ Proof.: Let \(Z:=X\oplus_{1}Y\), and let us first assume that \(x\) is a \(\nabla\)-point in \(X\). Fix \(\varepsilon>0\), and let \(S:=S(B_{Z},f,\alpha)\) with \(f:=(x^{*},y^{*})\in S_{Z^{*}}\) and \(\alpha>0\). Then assume that \((x,0)\notin S\). We look at two cases: 1. If \(\|x^{*}\|=1\), then \(x\notin S(B_{X},x^{*},\alpha)\). Thus there is a \(u\in S(B_{X},x^{*},\alpha)\) such that \(\|x-u\|\geq 2-\varepsilon\). Now \((u,0)\in S\) and \(\|(x,0)-(u,0)\|_{1}\geq 2-\varepsilon\). 2. If \(\|x^{*}\|<1\), then \(\|y^{*}\|=1\). Find \(y\in S_{Y}\) such that \(y^{*}(y)>1-\alpha\). Then \((0,y)\in S\) and \(\|(x,0)-(0,y)\|_{1}=\|x\|+\|y\|=2\), so we are done. Conversely, let us assume that \((x,0)\) is a \(\nabla\)-point in \(Z\). Fix \(\varepsilon>0\) and let \(S:=S(B_{X},x^{*},\alpha)\) with \(x^{*}\in S_{X^{*}}\) and \(\alpha>0\). Then assume that \(x\notin S\). Let \(\delta=\min\{\alpha,\varepsilon/2\}\). Then \((x,0)\notin S(B_{Z},f,\delta)\) where \(f:=(x^{*},0)\in S_{Z^{*}}\). Since \((x,0)\) is a \(\nabla\)-point in \(Z\), we can find \((u,v)\in S(B_{Z},f,\delta)\) such that \(\|x-u\|+\|v\|\geqslant 2-\delta\). Then \(u\in S(B_{X},x^{*},\delta)\subset S(B_{X},x^{*},\alpha)\). In particular, \(\|v\|\leqslant 1-\|u\|<\delta\), hence \(\|x-u\|\geqslant 2-2\delta\geqslant 2-\varepsilon\). **Proposition 3.4**.: _Let \(X\) and \(Y\) be Banach spaces, let \(x\in S_{X}\) and \(y\in S_{Y}\), and let \(a,b>0\) be such that \(a+b=1\). If \((ax,by)\) is a \(\nabla\)-point in \(X\oplus_{1}Y\), then both \(x\) and \(y\) are Daugavet points._ Proof.: Let \(Z:=X\oplus_{1}Y\) and assume that \((ax,by)\) is a \(\nabla\)-point in \(Z\). Note that since \(a,b>0\), we have that \((ax,by)\) is not extreme in \(B_{Z}\), hence not strongly exposed. So by Theorem 2.3, \((ax,by)\) is actually a Daugavet point in \(Z\). It then follows from [11, Theorem 3.1] that \(x\) and \(y\) are Daugavet points in \(X\) and \(Y\) respectively. Finally, for \(\ell_{\infty}\)-sums, we have the following result. One direction is straightforward, and the other is analogous to [11, Theorem 3.2]. We leave the details to the reader. **Proposition 3.5**.: _Let \(X,Y\) be Banach spaces, and let \(x\in B_{X}\) and \(y\in B_{Y}\). Then \((x,y)\) is a \(\nabla\)-point in \(X\oplus_{\infty}Y\) if and only if one of the two following conditions is satisfied:_ _(i) \(x\) is a Daugavet point or \(y\) is a Daugavet point;_ _(ii) \(x\) and \(y\) are both \(\nabla\)-points._ ### Function spaces Let \((\Omega,\Sigma,\mu)\) be a measure space. It is well known that the space \(L_{1}(\mu)\) has the Daugavet property if and only if \(\mu\) admits no atom (see e.g. [27, Section 2, Example (b)]). In fact, it was observed in [19, Proposition 4.12] that this is actually equivalent to \(L_{1}(\mu)\) having the strong diameter \(2\) property. Building on [3, Theorem 3.1], the following result was proved in [19, Corollary 4.1]. **Proposition 3.6**.: _Let \(f\in S_{L_{1}(\mu)}\).
Then the following assertions are equivalent:_ _(i) \(f\) is a super Daugavet point;_ _(ii) \(f\) is a \(\Delta\)-point;_ _(iii) The support of \(f\) contains no atom._ If \(\mu\) admits atoms, then we can also show that \(L_{1}(\mu)\) naturally contains \(\nabla\)-points which are not Daugavet points, and that those are exactly the points given by normalized indicator functions over an atom and their opposites, extending Example 1.5 in a natural way. **Proposition 3.7**.: _Let \(f\in S_{L_{1}(\mu)}\) and assume that the support of \(f\) contains an atom \(A\). Then \(f\) is a \(\nabla\)-point if and only if \(f=\pm\frac{1}{\mu(A)}\mathbb{1}_{A}\)._ Proof.: If \(\operatorname{supp}f\) contains an atom \(A\), then either \(f=g+\theta\alpha\mathbb{1}_{A}\) with \(g\in L_{1}(\mu)\) non-zero and vanishing on \(A\), \(\theta\in\{-1,1\}\) and \(\alpha\in\left(0,\frac{1}{\mu(A)}\right)\); or \(f=\frac{\theta}{\mu(A)}\mathbb{1}_{A}\) with \(\theta\in\{-1,1\}\). In the first case, we have \[\left\|f-\frac{\theta}{\mu(A)}\mathbb{1}_{A}\right\|=\int_{\Omega\setminus A}|g|\,d\mu+(1-\alpha\mu(A))=2-2\alpha\mu(A)<2;\] so \(f\) is at distance strictly less than \(2\) from a denting point of \(B_{L_{1}(\mu)}\) that is distinct from \(f\), hence \(f\) is not a \(\nabla\)-point by Proposition 2.6. In the second case, observe that \(f=(0,\theta)\) in the Banach space \(L_{1}(\mu_{|\Omega\setminus A})\oplus_{1}\mathbb{R}\equiv L_{1}(\mu)\), so Proposition 3.3 yields that \(f\) is a \(\nabla\)-point. Let \(K\) be a compact Hausdorff space. It is well known that \(C(K)\) has the Daugavet property if and only if \(K\) has no isolated point (see e.g. [27, Section 2, Example (a)]). Building on [3, Theorem 3.4], the following result was proved in [19, Corollary 4.3]. **Proposition 3.8**.: _Let \(f\in S_{C(K)}\). Then the following assertions are equivalent:_ 1. \(f\) _is a super Daugavet point;_ 2. \(f\) _is a_ \(\Delta\)_-point;_ 3. \(f\) _attains its norm at an accumulation point of_ \(K\)_._ We have seen in Example 3.1 that, similar to \(\ell_{1}^{n}\), the space \(\ell_{\infty}^{n}\) contains plenty of \(\nabla\)-points. But unlike the infinite dimensional \(L_{1}(\mu)\) setting, we will now prove that Proposition 3.8 also provides a characterization for \(\nabla\)-points in infinite dimensional \(C(K)\)-spaces, so that all the diametral notions coincide in this context. **Proposition 3.9**.: _Let \(K\) be an infinite compact Hausdorff space. If a function \(f\in S_{C(K)}\) does not attain its norm at an accumulation point of \(K\), then it is not a \(\nabla\)-point._ Proof.: Let us consider the set \(H:=\{x\in K:|f(x)|=1\}\). Since \(K\) is compact and \(f\) does not attain its norm at an accumulation point of \(K\), we have that the set \(H\) is finite and in particular clopen in \(K\). As a consequence, \(|f|\) attains its maximum on \(K\backslash H\), and there exists \(\varepsilon\in(0,1]\) such that \(\left|f_{|K\backslash H}\right|\leqslant 1-\varepsilon\). Now fix any \(x_{0}\in K\backslash H\), and for every \(x\in H\), let \(\theta_{x}:=\operatorname{sign}f(x)\). We consider the functional \(\varphi:=\frac{1}{|H|+1}\left(\sum_{x\in H}\theta_{x}\delta_{x}+\delta_{x_{0}}\right)\in S_{C(K)^{*}}\). We have \[\varphi(f)=\frac{|H|+f(x_{0})}{|H|+1}\leqslant\frac{|H|+1-\varepsilon}{|H|+1}=1-\frac{\varepsilon}{|H|+1},\] so \(f\notin S\left(\varphi,\frac{\varepsilon}{|H|+1}\right)\). Now pick any \(g\in S\left(\varphi,\frac{\varepsilon}{|H|+1}\right)\) and pick some \(z\in K\).
If \(z\notin H\), then \(|f(z)-g(z)|\leqslant|f(z)|+|g(z)|\leqslant 2-\varepsilon\). Else, \[\theta_{z}g(z)=(|H|+1)\left(\varphi(g)-\left(\varphi(g)-\frac{\theta_{z}g(z)}{|H|+1}\right)\right)>|H|+1-\varepsilon-|H|=1-\varepsilon,\] and as a consequence, \[|f(z)-g(z)|=|\theta_{z}-g(z)|=|1-\theta_{z}g(z)|<\varepsilon\leqslant 1.\] Thus \(\|f-g\|\leqslant 2-\varepsilon\), and \(f\) is not a \(\nabla\)-point. _Remark 3.10_.: 1. Let \(L\) be an infinite locally compact Hausdorff space. We can show analogously that if a function \(f\in S_{C_{0}(L)}\) does not attain its norm at an accumulation point of \(L\), then it is not a \(\nabla\)-point. In particular, if \(L\) does not have an accumulation point, then \(C_{0}(L)\) does not contain \(\nabla\)-points. 2. If \(X\) is an \(L_{1}\)-predual, then \(X^{**}\) is a \(C(K)\) space. It follows that all the diametral notions (including the notion of \(\nabla\)-points) coincide in \(X\). A characterization for those points was provided in [20, Theorem 3.2]. ### Projective tensor products The transfer of \(\Delta\)-points (respectively Daugavet points) in projective tensor products of Banach spaces was first investigated in [18] (respectively [6]). We summarize the results obtained in these two papers here: **Proposition 3.11**.: _Let \(X\) and \(Y\) be Banach spaces, and let \(x_{0}\in S_{X}\) and \(y_{0}\in S_{Y}\)._ 1. _If_ \(x_{0}\) _is a_ \(\Delta\)_-point, then_ \(x_{0}\otimes y\) _is a_ \(\Delta\)_-point in_ \(X\widehat{\otimes}_{\pi}Y\) _for every_ \(y\in S_{Y}\)__ _[_18_, Remark 5.4]__._ 2. _If_ \(x_{0}\otimes y_{0}\) _is a_ \(\Delta\)_-point in_ \(X\widehat{\otimes}_{\pi}Y\) _and_ \(y_{0}\) _is a strongly exposed point, then_ \(x_{0}\) _is a_ \(\Delta\)_-point_ _[_6_, Proposition 2.12, (a)]__._ 3. _If_ \(x_{0}\) _and_ \(y_{0}\) _are both Daugavet points, then_ \(x_{0}\otimes y_{0}\) _is a Daugavet point in_ \(X\widehat{\otimes}_{\pi}Y\)__ _[_6_, Proposition 2.12, (b)]__._ 4. _If_ \(x_{0}\otimes y_{0}\) _is a Daugavet point in_ \(X\widehat{\otimes}_{\pi}Y\) _and_ \(y_{0}\) _is a denting point, then_ \(x_{0}\) _is a Daugavet point_ _[_6_, Proposition 2.12, (c)]__._ Our goal in this subsection is to study similar stability results for \(\nabla\)-points. In particular, we will show that in order to get that \(x_{0}\otimes y_{0}\) is a Daugavet point in [6, Proposition 2.12, (b)], it suffices to assume that one of the points is a Daugavet point and that the other is a \(\nabla\)-point (see Proposition 3.14). We begin by pointing out the following simple lemma, which is certainly well known to experts on Daugavet points, but for which we could not find an explicit reference. **Lemma 3.12**.: _Let \(X\) be a Banach space. If \(x\in S_{X}\) is a Daugavet point, then for every slice \(S\) of \(B_{X}\) and for every \(\varepsilon>0\), there exists \(y\in S\) such that \(\|x\pm y\|\geq 2-\varepsilon\)._ Proof.: Let \(x\) be a Daugavet point, \(S\) a slice of \(B_{X}\), and \(\varepsilon>0\). Since \(x\) is a Daugavet point, by [14, Remark 2.3] we can find a slice \(\tilde{S}\subset S\) such that \[\|x-u\|\geq 2-\varepsilon\qquad\text{for all }u\in\tilde{S}.\] Note that \(-x\) is also a Daugavet point, hence we can find a \(y\in\tilde{S}\) such that \(\|-x-y\|=\|x+y\|\geq 2-\varepsilon\). Therefore, \(\|x\pm y\|\geq 2-\varepsilon\) and \(y\in S\) as we wanted. _Remark 3.13_.: Let us point out that there is no complete \(\nabla\)-analogue to Lemma 3.12.
Indeed, \((0,1)\) is a \(\nabla\)-point in \(\ell_{1}^{2}\), but if one considers any slice \(S\) of \(B_{\ell_{1}^{2}}\) such that \((0,-1)\in S\), but \((0,1),(1,0),(-1,0)\notin S\), then clearly there exists \(\varepsilon>0\) such that \(\|(0,1)+y\|\leq 2-\varepsilon\) for every \(y\in S\). However, the result does hold true for a \(\nabla\)-point \(x\) and slices that contain neither \(x\) nor \(-x\). **Proposition 3.14**.: _Let \(X\) and \(Y\) be Banach spaces. If \(x\in S_{X}\) is a Daugavet point and \(y\in S_{Y}\) is a \(\nabla\)-point, then \(x\otimes y\in S_{X\widehat{\otimes}_{\pi}Y}\) is a Daugavet point._ Proof.: We follow [6, Proposition 2.12, (b)]. Let \(\varepsilon>0\) and \(S:=S(B_{X\widehat{\otimes}_{\pi}Y},B,\alpha)\) be an arbitrary slice. Our goal is to find a \(z\in S\) such that \(\|x\otimes y-z\|\geq 2-\varepsilon\). Find \(x_{0}\in B_{X}\) and \(y_{0}\in B_{Y}\) such that \(B(x_{0},y_{0})>1-\alpha/2\). Consider first the following slice \[S_{1}:=\left\{x\in B_{X}\colon B(x,y_{0})>\sup_{u\in B_{X}}B(u,y_{0})-\frac{\alpha}{4}\right\}.\] Since \(x\) is a Daugavet point, by Lemma 3.12, we can find \(u_{1}\in S_{1}\) such that \(\|x\pm u_{1}\|\geq 2-\varepsilon\). Now look at the slice \[S_{2}:=\left\{y\in B_{Y}\colon B(u_{1},y)>\sup_{v\in B_{Y}}B(u_{1},v)-\frac{\alpha}{4}\right\}.\] We consider two cases: (a) \(y\in S_{2}\) and (b) \(y\notin S_{2}\). (a) Assume that \(y\in S_{2}\). We can then take \(z:=u_{1}\otimes y\). Indeed, \(\|x\otimes y-u_{1}\otimes y\|=\|x-u_{1}\|\cdot\|y\|\geq 2-\varepsilon\) and \[B(u_{1},y)>\sup_{v\in B_{Y}}B(u_{1},v)-\frac{\alpha}{4}\geq B(u_{1},y_{0})-\frac{\alpha}{4}>\sup_{u\in B_{X}}B(u,y_{0})-\frac{\alpha}{4}-\frac{\alpha}{4}\geq B(x_{0},y_{0})-\frac{\alpha}{2}>1-\alpha.\] Hence, \(z\in S\) and \(\|x\otimes y-z\|\geq 2-\varepsilon\) as we wanted. (b) Assume that \(y\notin S_{2}\). Then since \(y\) is a \(\nabla\)-point we can find \(v_{2}\in S_{2}\) such that \(\|y-v_{2}\|\geq 2-\varepsilon\). Similar computations as in (a) show that \(B(u_{1},v_{2})>1-\alpha\), that is, \(z:=u_{1}\otimes v_{2}\in S\). Finally, let us show that \(\|x\otimes y-z\|\) is also almost \(2\). Since \(\|x+u_{1}\|\geq 2-\varepsilon\) and \(\|y-v_{2}\|\geq 2-\varepsilon\), we can find \(x^{*}\in S_{X^{*}}\) and \(y^{*}\in S_{Y^{*}}\) such that \[x^{*}(x+u_{1})\geq 2-\varepsilon\qquad\text{and}\qquad y^{*}(y-v_{2})\geq 2-\varepsilon.\] Therefore, \(x^{*}(x),x^{*}(u_{1}),y^{*}(y),y^{*}(-v_{2})\geqslant 1-\varepsilon\). Define a bilinear operator \(B_{0}(x,y):=x^{*}(x)y^{*}(y)\) for all \(x\in X\) and \(y\in Y\). Then \(\|B_{0}\|=1\) and \[\|x\otimes y-u_{1}\otimes v_{2}\|\geqslant B_{0}(x\otimes y-u_{1}\otimes v_{2})=x^{*}(x)y^{*}(y)-x^{*}(u_{1})y^{*}(v_{2})=x^{*}(x)y^{*}(y)+x^{*}(u_{1})y^{*}(-v_{2})\geqslant(1-\varepsilon)^{2}+(1-\varepsilon)^{2}=2(1-\varepsilon)^{2}.\] Since \(\varepsilon>0\) was arbitrary, this yields the Daugavet condition at \(x\otimes y\). Hence, \(x\otimes y\) is a Daugavet point. Although we do not know whether an elementary tensor is a \(\nabla\)-point whenever both components are \(\nabla\)-points, we can at least say that such a point has to be far away from all denting points of the unit ball other than itself. **Proposition 3.15**.: _Let \(X\) and \(Y\) be Banach spaces. If \(x\in S_{X}\) and \(y\in S_{Y}\) are \(\nabla\)-points, then \(\|x\otimes y-z\|=2\) for every denting point \(z\in B_{X\widehat{\otimes}_{\pi}Y}\) with \(z\neq x\otimes y\)._ Proof.: Let \(z\) be a denting point in \(B_{X\widehat{\otimes}_{\pi}Y}\) such that \(z\neq x\otimes y\).
By [26, Corollary 4], we know that \(z=u\otimes v\), where \(u\) is a denting point of \(B_{X}\) and \(v\) is a denting point of \(B_{Y}\). We will conclude the result arguing by cases: 1. Assume first that \(x\neq u\) and \(y=v\) (the case \(x=u\) and \(y\neq v\) is similar). Since \(x\) is a \(\nabla\)-point, by Proposition 2.6 we have that \(\|x-u\|=2\). Hence, \[\|x\otimes y-z\|=\|x\otimes y-u\otimes y\|=\|x-u\|\cdot\|y\|=2.\] 2. Assume now that \(x\neq u\) and \(y\neq v\). (a) Suppose \(x=-u\) and \(y=-v\). Then \(x\otimes y=(-u)\otimes(-v)=u\otimes v=z\), which contradicts the assumption \(z\neq x\otimes y\), so this case cannot occur. (b) Suppose \(x\neq-u\) and \(y=-v\) (the case \(x=-u\) and \(y\neq-v\) is similar). Since \(-u\) is also a denting point, we can use Proposition 2.6 to obtain that \(\|x+u\|=2\). Thus \[\|x\otimes y-u\otimes v\|=\|x\otimes(-v)-u\otimes v\|=\|x+u\|\|v\|=2.\] (c) Suppose \(x\neq-u\) and \(y\neq-v\). By using Proposition 2.6 twice we obtain that \[\|x+u\|=2\quad\text{ and }\quad\|y-v\|=2.\] Let \(\varepsilon>0\). We can find \(x^{*}\in S_{X^{*}}\) and \(y^{*}\in S_{Y^{*}}\) such that \[x^{*}(x+u)\geqslant 2-\varepsilon\qquad\text{ and }\qquad y^{*}(y-v)\geqslant 2-\varepsilon.\] Therefore, \(x^{*}(x),x^{*}(u),y^{*}(y),y^{*}(-v)\geqslant 1-\varepsilon\). Define a bilinear operator \(B(r,s):=x^{*}(r)y^{*}(s)\) for all \(r\in X\) and \(s\in Y\). Then \(\|B\|=1\) and \[\|x\otimes y-u\otimes v\|\geqslant B(x\otimes y-u\otimes v)=x^{*}(x)y^{*}(y)-x^{*}(u)y^{*}(v)=x^{*}(x)y^{*}(y)+x^{*}(u)y^{*}(-v)\geqslant(1-\varepsilon)^{2}+(1-\varepsilon)^{2}=2(1-\varepsilon)^{2}.\] Hence, \(\|x\otimes y-z\|=2\) as we wanted. _Remark 3.16_.: In particular, note that it follows from Proposition 2.9 that if \(X\widehat{\otimes}_{\pi}Y\) has the Radon-Nikodym property, then actually \(x\otimes y\) is a \(\nabla\)-point in \(X\widehat{\otimes}_{\pi}Y\) whenever \(x\) and \(y\) are \(\nabla\)-points. A concrete illustration of the distances above in \(\ell_{1}\)-spaces is given at the end of this subsection. We now turn our attention to the converse of Proposition 3.14. For this we state a lemma that can be proven similarly to [14, Remark 2.3] and is left to the reader. **Lemma 3.17**.: _Let \(X\) be a Banach space, let \(x\in S_{X}\) be a \(\nabla\)-point in \(X\), and let \(\varepsilon>0\). If \(S\) is a slice of \(B_{X}\) that does not contain \(x\), then there exists a slice \(\tilde{S}\) of \(B_{X}\) contained in \(S\) such that \(\|x-y\|>2-\varepsilon\) for every \(y\in\tilde{S}\)._ Note that Lemma 3.17 gives us the following: If \(x\otimes y\in S_{X\widehat{\otimes}_{\pi}Y}\) is a \(\nabla\)-point, \(S\) is a slice of \(B_{X\widehat{\otimes}_{\pi}Y}\) not containing \(x\otimes y\), and \(\varepsilon>0\), then we can find an elementary tensor \(u\otimes v\in S\) such that \(\|x\otimes y-u\otimes v\|\geq 2-\varepsilon\). **Proposition 3.18**.: _Let \(X\) and \(Y\) be Banach spaces. If \(x\otimes y\in S_{X\widehat{\otimes}_{\pi}Y}\) is a \(\nabla\)-point and \(y\) is a denting point, then \(x\in S_{X}\) is a \(\nabla\)-point._ Proof.: We follow [6, Proposition 2.12, (c)]. Let \(S(B_{X},x^{*},\alpha)\) be such that \(x\notin S(B_{X},x^{*},\alpha)\) and \(\varepsilon>0\). Since \(y\) is a denting point, we can find a slice \(S(B_{Y},y^{*},\beta)\) such that \(y\in S(B_{Y},y^{*},\beta)\) and \(\operatorname{diam}(S(B_{Y},y^{*},\beta))\leq\varepsilon\). Define the bilinear form \(B(u,v)=x^{*}(u)y^{*}(v)\) for all \(u\in X\) and \(v\in Y\). Consider the slice \(S(B_{X\widehat{\otimes}_{\pi}Y},B,\gamma)\), where \(\gamma=\min\{\alpha,\beta\}\).
Since \(x\otimes y\) is a \(\nabla\)-point and \(x\otimes y\notin S(B_{X\widehat{\otimes}_{\pi}Y},B,\gamma)\), we can find, by Lemma 3.17, an elementary tensor \(u_{0}\otimes v_{0}\in S(B_{X\widehat{\otimes}_{\pi}Y},B,\gamma)\) such that \[\|x\otimes y-u_{0}\otimes v_{0}\|\geq 2-\varepsilon.\] Observe that \[2-\varepsilon\leq\|x\otimes y-u_{0}\otimes v_{0}\|=\|(x-u_{0})\otimes y+u_{0}\otimes(y-v_{0})\|\leq\|x-u_{0}\|+\|y-v_{0}\|.\] Since \(u_{0}\otimes v_{0}\in S(B_{X\widehat{\otimes}_{\pi}Y},B,\gamma)\), we have that \[x^{*}(u_{0})y^{*}(v_{0})=B(u_{0},v_{0})>1-\gamma=1-\min\{\alpha,\beta\}.\] Therefore, \(x^{*}(u_{0})>1-\alpha\) and \(y^{*}(v_{0})>1-\beta\), that is, \(u_{0}\in S(B_{X},x^{*},\alpha)\) and \(v_{0}\in S(B_{Y},y^{*},\beta)\). Note that \(\|y-v_{0}\|<\varepsilon\), because \(\operatorname{diam}(S(B_{Y},y^{*},\beta))\leq\varepsilon\). Finally, \[\|x-u_{0}\|\geq 2-\varepsilon-\|y-v_{0}\|>2-2\varepsilon,\] hence \(x\) is a \(\nabla\)-point as we wanted to show. We will end this section with some results on super \(\Delta\)-points, and with some observations on super Daugavet points. We first show that super \(\Delta\)-points pass easily to projective tensor products. **Proposition 3.19**.: _Let \(X\) and \(Y\) be Banach spaces. If \(x\) is a super \(\Delta\)-point, then \(x\otimes y\in S_{X\widehat{\otimes}_{\pi}Y}\) is a super \(\Delta\)-point for every \(y\in S_{Y}\)._ Proof.: Let \(y\in S_{Y}\). Since \(x\) is a super \(\Delta\)-point, then by [19, Proposition 3.4, (2)], there is a net \((x_{\alpha})\) in \(B_{X}\) that converges to \(x\) weakly and such that \(\|x_{\alpha}-x\|\to 2\). Take \(z_{\alpha}:=x_{\alpha}\otimes y\). Clearly, \((z_{\alpha})\subset B_{X\widehat{\otimes}_{\pi}Y}\) and \[\|x\otimes y-x_{\alpha}\otimes y\|=\|x_{\alpha}-x\|\to 2.\] Finally, let us show that \(z_{\alpha}\) converges weakly to \(x\otimes y\). Indeed, let \(T\in(X\widehat{\otimes}_{\pi}Y)^{*}=\mathcal{L}(Y,X^{*})\). Then \[T(x_{\alpha}\otimes y)=\langle Ty,x_{\alpha}\rangle\to\langle Ty,x\rangle=T(x\otimes y),\] because \(x_{\alpha}\) converges to \(x\) weakly, and the conclusion follows. We do not know whether super Daugavet points pass to the projective tensor product similarly to Daugavet points. However, let us point out that there is an analogue to Lemma 3.12 for those points. **Lemma 3.20**.: _Let \(X\) be a Banach space, and let \(x\in S_{X}\). If \(x\) is a super Daugavet point, then for every relatively weakly open subset \(V\) of \(B_{X}\), and for every \(\varepsilon>0\), there exists \(y\in V\) such that \(\|x\pm y\|\geqslant 2-\varepsilon\). In particular, we can find for every \(y\in B_{X}\) a net \((y_{\alpha})\) in \(S_{X}\) that converges weakly to \(y\) and such that \(\|x\pm y_{\alpha}\|\to 2\)._ Proof.: Let \[\Delta_{\varepsilon}(x):=\{y\in B_{X}:\ \|x-y\|>2-\varepsilon\}.\] It was observed in [19, Proposition 3.4] that the set \(\Delta_{\varepsilon}(x)\) is relatively weakly open in \(B_{X}\). So as \(x\) is a super Daugavet point, we get that \(W:=V\cap\Delta_{\varepsilon}(x)\) is a non-empty relatively weakly open subset of \(B_{X}\). Thus \(-W\cap\Delta_{\varepsilon}(x)\) is also non-empty, and any element \(y\) in this set satisfies \(\|x\pm y\|>2-\varepsilon\). The desired net can then be constructed analogously to [19, Proposition 3.4].
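The distances appearing in Propositions 3.14 and 3.15 can be made completely concrete in \(\ell_{1}\)-spaces, where the projective tensor norm is computable: by the classical identity \(L_{1}(\mu)\widehat{\otimes}_{\pi}L_{1}(\nu)\equiv L_{1}(\mu\times\nu)\), the space \(\ell_{1}^{n}\widehat{\otimes}_{\pi}\ell_{1}^{m}\) is isometrically \(\ell_{1}^{nm}\), so the projective norm of a matrix of coefficients is simply its entrywise \(\ell_{1}\)-norm. The following purely illustrative Python sketch (ours) checks Proposition 3.15 for the \(\nabla\)-points of \(\ell_{1}^{2}\) given by the unit vector basis (cf. Example 1.5), together with the degenerate configuration excluded in case 2(a) of its proof; the helper name `pi_norm_l1` is our own.

```python
import numpy as np

# In l1^n (x)_pi l1^m, the projective norm of M = sum c_ij e_i (x) e_j is the
# entrywise l1-norm of the coefficient matrix (c_ij), since l1 (x)_pi l1 = l1.
def pi_norm_l1(M):
    return np.abs(M).sum()

e = np.eye(2)
x = y = e[0]                       # nabla-points (unit vector basis) of l1^2
for u, v in [(e[1], e[0]), (e[1], e[1]), (-e[0], e[1])]:
    z = np.outer(u, v)             # a denting point of the ball, z != x (x) y
    print(pi_norm_l1(np.outer(x, y) - z))      # 2.0 in each case

# The configuration u = -x, v = -y is excluded: it gives back the same tensor.
print(np.allclose(np.outer(-x, -y), np.outer(x, y)))   # True
```

## 4. Renormings

Our starting point will be the observation that every Banach space can easily be renormed with a \(\nabla\)-point.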
Indeed, it immediately follows from Proposition 3.3 and Example 1.5 that the addition of an "\(\ell_{1}\)-corner" to the unit ball of any given Banach space automatically produces a \(\nabla\)-point. We will then combine this simple idea with an adapted version of the renorming from [9, Theorem 2.4] that was already used in [1] to produce a renorming of \(\ell_{2}\) with a super \(\Delta\)-point in the space and its dual. **Proposition 4.1**.: _Every Banach space can be renormed with a \(\nabla\)-point._ Proof.: For finite dimensional spaces, the statement is clear, as we have already seen in Example 1.5 that \(\ell_{1}^{n}\) contains a \(\nabla\)-point for every \(n\in\mathbb{N}\). So let \(X\) be an infinite dimensional Banach space. Then \(X\) is isomorphic to \(Y\oplus_{1}\mathbb{R}\) for any given co-dimension 1 subspace \(Y\) of \(X\). Since 1 is a \(\nabla\)-point in \(\mathbb{R}\), we get from Proposition 3.3 that \((0,1)\) is a \(\nabla\)-point in \(Y\oplus_{1}\mathbb{R}\), and the conclusion follows. Let \(X\) be a Banach space with a Schauder basis \((e_{n})\). Recall that \((e_{n})\) is said to be _unconditional_ if for every \(x\in X\) its expansion \(x=\sum_{n\geqslant 1}a_{n}e_{n}\) converges unconditionally. Following [5] we will say that \((e_{n})\) is _1-unconditional_ if for every \((a_{n}),(b_{n})\in c_{00}\) with \(|a_{n}|\leqslant|b_{n}|\) for every \(n\), we have \[\left\|\sum_{n\geqslant 1}a_{n}e_{n}\right\|\leqslant\left\|\sum_{n\geqslant 1}b_{n}e_{n}\right\|.\] The following theorem is the main result of the section. **Theorem 4.2**.: _Let \(X\) be an infinite dimensional Banach space with an unconditional weakly null Schauder basis \((e_{n})\) and biorthogonal functionals \((e_{n}^{*})\). Then there exists an equivalent norm \(|\!|\!|\cdot|\!|\!|\) on \(X\) such that_ 1. \(e_{1}\) _is a super Daugavet point in_ \((X,|\!|\!|\cdot|\!|\!|)\)_;_ 2. \(e_{1}^{*}\) _is a weak_\({}^{*}\) _super Daugavet point in_ \((E,|\!|\!|\cdot|\!|\!|)\)_, where_ \(E:=\overline{\operatorname{span}}\{e_{n}^{*}\}\)_._ We will prove Theorem 4.2 in several steps. So from now on, \((X,\left\|\cdot\right\|)\) will be an infinite dimensional Banach space with an unconditional weakly null Schauder basis \((e_{n})\) and biorthogonal functionals \((e_{n}^{*})\). We will also assume as we may that \((e_{n})\) is normalized and 1-unconditional with respect to the original norm of \(X\). For simplicity, we will keep using the notation from previous sections whenever we refer to this specific norm. Let \(Y:=\overline{\operatorname{span}}\{e_{n}\}_{n\geqslant 2}\), and let \(A\) be the set of all finitely supported elements in the positive cone of \(Y\). Similarly, let \(F\) be the set of all finitely supported elements in the positive cone of \(\overline{\operatorname{span}}\{e_{n}^{*}\}_{n\geqslant 2}\). We consider the equivalent norm \(|\!|\!|\cdot|\!|\!|\) on \(X\) whose unit ball is \[B_{(X,|\!|\!|\cdot|\!|\!|)}:=\overline{\operatorname{conv}}\{\pm(e_{1}+2x)\colon x\in A\cap B_{X}\}.\] Since \((e_{n})\) is \(1\)-unconditional, every finitely supported element \(y\in B_{Y}\) can be written as \(y=y_{+}-y_{-}\) with \(y_{+},y_{-}\in A\cap B_{X}\), and it immediately follows that \(B_{Y}\) is contained in \(B_{(X,|\!|\!|\cdot|\!|\!|)}\). Hence, \[B_{Y\oplus_{1}\mathbb{R}e_{1}}=\operatorname{conv}(B_{Y}\cup\{\pm e_{1}\})\subset B_{(X,|\!|\!|\cdot|\!|\!|)}\subset 3B_{X},\] and \(|\!|\!|\cdot|\!|\!|\) is indeed an equivalent norm on \(X\).
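Before giving a general analysis of \(|\!|\!|\cdot|\!|\!|\), here is a small numerical sanity check of the construction in the toy case \(X=\ell_{1}^{3}\) (whose unit vector basis is \(1\)-unconditional). In that case \(A\cap B_{X}\) is the simplex with vertices \(0\), \(e_{2}\), \(e_{3}\), so the new unit ball is the polytope \(\operatorname{conv}\{\pm e_{1},\pm(e_{1}+2e_{2}),\pm(e_{1}+2e_{3})\}\), and \(|\!|\!|\cdot|\!|\!|\) is its Minkowski gauge, which a small linear program evaluates. The values printed below anticipate the norm formula of Lemma 4.3 and the line segments of Corollary 4.4; the sketch (ours, assuming NumPy and SciPy are available) is only meant as an illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Generators of the new unit ball in X = l1^3: conv{+-g} for the columns g of G.
G = np.array([[1, 0, 0], [1, 2, 0], [1, 0, 2]], dtype=float).T

def new_norm(v):
    """Gauge of conv(+-columns of G): min sum |c| subject to G @ c = v."""
    k = G.shape[1]
    # Split c = p - m with p, m >= 0 and minimise sum(p) + sum(m).
    res = linprog(np.ones(2 * k), A_eq=np.hstack([G, -G]), b_eq=v,
                  bounds=[(0, None)] * (2 * k))
    return res.fun

e1, e2, e3 = np.eye(3)
print(new_norm(e1))                              # ~1.0
print(new_norm(e1 + 2 * e2))                     # ~1.0
print(new_norm(-2 * e2))                         # ~2.0, i.e. |||e1-(e1+2e2)|||=2
print(new_norm(0.5 * e2 - 0.5 * (e1 + 2 * e3)))  # ~1.0, cf. Corollary 4.4
```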
We start by giving a geometric description of \(B_{(E,|\!|\!|\cdot|\!|\!|)}\), and by producing useful formulas for the norm \(|\!|\!|\cdot|\!|\!|\) and for its dual norm. **Lemma 4.3**.: _The space \((X,|\!|\!|\cdot|\!|\!|)\) and the space \((E,|\!|\!|\cdot|\!|\!|)\) have the following properties:_ 1. _For every_ \(z^{*}\in B_{(E,|\!|\!|\cdot|\!|\!|)}\) _with finite support and_ \(z^{*}(e_{1})\geq 0\)_, there exist_ \(\lambda\in[0,1]\) _and_ \(x^{*},y^{*}\in F\cap B_{X^{*}}\) _with disjoint supports such that_ \[z^{*}=\lambda(e_{1}^{*}-y^{*})+(1-\lambda)\frac{1}{2}(x^{*}-y^{*});\] 2. _The unit ball of the space_ \((E,|\!|\!|\cdot|\!|\!|)\) _is given by_ \[B_{(E,|\!|\!|\cdot|\!|\!|)}=\overline{\operatorname{conv}}\,\Big\{\pm(e_{1}^{*}-x^{*}),\frac{1}{2}(x^{*}-y^{*})\colon x^{*},y^{*}\in F\cap B_{X^{*}}\Big\};\] 3. _For every_ \(x\in X\)_,_ \[|\!|\!|x|\!|\!|=\max\Big\{\big|e_{1}^{*}(x)\big|,\,\big|e_{1}^{*}(x)-\|x_{+}\|\big|,\,\big|e_{1}^{*}(x)+\|x_{-}\|\big|,\,\frac{1}{2}\big(\|x_{+}\|+\|x_{-}\|\big)\Big\},\] _where_ \(x_{+}\) _and_ \(x_{-}\) _are the positive and negative parts of_ \(x-e_{1}^{*}(x)e_{1}\) _respectively;_ 4. _For every_ \(x^{*}\in E\)_,_ \[|\!|\!|x^{*}|\!|\!|=\max\Big\{\big|x^{*}(e_{1})+2\|x_{+}^{*}\|\big|,\,\big|x^{*}(e_{1})-2\|x_{-}^{*}\|\big|\Big\},\] _where_ \(x_{+}^{*}\) _and_ \(x_{-}^{*}\) _are the positive and negative parts of_ \(x^{*}-x^{*}(e_{1})e_{1}^{*}\) _respectively._ Proof.: For convenience, we will first prove (iv), and then use it for proving (i) and (ii). Fix \(x^{*}\in E\) and let \(x_{+}^{*}\) and \(x_{-}^{*}\) be the positive and negative parts of \(x^{*}-x^{*}(e_{1})e_{1}^{*}\) respectively. Then \(x^{*}=x^{*}(e_{1})e_{1}^{*}+x_{+}^{*}-x_{-}^{*}\). Since \[B_{(X,|\!|\!|\cdot|\!|\!|)}=\overline{\operatorname{conv}}\{\pm(e_{1}+2x)\colon x\in A\cap B_{X}\},\] we get \[|\!|\!|x^{*}|\!|\!|=\sup_{x\in A\cap B_{X}}\big|x^{*}(e_{1}+2x)\big|=\sup_{x\in A\cap B_{X}}\big|x^{*}(e_{1})+2x_{+}^{*}(x)-2x_{-}^{*}(x)\big|=\max\Big\{\big|x^{*}(e_{1})+2\|x_{+}^{*}\|\big|,\,\big|x^{*}(e_{1})-2\|x_{-}^{*}\|\big|\Big\}.\] The last equality strongly relies on the fact that \((e_{n})\) is \(1\)-unconditional with respect to the original norm. Indeed, the latter implies that \[\|y^{*}\|=\sup\{y^{*}(x):x\in A\cap B_{X},\ \operatorname{supp}x\subset\operatorname{supp}y^{*}\}\] for every functional \(y^{*}\in F\). So since \(x_{+}^{*}\) and \(x_{-}^{*}\) have disjoint supports, each of these functionals admits a norming set in \(A\cap B_{X}\) on which the other functional vanishes. Now let us prove (i). Fix \(z^{*}\in B_{(E,|\!|\!|\cdot|\!|\!|)}\) with finite support and \(z^{*}(e_{1})\geq 0\). Let \(z_{+}^{*}\) and \(z_{-}^{*}\) be the positive and negative parts of \(z^{*}-z^{*}(e_{1})e_{1}^{*}\) respectively. If \(z^{*}(e_{1})=1\), then from (iv) we get \[\max\Big\{\big|1+2\|z_{+}^{*}\|\big|,\,\big|1-2\|z_{-}^{*}\|\big|\Big\}=|\!|\!|z^{*}|\!|\!|\leq 1,\] which means \(z_{+}^{*}=0\) and \(\|z_{-}^{*}\|\leq 1\). Hence \(z^{*}=e_{1}^{*}-z_{-}^{*}\) and \(z_{-}^{*}\in F\cap B_{X^{*}}\). So let us assume that \(z^{*}(e_{1})\in[0,1)\). Then \[z^{*}=z^{*}(e_{1})e_{1}^{*}+z_{+}^{*}-z_{-}^{*}=z^{*}(e_{1})\left(e_{1}^{*}-y^{*}\right)+\big(1-z^{*}(e_{1})\big)\frac{1}{2}\left(x^{*}-y^{*}\right),\] where \(x^{*}=\frac{2}{1-z^{*}(e_{1})}z_{+}^{*}\) and \(y^{*}=\frac{2}{1+z^{*}(e_{1})}z_{-}^{*}\). Furthermore, \[\max\Big\{\big|z^{*}(e_{1})+2\|z_{+}^{*}\|\big|,\,\big|z^{*}(e_{1})-2\|z_{-}^{*}\|\big|\Big\}=|\!|\!|z^{*}|\!|\!|\leqslant 1\] and thus \[\frac{2\|z_{+}^{*}\|}{1-z^{*}(e_{1})}\leqslant 1\quad\text{ and }\quad\frac{2\|z_{-}^{*}\|}{1+z^{*}(e_{1})}\leqslant 1,\] meaning \(x^{*},y^{*}\in F\cap B_{X^{*}}\). Next we will prove (ii).
By (i) we have \[B_{(E,|\!|\!|\cdot|\!|\!|)}\subseteq\overline{\operatorname{conv}}\Big\{\pm(e_{1}^{*}-x^{*}),\frac{1}{2}(x^{*}-y^{*})\colon x^{*},y^{*}\in F\cap B_{X^{*}}\Big\}.\] From (iv) we get \[|\!|\!|e_{1}^{*}-x^{*}|\!|\!|=\max\big\{|1+2\cdot 0|,\,\big|1-2\|x^{*}\|\big|\big\}=1\] and \[\frac{1}{2}|\!|\!|x^{*}-y^{*}|\!|\!|\leqslant\max\big\{\|x^{*}\|,\|y^{*}\|\big\}\leqslant 1\] for all \(x^{*},y^{*}\in F\cap B_{X^{*}}\), which gives us the other inclusion. Finally, let us prove (iii). For this purpose, we start by proving that the basis \((e_{n})\) is also monotone with respect to the new norm. Then we will get e.g. from [5, Lemma 3.2.3] that \[|\!|\!|x|\!|\!|=\sup_{z^{*}\in B_{(E,|\!|\!|\cdot|\!|\!|)}}\big|z^{*}(x)\big|\] for every \(x\in X\). So let \(n\in\mathbb{N}\) and let \(P_{n}\) be the projection on \(\operatorname{span}\{e_{1},\ldots,e_{n}\}\). Since \((e_{n})\) is \(1\)-unconditional hence monotone with respect to the original norm, we have that \(P_{n}(x)\in A\cap B_{X}\) for every \(x\in A\cap B_{X}\). Thus \(P_{n}(e_{1}+2x)=e_{1}+2P_{n}(x)\in B_{(X,|\!|\!|\cdot|\!|\!|)}\), and from this we clearly get \(P_{n}\big(B_{(X,|\!|\!|\cdot|\!|\!|)}\big)\subseteq B_{(X,|\!|\!|\cdot|\!|\!|)}\). Therefore \(|\!|\!|P_{n}|\!|\!|\leqslant 1\), which is what we wanted. Now fix \(x\in X\) and let \(x_{+}\) and \(x_{-}\) be the positive and negative parts of \(x-e_{1}^{*}(x)e_{1}\) respectively. By (ii) we have \[|\!|\!|x|\!|\!|=\sup_{z^{*}\in B_{(E,|\!|\!|\cdot|\!|\!|)}}\big|z^{*}(x)\big|=\max\left\{\sup_{x^{*}\in F\cap B_{X^{*}}}\big|(e_{1}^{*}-x^{*})(x)\big|,\,\frac{1}{2}\sup_{x^{*},y^{*}\in F\cap B_{X^{*}}}\big|(x^{*}-y^{*})(x)\big|\right\}=\max\left\{\sup_{x^{*}\in F\cap B_{X^{*}}}\big|e_{1}^{*}(x)-x^{*}(x_{+})+x^{*}(x_{-})\big|,\,\frac{1}{2}\sup_{x^{*},y^{*}\in F\cap B_{X^{*}}}\big(x^{*}(x_{+})+y^{*}(x_{-})\big)\right\}=\max\left\{\big|e_{1}^{*}(x)\big|,\,\big|e_{1}^{*}(x)-\|x_{+}\|\big|,\,\big|e_{1}^{*}(x)+\|x_{-}\|\big|,\,\frac{1}{2}\big(\|x_{+}\|+\|x_{-}\|\big)\right\}.\] Again, the last two equalities strongly rely on the fact that \((e_{n})\) is \(1\)-unconditional with respect to the original norm because the latter implies that every \(y\in A\) attains its norm on the set \[\{x^{*}\in F\cap B_{X^{*}}:\ \operatorname{supp}x^{*}\subset\operatorname{supp}y\},\] and in particular that \(x_{+}\) and \(x_{-}\) both admit a norming functional in \(F\cap B_{X^{*}}\) which vanishes at the other point. In particular, note that \(|\!|\!|x|\!|\!|=1\) for every \(x\in A\cap S_{X}\), and that \(|\!|\!|e_{1}+2y|\!|\!|=1\) for every \(y\in A\cap B_{X}\). Furthermore, \(S_{(X,|\!|\!|\cdot|\!|\!|)}\) contains plenty of line segments that will prove very useful later on. **Corollary 4.4**.: _For every \(x\in A\cap S_{X}\) and \(y\in A\cap B_{X}\) with disjoint supports, and for every \(\lambda\in[0,1]\), we have_ \[|\!|\!|\lambda x-(1-\lambda)(e_{1}+2y)|\!|\!|=1.\] Proof.: Fix \(x\in A\cap S_{X}\) and \(y\in A\cap B_{X}\) with disjoint supports, and fix \(\lambda\in[0,1]\).
Then by Lemma 4.3 (iii) we have \[|\!|\!|\lambda x-(1-\lambda)(e_{1}+2y)|\!|\!|=\max\Big\{1-\lambda,\,1,\,1-\lambda,\,\frac{1}{2}\big(\lambda+2(1-\lambda)\big)\Big\}=1.\] Analogously, observe that \(|\!|\!|x^{*}|\!|\!|=2\) for every \(x^{*}\in F\cap S_{X^{*}}\) and that \(|\!|\!|e_{1}^{*}-x^{*}|\!|\!|=1\) for every \(x^{*}\in F\cap B_{X^{*}}\). From these observations, one can already easily deduce some diametral properties for the points \(e_{1}\) and \(e_{1}^{*}\). So for the reader's convenience, let us prove right away the two following corollaries. **Corollary 4.5**.: _The point \(e_{1}\) is a super \(\Delta\)-point in \((X,|\!|\!|\cdot|\!|\!|)\), and \(e_{1}^{*}\) is a weak\({}^{*}\) super \(\Delta\)-point in \((E,|\!|\!|\cdot|\!|\!|)\)._ **Corollary 4.6**.: _The points \(e_{1}\) and \(e_{1}^{*}\) are \(\nabla\)-points in \((X,|\!|\!|\cdot|\!|\!|)\) and \((E,|\!|\!|\cdot|\!|\!|)\) respectively._ Proof of Corollary 4.5.: Since \((e_{n})\) is weakly null, we have \(e_{1}+2e_{n}\to e_{1}\) weakly. By Corollary 4.4, we have \[|\!|\!|e_{1}|\!|\!|=|\!|\!|e_{1}+2e_{n}|\!|\!|=1\text{ and }|\!|\!|e_{1}-(e_{1}+2e_{n})|\!|\!|=2\,|\!|\!|e_{n}|\!|\!|=2\] for every \(n\geqslant 2\). Hence \(e_{1}\) is a super \(\Delta\)-point in \((X,|\!|\!|\cdot|\!|\!|)\). Analogously, \((e_{n}^{*})\) is weak\({}^{*}\) null, so we have \(e_{1}^{*}-e_{n}^{*}\to e_{1}^{*}\) weak\({}^{*}\). Then as \(|\!|\!|e_{1}^{*}-(e_{1}^{*}-e_{n}^{*})|\!|\!|=|\!|\!|e_{n}^{*}|\!|\!|=2\) for every \(n\geqslant 2\), it follows that \(e_{1}^{*}\) is a weak\({}^{*}\) super \(\Delta\)-point in \((E,|\!|\!|\cdot|\!|\!|)\). Proof of Corollary 4.6.: Clearly, \(B_{(X,|\!|\!|\cdot|\!|\!|)}\) is equal to the closure of the convex hull of the set \[\{\pm e_{1}\}\cup\{\pm(e_{1}+2x)\colon x\in A\cap S_{X}\}.\] So every slice of \(B_{(X,|\!|\!|\cdot|\!|\!|)}\) contains either \(\pm e_{1}\), or \(\pm(e_{1}+2x)\) for some \(x\in A\cap S_{X}\).
By Corollary 4.4, we have \[|\!|\!|e_{1}-(e_{1}+2x)|\!|\!|=2\,|\!|\!|x|\!|\!|=2\text{ and }|\!|\!|e_{1}+(e_{1}+2x)|\!|\!|=2\,|\!|\!|e_{1}+x|\!|\!|=2\] for every \(x\in A\cap S_{X}\), and it clearly follows that \(e_{1}\) is a \(\nabla\)-point in \((X,|\!|\!|\cdot|\!|\!|)\). It is also quite clear from the proof of Lemma 4.3 that \(B_{(E,|\!|\!|\cdot|\!|\!|)}\) is equal to the closure of the convex hull of the set \[\{\pm e_{1}^{*}\}\cup\{\pm(e_{1}^{*}-x^{*})\colon x^{*}\in F\cap S_{X^{*}}\}\cup\Big\{\frac{1}{2}(x^{*}-y^{*}):\ x^{*},y^{*}\in F\cap S_{X^{*}},\ \operatorname{supp}x^{*}\cap\operatorname{supp}y^{*}=\emptyset\Big\}.\] Using Lemma 4.3 (iv), one can easily check that \[|\!|\!|e_{1}^{*}\pm(e_{1}^{*}-x^{*})|\!|\!|=\Big|\!\Big|\!\Big|e_{1}^{*}-\frac{1}{2}(x^{*}-y^{*})\Big|\!\Big|\!\Big|=2\] for every \(x^{*},y^{*}\in F\cap S_{X^{*}}\) with disjoint supports, so it follows as above that \(e_{1}^{*}\) is a \(\nabla\)-point in \((E,|\!|\!|\cdot|\!|\!|)\). In particular, it readily follows from Corollaries 4.5 and 4.6 that \(e_{1}\) is a Daugavet point in \((X,|\!|\!|\cdot|\!|\!|)\) and that \(e_{1}^{*}\) is a weak\({}^{*}\) Daugavet point in \((E,|\!|\!|\cdot|\!|\!|)\). We will now finally show that those points are a super Daugavet point and a weak\({}^{*}\) super Daugavet point in their respective unit balls. Proof of Theorem 4.2.: Let us first prove (i). Fix \(z\in B_{(X,|\!|\!|\cdot|\!|\!|)}\) with finite support. We will construct a sequence \((w_{n})\) in \(B_{(X,|\!|\!|\cdot|\!|\!|)}\) such that \(|\!|\!|e_{1}-w_{n}|\!|\!|\to 2\) and \((w_{n})\) converges weakly to \(z\). Let \(z_{+}\) and \(z_{-}\) be the positive and negative parts of \(z-e_{1}^{*}(z)e_{1}\) respectively, and let \(\lambda:=\frac{1+e_{1}^{*}(z)}{2}\in[0,1]\). Then define \[x:=\left\{\begin{array}{cl}\frac{1}{2\lambda}z_{+}&\text{if }\lambda>0,\\ 0&\text{if }\lambda=0,\end{array}\right.\quad\text{and}\quad y:=\left\{\begin{array}{cl}\frac{1}{2(1-\lambda)}z_{-}&\text{if }\lambda<1,\\ 0&\text{if }\lambda=1.\end{array}\right.\] Note that if \(\lambda=0\), then \(e_{1}^{*}(z)=-1\), and it follows from Lemma 4.3 (iii) that \(z_{+}=0\). Analogously, if \(\lambda=1\), then \(z_{-}=0\). So in either case, we have \[z=\lambda(e_{1}+2x)-(1-\lambda)(e_{1}+2y).\] By Lemma 4.3 (iii) we have \(\|z_{+}\|-e_{1}^{*}(z)\leqslant|\!|\!|z|\!|\!|\leqslant 1\) and thus \[\|x\|=\frac{1}{2\lambda}\|z_{+}\|\leqslant\frac{1+e_{1}^{*}(z)}{2\lambda}=1,\] if \(\lambda\neq 0\). So for every \(n\in\mathbb{N}\) large enough, we have \(\|x\|\leqslant 1\) and \(\|x+e_{n}\|\geqslant\|e_{n}\|=1\). Thus there exists \(a_{n}\in[0,1]\) such that \(x+a_{n}e_{n}\in S_{X}\). Let \(w_{n}=z+2\lambda a_{n}e_{n}\).
By construction, \(w_{n}\in B_{(X,|\!|\!|\cdot|\!|\!|)}\), and since \((e_{n})\) is weakly null, \(w_{n}\to z\) weakly. Furthermore, \(x+a_{n}e_{n}\in A\cap S_{X}\), and \(\operatorname{supp}(x+a_{n}e_{n})\cap\operatorname{supp}y=\emptyset\) for \(n\in\mathbb{N}\) large enough, so by Corollary 4.4 we get \[|\!|\!|e_{1}-w_{n}|\!|\!|=2\,\Big|\!\Big|\!\Big|(1-\lambda)(e_{1}+y)-\lambda(x+a_{n}e_{n})\Big|\!\Big|\!\Big|=2\] as desired. Finally, as the set of all finitely supported elements is dense in \(B_{(X,|\!|\!|\cdot|\!|\!|)}\), this immediately implies that for every \(w\in B_{(X,|\!|\!|\cdot|\!|\!|)}\), there exists a net \((w_{\alpha})\) in \(B_{(X,|\!|\!|\cdot|\!|\!|)}\) that converges weakly to \(w\) and satisfies \(|\!|\!|e_{1}-w_{\alpha}|\!|\!|\to 2\). Hence \(e_{1}\) is a super Daugavet point in \((X,|\!|\!|\cdot|\!|\!|)\). Next let us prove (ii). Fix \(z^{*}\in B_{(E,|\!|\!|\cdot|\!|\!|)}\) with finite support. We will construct as above a sequence \((w_{n}^{*})\) in \(B_{(E,|\!|\!|\cdot|\!|\!|)}\) such that \(|\!|\!|e_{1}^{*}-w_{n}^{*}|\!|\!|\to 2\) and \((w_{n}^{*})\) converges weak\({}^{*}\) to \(z^{*}\). Since the set of all finitely supported functionals is dense in \(B_{(E,|\!|\!|\cdot|\!|\!|)}\), this will give us that \(e_{1}^{*}\) is a weak\({}^{*}\) super Daugavet point in \((E,|\!|\!|\cdot|\!|\!|)\). Actually, we will show that \(|\!|\!|e_{1}^{*}\pm w_{n}^{*}|\!|\!|\to 2\), which allows us to assume that \(z^{*}(e_{1})\geqslant 0\). Then by Lemma 4.3 (i), there exist \(\lambda\in[0,1]\) and \(x^{*},y^{*}\in F\cap B_{X^{*}}\) with disjoint supports such that \[z^{*}=\lambda(e_{1}^{*}-y^{*})+(1-\lambda)\frac{1}{2}(x^{*}-y^{*}).\] Again, \(\|x^{*}\|\leqslant 1\) and \(\|x^{*}+e_{2n}^{*}\|\geqslant\|e_{2n}^{*}\|=1\) for \(n\in\mathbb{N}\) large enough. Thus there exists \(a_{n}\in[0,1]\) such that \(x^{*}+a_{n}e_{2n}^{*}\in S_{X^{*}}\). Similarly, there exists \(b_{n}\in[0,1]\) such that \(y^{*}+b_{n}e_{2n+1}^{*}\in S_{X^{*}}\). Let \[w_{n}^{*}=z^{*}+\frac{1-\lambda}{2}a_{n}e_{2n}^{*}-\frac{1+\lambda}{2}b_{n}e_{2n+1}^{*}.\] Since \((e_{n}^{*})\) is weak\({}^{*}\) null, \(w_{n}^{*}\to z^{*}\) weak\({}^{*}\). Furthermore, \(\operatorname{supp}(x^{*}+a_{n}e_{2n}^{*})\cap\operatorname{supp}(y^{*}+b_{n}e_{2n+1}^{*})=\emptyset\) for \(n\in\mathbb{N}\) large enough, so by Lemma 4.3 (iv) we get \[|\!|\!|e_{1}^{*}-w_{n}^{*}|\!|\!|=\Big|\!\Big|\!\Big|(1-\lambda)e_{1}^{*}-\frac{1-\lambda}{2}(x^{*}+a_{n}e_{2n}^{*})+\frac{1+\lambda}{2}(y^{*}+b_{n}e_{2n+1}^{*})\Big|\!\Big|\!\Big|=\max\{2,0\}=2\] and \[|\!|\!|e_{1}^{*}+w_{n}^{*}|\!|\!|=\Big|\!\Big|\!\Big|(1+\lambda)e_{1}^{*}+\frac{1-\lambda}{2}(x^{*}+a_{n}e_{2n}^{*})-\frac{1+\lambda}{2}(y^{*}+b_{n}e_{2n+1}^{*})\Big|\!\Big|\!\Big|=\max\{2,0\}=2.\] The conclusion follows. _Remark 4.7_.: (i) It is straightforward to check that the point \(e_{1}\) is an extreme point of \(B_{(X,|\!|\!|\cdot|\!|\!|)}\). So it actually follows from [19, Remark 3.14] that additionally to being a super Daugavet point, \(e_{1}\) is also a ccw \(\Delta\)-point (i.e. it satisfies a \(\Delta\)-like condition for convex combinations of relative weakly open subsets of the unit ball, see [19]). Similarly, \(e_{1}^{*}\) is an extreme point of \(B_{(E,|\!|\!|\cdot|\!|\!|)}\), so additionally to being a weak\({}^{*}\) super Daugavet point, it is also a weak\({}^{*}\) ccw \(\Delta\)-point. In strongly regular spaces (respectively weak\({}^{*}\) strongly regular duals), this is the best diametral property we can ask for points of the unit ball, as the existence of a (weak\({}^{*}\)) ccw Daugavet point implies the (weak\({}^{*}\)) SD2P (see [19, Proposition 3.12]). (ii)
It is actually possible to prove, up to some technical and longish adjustments in the above proof (that rely once again strongly on unconditionality), that for every \(w\in B_{(X,|\!|\!|\cdot|\!|\!|)}\), there exists a sequence \((w_{n})\) in \(B_{(X,|\!|\!|\cdot|\!|\!|)}\) that converges weakly to \(w\) and such that \(|\!|\!|e_{1}-w_{n}|\!|\!|\to 2\). Hence \(e_{1}\) is actually a _sequential super Daugavet point_ along the lines of [1, Definition 5.22]. Recall that in general, this is strictly stronger than merely being a super Daugavet point, as there exists a Banach space with the Daugavet property and the Schur property [15]. Also, the sequence \((w_{n})\) could easily be modified (as in the second part of the above proof) so that \(|\!|\!|e_{1}\pm w_{n}|\!|\!|=2\) for every \(n\in\mathbb{N}\), so \(e_{1}\) satisfies a sequential symmetric super Daugavet condition similar to the one from Lemma 3.20. It is not clear whether those two sequential properties are equivalent in general. It is well known that every shrinking basis is weakly null (a.k.a. _semi-shrinking_) and that every unconditional basis in a space that does not contain a copy of \(\ell_{1}\) is shrinking. So we immediately get the following corollary. On the other hand, let us note that there exist bases which are semi-shrinking but not shrinking (e.g. the Faber-Schauder basis of \(C[0,1]\)), and actually that a continuum of mutually non-similar such bases can be constructed using tensor products [13]. Furthermore, there exists a weakly null sequence with no shrinking subsequence [21]. In the previous examples, the bases are all conditional, but unconditional semi-shrinking bases which are not shrinking do also exist (see e.g. [22] or [7, Examples 2]). **Corollary 4.8**.: _Let \(X\) be an infinite dimensional Banach space with an unconditional Schauder basis. If \(X\) does not contain a copy of \(\ell_{1}\), and in particular if \(X\) is reflexive, then \(X\) can be renormed so that \(X\) contains a super Daugavet point and \(X^{*}\) contains a weak\({}^{*}\) super Daugavet point._ _Remark 4.9_.: If the sequence \((e_{n}^{*})\) of biorthogonal functionals is also assumed to be weakly null in Theorem 4.2, and in particular if the space \(X\) is reflexive, then the functional \(e_{1}^{*}\) is actually also a (symmetric sequential) super Daugavet point in \((E,|\!|\!|\cdot|\!|\!|)\). Note that the basis from [7, Examples 2] is unconditional, boundedly complete and semi-shrinking; but as it is not shrinking, the space is not reflexive, so there exist non-trivial examples of this kind. Recall that in general, diametral properties of points in a dual space are much stronger than their weak\({}^{*}\) counterparts, as e.g. every point in the unit ball of \(C[0,1]^{*}\) is a weak\({}^{*}\) ccw Daugavet point while \(B_{C[0,1]^{*}}\) contains denting points. In particular, let us highlight that combining Corollary 4.8 with Remarks 4.7 (i) and 4.9, we get the following theorem. **Theorem 4.10**.: _Let \((e_{n})\) be the unit vector basis of \(\ell_{2}\). There exists a renorming of \(\ell_{2}\) for which \(e_{1}\) is an extreme super Daugavet point (hence a ccw \(\Delta\)-point) in the new norm and its dual norm._ In [1, Example 5.20], the renorming of \(\ell_{2}\) with a super \(\Delta\)-point from [1, Theorem 3.1] was used to provide a non-reflexive M-embedded space with a super \(\Delta\)-point and a super \(\Delta\)-point in its dual. By using Theorem 4.10, we can produce a similar construction for super Daugavet points.
**Corollary 4.11**.: _There exists a non-reflexive M-embedded space \(Y\) such that \(Y\) and its dual contain a super Daugavet point._ Proof.: Let \(X:=(\ell_{2},|\!|\!|\cdot|\!|\!|)\) be the renorming of \(\ell_{2}\) for which \(e_{1}\) is a super Daugavet point in \(X\) and \(X^{*}\). As reflexive spaces are trivially M-embedded, it follows from [12, Theorem III.1.6] that the space \(Y:=c_{0}(X)\) is M-embedded. Then \(Y^{*}\equiv\ell_{1}(X^{*})\), and it follows from [19, Remark 3.28] that the point \((e_{1},0,0,\dots)\) is a super Daugavet point in \(Y\) and \(Y^{*}\). It was proved in [1, Theorem 4.1] that every infinite dimensional Banach space can be renormed to have a \(\Delta\)-point. It is thus natural to ask the following. **Question 4.12**.: _Can every infinite dimensional Banach space be renormed to have a Daugavet point?_ By Theorem 4.2 and by [1, Theorem 2.1], the answer is yes if \(X\) is infinite dimensional and has a weakly null unconditional Schauder basis, or if \(X:=\ell_{1}\). Combining those two results, we will now prove that more generally, the answer is yes for every infinite dimensional Banach space with an unconditional Schauder basis, as well as for any Banach space that contains a complemented copy of such a space. Note that some of the key ingredients for the \(\Delta\)-renormings from [1, Theorem 4.1] were a classic norm extension result and the well known fact that \(\Delta\)-points pass to superspaces. This is no longer true for Daugavet points, e.g. because it was proved in [3] that \(\ell_{2}\)-sums of Banach spaces never contain such points. However, let us point out that we can still get, up to renorming, some similar result whenever the considered point lives in a space that is complemented in the superspace. **Proposition 4.13**.: _Let \(X\) be a Banach space. If \(X\) contains a complemented subspace \(Y\) that can be renormed with a Daugavet point, then \(X\) can be renormed with a Daugavet point. Moreover, if \(Y\) can be renormed so that \(Y^{*}\) contains a (weak\({}^{*}\)) Daugavet point, then so can \(X\). The same holds for (weak\({}^{*}\)) super Daugavet points._ Proof.: As \(Y\) is complemented in \(X\), we have that \(X\) is isomorphic to \(Y\oplus_{1}Z\) for some Banach space \(Z\). As this space is also isomorphic to \((Y,\left\|\cdot\right\|)\oplus_{1}Z\) for every equivalent norm \(\left\|\cdot\right\|\) on \(Y\), we may simply assume that \(Y\) contains a Daugavet point. But now if \(y\in S_{Y}\) is a Daugavet point, then it follows from [11, Proposition 2.3] that \(x:=(y,0)\) is a Daugavet point in \(Y\oplus_{1}Z\). For super Daugavet points, the result follows analogously using [19, Remark 3.28]. Furthermore, as the dual of this space is isometric to \((Y^{*},\left\|\cdot\right\|^{*})\oplus_{\infty}Z^{*}\), the dual part of Proposition 4.13 follows from known transfer results of Daugavet and super Daugavet points through \(\ell_{\infty}\)-sums (see the following remark). _Remark 4.14_.: If we had initially taken an \(\ell_{\infty}\)-sum instead of an \(\ell_{1}\)-sum in the proof of the previous result, then observe that we would actually get a renorming of \(X\) with infinitely many (super) Daugavet points. Indeed, it follows from [11, Proposition 2.4] that if \(y\in S_{Y}\) is a Daugavet point, then \((y,z)\) is a Daugavet point in \(Y\oplus_{\infty}Z\) for every \(z\in B_{Z}\). For super Daugavet points, this follows again from [19, Remark 3.28].
As a corollary, we get that every Banach space with an unconditional basis can be renormed with a Daugavet point and a weak\({}^{*}\) Daugavet point in its dual. **Corollary 4.15**.: _Let \(X\) be a Banach space with an unconditional basis \((e_{n})\). Then \(X\) can be renormed with a Daugavet point. More precisely:_ 1. _If_ \((e_{n})\) _is shrinking, or if_ \((e_{n})\) _is neither shrinking nor boundedly complete, then_ \(X\) _can be renormed with a super Daugavet point and a weak_\({}^{*}\) _super Daugavet point in its dual;_ 2. _If_ \((e_{n})\) _is not shrinking, then_ \(X\) _can be renormed with a Daugavet point in the space and its dual._ Proof.: If \((e_{n})\) is shrinking, then this is Corollary 4.8. The remaining cases are direct consequences of Proposition 4.13 together with classic results from James about unconditional bases: If \((e_{n})\) is not boundedly complete, then \(X\) contains a complemented copy of \(c_{0}\), and if \((e_{n})\) is not shrinking, then \(X\) contains a complemented copy of \(\ell_{1}\) (see e.g. [5, Theorems 3.3.2 and 3.3.1]). The fact that \(\ell_{1}\) can be renormed with a Daugavet point in the space and its dual follows by combining [1, Theorem 2.1] and [25, Proposition 5.1]. Using once again Proposition 4.13, we ultimately get the result that was stated in the introduction (Theorem 1.3). Finally, observe that appealing to some other classic results from the literature, we also have the following statements. **Corollary 4.16**.: _Let \(X\) be a Banach space._ 1. _If_ \(X\) _is separable and contains a copy of_ \(c_{0}\)_, then_ \(X\) _can be renormed with a super Daugavet point and a weak_\({}^{*}\) _super Daugavet point in its dual._ 2. _If_ \(X^{*}\) _contains a copy of_ \(c_{0}\)_, then_ \(X\) _can be renormed with a Daugavet point in the space and its dual._ 3. _If_ \(X\) _contains a copy of_ \(\ell_{\infty}\)_, then_ \(X\) _can be renormed with a super Daugavet point in the space and its dual._ Proof.: It was proved by Sobczyk in [23] that copies of \(c_{0}\) in separable Banach spaces are always complemented. In the same paper, it was also observed that a previous extension result of Phillips can be used to show that the same is true for copies of \(\ell_{\infty}\) in arbitrary Banach spaces. With modern terminology, this is a consequence of the fact that the space \(c_{0}\) is separably injective, and that the space \(\ell_{\infty}\) is isometrically injective. We refer to [5, Section 2.5] for more details. It is also a well known result, due to Bessaga and Pelczynski, that if the dual \(X^{*}\) of a Banach space \(X\) contains a copy of \(c_{0}\), then the space \(X\) contains a complemented copy of \(\ell_{1}\) (see e.g. [10, Theorem 4.4]). With those results at hand, Corollary 4.16 immediately follows from Proposition 4.13 together with Corollary 4.15 and the well known fact that \(\ell_{\infty}\), as a \(C(K)\)-space, admits super Daugavet points (combining e.g. [3, Corollary 5.4] and [19, Corollary 4.3]). That \(\ell_{\infty}^{*}\) also admits super Daugavet points can be obtained as follows. First, it is well known that \(\ell_{\infty}^{*}\) contains an isometric copy of the space \(L_{1}[0,1]\). Indeed, \(\ell_{\infty}\) contains an isometric copy of \(\ell_{1}\), so the latter follows e.g. from the results from [8]. As \(\Delta\)-points pass to superspaces, it follows that \(\ell_{\infty}^{*}\) contains \(\Delta\)-points.
Now as \(\ell_{\infty}^{*}\) is known to be isometrically isomorphic to some \(L_{1}(\mu)\)-space, it follows from [19, Corollary 4.1] that these points are actually super Daugavet points. Observe that Corollary 4.16 (ii) applies to any infinite dimensional Lipschitz-free space. Moreover, a positive answer to the following question would immediately provide some improvements to this result. **Question 4.17**.: _Can the space \(\ell_{1}\) be renormed to have a super Daugavet point?_ Let us recall that it is unknown whether the Daugavet molecule from Veeorg's space [24] is a super \(\Delta\)-point or a super Daugavet point.
2306.02791
Dating young open clusters using delta Scuti stars. Results for Trumpler 10 and Praesepe
Aims. The main goal of this work is to date young open clusters using $\delta$ Sct stars. Seismic indices such as the large separation and the frequency at maximum power can help to constrain the models to better characterise the stars. We propose a reliable method to identify some radial modes, which gives us greater confidence in the constrained models. Methods. We extract the frequency content of a sample of $\delta$ Sct stars belonging to the same open cluster. We estimate the low-order large separation by means of different techniques and the frequency at maximum power for each member of the sample. We use a grid of models built with the typical parameters of $\delta$ Sct stars, including mass, metallicity and rotation as independent variables, and determine the oscillation modes. We select the observed frequencies whose ratios match those of the models. Once we find a range of radial modes matching the observed frequencies, mainly the fundamental mode, we add it to the other seismic parameters to derive the stellar age. Assuming star groups have similar chemistry and age, we estimate their mean age by computing a weighted probability density function fit to the age distribution of the seismically constrained models. Results. We estimate the age of Trumpler 10 to be $30^{+30}_{-20}$ Myr, and that of Praesepe to be $580 \pm 230$ Myr. In this latter case, we find two apparent populations of $\delta$ Sct stars in the same cluster, one at $510 \pm 140$ Myr and another at $890 \pm 140$ Myr. This may be due to two different formation events, different rotational velocities of the members in our sample of stars (as rapid rotation may modify the observed large separation), or to membership of unresolved binary systems.
D. Pamos Ortega, G. M. Mirouh, A. García Hernández, J. C. Suárez Yanes, S. Barceló Forteza
2023-06-05T11:37:21Z
http://arxiv.org/abs/2306.02791v1
# Dating young open clusters using \(\delta\) Scuti stars ###### Abstract Context: Aims:The main goal of this work is to date young open clusters using \(\delta\) Sct stars. Seismic indices such as the large separation and the frequency at maximum power can help to constrain the models to better characterise the stars. We propose a reliable method to identify some radial modes, which gives us greater confidence in the constrained models. Methods:We extract the frequency content of a sample of \(\delta\) Sct stars belonging to the same open cluster. We estimate the low-order large separation by means of different techniques and the frequency at maximum power for each member of the sample. We use a grid of models built with the typical parameters of \(\delta\) Sct stars, including mass, metallicity and rotation as independent variables, and determine the oscillation modes. We select the observed frequencies whose ratios match those of the models. Once we find a range of radial modes matching the observed frequencies, mainly the fundamental mode, we add it to the other seismic parameters to derive the stellar age. Assuming star groups have similar chemistry and age, we estimate their mean age by computing a weighted probability density function fit to the age distribution of the seismically constrained models. Results:We estimate the age of Trumpler 10 to be \(30^{+30}_{-20}\) Myr, and that of Praesepe to be \(580\pm 230\) Myr. In this latter case, we find two apparent populations of \(\delta\) Sct stars in the same cluster, one at \(510\pm 140\) Myr and another at \(890\pm 140\) Myr. This may be due to two different formation events, different rotational velocities of the members in our sample of stars (as rapid rotation may modify the observed large separation), or to membership of unresolved binary systems. Conclusions: ## 1 Introduction Determining the age of a star is essential to know its internal physics. Regarding the dating of a star cluster, the importance lies in understanding the structure and evolution of the galaxy. However, age is not a direct observable and inferring it accurately is not an easy task. In addition, ambiguity arises since we cannot be sure that all the stars in the cluster formed at the same epoch. Recent works have shown that different populations, or generations, of stars may coexist within the same cluster (e.g. Bastian & Lardo 2018). For instance, Costa et al. (2019) find two distinct populations of stars aged 176 Myr and 288 Myr in NGC 1866, by combining an analysis of its best-studied Cepheids with that of a very accurate colour-magnitude diagram obtained with Hubble Space Telescope photometry. Other works, such as Krause et al. (2020), assume that most open clusters feature a single population, as they remain essentially clear of gas and winds after one stellar formation event. We have assumed this hypothesis in order to determine a mean age for each of the clusters we analyse. Traditionally, isochrone fitting on the Hertzsprung-Russell diagram (HRD) has been used to date clusters. This method works when dealing with old globular clusters, where we can find a large sample of stars leaving the main sequence and evolving past the turn-off point. However, the ambiguity of this method is greater with young clusters, in which a majority of stars are still on the main sequence (MS). The method based on spectroscopic observations of lithium (Basri & Martin 1999; Stauffer et al.
1999) also generates large ambiguities because of unresolved binary stars (Martin et al. 2001). The relation between the rotation rate and the age of late F to M stars, called gyrochronology (Barnes 2003; Angus et al. 2022; Messina et al. 2022), seems to provide a way to reduce the uncertainty on the age of evolved clusters. Other methods based on chemical clocks (da Silva et al. 2012; Spina et al. 2017; Moya et al. 2022) also seem to help reduce the uncertainties, making use of machine learning techniques. The drawback is that these algorithms are trained with models of highly evolved stars, for which it has been possible to obtain reliable spectroscopic observations. Therefore, despite the progress achieved with these new techniques, we still do not have a reliable method to date young open clusters.

In Pamos Ortega et al. (2022) (Paper I from now on), we proposed the use of seismic parameters to date a group of four \(\delta\) Sct stars belonging to the young open cluster \(\alpha\) Per. One of these seismic indices is the large separation, defined as the difference between acoustic modes of the same degree and consecutive radial orders, which is related to the mean density and the surface gravity of the star. This regularity in the frequency pattern is also present in the low-order regime (\(n=[2,8]\)) (Suarez et al. 2014; Garcia Hernandez et al. 2015, 2017; Mirouh et al. 2019), where \(\delta\) Sct stars show their oscillation modes. Another parameter is the frequency at maximum power, directly related to the effective temperature, used in solar-type stars and found in \(\delta\) Sct stars as well (Barcelo Forteza et al. 2018, 2020; Bowman & Kurtz 2018; Hasanzadeh et al. 2021).

In this work, we date the young open clusters Trumpler 10 and Praesepe, using a sample of \(\delta\) Sct stars in each of them. We use only seismic parameters, such as the low-order large separation and the frequency at maximum power. We also include the identification of the fundamental mode of each star, estimated by comparing the ratios of the observed frequencies with those of the models. These clusters are of very different ages, which allows us to establish the possibilities of the method. The structure of the paper is as follows: in Sect. 2, we provide the estimated ages of the clusters Trumpler 10 and Praesepe from previous works. In Sect. 3, we introduce the sample of \(\delta\) Sct stars used in this research. In Sect. 4, we present how their seismic parameters have been computed. In Sect. 5, we describe the details of the grids of models built to characterise our \(\delta\) Sct stars. In Sect. 6, we explain the method to estimate the mean age of the cluster. In Sect. 7, we present the ages we estimate and discuss their reliability by comparing them with the observed parameters and the literature. Finally, in Sect. 8, we lay out the main conclusions of our research and avenues for improving our method.

## 2 The ages of Trumpler 10 and Praesepe in the literature

Trumpler 10 and Praesepe have been dated through many different techniques. Table 1 summarises the ages and metallicities derived in the last twenty years. Trumpler 10 (C 0846-423) is an open cluster located in the Vela constellation. According to the Milky Way Star Clusters catalogue (MWSC, Kharchenko et al., 2013), it is at a distance of about 417 pc from the Sun and has an age of log(age) = 7.380 (\(\approx\) 34 Myr). According to Netopil et al.
(2016), it has an age of 40\(\pm\)10 Myr, with a metallicity of \(\rm[Fe/H]=-0.12\pm 0.06\), obtained from various photometric systems and calibrations. The work of Dias et al. (2021) computes the age of this cluster as log(age) = \(7.753\pm 0.026\) (\(\approx\) 57 Myr), with a metallicity of \(\rm[Fe/H]=0.043\pm 0.050\). For these estimates, they used Gaia DR2 photometry and a grid of Padova isochrones. According to all these references, the age of Trumpler 10 lies between 34 Myr and 57 Myr.

Praesepe (M44, NGC 2632) is an open cluster located in the Cancer constellation. Being one of the closest clusters to the Sun, it is also one of the most studied (see for example Suarez et al., 2002; Meibom & Mathieu, 2005; Fossati et al., 2008; Brandt & Huang, 2015; Choi et al., 2016; Cummings et al., 2017; Gaia Collaboration et al., 2018, and references therein). Also taking as reference the MWSC survey, it is at a distance of about 187 pc and has an age of log(age) = 8.920 (\(\approx\) 729 Myr). According to Netopil et al. (2016), it has an age of 730\(\pm\)190 Myr and a metallicity of \(\rm[Fe/H]=0.13\pm 0.03\), also obtained from different photometric systems, as for Trumpler 10. The work of Dias et al. (2021) yields log(age) = \(8.882\pm 0.035\) (\(\approx\) 762 Myr), with a metallicity of \(\rm[Fe/H]=0.196\pm 0.039\). Zhong et al. (2020) estimate the metallicity at \(\rm[Fe/H]=0.22\pm 0.08\), using LAMOST spectroscopy. Meibom & Mathieu (2005) provide an age estimate for Praesepe of about 630 Myr. They used a completely different technique, based on the tidal circularization of binary systems of solar-like stars: circularization proceeds over time, so binaries become circular out to progressively longer orbital periods, and measuring the period out to which systems are circularized yields an age. Douglas et al. (2019) estimate an age of \(670\pm 67\) Myr. Strictly speaking, they computed the age of the open cluster Hyades, using a gyrochronology model tuned with slow rotators in Praesepe and the Sun, assuming that the two clusters are coeval, based on the similarity of their colour-magnitude diagrams, activity, rotation and lithium abundance. In short, all these references of the last twenty years provide ages for Praesepe between 590 and 790 Myr.

## 3 The data

Firstly, we cross-match the VizieR Online Data Catalogue Gaia DR2 of open cluster members (Cantat-Gaudin et al., 2018) and the TESS Input Catalogue (TIC, Stassun, 2019), searching for possible \(\delta\) Sct stars belonging to the same open cluster. According to the definition of a pure \(\delta\) Sct from Grigahcene et al. (2010) and Uytterhoeven et al. (2011), we find five candidates in the field of Trumpler 10 (Table 2) and six in the field of Praesepe (Table 3). For our Praesepe stars, we find values for the projected rotational velocity \(v\sin i\), the metallicity and the spectral type, consulting the available references in the Simbad Astronomy Database1. For the Trumpler 10 stars, only data about the spectral type is available. These parameters are useful in our discussion of the results in Sect. 7.

Footnote 1: [https://simbad.unistra.fr/simbad/](https://simbad.unistra.fr/simbad/)

We perform a frequency analysis using data from sector 35 of the _TESS_ mission (Ricker et al., 2014) for the Trumpler 10 sample, with approximately 13800 points, and from sector 45 for Praesepe, with approximately 15500 points. In both cases, the cadence is about 2 minutes and the Rayleigh resolution is approximately 0.041 \(d^{-1}\).
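The quoted resolution follows directly from the Rayleigh criterion, \(1/T\), with \(T\) the observing baseline. A minimal Python check, assuming an effective baseline of about 24.5 days for a single TESS sector (the exact value depends on the mid-sector downlink gap), reproduces it:

```python
# Rayleigh frequency resolution of a time series: ~1/T, with T the baseline.
T_days = 24.5                        # effective single-sector baseline (assumed)
rayleigh = 1.0 / T_days              # d^-1
print(f"{rayleigh:.3f} d^-1 = {rayleigh * 1e6 / 86400:.2f} muHz")
# -> 0.041 d^-1 (about 0.47 muHz), the value quoted above
```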
We use the Pre-Search Data Conditioned (PDC) light curves, corrected for instrumental effects, that are publicly available through the TESS Asteroseismic Science Consortium2 (TASC).

Footnote 2: [https://tasoc.dk](https://tasoc.dk)

Using MultiModes3, we extract the frequency content of each star in our sample. It is a Python code that calculates the Fast Lomb-Scargle periodogram (Press & Rybicki, 1989) of a light curve. It extracts, one by one, a limited number of significant signals, and uses their corresponding parameters (amplitudes, frequencies) to fit the total signal to a multisine function with a non-linear least-squares minimisation. We adopt a signal-to-noise ratio S/N \(>\) 4.0 as a stop criterion (Breger et al., 1993), to avoid spurious frequencies, and we filter possible frequency combinations. The code is presented in detail in Paper I and in the public repository.

Footnote 3: [https://github.com/davidpamos/MultiModes](https://github.com/davidpamos/MultiModes)

Fig. 1 and Fig. 2 show, respectively, the extracted frequency spectrum of each \(\delta\) Sct candidate in Trumpler 10 and Praesepe. The values of the ten highest-amplitude frequencies for each star in both clusters are shown in Table A.1 (Appendix A). The table of all extracted frequencies is available on-line.

## 4 Seismic parameters

Sometimes it is possible to find regularities in the complex frequency pattern of a \(\delta\) Sct star (Garcia Hernandez et al., 2009; Paparo et al., 2016; Bedding et al., 2020). Following the same techniques as in Garcia Hernandez et al. (2009, 2013); Ramon-Ballesta et al. (2021), and used in Paper I, we estimate the large separation in the low-order regime, \(\Delta\nu_{low}\). Fig. 3 shows an example of this, where we use the discrete Fourier transform (DFT), the autocorrelation diagram (AC) applied on the frequencies, the frequency difference histogram (FDH) and the echelle diagram (ED), in order to find regularities in the frequency content of TIC 28943819 (see Appendix B for the rest of our sample). The theoretical works of Garcia Hernandez et al. (2009); Reese et al. (2017) use the AC and the DFT to search for the low-order large separation. They point out that we expect to see the large separation and its submultiples in the DFT, and its multiples in the AC and the FDH (except in the case where the \(l=1\) modes lie halfway between the \(l=0\) modes, as in solar-type stars, where we also find half the large separation). No method, by itself, is objective enough to yield a reliable measurement of \(\Delta\nu_{low}\), except in very few cases. Our criterion requires finding the same peaks in at least two of the methods used for measuring \(\Delta\nu_{\rm low}\) or half \(\Delta\nu_{\rm low}\). We estimate the uncertainties using the width of the peaks in the DFT or the AC, depending on the case. Of all the stars analysed this way, TIC 28944596 and TIC 271062192 are the most difficult cases. TIC 28944596 (see Fig. 2) only has a few frequencies. Its DFT shows a peak a bit above 20 \(\mu\)Hz and another around 40 \(\mu\)Hz. The AC also shows two peaks very close to 80 \(\mu\)Hz, and the FDH shows three peaks between 70 and 80 \(\mu\)Hz. For all these reasons, we estimate the large separation to be 80 \(\mu\)Hz for this star. TIC 271062192 (Fig. 4) is an even more complicated case, because the AC, the FDH and the ED do not show regularities in the frequency content. The only evidence here is the DFT, which shows three consistent peaks around 19, 38 and 76 \(\mu\)Hz.
This is why we have retained a large separation estimate of 76 \(\mu\)Hz.

We also measure \(\nu_{\rm max}\), the frequency at maximum power of the oscillation spectrum envelope, for each star. Despite the large uncertainty involved in using \(\nu_{\rm max}\) as a seismic parameter, we decide to trust it as a seismic indicator. Following Barcelo Forteza et al. (2020), we relate \(\nu_{\rm max}\) and \(\tilde{T}_{\rm eff}\). The relation depends on the value of \(\log g\), so for Trumpler 10 (\(\log g\approx 4.3\)) we use

\[\tilde{T}_{\rm eff}/\mathrm{K}=(3.5\pm 0.1)\,(\nu_{\rm max}/\mu\mathrm{Hz})+(6460\pm 40), \tag{2}\]

while for Praesepe (\(\log g\approx 4.0\)) we use

\[\tilde{T}_{\rm eff}/\mathrm{K}=(3.8\pm 0.2)\,(\nu_{\rm max}/\mu\mathrm{Hz})+(6750\pm 40). \tag{3}\]

The estimated values for \(\Delta\nu_{\rm low}\), \(\nu_{\rm max}\) and their corresponding seismic temperatures, \(\tilde{T}_{\rm eff}\), for our sample of \(\delta\) Sct stars in Trumpler 10 and Praesepe are shown in Table 4. Comparing the values of the TIC effective temperature and the seismic temperature, we see that the agreement is good for the majority of the sample, taking into account the large uncertainties of the seismic temperature. However, TIC 30307085 and TIC 271062192 are discrepant, with differences between the two estimates above 1000 K, which can be the signature of a binary companion or gravity darkening induced by rapid rotation.

Figure 1: Frequency spectrum of the sample of \(\delta\) Sct star candidates in the field of Trumpler 10. The red dotted line is the significance threshold, S/N=4.

Figure 2: Same as Fig. 1 for the Praesepe sample.

## 5 The grids of models

As \(\delta\) Sct stars are usually moderate to fast rotators (\(v\sin i>100\) km s\({}^{-1}\)), the models have to take into account the stellar structure deformation that occurs at these speeds. This centrifugal flattening reduces the value of the mean density, directly related to one of the seismic indices that we are using here, the large separation. For this reason, we compute our models with the MESA code, version 15140 (Paxton et al. 2019), and the related oscillations with the FILOU code (Suarez & Goupil 2008), taking rotation into account up to second order in the perturbative theory for the adiabatic oscillation computation (including near-degeneracy effects and stellar structure deformation). Following Paper I, we build two grids of representative models to characterise \(\delta\) Sct stars during their stay on the pre-main sequence (PMS) and the MS, one for Trumpler 10 and another for Praesepe. In Table 5 we introduce their main parameters. We used the default nuclear reaction network, basic.net, provided by MESA. At the zero-age main sequence (ZAMS), X being constant, when Z increases, Y decreases by the same amount, so that X + Y + Z = 1. After testing exponential overshooting values \(f_{0}=0.002\) and \(f=0.022\), we find no significant impact and thus use no overshooting. We include internal differential rotation. We computed models initiating rotation at the ZAMS. The values for the initial angular velocity to critical velocity ratio are between \(0.1\Omega_{\rm c}\) and \(0.5\Omega_{\rm c}\), avoiding higher values that may lie beyond the limits of the perturbative theory. We compute \(p\) modes between \(n=1\) and the cut-off frequency, the limiting frequency up to which acoustic modes can propagate without damping. We use the modes in the low-order regime, between \(n=2\) and \(n=8\), and with degrees between \(l=0\) and \(l=3\), to calculate the low-order large separation, as proposed by Suarez et al. (2014).
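As a toy illustration of how these two indices are used, the following Python sketch applies Eq. (2) to the measured \(\nu_{\rm max}\) of TIC 28943819 and computes a low-order large separation as the mean spacing of consecutive radial modes; the radial-mode frequencies here are made-up values spaced by roughly 82 \(\mu\)Hz, not actual model output:

```python
import numpy as np

# Seismic effective temperature from Eq. (2) (log g ~ 4.3 calibration):
nu_max = 510.0                           # muHz, TIC 28943819 (Table 4)
T_eff = 3.5 * nu_max + 6460.0            # K
print(f"T_eff ~ {T_eff:.0f} K")          # ~8245 K, matching Table 4

# Low-order large separation as the mean spacing of consecutive
# radial modes in the n = 2..8 range (illustrative frequencies):
radial = np.array([180.0, 262.0, 344.0, 426.0, 508.0, 590.0, 672.0])  # muHz
dnu_low = np.mean(np.diff(radial))
print(f"Delta nu_low ~ {dnu_low:.0f} muHz")   # ~82 muHz for this star
```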
\begin{table}
\begin{tabular}{c c c c c}
\hline
TIC & \(\Delta\nu_{\rm low}\) (\(\mu\)Hz) & \(\nu_{\rm max}\) (\(\mu\)Hz) & \(\tilde{T}_{\rm eff}\) (K) & TIC \(T_{\rm eff}\) (K) \\
\hline
\multicolumn{5}{c}{Trumpler 10} \\
\hline
28943819 & \(82\pm 2\) & \(510\pm 30\) & \(8250\pm 200\) & \(8646\pm 161\) \\
30307085 & \(84\pm 1\) & \(710\pm 6\) & \(8950\pm 320\) & \(9931\pm 202\) \\
28944596 & \(80\pm 2\) & \(330\pm 80\) & \(7620\pm 350\) & \(8383\pm 149\) \\
271061334 & \(80\pm 2\) & \(650\pm 50\) & \(8704\pm 280\) & \(8773\pm 170\) \\
271062192 & \(76\pm 2\) & \(200\pm 90\) & \(7480\pm 380\) & \(8689\pm 158\) \\
\hline
\multicolumn{5}{c}{Praesepe} \\
\hline
175194881 & \(58\pm 1\) & \(350\pm 30\) & \(8080\pm 220\) & \(7873\pm 125\) \\
175264376 & \(52\pm 3\) & \(210\pm 60\) & \(7550\pm 310\) & \(7416\pm 141\) \\
175268007 & \(57\pm 2\) & \(360\pm 40\) & \(8120\pm 260\) & \(7826\pm 126\) \\
175291778 & \(52\pm 3\) & \(200\pm 70\) & \(7510\pm 350\) & \(7865\pm 126\) \\
180914805 & \(56\pm 1\) & \(320\pm 60\) & \(7970\pm 330\) & \(7369\pm 108\) \\
180917633 & \(56\pm 1\) & \(270\pm 80\) & \(7780\pm 400\) & \(7463\pm 122\) \\
\hline
\end{tabular}
\end{table}

Table 4: Seismic indices of the selected targets from Trumpler 10 and Praesepe: the low-order large separation, the frequency at maximum power and its corresponding seismic and TIC effective temperature.

Figure 3: Estimated low-order large separation for TIC 28943819, using the autocorrelation diagram (top left), the discrete Fourier transform (top right), the frequency difference histogram (bottom left), and the échelle diagram (Hey & Ball 2020, bottom right).

## 6 The method for estimating ages

We estimate the seismic age of each cluster following these steps:

1. For each star, we constrain the models using its estimated values of \(\Delta\nu_{\rm low}\) and \(\tilde{T}_{\rm eff}\), taking into account their corresponding uncertainties.
2. We compute the ratios of the observed frequencies, in order to select the frequencies with ratios that match those of the models (Table 6).
3. Once we find a range of radial modes matching the observed frequencies, we use them with the models selected at step 1, to better constrain the stellar ages.
4. After applying steps 1-3 to all stars in the same group, we plot the age distribution weighted histograms of all the constrained models.
5. We compute the best weighted probability density function (WPDF) over the histogram. For that, we assume a normal distribution, and we use as initial guesses the maximum likelihood estimates: the weighted mean of all the ages of the constrained models, and its corresponding weighted standard deviation. We finally take, for the age of the cluster and its corresponding standard deviation, the parameters of the WPDF fitted in a \(\chi^{2}\) minimisation process.

We compute the weight of each constrained model, \(p\), given in Eq. (4), taking into account the following assumptions:

1. Each evolutionary track computed with MESA is oversampled at low ages. To compensate for this effect, our WPDF is proportional to the time step, \(\Delta t\), divided by the total time, \(t\), of the model in its evolutionary track.
2. We also evaluate the probability of the models corresponding to each star in the sample. Models corresponding to stars with better measured \(\Delta\nu_{\rm low}\) and \(\tilde{T}_{\rm eff}\) contribute with a greater probability to the estimated age of the cluster.
The WPDF is then inversely proportional to the relative uncertainties \(e_{\Delta\nu_{\rm low}}/\Delta\nu_{\rm low}\) and \(e_{\tilde{T}_{\rm eff}}/\tilde{T}_{\rm eff}\) of each star.
3. Regarding the probability that a model represents the age of the whole cluster, we assign it a weight proportional to the number of stars sharing this same age, \(n_{\rm stars}\), divided by the total number of stars in the sample, \(N_{\rm stars}\).
4. The number of constrained models with the same age, \(n\), divided by the total number of models in the grid, \(N\).

Combining everything, we obtain

\[p=\frac{\Delta t}{t}\,\frac{\Delta\nu_{\rm low}}{e_{\Delta\nu_{\rm low}}}\,\frac{\tilde{T}_{\rm eff}}{e_{\tilde{T}_{\rm eff}}}\,\frac{n_{\rm stars}}{N_{\rm stars}}\,\frac{n}{N}. \tag{4}\]

The formula for \(\chi^{2}\) (Eq. 5) has been applied over the densities of the histogram, his(age), in order to obtain the best possible fit to a normal distribution, norm(age):

\[\chi^{2}=\sum_{\rm bins}\frac{\left(\mathrm{his(age)}-\mathrm{norm(age)}\right)^{2}}{\mathrm{norm(age)}^{2}}. \tag{5}\]

Fig. 4 shows the positions and the ranges of the possible radial overtones in the frequency spectrum of TIC 28943819 (see Appendix C for the rest of our sample). These ranges are too wide in some cases because we have sampled the whole grid for the identification. As a result, the inclusion of the fundamental mode has a minimal impact on the constraints we derive on the models, but it helps confirm what we obtain from the other seismic parameters. The identification failed to estimate the ranges for the possible radial overtones only in the cases with fewer than 30 extracted significant frequencies: TIC 175194881 in Praesepe, and TIC 30307085 and TIC 271061334 in Trumpler 10. This is why Appendix C contains seven figures instead of ten, the total number of stars in our sample excluding TIC 28943819, whose identification is shown in Fig. 4.

## 7 Results and discussion

### Trumpler 10

The HRD of Fig. 5 (left panel) shows the ages of the seismically constrained models of our sample of \(\delta\) Sct stars in Trumpler 10. We can see that they are very close to the ZAMS. Focussing on models between 1.60 and 2.00 \(M_{\odot}\) (Fig. 5, right panel), the constrained models show that TIC 28944596 and TIC 271062192, the least massive stars of the sample, seem to be older than the rest of the sample. Maybe both stars are actually older, or maybe these apparently older ages have to do with a gravity-darkening effect or a possible binary companion. The observed larger radii, lower densities and higher luminosities (presented in Table 2) are consistent with all three hypotheses. To determine the mean age of the group, we first compute the age weighted histograms corresponding to every star of the sample, from the seismically constrained models (Fig. 6, left panel). We then estimate the mean age of the whole group, as a single population, by calculating the best possible distribution on the histograms, using a normal WPDF, as explained in Sect. 6 (Fig. 6). The result is a mean age of around \(30^{+30}_{-20}\) Myr, very close to the ZAMS. This is a younger age estimate than those referenced in Sect. 2, though compatible with the estimates of Kharchenko et al. (2013) and Netopil et al. (2016), of around 34 Myr and 40 Myr, respectively. Uncertainties probably emerge because seismic parameters, including the large separation, evolve rapidly for stars on the PMS. Recent theoretical works show that the PMS is a complex phase. For example, Kunitomo et al.
(2017) claim that the spread in luminosity during the PMS can be explained through different efficiencies at which the accreted material is converted into internal energy for each star. For these very young clusters, it seems that we need other parameters, in addition to the seismic ones we use, to date the stars with greater accuracy. This is confirmed by Steindl et al. (2022), according to whom different PMS accretion scenarios cause differences in the pulsation modes, thus leaving an imprint on the frequency content of a \(\delta\) Sct star. Seismology of PMS stars has a lot to say about their interior structure.

\begin{table}
\begin{tabular}{c c}
\hline
Relationship & Value with \(1\sigma\) uncertainty \\
\hline
\(f_{1}/f_{2}\) & \(0.77\pm 0.01\) \\
\(f_{1}/f_{3}\) & \(0.63\pm 0.02\) \\
\(f_{1}/f_{4}\) & \(0.53\pm 0.02\) \\
\(f_{1}/f_{5}\) & \(0.45\pm 0.02\) \\
\(f_{1}/f_{6}\) & \(0.40\pm 0.02\) \\
\(f_{1}/f_{7}\) & \(0.35\pm 0.01\) \\
\(f_{1}/f_{8}\) & \(0.31\pm 0.01\) \\
\hline
\end{tabular}
\end{table}

Table 6: The fundamental mode to radial overtone ratios in our MESA/FILOU grids of models, with their corresponding standard deviations.

\begin{table}
\begin{tabular}{c c c}
\hline
Parameter & Range & Step \\
\hline
\(M\) (\({\rm M}_{\odot}\)) & [1.60, 2.50] & 0.01 \({\rm M}_{\odot}\) \\
Z (Trumpler 10) & [0.016, 0.020] & 0.002 \\
Z (Praesepe) & [0.028, 0.032] & 0.002 \\
\(\Omega/\Omega_{\rm c}\) & [0.1, 0.5] & 0.1 \\
\(\alpha\) & 2.0 & Fixed \\
\hline
\end{tabular}
\end{table}

Table 5: Parameters of the stellar model grids built with the MESA code. From top to bottom: mass, metallicity (both for Trumpler 10 and Praesepe), the initial angular velocity to critical velocity ratio and the mixing-length parameter.

Compared to the works of Murphy et al. (2021); Steindl et al. (2022), our uncertainties are one order of magnitude larger for the stars of Trumpler 10. Murphy et al. (2021) compute \(\Delta\nu_{\rm low}=6.83\pm 0.01\)\(d^{-1}\) for the PMS star HD 139614, a value one order of magnitude more precise than our measured large separations. By scanning a variety of models for mode identification, we sample the entire relevant parameter space, which makes uncertainties larger but more realistic. We limit our mode identification precision by avoiding over-reliance on our models.

### Praesepe

Fig. 7 shows the HRD of the seismically constrained models for the \(\delta\) Sct star group in Praesepe. Two stars, TIC 175264376 and TIC 175291778, clearly appear to be older than the rest of the sample (right panel). This is more evident in Fig. 8 (top left panel), where we plot the age weighted histogram for every star of the sample. If we consider the sample as a single population, then the computed WPDF (bottom panel) gives us a mean age of \(580\pm 230\) Myr, younger than the references cited in Sect. 2, but in good agreement with them. It is significant that our estimate is very close to the age used by Fossati et al. (2008), \(590^{+150}_{-120}\) Myr, who calculated the metallicity of the cluster from an abundance analysis of A- and F-type stars, five of which are used in the present work. The discrepant large separations and densities of TIC 175264376 and TIC 175291778 can be explained through rotation, a different age, binary systems or less reliable estimates of the seismic parameters. First, a rapid rotation may modify the value of the large separation, although not the scaling relation between the large separation and the mean density (Garcia Hernandez et al.
2015; Mirouh et al. 2019). Their relatively larger TIC radii and lower TIC densities (Table 3) are in line with the lower low-order large separations and frequencies at maximum power we estimate for both stars (Table 4). The high value of the projected rotation velocity of TIC 175264376 (200 km s\({}^{-1}\)) is very significant in this sense. Our 1D models cannot be reliably applied to such high rotational velocities; we would need 2D models to characterise rapidly rotating stars. In a more evolved cluster such as Praesepe, rotation and other internal mixing phenomena can affect the stars differently over time. Second, these larger, lower-density stars, with masses similar to the others, may also simply be more evolved. In Table 3 we can see that these stars, plus TIC 175194881, may be late A or early F stars, while the other three may be mid A stars, according to Fossati et al. (2008). This could point to two different populations of stars. Third, the outlier stars may be in binary systems. Their luminosities would then be higher than single-star models suggest, making them appear more evolved. We therefore revisit our one-population assumption, as there could be two populations, in which the four younger stars would be 'Pop 1' and TIC 175264376 and TIC 175291778 would be 'Pop 2' (Fig. 8, top right panel). Once the WPDFs of both populations are computed, we obtain a mean age of \(510\pm 140\) Myr for 'Pop 1', and a mean age of \(890\pm 140\) Myr for 'Pop 2'. These histograms are quite different from the histogram with a single population, which brings us to a fourth explanation for this apparent bimodality. Compared to 'Pop 1' stars, those in 'Pop 2' contribute less weight to the WPDF of the totality of the constrained models, due to their larger uncertainties in the measurements of the low-order large separation and the seismic effective temperature. Thus, assuming one single population, the age of the cluster is closer to the age of 'Pop 1' than to that of 'Pop 2'. As we can see in Tables 7 and 8, the models are not well constrained in terms of rotational velocity. Estimating the rotation rate of some stars of the group would yield further constraints on the models, especially given the dependence of seismic parameters such as the large separation or the frequency ratios on rotation (Suarez et al. 2006).

Figure 4: Ranges for the possible radial overtones in the frequency spectrum of TIC 28943819.
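To make the WPDF machinery of Sect. 6 (steps 4-5) concrete, here is a minimal Python sketch: it builds a weighted age histogram from a set of constrained models and fits a normal profile to it. The model ages and weights are randomly generated placeholders, and plain least squares stands in for the \(\chi^{2}\) minimisation of Eq. (5):

```python
import numpy as np
from scipy.optimize import curve_fit

def norm_profile(age, mu, sigma, amp):
    """Normal distribution fitted to the weighted age histogram."""
    return amp * np.exp(-0.5 * ((age - mu) / sigma) ** 2)

# Ages (Myr) and weights p (Eq. 4) of the constrained models -- toy values.
rng = np.random.default_rng(1)
ages = rng.normal(35.0, 20.0, 500)
p = rng.uniform(0.1, 1.0, 500)

hist, edges = np.histogram(ages, bins=20, weights=p, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Initial guesses: weighted mean age and weighted standard deviation.
mu0 = np.average(ages, weights=p)
sig0 = np.sqrt(np.average((ages - mu0) ** 2, weights=p))
popt, _ = curve_fit(norm_profile, centres, hist, p0=[mu0, sig0, hist.max()])
print(f"cluster age = {popt[0]:.0f} +/- {abs(popt[1]):.0f} Myr")
```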
\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
TIC & \(M\) (\(M_{\odot}\)) & \(R\) (\(R_{\odot}\)) & \(\bar{\rho}\) (\(\rho_{\odot}\)) & \(\log g\) & \(\tilde{T}_{\rm eff}\) (K) & \(\log(L/L_{\odot})\) & \(v_{\rm rot}\) (km s\({}^{-1}\)) & Age (Myr) \\
\hline
28943819 & \(1.72\pm 0.04\) & \(1.51\pm 0.02\) & \(0.50\pm 0.02\) & \(4.31\pm 0.01\) & \(8250\pm 120\) & \(0.97\pm 0.03\) & \(30^{+0.0}_{-0.0}\) & \(30^{+0.0}_{-0.0}\) \\
30307085 & \(1.87\pm 0.05\) & \(1.53\pm 0.02\) & \(0.52\pm 0.01\) & \(4.34\pm 0.01\) & \(8920\pm 180\) & \(1.12\pm 0.04\) & \(60^{+10}_{-10}\) & \(20^{+10}_{-20}\) \\
28944596 & \(1.64\pm 0.03\) & \(1.51\pm 0.02\) & \(0.48\pm 0.02\) & \(4.29\pm 0.01\) & \(7780\pm 120\) & \(0.87\pm 0.03\) & \(80^{+0.0}_{-0.0}\) & \(40^{+0.0}_{-0.0}\) \\
271061334 & \(1.87\pm 0.05\) & \(1.58\pm 0.03\) & \(0.48\pm 0.02\) & \(4.31\pm 0.01\) & \(8740\pm 160\) & \(1.11\pm 0.04\) & \(80^{+0.0}_{-0.0}\) & \(20^{+10}_{-15}\) \\
271062192 & \(1.64\pm 0.03\) & \(1.55\pm 0.02\) & \(0.44\pm 0.02\) & \(4.27\pm 0.01\) & \(7670\pm 120\) & \(0.87\pm 0.03\) & \(110^{+10}_{-50}\) & \(40^{+0.0}_{-0.0}\) \\
\hline \hline
\end{tabular}
\end{table}

Table 7: Constrained parameters of the models corresponding to our selected targets in Trumpler 10, with their corresponding standard deviations.

Figure 5: Left: HRDs of the evolutionary tracks of our grid of representative models for the sample of \(\delta\) Sct stars in Trumpler 10; the ages of the seismically constrained models are colour-coded. Right: Zoom between 1.60 \(M_{\odot}\) and 2.00 \(M_{\odot}\), distinguishing the models for each of the stars in the sample by different colours.

Figure 6: Age weighted histograms of our sample of stars in Trumpler 10. Left: Weighted histogram of each member of the sample. Right: Weighted histogram of the whole sample. The red solid line shows the computed PDF with a mean age of \(30^{+30}_{-20}\) Myr.

Murphy et al. (2022) use a grid of non-rotating stars to model the three slowest rotators from the Pleiades sample, in order to verify their mode identification. However, they do not model the other two stars, the rapid rotators V1228 Tau (\(v\sin i\) = 200 km s\({}^{-1}\)) and V650 Tau (\(v\sin i\) = 230 km s\({}^{-1}\)), for which the échelle diagrams are more ambiguous. They also conclude that rotating models are required for a more accurate inference of the asteroseismic parameters, including the mode identification. A high rotation rate mixes the modes in such a way that it is not easy to identify them within such a complex spectrum. In this analysis, and that of Paper I, we have included rotating models and defined a strategy that will serve as a stepping stone towards a complete mode identification in rapid rotators. To advance this strategy, we need a method to help us confidently interpret the rotation frequency in the dense frequency spectrum of \(\delta\) Sct stars. It is also crucial to obtain a more reliable mode identification, through longer observations that allow a higher resolution in the frequency spectrum. We hope that future missions, such as the ESA projects PLATO4 (PLAnetary Transits and Oscillations of stars, Rauer et al., 2014) and HAYDN5 (High-precision AsteroseismologY in DeNse stellar fields, Miglio et al., 2021), will provide higher resolution photometry of stars belonging to clusters, leading to accurate age estimates.
Footnote 4: [https://sci.esa.int/web/plato](https://sci.esa.int/web/plato)

Footnote 5: [http://www.asterochronometry.eu/haydn/](http://www.asterochronometry.eu/haydn/)

## 8 Conclusions

The use of asteroseismology to date young open clusters provides promising results, despite the limitations of statistical techniques applied to samples with such small numbers of stars. We have tested a seismic method with three open clusters of different ages. With \(\alpha\) Per we obtained the first results in Pamos Ortega et al. (2022). In this work we extend the research to Trumpler 10 and Praesepe, and we estimate the ranges of possible values for some radial overtones and the fundamental mode.

Regarding Trumpler 10, we find five \(\delta\) Sct star candidates never before classified as such, with a mean age of \(30^{+30}_{-20}\) Myr. The uncertainty is large due to how close they are to the PMS, where stars evolve rapidly. Other parameters are needed to better constrain the models near the ZAMS, and to more accurately date the stars.

Regarding Praesepe, we find a new possible \(\delta\) Sct star, TIC 180917633, that we add to our sample of five previously known \(\delta\) Sct stars. We estimate the mean age of this star group at \(580\pm 230\) Myr, in good agreement with the literature. Two of the six stars in the sample seem to be older than the rest. The different values of their parameters, especially the spectral type, support the hypothesis of two stellar populations: one with a mean age of \(510\pm 140\) Myr and another with a mean age of \(890\pm 140\) Myr. This apparent bimodality in the age distribution could also be due to the effects of gravity darkening in rapidly rotating stars. The lower values of the low-order large separation and the frequency at maximum power, in addition to the large measured projected rotation velocities of both stars, support this idea. The 1D models that we have used in this work are not the most suitable for stars with such high rotation rates. Two-dimensional models are needed to better account for the deformation that occurs in such stars, which greatly impacts the seismic parameters, such as the large separation, and a reliable determination of the rotation frequency. Finally, we cannot rule out an unreliable estimate of the seismic parameters of the outlier stars; the greater weight of the other four stars in the WPDF corrects for this apparent bimodality.

###### Acknowledgements.

We appreciate the comments and questions from the anonymous referee, which have contributed to improving the paper. DPO and AGH acknowledge funding support from 'FEDER/Junta de Andalucía-Consejería de Economía y Conocimiento' under project E-FQM-041-UGR18 by Universidad de Granada. JCS, GMM and SBF acknowledge funding support from the Spanish State Research Agency (AEI) project PID2019-107061GB-064. This paper includes data collected with the TESS mission, obtained from the TASC data archive. Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
2306.13844
Propulsion-Free Cross-Track Control of a LEO Small-Satellite Constellation with Differential Drag
In this work, we achieve propellantless control of both cross-track and along-track separation of a satellite formation by manipulating atmospheric drag. Increasing the differential drag of one satellite with respect to another directly introduces along-track separation, while cross-track separation can be achieved by taking advantage of higher-order terms in the Earth's gravitational field that are functions of altitude. We present an algorithm for solving an n-satellite formation flying problem based on linear programming. We demonstrate this algorithm in a receding-horizon control scheme in the presence of disturbances and modeling errors in a high-fidelity closed-loop orbital dynamics simulation. Our results show that separation distances of hundreds of kilometers can be achieved by a small-satellite formation in low-Earth orbit over a few months.
Giusy Falcone, Jacob B. Willis, Zachary Manchester
2023-06-24T02:57:58Z
http://arxiv.org/abs/2306.13844v1
# Propulsion-Free Cross-Track Control of a LEO Small-Satellite Constellation with Differential Drag

###### Abstract

In this work, we achieve propellantless control of both cross-track and along-track separation of a satellite formation by manipulating atmospheric drag. Increasing the differential drag of one satellite with respect to another directly introduces along-track separation, while cross-track separation can be achieved by taking advantage of higher-order terms in the Earth's gravitational field that are functions of altitude. We present an algorithm for solving an n-satellite formation flying problem based on linear programming. We demonstrate this algorithm in a receding-horizon control scheme in the presence of disturbances and modeling errors in a high-fidelity closed-loop orbital dynamics simulation. Our results show that separation distances of hundreds of kilometers can be achieved by a small-satellite formation in low-Earth orbit over a few months.

## I Introduction

Formations of multiple satellites are frequently used to perform tasks that a single satellite cannot accomplish alone. Examples include satellite navigation systems, like the global positioning system (GPS), and communications constellations like Iridium and Starlink. The ability to maneuver and control the relative positions of such satellites is key to establishing and maintaining a formation. Typically, such maneuvering requires a propulsion system and the consumption of propellant, the supply of which can be very limited, especially on smaller spacecraft. In fact, many small spacecraft are not equipped with a propulsion system at all.

As an alternative to propulsion, external perturbation forces can be harnessed to affect the orbit of a satellite. In low-Earth orbit (LEO), where most small satellites operate, two perturbation forces dominate satellites' orbital dynamics. The first perturbation force is atmospheric drag, which acts in the orbital plane and only directly influences the altitude of a satellite. As drag changes a satellite's altitude, its orbital velocity and along-track position also change. As depicted in Fig. 1, the drag area of a spacecraft can be changed by controlling the attitude of the spacecraft. By placing some spacecraft in a high-drag state and others in a low-drag state, a differential drag between satellites can be introduced and the relative along-track positions of satellites can be changed. On-orbit, this method has been used to establish and control the along-track positions for constellations of up to 100 satellites [1].

The second perturbation force on a satellite in LEO is nodal precession. Nodal precession is due to the Earth not being a perfect sphere and causes orbits to precess, or rotate, around the Earth's axis. This effect introduces a small cross-track acceleration on a satellite that varies with altitude. By establishing a large differential altitude between spacecraft, the nodal precession of those spacecraft will occur at different rates, and a cross-track orbital change can be made. Most differential-drag formation flying methods ignore the cross-track influence of nodal precession because it is small compared to the along-track drift, requiring large altitude differences and long time horizons to have a significant effect. In this paper, we describe a method for performing long-time-horizon differential drag maneuvers that utilize nodal precession to accomplish both along-track and cross-track changes in a formation configuration.
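To gauge the size of this effect, the following back-of-the-envelope Python sketch evaluates the standard \(J_{2}\) secular nodal-precession rate (stated later as Eq. (6)) at two altitudes 10 km apart; the orbit values are illustrative, chosen to resemble an ISS-like orbit rather than taken from any specific mission:

```python
import numpy as np

# Secular J2 nodal-precession rate (Eq. (6) below) at two nearby altitudes,
# to gauge the cross-track authority of a 10 km altitude split.
mu, J2, Re = 3.986004418e14, 1.082626e-3, 6378.137e3   # SI units

def raan_rate(a, inc):
    return -1.5 * J2 * np.sqrt(mu) * Re**2 * np.cos(inc) / a**3.5  # rad/s

inc = np.radians(51.5)
w1 = raan_rate(Re + 440e3, inc)
w2 = raan_rate(Re + 430e3, inc)
drift = np.degrees(w1 - w2) * 86400.0
print(f"differential precession ~ {abs(drift):.3f} deg/day")   # ~0.025 deg/day
```

At roughly 0.025 degrees per day per 10 km of altitude split, cross-track changes of even a degree or two require weeks to months, which is why the maneuvers considered in this paper span long time horizons.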
Our contributions include:

* An analytic expression for the first-order relationship between along-track and cross-track separation changes. This defines a fundamental limit on what along-track and cross-track separations are simultaneously achievable
* A convex trajectory optimization formulation to compute differential-drag sequences that achieve desired formation maneuvers
* A receding-horizon control strategy that re-plans maneuvers every few orbits to compensate for disturbances and modeling errors
* Simulation results demonstrating our receding-horizon controller performing several different maneuvers in a high-fidelity orbital dynamics simulation

The paper proceeds as follows: In Section II we review previous research and on-orbit demonstrations of drag-based formation flying. Section III introduces background concepts that are used in the along-track and cross-track formation flying linear trajectory optimization that we develop in Section IV. The results of a single convex trajectory optimization and closed-loop simulations with the trajectory optimization as a feedback controller are shown and discussed in Section V. We then conclude and comment on future work in Section VI.

Fig. 1: High and low drag configurations for a satellite with attitude-controlled drag modulation.

## II Related Work

To avoid the need for a traditional propulsion system, many studies have investigated using drag modulation to control the relative positions of satellites in a formation. Leonard et al. first proposed the use of drag modulation for maintaining the relative separation of spacecraft already in formation [2]. At a similar time, Mathews and Leszkiewicz [3] investigated a drag-propulsion combination to maintain a cyclical altitude and phase relationship between a spacecraft and a space station. Additional methods for along-track formation keeping using drag have been proposed since then [4, 5, 6]. Reconfiguration of a formation using differential drag or solar radiation pressure is considered by Spiller et al. [7]. They use the linear Hill-Clohessy-Wiltshire equations for the relative dynamics, so the formation size must remain small. Differential drag control has also been studied in the context of along-track rendezvous. Bevilacqua and Romano include \(J_{2}\) perturbations in their model, but do not include cross-track separation in their relative state [8]. They solve the drag-based rendezvous problem using a two-step analytic method. An optimization approach to solving this problem was proposed by Harris and Açıkmeşe [9]. They solve the problem as a constrained linear program with minimum-time cost. Most differential drag methods assume binary drag states where a satellite is in either a low or high drag configuration. Harris et al. investigate a continuous drag modulation scheme based on the coupling of spacecraft attitude and drag [10].

There have been multiple demonstrations of differential-drag control on orbit. The ORBCOMM communications constellation, launched in 1997-1999, used modulation of differential drag, along with occasional propulsive maneuvers, to maintain the along-track separation for their network of thirty spacecraft [11]. Each week a new plan for the ORBCOMM constellation was created that oriented the solar arrays in high or low drag configurations during the eclipse phase of each orbit. On orbit, the system performed within mission parameters for ten years.
A limited demonstration of differential drag modulation using deployable panels was performed on-orbit by the AeroCube-4 CubeSat mission in 2012 [12]. Perhaps the most well-known and complete on-orbit demonstration of differential drag was for the Planet Earth-imaging constellation [1]. The Planet constellation performed both the initial slot allocation and phasing of satellites as well as station keeping of satellites using differential drag. After deployment and initial contact, the slot allocation and phasing problem was solved by a ground control system using a genetic algorithm. This used the initial differences in satellite position to reduce the phasing time. In its original form, the dynamics are written as a two-dimensional linear system using the Gauss variational equations and the drag-control is considered binary. A continuous optimization of the Planet slot allocation and phasing problem was formulated by Blatner [13]. Repeated updates to handle perturbations, and continuous controls, were presented by Sin et al. [14]. The CYGNSS constellation also included differential-drag modulation for along-track phasing in its mission design [15]. The control design includes operational constraints for sun pointing of solar panels and nadir pointing of the science instruments. The constellation was deployed at a higher than expected altitude, resulting in lower drag than was anticipated, and the differential drag maneuvers were ineffective within the initial mission timeframe. An adaptive Lyapunov controller for handling uncertain drag conditions on-line is presented in [16].

None of the previously discussed works consider large out-of-plane or cross-track motion of the satellites. Using nodal precession to modify the cross-track separation of a formation was demonstrated on-orbit by the FORMOSAT-3/COSMIC mission [17]. For this mission, propulsion was used to raise the orbital altitude of some satellites, the cross-track separation was allowed to increase for a period of time, and the orbital altitudes of all satellites were then matched to eliminate drift. Similar studies have extended these propulsive methods to non-circular orbits [18], low-thrust [19], and handling the perturbing effects of drag [20]. Two works [21, 22] combine differential drag and nodal precession to modify the cross-track separation of satellites. These works are the most similar to ours. Leppinen [21] performs a feasibility study and demonstrates that it is possible to obtain a sufficient altitude separation for nodal precession to change the RAAN of a satellite. No control methods are presented. Lee and Bang [22] present a method for modifying the ground-tracks of satellites in a constellation using differential drag and nodal precession. They use a series of processes to solve both the slot allocation and phasing problems. Synchronization of the along-track and cross-track state of the satellites is not investigated in either of these prior works. This is a key contribution of our work. There has been interest in utilizing aerodynamic lift for modifying a satellite's cross-track trajectories [23, 24]. However, the lift force is at most an order of magnitude smaller than drag for a typical spacecraft, and for a symmetric spacecraft the lift often averages to zero [23], so we do not consider its effect here.

We formulate the differential-drag control problem as a convex trajectory optimization with a linear cost and linear constraints. Tillerson et al.
solved spacecraft formation flying problems with convex trajectory optimization over twenty years ago [25]. Since that time, convex trajectory optimization has gained popularity for solving many aerospace problems including orbital maneuvering, rocket soft landing, and planetary aerocapture [26, 27]. In all of these domains, convex trajectory optimization provides the advantage of solving highly constrained control problems in a computationally tractable manner.

## III Background

### _Keplerian Motion_

The unperturbed Keplerian dynamics of a satellite orbiting around the Earth are described by the two-body equation

\[\ddot{\mathbf{r}}=-\frac{\mu}{r^{3}}\mathbf{r} \tag{1}\]

where \(\mathbf{r}\) is the position vector of the spacecraft in the Earth-centered inertial frame, \(\ddot{\mathbf{r}}\) is the acceleration vector and \(\mu\) is the Earth's gravitational parameter. Equation (1) assumes that the spacecraft is influenced only by the spherically symmetric gravitational field of the Earth. In reality, a spacecraft experiences a large number of secondary perturbation forces. The largest perturbation forces on a satellite in LEO are due to atmospheric drag and the Earth's non-spherical gravitational field. To account for perturbations, (1) can be written as

\[\ddot{\mathbf{r}}=-\frac{\mu}{r^{3}}\mathbf{r}+\mathbf{p} \tag{2}\]

where \(\mathbf{p}\) is the perturbative acceleration vector of the satellite. The orbital state of a satellite is commonly described using six quantities known as the orbital elements [28]. The orbital elements are: \(a\), the semi-major axis; \(e\), the eccentricity of the orbit ellipse; \(i\), the inclination; \(\Omega\), the right ascension of the ascending node (RAAN); \(\omega\), the argument of periapsis; and \(\nu\), the true anomaly. In this work we consider circular orbits, so \(e=0\) and \(\omega\) is undefined; since \(\nu\) is referenced from \(\omega\), it is poorly defined. Instead, we use \(\theta\), the argument of latitude (AoL), which measures along-track orbital position from the equatorial plane. In the remainder of this work our focus will be on the dynamics of \(a\), \(\Omega\), and \(\theta\); drag and nodal precession have only minor effects on the dynamics of the other orbital elements. Figure 2 shows how \(a\), \(i\), \(\Omega\), and \(\theta\) describe the orbital state of a satellite in a circular orbit.

### _Atmospheric Drag_

In LEO, atmospheric drag is modeled by

\[\mathbf{D}=-\frac{1}{2m}\rho AC_{D}v(\mathbf{v}-\mathbf{v_{atm}}) \tag{4}\]

where \(\mathbf{D}\) is the drag force, \(\rho\) is the atmospheric density, \(A\) is the satellite's incident cross-sectional area, \(C_{D}\) is the drag coefficient, \(m\) is the satellite mass, \(\mathbf{v}\) is the inertial velocity vector of the satellite, \(\mathbf{v_{atm}}\) is the velocity of the atmosphere, and \(v=\|\mathbf{v}-\mathbf{v_{atm}}\|\) is the relative velocity vector magnitude [29, 30]. According to (4), the drag force increases with atmospheric density; however, it can also be modulated by changing the cross-sectional area of the satellite. The cross-sectional area can be modified either through deployable panels or by changing the spacecraft attitude [1, 31]. Drag always acts in the direction opposing velocity, and therefore can only directly affect the motion of a spacecraft within the orbital plane, decreasing its total orbital energy. In the orbital plane, drag enters the dynamic equations for eccentricity and the semi-major axis.
Drag circularizes an orbit, decreasing its eccentricity [32]; since we are assuming circular orbits, we do not consider the eccentricity dynamics due to drag here. The semi-major axis dynamics due to drag are

\[\dot{a}=2\sqrt{\frac{a^{3}}{\mu}}D \tag{5}\]

where \(D=\|\mathbf{D}\|\) is the magnitude of the drag vector.

### _Nodal Precession and The Method of Averaging_

As discussed previously, the Earth is not a perfect sphere, but resembles an oblate spheroid; this causes the gravitational field of the Earth to deviate from the point mass model in (1). Models of the Earth's gravitational field are expressed by a spherical harmonic expansion with coefficients \(J_{*}\)[28, 30]. We use a simple model based on the first non-trivial term, \(J_{2}\). It captures the dominating effect of the Earth's oblateness and is three orders of magnitude larger than the next spherical harmonic term. In addition, the \(J_{2}\) model is rotationally symmetric around the Earth, so it only requires knowledge of an orbit's inclination. This makes it consistent across all possible RAAN angles and separations in a formation. On short timescales, the \(J_{2}\) perturbation affects all the orbital elements. However, on long timescales it introduces only a mean variation on \(\Omega\), and averages to zero for \(\theta\). We denote the mean RAAN \(\bar{\Omega}\) and the mean AoL \(\bar{\theta}\). Their mean dynamics are

\[\dot{\bar{\Omega}} =-\left[\frac{3}{2}\frac{J_{2}\sqrt{\mu}R_{E}^{2}}{a^{7/2}}\right]\cos i \tag{6}\]
\[\dot{\bar{\theta}} =\sqrt{\mu/a^{3}} \tag{7}\]

where \(R_{E}\) is the Earth's equatorial radius.

## IV Formation Flying

From (6) and (7), the nodal precession rate and the AoL rate are both functions of the semi-major axis. The semi-major axis can be modulated through drag variation (4). This means that drag modulation can be used to effect changes in the AoL and RAAN for a satellite and establish satellite formations with coplanar and non-coplanar separations. In this section we develop the linear dynamics model and the trajectory optimization method we use for drag-based formation flying.

Fig. 2: Notation used to describe the orbital state of a satellite in a circular orbit. Here, the blue plane is the Earth's equatorial plane and \(\Omega\) is referenced to an inertially fixed direction.

### _Linearized Dynamics_

Using a first-order Taylor expansion, (6) and (7) can be linearized with respect to a reference semi-major axis, \(a\), and (4) can be linearized with respect to a reference drag, \(D\), as follows

\[\Delta\dot{\bar{\theta}} =-\frac{3}{2}\sqrt{\frac{\mu}{a^{5}}}\Delta a\triangleq k_{1}\Delta a \tag{8}\]
\[\Delta\dot{a} =2\sqrt{\frac{a^{3}}{\mu}}D\Delta D\triangleq k_{3}\Delta D \tag{9}\]
\[\Delta\dot{\bar{\Omega}} =\frac{21}{4}J_{2}\sqrt{\frac{\mu}{a^{9}}}R_{E}^{2}\cos i\Delta a\triangleq k_{2}\Delta a. \tag{10}\]

According to (8) and (10), the rates of \(\Delta\bar{\theta}\) and \(\Delta\bar{\Omega}\) are both governed by \(\Delta a\), so they cannot be altered independently of each other. Assuming \(\Delta\bar{\theta}=\Delta\bar{\Omega}=0\) initially, any \(\Delta\bar{\theta}\) achieved results in

\[\Delta\bar{\Omega}=\frac{k_{2}}{k_{1}}\Delta\bar{\theta}\triangleq k_{4}\Delta\bar{\theta} \tag{11}\]

where \(k_{4}=k_{2}/k_{1}\) is a dimensionless constant that depends only on the reference orbit.
The linear equations in (8) to (10) can be put in the standard form of a linear dynamical system,

\[\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u} \tag{13}\]

where

\[\mathbf{x}=\begin{bmatrix}\Delta\bar{\theta}\\ \Delta a\end{bmatrix},\quad A=\begin{bmatrix}0&k_{1}\\ 0&0\end{bmatrix},\quad B=\begin{bmatrix}0\\ k_{3}\end{bmatrix} \tag{14}\]

and \(\mathbf{u}=\begin{bmatrix}\Delta D\end{bmatrix}\). We omit \(\Delta\bar{\Omega}\) from the state since (11) establishes a relationship between \(\Delta\bar{\Omega}\) and \(\Delta\bar{\theta}\). To extend this method to the case of \(n>2\) satellites, the first satellite is chosen as the chief satellite, and the other satellites' \(\Delta\) states are all referenced to the chief. We concatenate \(n-1\) copies of (14) to rewrite (13) as a \(2(n-1)\) state system. When referring to the relative state between the chief and another satellite, we use the notation \(\Delta a^{1-p}\) and \(\Delta\bar{\theta}^{1-p}\), where \(p\) is the index of the satellite.

### _Constraints on the Final Conditions of Drag-Based Formation Control_

Given a pair of satellites deployed at the same initial orbit (i.e. \(\mathbf{x}_{0}=0\)), our goal is to manipulate the differential drag \(\Delta D\) over time to achieve a final formation configuration \(\mathbf{x}_{f}\) at some future time \(t_{f}\). The basic control strategy is to lower the orbital altitude of one satellite such that its nodal precession rate is larger than that of the other satellite. The satellites then remain in this configuration, with \(\Delta D=0\), until a desired \(\Delta\bar{\theta}\), and therefore a desired \(\Delta\bar{\Omega}\), is achieved, at which time the higher satellite lowers its altitude to match the first satellite. To maintain a fixed final formation configuration, we must have \(\dot{\mathbf{x}}_{f}=0\). To satisfy this, (8) to (10) show that \(\Delta a_{f}\) and \(\Delta D_{f}\) must be zero -- the satellites must be at the same final altitude and in the same drag configuration. Modifying (11) to account for the fact that \(\Delta\bar{\theta}\) is an angular quantity, the possible \(\Delta\bar{\Omega}\) for a desired final \(\Delta\bar{\theta}_{f}\) are given by

\[\Delta\bar{\Omega}_{f}=k_{4}(\Delta\bar{\theta}_{f}+2\pi\ell) \tag{15}\]

where \(\ell\) is any integer. To first order, (15) defines the argument of latitude and right ascension separations achievable using drag modulation. For differential-drag formation control to be feasible, (15) is a fundamental limit that must be obeyed when selecting the final \(\Delta\bar{\theta}\) and \(\Delta\bar{\Omega}\) of a formation.

### _Optimization-Based Drag Maneuver Planning_

Given \(n\) satellites deployed in the same orbit (i.e., \(\mathbf{x}_{0}=0\)), we desire to maneuver these satellites into a formation configuration at a final time \(t_{f}\). To do so with differential drag, we must choose the final state \(\mathbf{x}_{f}\) by choosing the desired value for either \(\Delta\Omega_{f}\) or \(\Delta\theta_{f}\) and selecting the other in accordance with (15). The final altitude or final time are then a result of this choice. It then remains to find the necessary control inputs to achieve this formation.
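Dividing (10) by (8) gives the closed form \(k_{4}=-\frac{7}{2}J_{2}(R_{E}/a)^{2}\cos i\), so the limit (15) is easy to evaluate numerically. The short Python sketch below does so for an ISS-like orbit with \(\Delta\bar{\theta}_{f}=0\); the printed values line up with Table I (\(-0.75^{\circ}\), \(-1.5^{\circ}\), \(-2.25^{\circ}\)) and, up to rounding of the physical constants, with the \(\approx 1.4^{\circ}\) quoted for \(\ell=2\) in the next section:

```python
import numpy as np

# First-order coupling between along-track and cross-track separation:
# k4 = k2 / k1 = -(7/2) * J2 * (Re/a)^2 * cos(i), used in Eq. (15).
J2, Re = 1.082626e-3, 6378.137e3      # J2 coefficient, equatorial radius (m)
a = Re + 440e3                        # ISS-like deployment orbit radius
inc = np.radians(51.5)
k4 = -3.5 * J2 * (Re / a) ** 2 * np.cos(inc)

for ell in (1, 2, 3):                 # Eq. (15) with Delta-theta_f = 0
    dOmega = k4 * 2.0 * np.pi * ell   # rad
    arc_km = abs(dOmega) * a / 1e3    # arc length at the orbit radius
    print(f"l={ell}: dRAAN = {np.degrees(dOmega):+.2f} deg (~{arc_km:.0f} km)")
```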
A full trajectory of drag modulation inputs that drives the satellite formation from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{f}\) can be planned by solving the convex optimization problem

\[\begin{aligned}
\underset{\mathbf{x}_{1:N},\,\mathbf{u}_{1:N-1}}{\operatorname{minimize}}\quad & g_{f}(\mathbf{x}_{N})+\sum_{i=1}^{N-1}g(\mathbf{x}_{i},\mathbf{u}_{i})\\
\operatorname{subject\ to}\quad & \mathbf{x}_{i+1}=A\mathbf{x}_{i}+B\mathbf{u}_{i},\\
& \begin{bmatrix}\Delta a_{N}^{1-2},\ldots,\Delta a_{N}^{1-n}\end{bmatrix}=0,\\
& \Delta a_{\min}\leq\begin{bmatrix}\Delta a_{i}^{1-2},\ldots,\Delta a_{i}^{1-n}\end{bmatrix}\leq\Delta a_{\max},\\
& u_{\min}\leq\mathbf{u}_{i}\leq u_{\max}
\end{aligned} \tag{16}\]

where \(g(x,u)\) is a convex stage cost function, and \(g_{f}(x)\) is a convex terminal cost function. The first constraint enforces the discretized form of the linear dynamics from (13), the second constraint ensures the satellites end at the same final altitude, the third constraint restricts the minimum and maximum altitude differences for each pair of satellites to be within \(\Delta a_{\min}\) and \(\Delta a_{\max}\), and the final constraint enforces \(u_{\min}\) and \(u_{\max}\) as lower and upper bounds on the drag achievable by each satellite. In this work, meeting the \(\Delta\theta\) final conditions is not treated as a constraint but included in the cost function; this relaxes the problem and avoids ill-conditioning. The cost functions \(g\) and \(g_{f}\) can be chosen to shape the overall system behavior. To produce minimum-time bang-bang control commands, \(L_{1}\) costs can be used [33]:

\[\begin{aligned}
g_{f}(\mathbf{x}_{N})&=\left\|\Delta\bar{\theta}_{N}^{1-2}-\Delta\bar{\theta}_{f}^{1-2}\right\|_{1}+\ldots+\left\|\Delta\bar{\theta}_{N}^{1-n}-\Delta\bar{\theta}_{f}^{1-n}\right\|_{1}\\
g(\mathbf{x},\mathbf{u})&=\left\|\Delta\bar{\theta}^{1-2}-\Delta\bar{\theta}_{f}^{1-2}\right\|_{1}+\ldots+\left\|\Delta\bar{\theta}^{1-n}-\Delta\bar{\theta}_{f}^{1-n}\right\|_{1}\\
&\quad+\left\|u_{1}\right\|_{1}+\left\|u_{2}\right\|_{1}+\ldots+\left\|u_{n}\right\|_{1}.
\end{aligned} \tag{17}\]

Other convex cost functions, such as a quadratic cost, are also possible.

## V Simulation Experiments

The convex optimization problem in (16) with the cost function in (17) is a linear program and can be solved with many standard solvers such as ECOS [34], GLPK [35], or MOSEK [36]. In these experiments, (16) and (17) are implemented in Julia using the Convex.jl modeling toolbox [37] and solved with the MOSEK solver. In all of our experiments we consider a constellation in which each satellite is a \(1.5\mathrm{kg}\) CubeSat with a \(15\mathrm{cm}\times 10\mathrm{cm}\times 10\mathrm{cm}\) chassis and equipped with two deployable solar panels each with dimension \(20\mathrm{cm}\times 15\mathrm{cm}\times 0.3\mathrm{cm}\). A notional model of the satellite and its high and low drag configurations is shown in Figure 1. The drag ratio of the satellite is 7.5:1. To be conservative we use a 5:1 drag ratio here, which results in setting the input limits, \(u_{\mathrm{min}}\) and \(u_{\mathrm{max}}\), of (16) to 0.2 and 1, respectively.

### _Trajectory Optimization_

Here we solve (16) for a pair of satellites deployed at 440 km altitude and with an inclination of \(51.5^{\circ}\) -- conditions that approximate deployment from the International Space Station (ISS). The final conditions are set to \(\Delta\bar{\theta}_{f}=0\) and \(\ell=2\). From (15), this results in \(\Delta\bar{\Omega}_{f}=1.4^{\circ}\), a spherical distance of 165 km.
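For readers who want to experiment, a minimal transcription of (16)-(17) for this two-satellite case is sketched below. The authors' implementation is in Julia with Convex.jl and MOSEK; this stand-in uses Python with cvxpy, an Euler discretization of one step per orbit, a single relative drag input, and an assumed reference drag acceleration \(D_{0}\). Apart from the orbit, the \(\pm 10\) km altitude window, and the unwrapped target \(\Delta\bar{\theta}_{f}+2\pi\ell\) from Eq. (15), none of the numerical choices are taken from the paper:

```python
import numpy as np
import cvxpy as cp

mu = 3.986004418e14                  # m^3/s^2
a = 6378.137e3 + 440e3               # ISS-like orbit radius, m
k1 = -1.5 * np.sqrt(mu / a**5)       # Eq. (8), rad/s per m of Delta-a
D0 = 1e-5                            # reference drag acceleration, m/s^2 (assumed)
k3 = 2.0 * np.sqrt(a**3 / mu) * D0   # Eq. (9)

dt = 5600.0                          # ~one orbital period per step, s
N = 1500                             # horizon, orbits
Ad = np.array([[1.0, k1 * dt], [0.0, 1.0]])   # Euler-discretized dynamics
Bd = np.array([0.0, k3 * dt])

x = cp.Variable((N, 2))              # columns: [Delta-theta (rad), Delta-a (m)]
u = cp.Variable(N - 1)               # relative drag input Delta-D

theta_f = 2.0 * np.pi * 2            # unwrapped target Delta-theta_f + 2*pi*l, l = 2
cons = [x[0] == 0,                   # both satellites deployed together
        x[N - 1, 1] == 0,            # equal final altitudes
        cp.abs(x[:, 1]) <= 10e3,     # +/- 10 km altitude window
        cp.abs(u) <= 0.8]            # 5:1 drag ratio => |Delta-D| <= 1 - 0.2
cons += [x[i + 1] == Ad @ x[i] + Bd * u[i] for i in range(N - 1)]

cost = cp.sum(cp.abs(x[:, 0] - theta_f)) + cp.norm1(u)   # L1 costs, Eq. (17)
cp.Problem(cp.Minimize(cost), cons).solve()              # any LP solver works
print(x.value[-1])                   # final [Delta-theta, Delta-a]
```

Because the state is unwrapped, winding the relative AoL through \(2\pi\ell\) and bringing \(\Delta a\) back to zero reproduces the bang-bang structure described next.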
## V Simulation Experiments

The convex optimization problem in (16) with the cost function in (17) is a linear program and can be solved with many standard solvers such as ECOS [34], GLPK [35], or MOSEK [36]. In these experiments, (16) and (17) are implemented in Julia using the Convex.jl modeling toolbox [37] and solved with the MOSEK solver. In all of our experiments we consider a constellation in which each satellite is a \(1.5\,\mathrm{kg}\) CubeSat with a \(15\,\mathrm{cm}\times 10\,\mathrm{cm}\times 10\,\mathrm{cm}\) chassis, equipped with two deployable solar panels, each with dimensions \(20\,\mathrm{cm}\times 15\,\mathrm{cm}\times 0.3\,\mathrm{cm}\). A notional model of the satellite and its high and low drag configurations is shown in Figure 1. The drag ratio of the satellite is 7.5:1. To be conservative, we use a 5:1 drag ratio here, which results in setting the input limits, \(u_{\mathrm{min}}\) and \(u_{\mathrm{max}}\), of (16) to 0.2 and 1, respectively.

### _Trajectory Optimization_

Here we solve (16) for a pair of satellites deployed at \(440\,\mathrm{km}\) altitude and with an inclination of \(51.5^{\circ}\), conditions that approximate deployment from the International Space Station (ISS). The final conditions are set to \(\Delta\bar{\theta}_{f}=0\) and \(\ell=2\). From (15), this results in \(\Delta\bar{\Omega}_{f}=1.4^{\circ}\), a spherical distance of 165 km. In this scenario, the altitude limits, \(\Delta a_{\mathrm{max}}\) and \(\Delta a_{\mathrm{min}}\), are set to \(\pm 10\,\mathrm{km}\). The results of a single trajectory optimization over a 1500-orbit time horizon for these conditions are shown in Fig. 3. The top plot shows the drag control trajectory for the two satellites. To increase the relative AoL and RAAN, the orbital altitude of the second satellite is decreased first. The bottom three plots show the change in \(\Delta a\), \(\Delta\bar{\theta}\), and \(\Delta\bar{\Omega}\), respectively. From these plots we can see that the relative AoL increases by \(720^{\circ}\), or two full orbits, and that the first satellite lowers its altitude to exactly reach \(\Delta\bar{\theta}_{f}=0\). We can also see that the \(10\,\mathrm{km}\) altitude constraint was satisfied. This optimization took \(0.8\,\mathrm{s}\) to solve on a MacBook Pro with an Apple M1 Pro processor.

Fig. 3: Linear trajectory optimization solution for a two-satellite formation. Top: the drag ratios for the satellites. Second: the relative altitude between the satellites. Third: the relative argument of latitude between the two satellites. Bottom: the relative right ascension of the ascending node between the two satellites. The satellites end at the same altitude, resulting in a constant final argument of latitude and right ascension of the ascending node.

### _Closed-Loop Simulation Results_

To capture the effect of realistic modeling errors and disturbances, closed-loop simulations are performed using nonlinear dynamics with additional perturbations not modeled in (13). Specifically, we incorporate the effects of Earth's rotation on drag, the first five zonal harmonics (\(J_{2}\)-\(J_{6}\)) for gravity, and a small eccentricity, \(e=0.005\), of the initial orbit. To successfully execute planned maneuvers in the presence of modeling errors and disturbances, we compute solutions to (16) in a receding-horizon or model-predictive control (MPC) loop. On each loop, the optimization problem is solved using the current measured state of the spacecraft formation, and the computed control inputs are applied over the next timestep. The repeated re-solving corrects for disturbances on the spacecraft. The timesteps of the control loop are on the order of one to five orbital periods.
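A minimal sketch of this receding-horizon loop is given below. The callables `planner`, `apply_drag`, and `measure_state` (and their signatures) are illustrative assumptions abstracting the LP solver and the simulated spacecraft interface; they are not part of the implementation described above.

```python
def mpc_drag_control(x0, x_target, planner, apply_drag, measure_state, n_steps):
    """Receding-horizon (MPC) loop sketch: at each step, re-solve the
    trajectory LP from the *measured* state, apply only the first input,
    and let the next re-plan correct for disturbances and model error."""
    x = x0
    for _ in range(n_steps):            # one step ~ one to five orbits
        u_plan = planner(x, x_target)   # re-plan the full remaining maneuver
        apply_drag(u_plan[:, 0])        # execute only the first planned input
        x = measure_state()             # closed-loop feedback enters here
    return x
```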
This receding-horizon control algorithm has been used to solve two scenarios, depicted in Figs. 4(a) and 4(b) and described in Sections V-B1 and V-B2. The initial state is chosen to approximate deployment with two common CubeSat deployment methods: the ISS and a SpaceX Transporter launch. Both scenarios assume that all of the satellites have the same drag ratio and the same initial state.

#### V-B1 Scenario 1 – Line Formation

Scenario 1 assumes that four satellites are deployed from the ISS, with an altitude of \(440\,\mathrm{km}\), eccentricity of \(0.005\), and inclination of \(51.5^{\circ}\). The goal is to maneuver the satellites to be equally distributed in the cross-track direction with zero change in AoL so they pass over the equator in a line, as depicted in Fig. 4(a). This corresponds to \(\Delta\bar{\theta}_{f}=0\) and \(\Delta\bar{\Omega}_{f}=k_{4}2\pi\ell\) with \(\ell=1,2,3\). In this scenario, the receding-horizon control policy is re-solved once per orbit over a time horizon of 1400 orbits, and the altitude limits \(\Delta a_{\max}\) and \(\Delta a_{\min}\) are set to \(\pm 100\,\mathrm{km}\).

The results of the first scenario are presented in Figs. 5 and 6 and Table I. The final orbit has a \(385.5\,\mathrm{km}\) altitude, eccentricity of \(0.002\), and inclination of \(51.49^{\circ}\). The top plot of Fig. 5 shows the control trajectories for the four satellites. The fourth satellite, the satellite that aims to reach the largest \(\Delta\bar{\Omega}\), drives the overall differential drag required for the formation. The bottom plot shows the altitude variation for the four satellites; where the altitude rate is steeper, the satellite is in a high drag configuration. Conversely, where the altitude rate is shallower, the satellite is in a low drag configuration. Figure 6 shows the AoL and RAAN difference for the three satellite pairs. The difference is calculated with respect to the chief satellite. Table I reports the overall maneuver time, the final difference in the AoL and the RAAN, and the spherical distance between the chief satellite and each of the other satellites. It takes three months to reach the final configuration, and the maximum distance between two satellites is 268.2 km.

#### V-B2 Scenario 2 – Square Formation

The second scenario assumes that four satellites are deployed from an approximately sun-synchronous SpaceX Transporter launch, corresponding to an altitude of 550 km, an eccentricity of 0.005, and an inclination of \(98^{\circ}\). The goal is to maneuver the satellites to be distributed in AoL and RAAN to form the vertices of a square, as depicted in Fig. 4(b). For this scenario, the \(\ell\) values are 0, 6, and 6, while the \(\Delta\bar{\theta}_{f}\) are 0.03, 0, and 0.03. The receding-horizon control policy is re-solved every five orbits over a time horizon of 5000 orbits. The results of scenario 2 are presented in Fig. 7, Fig. 8, and Table II. The final orbit has a \(518.1\,\mathrm{km}\) altitude, eccentricity of 0.0022, and inclination of \(98^{\circ}\). The top plot of Fig. 7 shows the control input, and the bottom plot shows the altitude change for the four satellites. The plots in Fig. 8 report the AoL and RAAN difference for the three pairs. As before, the difference is evaluated with respect to the chief satellite. Table II reports the overall maneuver time, the AoL and RAAN final differences, and the spherical distance between the chief satellite and every other satellite. This scenario takes longer than the first scenario; however, the results show that under less advantageous initial conditions, a spacecraft formation with both along-track and cross-track separations can be established using our presented drag-based method. Furthermore, the algorithm is able to define the control trajectory in the presence of disturbances and modeling errors.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Pair & \(t_{f}\), months & \(\Delta\theta_{f}\), deg & \(\Delta\Omega_{f}\), deg & Spherical Distance, km \\ \hline Sat. 1 - 2 & 3 & 0 & -0.75 & 89.26 \\ \hline Sat. 1 - 3 & 3 & -0.008 & -1.5 & 178.7 \\ \hline Sat. 1 - 4 & 3 & -0.213 & -2.25 & 268.2 \\ \hline \end{tabular} \end{table} TABLE I: Results for Scenario 1.

Fig. 4: (a) Scenario 1: formation of four satellites in a line with equally distributed right ascension of the ascending node. (b) Scenario 2: formation of four satellites distributed in argument of latitude and right ascension of the ascending node to form the vertices of a square.

Fig. 5: Scenario 1. Top: The control trajectories for the four satellites. Bottom: The altitude variation of the four satellites during the maneuver. Unlike in Fig. 3, the control trajectories are not piecewise constant due to the on-line correction of disturbances.
Fig. 6: Scenario 1. Top: Right ascension of the ascending node difference with respect to the chief satellite. Bottom: Argument of latitude difference with respect to the chief satellite. All the satellite pairs reach the same final argument of latitude.

The two scenarios have interesting differences from a mission-design viewpoint. Lower orbits, like the ISS orbit, result in faster natural orbital decay due to drag, reducing the possible altitude change. However, the lower inclination of the ISS orbit results in a larger \(k_{4}\) and a faster \(\Delta\bar{\Omega}\) rate of \(0.745^{\circ}\) per \(2\pi\) revolution of \(\Delta\bar{\theta}\). Conversely, deployment from the SpaceX Transporter allows a larger overall altitude change but a smaller \(\Delta\bar{\Omega}\) rate of \(0.16^{\circ}\) per \(2\pi\) revolution of \(\Delta\bar{\theta}\). This is why scenario 2 takes longer to complete than scenario 1. The effectiveness of differential drag for cross-track formation control is highly dependent on orbital inclination.

## VI Conclusions

We have presented a novel control scheme that is able to maneuver a low-Earth orbit satellite formation in both along-track and cross-track directions without expending propellant. We formulate the drag-based formation control problem as a linear program that can be solved very efficiently. This allows it to be used in a receding-horizon manner, updating the control inputs and trajectory for a satellite once per orbit. Our simulation results show that our method is robust to disturbances and unmodeled dynamics, and is viable for autonomous implementation on orbit. While we assume a known atmospheric density, in reality the atmospheric density in low-Earth orbit varies widely. An important extension of this work will be to accurately estimate the atmospheric drag; this estimate can be easily incorporated into our trajectory optimization formulation and will ensure robust performance. Our contributions can dramatically reduce the cost and complexity associated with deploying and managing multiple-plane satellite formations by eliminating the need for propulsion systems onboard the satellites.

## VII Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 2111751. A portion of this work was supported by the United States Department of Defense National Defense Science and Engineering Graduate Fellowship (NDSEG).
2305.18299
One-dimensional Dexter-type excitonic topological phase transition
Recently, topological excitons have attracted much attention. However, studies on the topological properties of excitons in one dimension are still rare. Here we have computed the Zak phase for a generic one-dimensional dimerised excitonic model. Tuning the relevant hopping parameters gives rise to a rich spectrum of physics, including a non-trivial topological phase in a uniform chain, unlike the conventional Su-Schrieffer-Heeger model, topologically nontrivial flat bands, and exotic fractional phases. A new concept of ``composite chiral site'' was developed to interpret the Zak phase of $\pi$ in our calculations. Our finite-chain calculations substantiate topological edge states, providing more information about their characteristics. Most importantly, for the first time, a topological phase transition assisted by the Dexter electron-exchange process has been found.
Jianhua Zhu, Ji Chen, Wei Wu
2023-04-27T21:43:07Z
http://arxiv.org/abs/2305.18299v4
Topological properties of a one-dimensional excitonic model combining local excitation and charge transfer

###### Abstract

We have computed the Zak phase for a one-dimensional excitonic model, which takes into account dimerisation, local and charge-transfer excited states. There are four hopping parameters, which can be varied to give rise to a rich spectrum of physics. By turning on more than one parameter, we find that (i) the topological phase can be \(\pi\) even for a uniform chain, which is related to topological order, (ii) there exist topologically nontrivial flat bands, suggesting an interesting correlation between flat bands and topology, (iii) exotic fractional phases appear, which are due to quantum interference and relevant to anyons and fractional statistics, and (iv) there is a phase transition related to a second-order hopping event, namely excitonic hopping. We have also developed the concept of effective chiral states (linear combinations of excitonic states) to interpret our calculations. Our model is sufficiently general to describe excitonic topological properties for one-dimensional chain structures formed by physical units such as atoms, molecules, semiconductor dopants, and quantum dots.

## I Introduction

Excitons (electron-hole pairs) not only play a vital role in physics [1] but also underpin many important applications in Bose-Einstein condensation, photonics, carbon nanotubes, two-dimensional materials, solar cells, and light-emitting diodes (LEDs) [2; 3; 4; 5; 6; 7; 8; 9], which need to be understood properly from a microscopic perspective. Meanwhile, one-dimensional chain structures consisting of atoms, molecules, quantum dots, and dopants have recently attracted much attention due to their interesting topological properties [9; 10; 11; 12; 13; 14; 15; 16]. The combination of the two, excitons in one dimension, is of great interest for fundamental studies of low-dimensional optics and photonics from a bosonic point of view [3; 17; 18; 19; 20].

The one-dimensional topological insulator described by the Su-Schrieffer-Heeger (SSH) model respects chiral symmetry [21; 22], leading to a quantized Zak phase (either odd or even multiples of \(\pi\)); the corresponding integer is called the winding number [23; 24]. In the SSH model, when the hopping strength between cells is greater than that within the cell, there appears a topologically nontrivial phase. This type of one-dimensional model was first discovered in polyacetylene, a polymer chain with alternating single and double carbon bonds, but can also be found in impurity chains in semiconductors [25; 26; 27]. Another fascinating aspect of one-dimensional systems is the flat band, which has recently stimulated intense research interest in condensed matter physics, photonics, and meta-materials [28; 29; 30; 31; 32; 33; 34]. Flat bands are fundamentally very interesting and have many potential applications, such as slow light [20]. In addition, one-dimensional systems could be useful for demonstrating fascinating topological order, i.e., a phase transition or state without symmetry breaking [35].

Locally excited (LE) and charge-transfer (CT) excited states are universal excitonic states that occur in all physical systems. Previously, we established an excitonic model for one-dimensional molecular chains, which took into account both the intramolecular LE and intermolecular CT excited states and could interpret a body of optical experimental results [19].
Both of these excited states are important for optical and charge dynamics, especially in solar cells and LEDs [36; 37]. More importantly, this model can be generalized to any one-dimensional physical system formed by atoms, molecules, quantum dots, impurities, and so on [25; 26; 27; 16; 38]. It is therefore valuable to understand this model systematically. Here we have mapped all the states in the cells to better illustrate the couplings between them, as shown in Fig.1, where we have also labelled the states A-F for the convenience of discussion. The couplings \(t_{1}\), \(t_{2}\) have been differentiated from \(t_{1}^{\prime}\) and \(t_{2}^{\prime}\) to account for dimerisation. The model consists of four hopping parameters, \(t_{1}\), \(t_{2}\), \(t_{1}^{\prime}\), and \(t_{2}^{\prime}\); let us set \(d=0\) for the moment. Our calculation results are consistent with previous experimental and numerical works, which have shown that out-of-equilibrium systems can exhibit varied topologies due to dynamically broken symmetry [39]. Moreover, there is a plethora of choices for setting up these parameters, which can lead to a rich spectrum of physics, ranging from fractional topological phases to nontrivial flat bands.

## II Results and discussions

### One- and two-parameter model

In this model, if we only turn on one or two parameters, degeneracies arise easily, which implies that we can always use a unitary transformation to construct a new set of eigenvectors. As we can always choose a \(k\)-independent unitary transformation, the summation of the Zak phases in the degenerate manifold will be conserved. We can therefore use only this summation in the degenerate manifold to characterize topological properties. For the one-parameter model, when turning on \(t_{1}^{\prime}\) or \(t_{2}^{\prime}\) only, we can have two topologically non-trivial states with a Zak phase of \(\pi\), which is similar to the SSH model. The eigenvectors are \(\frac{1}{\sqrt{2}}[\pm e^{-ik},0,0,0,1,0]\), which consist of states coupled not within the cell but across it. This situation also respects chiral symmetry, as can be checked from the corresponding eigenvectors.

For the two-parameter model, we have computed analytically all the Zak phases when \(t_{1}=1\) and \(t_{1}^{\prime}=x\neq 0\); the eigenvalues and the corresponding Zak phases are tabulated in Table.1, from which we can see that all the bands are flat. When \(x\rightarrow\infty\), the Zak phases return to those of the one-parameter model with only \(t_{1}^{\prime}\) nonzero. Notice that all the Zak phases can take fractional multiples of \(\pi\). Moreover, when \(x=1\), i.e., \(t_{1}=t_{1}^{\prime}\), the two flat bands at zero energy become topologically nontrivial, as the summation of the phases for these two bands is \(\pi\). In other words, when \(t_{1}=t_{1}^{\prime}\), exchanging CT1 and CT3 will lead to a phase of \(\pi\). Notice that the \(x\) parameter can be tuned continuously, which implies that the symmetry is not broken. This indicates topological order in this excitonic model, which is by definition a phase transition without symmetry breaking [35]. In addition, one of the two eigenvectors for the zero-energy flat band is \(\frac{1}{\sqrt{x^{2}+1}}[0,0,-e^{-ik},0,1,0]\), which gives rise to the fractional phase \(2\pi\frac{x^{2}}{x^{2}+1}\) upon exchanging CT1 and CT3.
This is similar to the concept of anyons in two dimensions (there appears a phase factor of \(e^{i\theta}\) when exchanging two particles), which can lead to fractional statistics, especially the fractional quantum Hall effect [40; 41; 42]. In Fig.2, we show the calculations of \(\gamma_{1}+\gamma_{2}\) and \(\gamma_{3}+\gamma_{4}\) for this scenario, in which \(\gamma_{1}+\gamma_{2}\) reproduces qualitatively the numerical and experimental results in Ref.[39]. Here we can see an interesting intrinsic correlation between the local flat bands and the chiral symmetry. When turning on \(t_{2}=1\) and \(t_{2}^{\prime}=x\), \(t_{1}=1\) and \(t_{2}^{\prime}=x\), or \(t_{2}=1\) and \(t_{1}^{\prime}=x\), we have the same eigenvalues and corresponding Zak phases.

Another situation is where we turn on \(t_{1}^{\prime}=1\) and \(t_{2}^{\prime}=x\neq 1\). The eigenvalues and corresponding Zak phases are tabulated in Table.1. The Zak phases for the flat bands at energy zero are 0. By contrast, the Zak phases for the other bands take a fractional value of \(\pi\), namely \(\frac{\pi}{2}\). For this, we have analysed the corresponding eigenvector, which is formed by LE1, LE2, CT3 and CT4, as shown in Fig.3(a). The interference between the state coupled via \(t_{1}^{\prime}\) and that coupled via \(t_{2}^{\prime}\) leads to a \(\pi/2\) phase. Notice that when \(x=1\), there is a four-fold degeneracy for the flat bands at zero energy; the sum of the Zak phases for these four bands is \(\pi\), while the other two bands have a phase of \(\frac{1}{2}\pi\). The phase of \(\pi\) for the flat bands at zero energy stems from the chiral states formed by LE1 and LE2 (Fig.3b), while the half-\(\pi\) phases originate from the coupling between this chiral state and CT3 and CT4.

Figure 1: (Colour online.) We have mapped the previous model to a state-based structure with couplings and state labels.

Figure 2: (Colour online.) The Zak-phase summation \(\gamma_{1}+\gamma_{2}\) (red) and \(\gamma_{3}+\gamma_{4}\) (or \(\gamma_{5}+\gamma_{6}\), blue) for \(t_{1}=1,t_{1}^{\prime}=1\). Notice that the red curve (the Zak phases for the two flat bands at energy zero) reproduces qualitatively the experimental and theoretical results in Ref.[39].

\begin{table} \begin{tabular}{c c c c} Eigenvalues & 0 (2) & \(-\sqrt{1+x^{2}}\) (2) & \(\sqrt{1+x^{2}}\) (2) \\ \hline Zak phase & \(\gamma_{1}+\gamma_{2}\) & \(\gamma_{3}+\gamma_{4}\) & \(\gamma_{5}+\gamma_{6}\) \\ & \(2\pi\frac{x^{2}}{1+x^{2}}\) & \(\pi\frac{x^{2}+2}{1+x^{2}}\) & \(\pi\frac{x^{2}+2}{1+x^{2}}\) \\ \end{tabular} \end{table} Table 1: Calculated Zak phases for turning on \(t_{1}=1\) and \(t_{1}^{\prime}=x\neq 0\). State degeneracies are given in brackets. The summation of the Zak phases is shown for degenerate bands.

### Three-parameter model

We have computed all 12 combinations of the four hopping parameters taking values in the set \(\{0,1,x,y\}\); the analytical formulae for all the Zak phases are provided in the supplementary information (SI). For the three-parameter model, all the bands are flat. From here, we adopt the label \(\{t_{1},t_{2},t_{1}^{\prime},t_{2}^{\prime}\}\) to describe our parameter set. We show here an interesting example for \(\{0,1,x,y\}\) in Table.3. The flat bands at energy zero could be topologically nontrivial when \(x^{2}+y^{2}=1\).
Especially when \(x=y\), the chiral symmetry for the eigenvectors of the third and fourth bands is restored, as shown in Fig.4, in terms of the effective chiral states (linear combinations of excitonic states), leading to a phase of \(\pi\). The two associated eigenvectors are \(v_{2}=\frac{1}{2}[e^{-ik},-1,-e^{-ik},1,0,0]\) and \(v_{5}=\frac{1}{2}[-e^{-ik},1,-e^{-ik},1,0,0]\). We can see that there would be a minus sign if exchanging the A-D and B-C pairs for \(v_{2}\) simultaneously at the \(\Gamma\)-point. For \(v_{5}\), we can swap the A-D and B-C pairs or the A-C and B-D pairs. The left-hand state (\(L\)) is formed by LE1 and CT1 coupled by \(t_{2}\) and \(t_{2}^{\prime}\), while the right-hand state (\(R\)) is formed by LE2 and CT2. These two effective chiral states are decoupled by \(t_{1}\), which is the key to restoring the chiral symmetry and rendering the band topologically non-trivial. Similarly, when \(x=-y\), the fifth and sixth bands are topologically non-trivial. Once again, due to the continuity of the parameters, we have topological order here as well. In fact, this topological order can occur in other situations, as suggested by the tables in the SI. In Fig.1 of the SI, we have shown more examples of scenarios with a Zak phase of \(\pi\), which share similar features to those discussed above.

Having discussed the Zak phase for the flat bands at zero energy, we have analysed the associated eigenvectors for \(\{1,0,1,0\}\) and for \(\{0,1,x,y\}\) with \(x^{2}+y^{2}=1\); in both scenarios, the sum of the phases for the flat bands is \(\pi\) (Fig.5). These flat bands are formed by the four CT states. We can again implement the idea of effective chiral states coupled by non-zero hopping, which are decoupled by another parameter to form the \(L\) and \(R\) states. As shown in Fig.5(b), the \(L\) state is formed by CT2-4 (coupled by \(t_{1}^{\prime}\), \(t_{2}\), and \(t_{2}^{\prime}\)), and \(R\) is formed by CT1; the two are decoupled by \(t_{1}\). As we can see from the tables in the SI, the phase of \(\frac{3}{2}\pi\) starts to appear, which shares a similar feature with that illustrated in Fig.3, i.e., LE1 and LE2 are coupled with CT3 and CT4. The only difference is that the wave function propagates from left to right when the phase is equal to \(\frac{3}{2}\pi\), whereas it propagates in the opposite direction for \(\frac{1}{2}\pi\). In summary, we can see that (i) when we can clearly decouple groups of states within a cell, we can have a phase of \(\pi\), and (ii) once multiple pairs of states are coupled across the cell, fractional phases appear, which are related to the quantum interference between two pairs of states with phases of \(\pi\) and \(0\), respectively.

### Four-parameter model

For the four-parameter model, we have first focused on uniform chains, i.e., \(t_{1}=t_{1}^{\prime}=1,t_{2}=t_{2}^{\prime}=x\). When \(x\rightarrow\infty\), the phases will approach \(\frac{1}{2}\pi\) asymptotically, leading to a phase sum of \(\frac{1}{2}\pi\), which is consistent with the uniform-chain calculation shown in the SI. This is different from the case for \(t_{1}=t_{1}^{\prime}=0,t_{2}=t_{2}^{\prime}=1\) due to the symmetry and quantum interference.

Figure 3: (Colour online.) An illustration of the components in the eigenvectors for \(t_{1}^{\prime}=1\) and \(t_{2}^{\prime}=x\) in the two-parameter model. Here we use red (blue) colour to represent non-zero (zero) hopping.

Figure 4: (Colour online.) An illustration of the two effective chiral states for \(t_{1}=0,t_{2}=1,t_{1}^{\prime}=t_{2}^{\prime}=x\neq 0\): the left-hand state (\(L\)) formed by LE1 and CT1 coupled by \(t_{2}\) (circled in yellow) and the right-hand state (\(R\)) formed by LE2 and CT2 (circled in blue). \(L\) and \(R\) are decoupled by \(t_{1}\).
Two of the bands could be topologically nontrivial when \(x=\pm(\sqrt{2}-1)\), as shown in Fig.6. It is unexpected that we can have a topologically nontrivial band even for a uniform chain in this model. For \(x=\sqrt{2}-1\), we have checked the eigenvectors at the \(\Gamma\)-point; the coefficients of the states in the eigenvector with the Zak phase of \(\pi\) are shown in Fig.7. The effective chiral states can be formed by the states in the red circle (left-hand) and in the blue circle (right-hand); their coefficients have opposite signs with the same magnitude. In this case, the three states A, B, and F are exchanged simultaneously with D, E, and C, respectively, which would also respect the chiral symmetry. In addition, as \(x\) approaches zero, there would be a two-fold degeneracy for the bottom two bands; the sum of their phases is \(\frac{1}{2}\pi\) (\(\frac{5}{2}\pi\)), which is different from the two-parameter model (\(\frac{3}{2}\pi\)), representing a phase transition due to symmetry breaking. As illustrated in Fig.7, the coupling map changes from stripes (\(t_{2}=t_{2}^{\prime}=0\)) to cross-nets (all the couplings turned on). Most importantly, we have also found that the phase sum of the flat bands at zero energy is always equal to \(\pi\) when \(x\neq 1\), which is consistent with the two-parameter model, as shown in Table.1, and with the uniform-chain calculations shown in the SI. This phase is robust against any variation of the hopping parameters in the uniform chain, which, in combination with the robustness of the degeneracy of the zero-energy flat bands, is a strong indication of topological order in this model [35].

We have also computed the case where \(t_{1}=t_{2}=1\) and \(t_{1}^{\prime}=x,t_{2}^{\prime}=y\). The Zak phases for the bottom two bands as a function of \(x\) and \(y\) are shown in Fig.8(a-b). At \(|t_{1}^{\prime}t_{2}^{\prime}|=1\), there is a phase transition, as suggested by the Zak phase going from \(-\frac{\pi}{2}\) to \(\frac{\pi}{2}\). This transition is once again induced by topological order, since it occurs when tuning the hopping parameters continuously, which is more remarkable compared with the decoupling mechanism that essentially breaks symmetry.

Figure 5: (Colour online.) An illustration of the components in the eigenvectors for \(\{1,0,1,0\}\) (a) and \(\{0,1,x,y\}\) with \(x^{2}+y^{2}=1\) (b). Here we use red (blue) colour to represent non-zero (zero) hopping. The \(L\) state is in orange, while the \(R\) state is in blue.

Figure 6: (Colour online.) The Zak phases for the bottom two bands (the lowest band in blue) as a function of \(x\) for a uniform chain (\(t_{1}=t_{1}^{\prime}=1,t_{2}=t_{2}^{\prime}=x\)). We have plotted the phases down to \(-0.01\) (\(0.01\)) from the left-hand side (right-hand side). Notice that the phase could be \(\pi\) even for a uniform chain at \(x\simeq\pm(\sqrt{2}-1)\). The red points are the phases of \(0\) and \(\frac{3}{2}\pi\) for the case \(t_{1}=t_{1}^{\prime}=1,t_{2}=t_{2}^{\prime}=0\). Therefore, there is a phase transition at \(x=0\), turning from \(\frac{5}{2}\pi\) (\(\frac{1}{2}\pi\)) to \(\frac{3}{2}\pi\) for the sum of the phases. On the other hand, when \(x\) goes to infinity, the phases will approach \(\frac{1}{4}\pi\) asymptotically.
This topological phase transition is due to a second-order hopping effect between LE1 and LE2, for which the effective hopping strengths are \(|t_{1}t_{2}|\) and \(|t^{\prime}_{1}t^{\prime}_{2}|\). Here the condition for the quantum phase transition is that \(|t_{1}t_{2}|=|t^{\prime}_{1}t^{\prime}_{2}|\), which is similar to the situation in the SSH model when the cross-cell hopping is equal to the intracell hopping. This actually corresponds to the hopping of the entire exciton, as shown in the coupling map in Fig.1, which has previously been discussed extensively for indirect exchange via spin polarization [43]. Notice that we also have another transition when \(|t^{\prime}_{1}|=|t^{\prime}_{2}|\), owing to a change of the symmetry, which is fundamentally different from topological order. This phase transition is further supported by our calculations for \(t_{1}=t^{\prime}_{1}=1\) and \(t_{2}=x,t^{\prime}_{2}=y\) (Fig.8(c-d)), showing a linear relationship between \(t_{2}\) and \(t^{\prime}_{2}\) at the phase transition. In Fig.8(e-f), the band structures and corresponding normalized optical absorption spectra at the points (\(t_{1}=t_{2}=1,t^{\prime}_{1}=1,t^{\prime}_{2}=\frac{1}{2}\)), (\(t_{1}=t_{2}=1,t^{\prime}_{1}=1,t^{\prime}_{2}=1\)), and (\(t_{1}=t_{2}=1,t^{\prime}_{1}=1,t^{\prime}_{2}=2\)) are shown to demonstrate the effect of the phase transition. From the band structures, we can see the band gap opening and closing at \(k=0\) and \(k=\pm\pi\). Moreover, as the flat bands at energy zero are of great interest, we have also studied the phase sum for the scenario \(\{x,y,1,1\}\), where generally \(x\neq y\). The phase sum is then equal to \(\pi|x^{2}-y^{2}|\), which is \(\pi\) when \(|x^{2}-y^{2}|=1\). We have derived the Zak phase formalism for a general scenario \(\{1,x,y,z\}\), as shown in the SI, where in general \(x\neq y\neq z\) (all of them non-zero). Here we need to point out that the two-fold degeneracy of the flat bands is robust against any perturbation, including non-zero \(d\), which should be topological in the sense of topological order [44].

## III Conclusions

In summary, we have studied the topological properties of a one-dimensional excitonic model that takes into account dimerisation and the LE and CT excited states. When turning on more than one hopping parameter, we have found (i) that a topologically nontrivial phase of \(\pi\) can exist even for a uniform chain, in particular the robust topologically nontrivial phase and degeneracy of the zero-energy flat bands, (ii) exotic fractional phases, which are due to quantum interference and related to anyons and fractional statistics, (iii) an interesting correlation between excitonic flat bands and topology, and (iv) topological order, and in particular a topological phase transition in the four-parameter model related to exciton hopping. We have also developed the concept of effective chiral states for this system, applicable when there are more than two states in a cell.

## IV Methods

We first solve for the eigenvalues and eigenvectors of the Hamiltonian in momentum space, given below. Notice that when \(d\neq 0\), the model becomes closer to the Rice-Mele model [45].
\[\hat{H}_{k}=\begin{pmatrix}-\frac{d}{2}&0&t_{2}&t_{1}&t^{\prime}_{1}e^{-ik}&t^{\prime}_{2}e^{-ik}\\ 0&-\frac{d}{2}&t_{1}&t_{2}&t^{\prime}_{2}&t^{\prime}_{1}\\ t_{2}&t_{1}&\frac{d}{2}&0&0&0\\ t_{1}&t_{2}&0&\frac{d}{2}&0&0\\ t^{\prime}_{1}e^{ik}&t^{\prime}_{2}&0&0&\frac{d}{2}&0\\ t^{\prime}_{2}e^{ik}&t^{\prime}_{1}&0&0&0&\frac{d}{2}\end{pmatrix} \tag{1}\]

Then we compute the Zak phase by using the analytical and numerical formalisms of eq.2 and eq.3, respectively. These have been used extensively in previous work on the calculation of topological phases in one dimension [46; 47; 23; 38].

\[\gamma_{n}=i\int_{-\pi}^{\pi}dk\langle\phi_{n}(k)|\partial_{k}|\phi_{n}(k)\rangle. \tag{2}\]

Here \(\phi_{n}(k)\) is the Bloch wave function. We can still use eq.2 to compute the Zak phase analytically if there are fewer than four non-zero parameters. However, for models with four non-zero parameters, we need to use the numerical formalism of eq.3 below. Here, by an \(n\)-parameter model we mean that there are \(n\) non-zero parameters in the model.

\[\gamma_{n}=\text{Mod}(i\ln[\prod_{s=1}^{M}\langle\phi_{n}(k_{s})|\phi_{n}(k_{s+1})\rangle],2\pi), \tag{3}\]

Here \(\phi_{n}(k_{s})\) is the eigenvector of the Hamiltonian at \(k_{s}\), \(n\) labels the band, and the \(k_{s}\) run from \(-\pi\) to \(\pi\). We have tested the numerical robustness of our computational methods by using a series of different numbers of discretised points, up to \(1\times 10^{6}\). We have found that \(1\times 10^{5}\) points are sufficient for good accuracy.

Figure 7: (Colour online.) We show the coupling maps when \(t_{2}=t^{\prime}_{2}=0\) (a) and \(t_{2}=t^{\prime}_{2}\neq 0\) (b), with \(t_{1}=t^{\prime}_{1}=1\). We also show the coefficients of the eigenvector with the Zak phase of \(\pi\) for \(t_{1}=t^{\prime}_{1}=1,t_{2}=t^{\prime}_{2}\simeq\pm(\sqrt{2}-1)\). The coefficients have opposite signs in the red and blue circles, forming effective chiral states.

When producing absorption spectra, we have followed the methods detailed previously in Ref.[19]. We assume the oscillator strengths for LE and CT to be 1 and 0.1, respectively. We have used a Gaussian-type broadening of \(0.1t_{1}\). As the model only takes into account the relative energy difference between LE and CT states, we have also included a rigid energy shift of \(2t_{1}\) in the calculations of the absorption spectra.

## Data availability

All the computer code and data that support the findings of this study are available from the corresponding author upon reasonable request.
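As a numerical companion to the methods above, the following is a minimal Python/NumPy sketch of eqs. (1) and (3). This is an illustrative sketch, not the authors' code; the band indexing via the `eigh` eigenvalue ordering, the Brillouin-zone wraparound, and the default grid size are our assumptions, and degenerate manifolds require the summed phase as discussed in the text.

```python
import numpy as np

def H_k(k, t1, t2, t1p, t2p, d=0.0):
    """Momentum-space Hamiltonian of eq. (1), in the basis ordering of the text."""
    em, ep = np.exp(-1j * k), np.exp(1j * k)
    return np.array([
        [-d/2,    0,    t2,   t1,   t1p*em, t2p*em],
        [0,      -d/2,  t1,   t2,   t2p,    t1p   ],
        [t2,      t1,   d/2,  0,    0,      0     ],
        [t1,      t2,   0,    d/2,  0,      0     ],
        [t1p*ep,  t2p,  0,    0,    d/2,    0     ],
        [t2p*ep,  t1p,  0,    0,    0,      d/2   ],
    ])

def zak_phase(band, t1, t2, t1p, t2p, d=0.0, M=10_000):
    """Discrete evaluation of eq. (3) for one non-degenerate band; the text
    reports that 1e5 k-points suffice for converged phases. Degenerate
    manifolds need the summed phase over all bands in the manifold."""
    ks = np.linspace(-np.pi, np.pi, M, endpoint=False)
    vecs = []
    for k in ks:
        _, v = np.linalg.eigh(H_k(k, t1, t2, t1p, t2p, d))
        vecs.append(v[:, band])          # eigh sorts eigenvalues ascending
    prod = 1.0 + 0.0j
    for s in range(M):
        prod *= np.vdot(vecs[s], vecs[(s + 1) % M])  # <phi(k_s)|phi(k_{s+1})>
    return np.mod(np.real(1j * np.log(prod)), 2 * np.pi)
```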
2310.04406
Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
While language models (LMs) have shown potential across a range of decision-making tasks, their reliance on simple acting processes limits their broad deployment as autonomous agents. In this paper, we introduce Language Agent Tree Search (LATS) -- the first general framework that synergizes the capabilities of LMs in reasoning, acting, and planning. By leveraging the in-context learning ability of LMs, we integrate Monte Carlo Tree Search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections for proficient exploration and enhanced decision-making. A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our experimental evaluation across diverse domains, including programming, interactive question-answering (QA), web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (92.7%) for programming on HumanEval with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT-3.5. Code can be found at https://github.com/lapisrocks/LanguageAgentTreeSearch
Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang
2023-10-06T17:55:11Z
http://arxiv.org/abs/2310.04406v3
# Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models

###### Abstract

While large language models (LLMs) have demonstrated impressive performance on a range of decision-making tasks, they rely on simple acting processes and fall short of broad deployment as autonomous agents. We introduce LATS (Language Agent Tree Search), a general framework that synergizes the capabilities of LLMs in planning, acting, and reasoning. Drawing inspiration from Monte Carlo tree search, commonly used in model-based reinforcement learning, LATS employs LLMs as agents, value functions, and optimizers, repurposing their latent strengths for enhanced decision-making. What is crucial in this method is the use of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that moves beyond the limitations of existing techniques. Our experimental evaluation across diverse domains, such as programming, HotPotQA, and WebShop, illustrates the applicability of LATS for decision-making while maintaining competitive reasoning performance. In particular, LATS achieves 94.4% for programming on HumanEval with GPT-4 and an average score of 75.9 for web browsing on WebShop with GPT-3.5, demonstrating the effectiveness and generality of our method.

## 1 Introduction

General autonomous agents capable of reasoning and decision-making in a variety of environments (Wooldridge & Jennings, 1995) have been of longstanding interest in the field of artificial intelligence. While this has traditionally been studied in reinforcement learning, the recent rise of large language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; OpenAI, 2023) with strong reasoning and general adaptability offers an alternative paradigm. Not only have LLMs excelled on standard NLP tasks such as text summarization (Nallapati et al., 2016) or natural language inference (Bowman et al., 2015), but they have been adapted to an increasingly diverse set of tasks that often require advanced common-sense reasoning or quantitative skills (Cobbe et al., 2021; Saparov & He, 2022). LLMs are also capable of performing in complex environments that involve knowledge and reasoning, such as web navigation (Yao et al., 2022; Deng et al., 2023), tool-use (Schick et al., 2023), or open-ended games (Fan et al., 2022).

Reasoning and acting abilities have also been improved by prompting techniques that augment LLMs with feedback or observations from an external environment (Yao et al., 2023b; Gao et al., 2022; Shinn et al., 2023). This eliminates the need to rely entirely on the base abilities of the language model (LM), enhancing it through external tools or semantic feedback. Despite this strength, these methods are reflexive and fall short of the deliberate and thoughtful decision-making characteristic of human problem-solving (Sloman, 1996; Evans, 2010). In particular, such methods fail to consider multiple reasoning paths or to plan ahead. Recent search-guided LLM works (Xie et al., 2023; Yao et al., 2023a; Hao et al., 2023) address this issue by searching over multiple reasoning chains. While these methods enable planning, they operate in isolation and do not incorporate external feedback that can improve reasoning.

Figure 1: An overview of LATS. LATS uses an external environment and self-reflection to improve reasoning and decision-making.
To help address these issues, we propose LATS (Language Agent Tree Search), a general framework for decision-making and reasoning with language models. LATS unifies LM planning, acting, and reasoning strategies by expanding ReAct (Yao et al., 2023b) into a search over a combinatorial space of possible reasoning and acting steps. We adapt Monte Carlo tree search (MCTS) from model-based reinforcement learning (Silver et al., 2017; Anthony et al., 2017; Jiang et al., 2018) to language agents, repurposing a pretrained LLM as an agent, value function, and optimizer. Utilizing the strong natural language understanding and in-context learning ability of modern LMs, we use text as an interface between the components of the framework, allowing LATS to adapt planning to environmental conditions without additional training. To the best of our knowledge, _LATS is the first framework that combines reasoning, acting, and planning to enhance LLMs_. Notably, LATS doubles the performance of GPT-3.5 on HotPotQA (Yang et al., 2018) over ReAct (Yao et al., 2023b) and raises the average score by \(22.1\) on WebShop (Yao et al., 2022). When used with GPT-4, LATS achieves a \(94.4\) Pass@1 rate for programming on HumanEval (Chen et al., 2021), setting the state of the art. To summarize, our **contributions** are the following:

* We introduce an LM-based Monte Carlo tree search variant to deliberately construct the best trajectory from sampled actions, enabling more flexible and adaptive problem-solving compared to reflexive prompting methods. This search is guided by heuristics from the LM.
* By integrating external feedback and self-reflection, LATS enhances model sensibility and enables agents to learn from experience, surpassing reasoning-based search methods.
* Through experiments across diverse domains like programming, interactive QA, and web navigation, we demonstrate the versatility of LATS in harnessing LLMs for autonomous reasoning and decision-making.

## 2 Related Work

**LLMs for reasoning.** For LLMs, reasoning typically involves decomposing complex inputs into sequential intermediate steps towards a final answer (Cobbe et al., 2021), as demonstrated with Chain-of-Thought (CoT) prompting (Wei et al., 2022) and its variants (Kojima et al., 2022; Wang et al., 2022). However, these methods, which create chains autoregressively in a single pass, often suffer from error propagation as the number of steps increases, due to compounding errors (Guo et al., 2018; Chen et al., 2022). Various advancements aim to mitigate this issue; some approaches, such as Self-Consistency (Wang et al., 2022), employ majority voting over sampled chains, while others focus on multi-step decomposition, such as least-to-most prompting (Zhou et al., 2022), or on the use of external tools such as a scratchpad (Nye et al., 2021) or compiler (Gao et al., 2022). Recently, CoT has been improved with search algorithms (Yao et al., 2023a; Hao et al., 2023; Besta et al., 2023) that can sample trajectories more effectively.
Tree-of-thought (ToT) prompting (Yao et al., 2023a) uses DFS- or BFS-based search guided by an LM-generated heuristic, while Reasoning via Planning (RAP) (Hao et al., 2023) uses MCTS with rollouts simulated by the LM. However, they rely solely on the LM's internal knowledge and cannot adapt to useful external feedback.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Approach** & **Reasoning** & **Acting** & **Planning** & **Self-Reflection** & **External Memory** \\ \hline CoT (Wei et al., 2022) & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ ReAct (Yao et al., 2023b) & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) \\ ToT (Yao et al., 2023a) & ✓ & \(\times\) & ✓ & ✓ & ✓ \\ RAP (Hao et al., 2023) & ✓ & \(\times\) & ✓ & \(\times\) & ✓ \\ Self-Refine (Madaan et al., 2023) & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) \\ Beam Search (Xie et al., 2023) & ✓ & \(\times\) & \(\times\) & ✓ & \(\times\) \\ Reflexion (Shinn et al., 2023) & ✓ & ✓ & \(\times\) & ✓ & ✓ \\ **LATS (Ours)** & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of related work on reasoning, acting, and planning. LATS is the first work incorporating designs from all three domains, allowing use in all corresponding tasks. We refer to planning as the use of a search algorithm, self-reflection as the use of LM-generated feedback, and external memory as storing past text context for future updates of the solution.

**LLMs for acting.** The strong reasoning and common-sense abilities of LLMs have also been adapted for decision-making or acting tasks, with the LLM as a policy model in interactive environments. In the realm of robotics, LLMs have been employed as high-level controllers of control policies (Ahn et al., 2022; Huang et al., 2022; Driess et al., 2023). Similar work (Baker et al., 2022; Wang et al., 2023; Zhu et al., 2023) has also adapted LLM agents to complex multimodal games such as Minecraft (Guss et al., 2019; Fan et al., 2022). LLMs are particularly useful in text-based environments (Liu et al., 2018; Shridhar et al., 2020; Liu et al., 2023), where acting-based prompting techniques such as ReAct (Yao et al., 2023b) have seen success. Similar to CoT, ReAct is limited by its simplicity and cannot effectively adapt to environment conditions. Many extensions have been proposed to address this, including Self-Refine (Madaan et al., 2023) and Reflexion (Shinn et al., 2023; Yao et al., 2023c), which use self-reflection to enhance reasoning and decision-making, and AdaPlanner (Sun et al., 2023), which incorporates both positive and negative environmental feedback. However, these methods focus on refining an individual plan or trajectory and do not consider alternative choices at each step. In addition, recent work (Huang et al., 2023) has suggested that LLMs cannot self-correct their internal reasoning, making it critical to use external feedback. As an alternative to pure decision-making environments, the reasoning and practical abilities of LLMs have been enhanced by access to external tools, such as APIs, search engines, calculators, or other models (Schick et al., 2023; Shen et al., 2023; Suris et al., 2023). Contrary to reasoning-based approaches, these methods have not been improved with planning, limiting their effectiveness. We summarize them in Tab. 1.
**Tree-based search.** Tree-based search, where multiple branches of outcomes are explored during search, is widely used in many planning algorithms (Swiechowski et al., 2023; LaValle et al., 2001) and reinforcement learning (RL) algorithms (Hafner et al., 2019; Du et al., 2023; Wu et al., 2023) for its good exploration-exploitation trade-off. Though tree-based search requires an environment model that can expand from an arbitrary state (Vodopivec et al., 2017), which often requires extra training in RL (Hafner et al., 2023), no such problem exists for LM tasks, as we can conveniently back up to any state by setting the input to be the context and the corresponding previous output of the LM. Thus, we build on the tree-based framework and use MCTS (Swiechowski et al., 2023) to fully release the potential of LMs, while avoiding the cost of training a value function over language descriptions by leveraging the in-context learning (Brown et al., 2020) abilities of LLMs.

Figure 2: An overview of the differences between LATS and recently proposed LM search algorithms ToT (Yao et al., 2023a) and RAP (Hao et al., 2023). LATS leverages environmental feedback and self-reflection to further adapt search and improve performance.

## 3 Preliminaries

### Problem Setting and Prompting

Before describing LATS, we first define our problem and outline a few established methods that leverage large language models for reasoning or decision-making. In LM reasoning or decision-making, we are given an input \(x\) in natural language and a pretrained language model \(p_{\theta}\) parameterized by \(\theta\); our goal is to generate a final output \(y\sim p_{\theta}(x)\) that corresponds to the answer (reasoning) or completes the task (decision-making). Both \(x\) and \(y\) are language _sequences_, which are comprised of a list of _tokens_ (the basic elements of natural language, often words), denoted as \(x=(x[1],\ldots,x[n])\) and \(y=(y[1],\ldots,y[n])\). The LM decodes text autoregressively, i.e., without other inputs, the probability for an LM to generate a sequence \(x\) is given by \(p_{\theta}(x)=\prod_{i=1}^{n}p_{\theta}(x[i]|x[1\ldots i-1])\). Usually, to improve the LM, _prompts_ are provided along with the input \(x\); these are specific instructions or few-shot input-output examples. We denote the generic process where an input \(x\) is transformed into an output \(y\) by the LM as \(y\sim p_{\theta}(y|\texttt{prompt}_{IO}(x))\), where \(\texttt{prompt}_{IO}(x)\) denotes the input \(x\) wrapped with the prompt.

**Chain-of-thought (CoT) prompting** (Wei et al., 2022) was introduced to cater to scenarios where the direct mapping from \(x\) to \(y\) is intricate, such as when \(x\) is a mathematical query or challenging question. This method hinges on creating _thoughts_ \(z_{1},\ldots,z_{n}\) that act as stepping stones between \(x\) and \(y\); each thought \(z_{i}\) is a language sequence. To employ CoT prompting, thoughts are extracted sequentially as \(z_{i}\sim p_{\theta}^{CoT}(z_{i}|x,z_{1\ldots i-1})\), with the final output being \(y\sim p_{\theta}^{CoT}(y|x,z_{1\ldots n})\).

**Tree-of-thought (ToT) prompting** (Yao et al., 2023a) extends CoT prompting by exploring multiple reasoning paths over thoughts. It frames problems as a search over a tree where each node \(s=[x,z_{1\ldots i}]\) represents a partial solution state comprising the original input \(x\) and the thought sequence \(z_{1\ldots i}\). Thoughts \(z_{i}\) are generated by proposal or sampling with CoT, \(z_{i}\sim p_{\theta}^{CoT}(z_{i}|x,z_{1\ldots i-1})\).
Deliberate search algorithms like breadth-first or depth-first search are used to systematically explore the tree, guided by heuristics based on language model evaluations \(V(s)\) of each state.

**Reasoning via Planning (RAP)** (Hao et al., 2023) is similar to ToT, except that MCTS is used instead of DFS or BFS. Heuristics are designed from an LM, such as the likelihood or confidence of an action, and the LM is used as a world model to predict subsequent states during the simulation step.

**ReAct** (Yao et al., 2023b) extends language models to tasks where the mapping from \(x\) to \(y\) is enhanced by or requires interactions with an external environment, such as a game or API. This technique constructs an action space \(\hat{A}=A\cup Z\) that adds permissible actions \(a\) to the reasoning traces \(z\) from CoT. Observations \(o\) from the environment are used to improve both reasoning and acting. To solve problems with ReAct, after each observation, actions are generated from \(p_{\theta}\) sequentially as \(a_{i}\sim p_{\theta}^{ReAct}(a_{i}|x,o_{1\ldots i-1},a_{1\ldots i-1})\), with the final output being \(y\sim p_{\theta}^{ReAct}(y\mid x,o_{1\ldots n},a_{1\ldots n})\).

While the previously described prompting techniques improve LM performance on reasoning tasks, they falter on difficult tasks that involve multifaceted decision-making, due to several shortcomings: 1) _Flexibility_: Base prompting methods (CoT or ReAct) autoregressively sample from the LM, neglecting potential alternative continuations from specific states. 2) _Sensibility_: Reasoning-based methods (CoT, RAP, or ToT) rely solely on the internal representations of the LM and cannot consider external observations. This dependency risks fact hallucination and error propagation while setting a performance ceiling. 3) _Adaptability_: Current planning frameworks (RAP or ToT) use simple search algorithms such as BFS or cannot leverage environmental feedback to improve planning. Additionally, the agent is static and cannot reuse previous experience or learn from trial and error. While RAP also adopts MCTS, it is constrained to tasks where the LM can act as a world model and accurately predict states. These shortcomings limit the ability of LMs to be deployed as general problem-solving agents and form the motivation for LATS.

### Monte-Carlo Tree Search (MCTS)

Monte-Carlo Tree Search (MCTS) is a heuristic search algorithm that has proved successful in many decision-making environments such as Atari (Ye et al., 2021) and Go (Silver et al., 2016). MCTS builds a decision tree where every node in the tree is a state and every edge is an action. MCTS runs for \(k\) episodes; each episode starts from the root (i.e., the initial state) and iteratively conducts two steps to expand the tree: 1) _Expansion_, where multiple child states \(s\) are explored from the current parent state \(p\) by sampling \(n\) actions, and 2) _Selection_, where the child with the highest UCT (_Upper Confidence bounds applied to Trees_) (Kocsis and Szepesvari, 2006) value is selected for the next iteration. The UCT of a child state \(s\) is calculated as follows: \[UCT(s)=V(s)+w\sqrt{\frac{\ln N(p)}{N(s)}}, \tag{1}\] where \(N(s)\) is the number of visits to a node \(s\), \(V(s)\) is the value function (expected return) from the subtree of \(s\), \(w\) is the exploration weight, and \(p\) is the parent node of \(s\). The child node with the highest UCT value is selected for expansion in the next iteration.
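As a minimal illustration of the selection rule in Eq. 1, the following self-contained Python sketch scores children by UCT. The `Node` container and the unvisited-children-first tie-breaking are our assumptions for illustration, not details from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    value: float = 0.0   # V(s): mean return from this subtree
    visits: int = 0      # N(s): number of visits

def uct(child: Node, parent: Node, w: float = 1.0) -> float:
    """UCT score of Eq. 1: exploitation term V(s) plus exploration bonus."""
    return child.value + w * math.sqrt(math.log(parent.visits) / child.visits)

def select_child(parent: Node, children: list[Node], w: float = 1.0) -> Node:
    """Pick the child with the highest UCT; unvisited children go first."""
    for c in children:
        if c.visits == 0:
            return c
    return max(children, key=lambda c: uct(c, parent, w))
```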
When the end of an episode is reached, a _backpropagation_ is carried out: the return \(r\) is used to update every \(V(s)\) along the path with the formula \(V(s)=\frac{V_{\text{old}}(s)(N(s)-1)+r}{N(s)}\), where \(V_{\text{old}}(s)\) is the old value function. Normally, the major shortcoming of MCTS is that it requires an environment model to undo previous steps and form a search tree, which is often a strong assumption. However, such a limitation does not exist for LMs, as we can conveniently reset to any step by simply copy-pasting the historical text input. This special property is the key motivation of our work.

## 4 Unifying Planning, Reasoning, and Acting

### LM Agent

LATS supports sequential reasoning or decision-making tasks on the basis of ReAct. At time step \(t\), an agent receives an observation \(o_{t}\in O\) from the environment and takes an action \(a_{t}\in A\) following some policy \(\pi(a_{t}|x,o_{1\dots t-1},a_{1\dots t-1})\), where \(x\) consists of the task instruction and a number of few-shot examples. We initialize the agent with \(p_{\theta}\) to leverage the useful language representations of an LM as a base decision-maker. We follow the ReAct instantiation, in which the action space \(\hat{A}=A\cup Z\) consists of both the space of permissible actions \(A\) and the language space of reasoning traces \(Z\). Actions directly affect the environment and result in observations, while thoughts are used to formalize decisions by organizing information, planning future actions, or injecting internal knowledge. The exact instantiation of the action space depends on the particular environment; for decision-making tasks, actions might consist of commands on a website, while for reasoning tasks, the action space might be limited to a few external tools or APIs.

Instead of greedily decoding one trajectory or solution, we sample \(n\) actions from \(p_{\theta}\) using the current state. This is based on the intuition that for complex decision-making tasks, there is likely to be a range of potential trajectories or reasoning paths that are correct (Evans, 2010). Sampling a diverse set of candidates at each step mitigates the stochastic nature of LM text generation and enables greater exploration in both the decision-making and reasoning space. We wrap \(p_{\theta}\) within our proposed search algorithm to deliberately construct the best trajectory from sampled actions.

### LATS

The main component of LATS is a search algorithm that controls the overall problem-solving process with deliberate planning. To find the most promising trajectory and systematically balance exploration with exploitation, we adopt a variant of Monte Carlo Tree Search (MCTS) that frames decision-making as a tree search, in which each node \(s=[x,a_{1\dots i},o_{1\dots i}]\) represents a state comprising the original input \(x\), action sequence \(a_{1\dots i}\), and observation sequence \(o_{1\dots i}\). To adapt MCTS for language agents, LATS repurposes \(p_{\theta}\) as an agent, state evaluator, and feedback generator, leveraging the useful language priors of modern LMs to facilitate planning. While standard MCTS and RAP (Hao et al., 2023) rely on internal dynamics models to facilitate simulation, LATS is model-free and uses environment interaction. LATS consists of a series of operations, _selection, expansion, evaluation, simulation, backpropagation, and reflection_, performed in succession until the task is successfully completed or a computational limit is reached.
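The value update used during backpropagation, both in standard MCTS above and in the LATS operation described below, is an incremental running mean. A minimal self-contained sketch follows; the node attribute names are assumed for illustration (e.g., the `Node` container sketched earlier).

```python
def backpropagate(path, reward):
    """Update every node on the root-to-leaf path with the terminal return,
    matching N(s) <- N(s) + 1 and V(s) <- (r + N_old(s) V_old(s)) / N(s)."""
    for node in path:
        node.visits += 1
        # incremental form of the running-mean update from the text:
        # V_new = V_old + (r - V_old) / N_new = (r + N_old * V_old) / N_new
        node.value += (reward - node.value) / node.visits
```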
The full pseudocode of LATS can be found in Sec. A in the Appendix.

**Selection.** In the first operation, the algorithm identifies a segment of the current tree most suitable for subsequent expansion. Starting from the root node, denoted as the initial state \(s_{0}\), a child node is selected at each tree level until a leaf node is reached. To balance exploration and exploitation, we use the UCT algorithm as shown in Eq. 1.

**Expansion.** After selecting a node, the second operation expands the tree by sampling \(n\) actions from \(p_{\theta}\), as described in the prior section. The environment receives each action and returns corresponding feedback as an observation. This results in \(n\) new child nodes added to the tree. This tree is stored in an external long-term memory structure.

**Evaluation.** The third operation assigns a scalar value to each new child node to be used for selection and backpropagation. This value effectively quantifies the agent's progress in task completion, serving as a heuristic to steer the search algorithm towards the most promising regions of the tree. Following Yao et al. (2023a), we repurpose \(p_{\theta}\) into a value function by prompting it to reason about a given state. To obtain a scalar value, we instruct \(p_{\theta}\) to end its reasoning trace with a score indicating the correctness of the trajectory. This method offers enhanced flexibility over programmed heuristics (Campbell et al., 2002) and greater efficiency than learned heuristics (Silver et al., 2017).

**Simulation.** The fourth operation expands the currently selected node until a terminal state is reached. At each depth level, we sample and evaluate nodes with the same operations, but prioritize nodes of the highest value. Reaching a terminal state provides objective feedback on the correctness of a trajectory. If the task is completed successfully, then LATS terminates the search. If the solution is partially successful or unsuccessful, then we perform two additional operations, as described below.

**Backpropagation.** This operation updates the values of the tree based on the outcome of a trajectory. For each node \(s_{0},s_{1},\dots,s_{n}\) in the trajectory from the root (initial state \(s_{0}\)) of the search tree to the leaf (terminal state \(s_{n}\)), its value is updated to reflect the outcome of the simulation by \(N(s_{i})=N_{\text{old}}(s_{i})+1\) and \(V(s_{i})=\frac{r+N_{\text{old}}(s_{i})V_{\text{old}}(s_{i})}{N(s_{i})}\), where \(r\) is the return and \(N_{\text{old}},V_{\text{old}}\) are the old number of visits and value function. These updated values are used in the UCT formula (Eq. 1) to guide the selection of the next node for exploration.

**Reflection.** In addition to the environmental feedback, we also leverage _self-reflection_ to further refine the decision-making process (Shinn et al., 2023; Madaan et al., 2023). Upon encountering an unsuccessful terminal node, \(p_{\theta}\) is prompted with the trajectory and final reward to provide a verbal self-reflection that summarizes the errors in the reasoning or acting process and proposes superior alternatives. We store both the failed trajectories and the corresponding reflections in the memory. In subsequent iterations, these are integrated as additional context to the agent and value function, refining both through in-context learning.
These stored reflections impart a semantic gradient signal more useful than a scalar value, enabling the agent to learn from trial and error without the cost of expensive optimization processes such as reinforcement learning.

Figure 3: An overview of the six operations of LATS. A node is _selected_, _expanded_, _evaluated_, then _simulated_ until a terminal node is reached, then the resulting value is _backpropagated_. If the trajectory fails, a _reflection_ is generated and used as additional context for future trials. These operations are performed in succession until the budget is reached or the task is successful.

Conceptually, LATS has the following advantages as a general framework for reasoning and decision-making with LM agents. (1) _Generality_: LATS supports both reasoning and decision-making tasks by defining a shared space of thoughts and actions. (2) _Deliberate_: The use of MCTS and an LM value function ensures a principled search that selects options with high value while exploring promising alternatives. (3) _Adaptability_: LATS is designed around the use of external feedback through observations and self-reflection, enabling greater adaptation during problem-solving. (4) _Flexibility_: LATS can accommodate different scenarios, environments, and resource stipulations by modifying state design and tree dimensions. (5) _Modularity_: The base LM agent, reflection generator, and value function can be independently altered and adapted to individual LM properties.

## 5 Experiments

To demonstrate the general applicability of LATS, we evaluate our method on a variety of decision-making domains that require both reasoning and acting ability: programming (Chen et al., 2021; Austin et al., 2021), HotPotQA (Yang et al., 2018), and WebShop (Yao et al., 2022).

### HotPotQA

For a task that can be approached with both reasoning-based and acting-based strategies, we consider HotPotQA (Yang et al., 2018), a multi-hop question-answering benchmark that requires retrieval over two or more Wikipedia passages. For the action space, in addition to LM thoughts we follow the setup from Yao et al. (2023b), which provides the agent with API calls to search and look up information. The output of these API calls and self-generated reflections form the observation space. We use a subset of 100 questions and three few-shot examples for each method. For ToT, we use DFS as the base search algorithm and scoring with the LM as the heuristic. For all methods that involve sampling, including LATS, we sample \(k=50\) trajectories. More details and prompts can be found in Sec. D and Sec. E in the Appendix. We evaluate internal reasoning strategies by removing actions and observations from the context, corresponding to CoT (Wei et al., 2022) and its variants, CoT-SC (Wang et al., 2022), ToT (Yao et al., 2023a), and RAP (Hao et al., 2023). These methods rely solely on the agent's existing knowledge to answer the question. We also consider the acting-based methods ReAct, Reflexion, and LATS, which augment the agent with the interactive API environment and primarily evaluate its information retrieval abilities. While LATS is designed for scenarios where external feedback can enhance reasoning, we also implement a reasoning-only version with CoT as the base prompt. We also combine internal and external reasoning in LATS by first prompting with a CoT-based prompt, then switching to a ReAct-based prompt upon failure.
This is closer to how humans might approach this task, by using tools to look up additional information only when the answer is not already known.

**Results.** We observe in Tab. 2 that both internal reasoning and external retrieval strategies perform well on HotPotQA. Due to their large-scale training corpus, modern LLMs already encode factual knowledge and can often directly answer the question correctly. While CoT can slightly enhance performance on questions requiring reasoning, larger gains are observed with the search methods ToT and RAP, which can sample and explore more outputs. We observe similar results for acting-based methods. LATS surpasses ReAct, even when sampling the same number of trajectories, by expanding more nodes with principled search (see Fig. 5 in Appendix D for a qualitative sample). This is demonstrated when modifying \(n\), the number of nodes expanded during each iteration. Increasing \(n\) can consistently improve performance, although at greater computational and inference costs. LATS is also competitive with RAP on internal reasoning but performs worse than acting. Combining internal and external reasoning in LATS results in the highest performance, indicating the importance of external feedback in augmenting reasoning even in tasks the base LM can already perform.

\begin{table} \begin{tabular}{l|c} \hline \hline **Prompt Method** & **HotpotQA (EM)** \\ \hline I/O & 0.32 \\ CoT (Wei et al., 2022) & 0.34 \\ CoT-SC (Wang et al., 2022) & 0.38 \\ ToT (Yao et al., 2023a) & 0.55 \\ RAP (Hao et al., 2023) & 0.60 \\ RAP (n = 10) & 0.60 \\ LATS (CoT) & **0.60** \\ \hline \hline \end{tabular} \begin{tabular}{l|c} \hline \hline **Prompt Method** & **HotpotQA (EM)** \\ \hline ReAct (Yao et al., 2023b) & 0.32 \\ ReAct (best of k) & 0.38 \\ Reflexion (Shinn et al., 2023) & 0.51 \\ LATS & 0.61 \\ LATS (n = 3) & 0.56 \\ LATS (n = 10) & 0.64 \\ LATS (CoT + ReAct) & **0.71** \\ \hline \hline \end{tabular} \end{table}

Table 2: GPT-3.5 reasoning-based prompting (left) and acting-based prompting (right) results on HotpotQA. LATS achieves the highest exact match (EM) for acting and is competitive on reasoning. Unless otherwise specified, we sample \(n=5\) nodes during expansion and \(k=50\) trajectories.

### Programming

To demonstrate the importance of external observations for complex reasoning tasks, we evaluate the baselines and LATS on programming with HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). Both datasets measure the correctness of synthesized Python programs from natural language docstrings. We use individual solutions as the action space, and the test suite and compiler feedback as the external observation. We follow Chen et al. (2022) and use an LLM to generate a synthetic test suite of syntactically valid "assert" statements for each question. For each step, the solution is evaluated on this test suite, and the results, including successful and failed tests and compiler output, are added to the context as an observation. We use the same test suite for Reflexion. For this task, the reasoning and acting baselines share an action space, but acting methods are able to incorporate observations as additional context. For LATS, since each action corresponds to a complete solution, we skip the simulation step of LATS and directly use the percentage of passed tests as the backpropagated reward, computed as sketched below. We use \(k=8\) iterations, set the number of generated tests at \(4\), and sample \(n=5\) solutions during expansion.
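A minimal sketch of this reward, under the assumption that the LM-generated tests are plain Python `assert` statements executed against the candidate solution (a real harness would sandbox the `exec` calls):

```python
def pass_rate_reward(solution_code: str, assert_tests: list) -> float:
    """Fraction of generated 'assert' tests the candidate solution passes,
    used as the backpropagated reward for a complete-solution action."""
    if not assert_tests:
        return 0.0
    passed = 0
    for test in assert_tests:
        scope = {}
        try:
            exec(solution_code, scope)  # define the candidate function(s)
            exec(test, scope)           # e.g. "assert add(2, 3) == 5"
            passed += 1
        except Exception:
            pass                        # failed assertion or compile error
    return passed / len(assert_tests)
```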
After the search is completed, we select the solution with the highest value and evaluate it on the real test suite for the pass@1 accuracy evaluation. More details and prompts can be found in Sec. D and Sec. F in the Appendix.

**Results.** We find in Tab. 3 that both search and semantic feedback are crucial for better performance. Despite not using observations, ToT and RAP are competitive with Reflexion. LATS has the highest performance on both datasets. Since RAP uses a similar search algorithm to LATS, this reveals the importance of external feedback for difficult reasoning tasks such as programming. With GPT-4, using LATS sets the state of the art for HumanEval, showing LATS can be used with more advanced LLMs for higher performance.

\begin{table} \begin{tabular}{l c|c} \hline \hline **Prompt Method** & **Model** & **Pass@1** \\ \hline CoT (Wei et al., 2022) & GPT-3.5 & 46.9 \\ ReAct (Yao et al., 2023b) & GPT-3.5 & 56.9 \\ Reflexion (Shinn et al., 2023) & GPT-3.5 & 68.1 \\ ToT (Yao et al., 2023a) & GPT-3.5 & 54.4 \\ RAP (Hao et al., 2023) & GPT-3.5 & 63.1 \\ LATS (Ours) & GPT-3.5 & **83.8** \\ \hline I/O & GPT-4 & 80.1 \\ Reflexion & GPT-4 & 91.0 \\ LATS & GPT-4 & **94.4** \\ \hline \hline \end{tabular} \end{table}

Table 3: GPT-3.5 and GPT-4 Pass@1 accuracy on HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). Prompting with LATS achieves the highest performance. We sample 5 solutions during expansion for 8 iterations.

### WebShop

For a complex decision-making environment with practical applications, we consider WebShop (Yao et al., 2022), an online shopping environment composed of a website with 1.18M real-world products and 12k human instructions. Agents must navigate a website through a variety of commands to purchase an item matching a user specification. We use the preconstructed action space of search and click commands, with browser feedback and reflections for the observation. The performance is gauged using two metrics: an average score, reflecting the percentage of user-specified attributes met by the selected product, and a success rate, indicating the frequency with which the chosen product fulfills all given conditions. We compare against acting-based prompting methods and RL-based approaches. We evaluate on 50 instructions, expand \(n=5\) children for LATS, and set \(k=30\) for LATS, ReAct best of \(k\), and Reflexion. More details and prompts are in Appendix D and G.

**Results.** We find in Tab. 4 that GPT-3.5 with ReAct is competitive with imitation learning, and can exceed reinforcement learning techniques with stronger prompting strategies. Sampling \(k=30\) trajectories with ReAct and Reflexion results in a similar performance, suggesting that semantic feedback is not as helpful in complex environments like WebShop. Indeed, as in Shinn et al. (2023), we find that generated reflections are often generic and do not provide useful feedback, resulting in a tendency for the agent to become stuck in local minima. However, using LATS indeed results in a noticeable improvement, indicating a more effective exploration for the same number of iterations.

### Additional Observations

We also conduct additional experiments on HotPotQA to demonstrate the effect of each component of LATS. We also design versions of ToT and RAP that use a ReAct prompt and can handle external observations. We use HotPotQA as our setup incorporates both reasoning (through thoughts) and acting (through API calls); the results are shown in Tab. 5.
More ablations for token consumption on HotPotQA are in Tab. 7 in Appendix C. Note that the baselines generally perform worse than in the reasoning-only setting of HotPotQA, which indicates that the acting-based setting is more challenging and that the adaptation of search algorithms to decision-making scenarios is non-trivial.

**Self-reflection.** We use self-reflection to provide additional semantic signals for the agent. We observe a \(0.05\) performance drop when it is removed from LATS, suggesting that it is useful. This is smaller than the gain Reflexion (Shinn et al., 2023) observes over ReAct (Yao et al., 2023b) as shown in Tab. 2, suggesting overlap between the types of questions that improve with self-reflection and those that improve with search. This variant outperforms RAP-ReAct, reflecting our improvements to MCTS.

**Search Algorithm.** MCTS is a more principled search algorithm than variants like A* or DFS, and is the basis for the observed performance gains. We observe the effects of using DFS, and incorporate the LM-based heuristic used in ToT (Yao et al., 2023a) in which branches with low values are pruned. This removes the selection and backpropagation operations, and we observe a \(0.08\) drop in performance when sampling the same number of nodes, but this still outperforms ToT-ReAct.

\begin{table} \begin{tabular}{c|c c} \hline Method & Score & SR \\ \hline ReAct (Yao et al., 2023b) & 53.8 & 28.0 \\ ReAct (best of k) & 59.1 & 32.0 \\ Reflexion (Shinn et al., 2023) & 64.2 & 35.0 \\ LATS & **75.9** & **38.0** \\ \hline IL & 59.9 & 29.1 \\ IL+RL & 62.4 & 28.7 \\ Fine-tuning (Furuta et al., 2023) & 67.5 & 45.0 \\ \hline Expert & 82.1 & 59.6 \\ \hline \end{tabular} \end{table}

Table 4: Score and success rate (SR) on WebShop. The table is separated into prompting, RL-based training, and human performance. For the same number of iterations, LATS improves both score and success rate, and surpasses RL-based training. IL/IL+RL taken from Yao et al. (2022).

\begin{table} \begin{tabular}{l|c} \hline Prompt Method & HotPotQA (EM) \\ \hline ToT (ReAct) & 0.39 \\ RAP (ReAct) & 0.54 \\ LATS (No LM Heuristic) & 0.37 \\ LATS (DFS) & 0.42 \\ LATS (No Reflection) & 0.56 \\ LATS & 0.61 \\ \hline \end{tabular} \end{table}

Table 5: Ablation results on LATS and baseline variants on HotPotQA; we use ReAct as the base prompt and sample \(n=5\) children and \(k=50\) maximum trajectories. LATS requires every component and operation for optimal performance.

## 6 Conclusion

In this work, we introduce Language Agent Tree Search (LATS), the first framework to unify planning, acting, and reasoning for enhanced LLM problem solving. By deliberately constructing trajectories with search algorithms, incorporating external feedback, and enabling agents to learn from experience, LATS addresses key limitations of prior prompting techniques. Our evaluations demonstrate the ability of LATS to harness LLM capabilities for a variety of decision-making tasks while keeping its reasoning ability without additional training. The proposed synergies between search, interaction, and reflection offer a versatile approach to autonomous decision-making, highlighting the potential of LLMs as generalist agents. A full discussion of the limitations and broader impacts is in Appendix B.
2301.12525
Composer's Assistant: An Interactive Transformer for Multi-Track MIDI Infilling
We introduce Composer's Assistant, a system for interactive human-computer composition in the REAPER digital audio workstation. We consider the task of multi-track MIDI infilling when arbitrary track-measures have been deleted from a contiguous slice of measures from a MIDI file, and we train a T5-like model to accomplish this task. Composer's Assistant consists of this model together with scripts that enable interaction with the model in REAPER. We conduct objective and subjective tests of our model. We release our complete system, consisting of source code, pretrained models, and REAPER scripts. Our models were trained only on permissively-licensed MIDI files.
Martin E. Malandro
2023-01-29T19:45:10Z
http://arxiv.org/abs/2301.12525v2
# Composer's Assistant: Interactive Transformers for Multi-Track MIDI Infilling ###### Abstract. We consider the task of multi-track MIDI infilling when arbitrary (track, measure) pairs of information have been deleted from a contiguous slice of measures from a MIDI file. We train two T5-like models to solve this task, one using a basic MIDI-like event vocabulary and one using a joined word-like version of this vocabulary. We introduce a new test set, created from the Lakh MIDI dataset, consisting of 9 multi-track MIDI infilling tasks. We evaluate our models on these tasks and find that one model works better on some tasks while the other works better on others. Our results have implications for the training of neural networks in other small-vocabulary domains, such as byte sequence modeling and protein sequence modeling. We release our source code, and we demonstrate that our models are capable of enabling real-time human-computer interactive composition in the REAPER digital audio workstation.

## 1. Introduction

There has been a recent explosion in the development and application of AI models in fields such as art, music, and programming. The most popular AI tools available today give users without domain knowledge the ability to create high-level works in the domain via prompting--see, e.g., Dall-E-2 [18], Stable Diffusion [20], and AIVA [1]. Other tools have been built to enhance the workflow of users within their fields, typically also via some form of prompting--see, e.g., GitHub Copilot [3]. In this paper we apply transformers [25] to the task of multi-track MIDI infilling, with the goal of building an interactive tool to help composers flesh out, continue, and/or create variations within their compositions. We employ a track-and-measure-based infilling approach that allows composers to generate new notes for arbitrary (track, measure) subsets of their compositions, conditioned on their surrounding contexts.

### Previous Work

Previous generative models for music include Music Transformer [7], SymphonyNet [10], MuseNet [12], and DeepBach [5]. Most generative music models are called via a command line or a basic web interface, and offer no interactive ability. For instance, while Music Transformer is capable of harmonizing a given melody, it offers the user no ability to keep part of the harmonization and regenerate the other part. Similarly, SymphonyNet and MuseNet allow the user to input a prompt and receive a continuation of the prompt, but do not allow the user to regenerate individual instruments or measures within the continuation while keeping the rest of the continuation intact. When we conceived this project, DeepBach was the only exception we were aware of: it allows the user to regenerate notes in any rectangular window of a music editor. After we trained the models described in this paper, we became aware of two other recent projects that also worked on the task of multi-track MIDI infilling: First, in [2] the authors train two separate GPT-2-like [14] models on the tasks of measure infilling and track infilling. They train 4-bar and 8-bar models for each task. They restrict their training to examples with a maximum of 12 tracks for the 4-bar models and 6 tracks for the 8-bar models, and do not evaluate their models. Their web demo is limited to inputs with a time signature of 4/4.
Second, in [4] the authors target arbitrary (track, measure) subsets as we do and offer several useful user control tokens for generation, but limit their training, inputs, and tests to a maximum of 3 tracks (one melody track, one bass track, and one accompaniment track, with no drums), 16 bars, a temporal resolution of 16th notes, and time signatures of 4/4, 3/4, 2/4, and 6/8.

### Our Contributions

Most generative music models are trained on uncurated MIDI datasets downloaded from the internet, with no apparent cleaning procedure, even though these datasets have issues with file duplicates and fitness for training algorithms. Our first contribution is a data cleaning method that can be applied to MIDI datasets. This method includes a new technique for detecting whether the notes in a MIDI file correspond to the grid defined by the file (Section 2.1), a new file deduping technique (Section 2.2), and a new technique for detecting and removing shifted near-duplicate tracks (Section 2.4) within a MIDI file. Our second contribution is a new test dataset, created from 10,000 files selected from the Lakh MIDI dataset.
2308.11527
BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction
Although deep pre-trained language models have shown promising benefit in a large set of industrial scenarios, including Click-Through-Rate (CTR) prediction, how to integrate pre-trained language models that handle only textual signals into a prediction pipeline with non-textual features is challenging. Up to now two directions have been explored to integrate multi-modal inputs in fine-tuning of pre-trained language models. One consists of fusing the outcome of language models and non-textual features through an aggregation layer, resulting into ensemble framework, where the cross-information between textual and non-textual inputs are only learned in the aggregation layer. The second one consists of splitting non-textual features into fine-grained fragments and transforming the fragments to new tokens combined with textual ones, so that they can be fed directly to transformer layers in language models. However, this approach increases the complexity of the learning and inference because of the numerous additional tokens. To address these limitations, we propose in this work a novel framework BERT4CTR, with the Uni-Attention mechanism that can benefit from the interactions between non-textual and textual features while maintaining low time-costs in training and inference through a dimensionality reduction. Comprehensive experiments on both public and commercial data demonstrate that BERT4CTR can outperform significantly the state-of-the-art frameworks to handle multi-modal inputs and be applicable to CTR prediction.
Dong Wang, Kavé Salamatian, Yunqing Xia, Weiwei Deng, Qi Zhang
2023-08-17T08:25:54Z
http://arxiv.org/abs/2308.11527v1
BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction ###### Abstract. Although deep pre-trained language models have shown promising benefits in a large set of industrial scenarios, including Click-Through-Rate (CTR) prediction, how to integrate pre-trained language models that handle only textual signals into a prediction pipeline with non-textual features is challenging. Up to now two directions have been explored to integrate multi-modal inputs in the fine-tuning of pre-trained language models. One consists of fusing the outcome of language models and non-textual features through an aggregation layer, resulting in an ensemble framework, where the cross-information between textual and non-textual inputs is only learned in the aggregation layer. The second one consists of splitting non-textual features into fine-grained fragments and transforming the fragments into new tokens combined with textual ones, so that they can be fed directly to the transformer layers in language models. However, this approach increases the complexity of learning and inference because of the numerous additional tokens. To address these limitations, we propose in this work a novel framework, BERT4CTR, with a Uni-Attention mechanism that can benefit from the interactions between non-textual and textual features while maintaining low time-costs in training and inference through a dimensionality reduction. Comprehensive experiments on both public and commercial data demonstrate that BERT4CTR can significantly outperform the state-of-the-art frameworks that handle multi-modal inputs and is applicable to CTR prediction.

Non-textual features, Multi-modal inputs, Pre-trained language model, CTR prediction, Uni-Attention

CCS Concepts: Information systems → Online advertising; Recommender systems

## 1. Introduction

Machine learning frequently has to deal with multi-modal inputs that mix numerical, ordinal, categorical, and textual data. This is especially the case for Click-Through-Rate (CTR) prediction, where one tries to predict the likelihood that a recommended candidate, shown after a query entered on a search engine, will be clicked, based not only on the semantic relevance between the query and the description of the candidate, but also on the user's attributes, such as the user's ID, gender, category, _etc._, which are non-textual. In recent work, pre-trained language models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have shown promising benefits in such scenarios.

Up to now two directions have been explored to integrate multi-modal inputs in the fine-tuning of pre-trained language models. In the first approach, called here "Shallow Interaction", the language model with textual input is treated as a separate and specific network, and the outcome of this network (final output score or [CLS] pooling layer) is fused into the other network dealing with non-textual inputs through an aggregation layer. This approach has been adopted in (Beng et al., 2017)(Li et al., 2017), resulting in an ensemble learning framework, and (Wang et al., 2018) has done an in-depth analysis of this approach. In this approach, the interaction between textual and non-textual features only happens in the last aggregation layer. As a consequence, the cross-information between textual and non-textual inputs is not mined deeply enough to fine-tune the model.
In the second class of approaches, non-textual features are directly fed as inputs to the transformer layers of the language model, which enables the model to leverage the non-textual inputs at the beginning of model learning. Such an approach is at the core of VideoBERT (Wang et al., 2018), VL-BERT (Wang et al., 2019) and NumBERT (Wang et al., 2019), where non-textual signals, such as images or numbers, are split into fine-grained fragments (_e.g._, regions-of-interest in images, or digits), each of which is transformed into a new token and combined with textual tokens. However, there are typically several hundred non-textual features in the task of CTR prediction, where the overlong additional inputs complicate the computations and make the time-costs of learning and inference intractable. Given the limitations of the two approaches in the literature, we introduce a simple and light framework, named _BERT4CTR_, to handle multi-modal inputs mixing textual and non-textual features in pre-trained language models. Our approach is based on a _Uni-Attention_ mechanism that integrates the semantic extraction from textual features with the cross-information between textual and non-textual features. We apply a dimensionality reduction operation in order to decrease the time-costs of both learning and inference. Besides, a two-steps joint-training is introduced to fine-tune the model and further improve the accuracy of prediction. The proposed approach scales well with a growing number of non-textual features, which can be expected to improve industrial CTR prediction tasks with large numbers of features. Through empirical evaluation on both commercial and public data, we show that, compared with the state-of-the-art approaches combining textual and non-textual features in CTR prediction mentioned above, BERT4CTR significantly improves the accuracy of predictions while keeping low latency in training and inference. In particular, our results indicate that as the number of non-textual inputs increases, the advantages of BERT4CTR are enhanced, _e.g._, on the public data set with 57 non-textual features, BERT4CTR compared with NumBERT shows a significant gain of 0.7% in the Area Under the ROC Curve (AUC), along with a decrease in training cost of 30% and a decrease in inference cost of 29%. Meanwhile, on the commercial data set with 90 non-textual features, BERT4CTR provides an AUC gain of 0.6%, along with a decrease in training cost of 64% and in inference cost of 52%. In Section 2, we present the related work. Section 3 introduces the design of BERT4CTR. The evaluation is presented in Section 4. Finally, concluding remarks are provided.

## 2. Related Work

This section presents the related works on multi-modal input handling that combine non-textual features with pre-trained language models, and their application to CTR prediction.

### Multi-modal Inputs Handling

The issue of handling inputs that mix textual and non-textual data, and of integrating semantic insights coming from pre-trained language models like BERT (Wang et al., 2018), has already been investigated in the literature: VideoBERT (Wang et al., 2018), VL-BERT (Wang et al., 2019), NumBERT (Wang et al., 2019) and CTR-BERT (Li et al., 2017). The approach followed in these works consists of splitting the non-textual signals into fine-grained fragments, each of which is transformed into a new token and combined with textual tokens as the inputs of transformer layers.
However, the addition of tokens representing the non-textual features complicates the language model structure and can make the learning and inference phases too costly for model updating and online serving.

### Models for CTR Prediction

CTR prediction is one of the major practical applications of deep learning. Clicks made on advertisements or candidates shown along with search results, or web content presentation, are the main source of revenue for a large set of web actors. In this context, models are used to select quality advertisements or candidates to present according to the web contents, _e.g._, in sponsored search engines (Li et al., 2017) or personal recommendation systems (Wang et al., 2019), and they should achieve both low latency and high accuracy. For example, the CTR prediction model in Baidu.com uses a deep neural network, called _Phoenix Nest_, fed with a handcrafted set of features extracted from the user, the query, and advertisement properties (Bai et al., 2017). Google Ads is using the "Follow The Regularized Leader" (FTRL) model to predict CTR (Fried et al., 2017), while Google Play is using a Wide & Deep model described in (Bai et al., 2017). The work (Wang et al., 2019) introduces the "Product based Neural Network" (PNN) model to capture interactive patterns between features. This PNN model is extended in (Chen et al., 2018) to the DeepFM model that emphasizes the interactions between low- and high-order features. Microsoft Bing.com has adopted a Neural Network boosted with a GBDT ensemble model (Gidaris et al., 2017) for ads CTR prediction. This is the commercial scenario we are considering throughout this paper. The features used in these CTR prediction models can be grouped into two categories: one is the raw texts from the user, query and ad, and the other is the non-textual features, including the attributes of users and items, such as gender, age, UserId, AdId, _etc._, and the outputs generated from sub-models, such as an LR model (Gidaris et al., 2017)(Fried et al., 2017), a pre-trained language model (Gidaris et al., 2017)(Li et al., 2017), _etc._

### Application of Pre-trained Language Models in CTR Prediction

Recent work has shown the ability of pre-trained language models to extract deep relationships in a sentence pair (Chen et al., 2018)(Gidaris et al., 2017)(Gidaris et al., 2017)(Gidaris et al., 2017), which is useful for augmenting the semantic features of a query and recommendation pair in CTR prediction (Gidaris et al., 2017)(Li et al., 2017)(Li et al., 2017)(Li et al., 2017)(Li et al., 2017)(Li et al., 2017). Generally, the pre-trained language models are trained against the real click data, targeting directly the prediction of click/non-click labels. Thereafter, the score from the final layer (Li et al., 2017)(Li et al., 2017), or the embedding from an intermediate layer (Gidaris et al., 2017)(Li et al., 2017), of these fine-tuned language models is used as an additional NLP input feature in the CTR prediction model. For example, Microsoft Bing Ads uses the embedding from the hidden layer of the TwinBERT model as a semantic feature (Krizhevsky et al., 2017), while Meituan.com and JD.com use the output score of BERT (Zhu et al., 2018).
Besides that cascading framework, some works consider the fusion between the outputs of language models and non-textual features through an aggregation layer, resulting in ensemble learning frameworks (called "Shallow Interaction" in this paper), such as BST (Botto and Carmona, 2017) and CTR-BERT (Krizhevsky et al., 2017); (Zhu et al., 2018) has done an in-depth analysis of the Shallow Interaction frameworks, where the cross-information between textual and non-textual inputs is not mined deeply enough to fine-tune the language models.

## 3. Description of BERT4CTR

### Problem Statement

CTR prediction models always use multi-modal inputs, mixing \(N\) textual features, like the search query, titles and URLs of potential ads to show, _etc._, denoted as \(\mathcal{T}=\{t_{1},t_{2},...,t_{N}\}\), and \(M\) non-textual features of different types, _e.g._, dense features such as the historical CTR of the query, the last click time of the user, _etc._, and sparse features, like the ID or category of the user, _etc._, denoted as \(\mathcal{C}=\{c_{1},c_{2},...,c_{M}\}\). In the traditional usage of pre-trained language models in CTR prediction, where only textual features are used in fine-tuning on click data, the learning process can be formalized as calibrating a network whose predicted score approximates the conditional probability of the outcome alternatives, click or non-click for the CTR application, given the textual contexts of the query and candidates:

\[P_{click}=P(click=1|\mathcal{T}) \tag{1}\]

As stated above, the non-textual features are also crucial in CTR prediction and should not be ignored. When non-textual features are added, the conditional probability becomes:

\[P_{click}=P(click=1|\mathcal{T},\mathcal{C}) \tag{2}\]

The goal in this paper is to design an efficient network structure that can generate scores approximating the distribution \(P_{click}\) in Equation 2, while maintaining acceptable training and inference time-costs for industrial application.

### Model Design

We now describe the evolution of our proposed network structure, BERT4CTR, by beginning with the NumBERT framework and gradually adding new components to it.

#### 3.2.1. NumBERT Description

NumBERT (Zhu et al., 2018) is a widely-used systematic approach to integrate textual and numerical features in pre-trained language models. Pre-trained language models like BERT, along with a large class of neural networks, use attention layers, which enhance some parts of the input so that the training process can concentrate on learning them. In each attention layer, a feed-forward network and a residual network are used to control the vanishing/exploding gradient issue (Vinyals et al., 2015). NumBERT uses a similar structure with several layers of bidirectional self-attention. The core idea in NumBERT is to replace all numerical instances with their scientific notation representations, _i.e._, the number 35 is replaced by "35 [EXP] 1", where [EXP] is a new token that is added to the vocabulary. These transformed non-textual inputs are thereafter considered as normal texts and are fed to the language model. For the CTR prediction application, several transformed non-textual inputs might be concatenated using the separator token [SEP], to distinguish one numerical feature from another, generating a long string of text which is appended to the end of the \(<query,ad>\) textual input and is used for the fine-tuning of the language model on click data.
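A minimal sketch of this transformation, inferring the digit convention from the paper's single example (35 becomes "35 [EXP] 1", i.e. \(3.5\times 10^{1}\) with the decimal point dropped); the exact rounding rule and the use of [SEP] between the text and the first feature are assumptions:

```python
def to_exp_tokens(x: float) -> str:
    """Rewrite a number as NumBERT-style scientific-notation tokens,
    e.g. 35 -> "35 [EXP] 1" and 0.035 -> "35 [EXP] -2"."""
    mantissa, exp = f"{x:e}".split("e")            # e.g. "3.500000e+01"
    digits = mantissa.replace(".", "").rstrip("0") or "0"
    return f"{digits} [EXP] {int(exp)}"

def numbert_input(query_ad_text: str, features: list) -> str:
    """Concatenate the transformed features with [SEP] and append them
    to the <query, ad> text fed to the language model."""
    transformed = " [SEP] ".join(to_exp_tokens(f) for f in features)
    return f"{query_ad_text} [SEP] {transformed}"
```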
Figure 1 depicts an example of the transformation from original non-textual features to transformed inputs in NumBERT. While the NumBERT approach enables the language model to understand the numbers in the non-textual signals, the model still misses two crucial elements. First, the contextual relationships between textual features and non-textual ones in CTR prediction are often not obvious. For example, numerical features such as the historical CTR of the user, the ID of the user, _etc._, are weakly correlated with the \(<query,ad>\) texts in semantics. The second issue is related to the fact that the positions of the tokens transformed from non-textual features do not bear semantic meaning as normal text does, _i.e._, the numerical features in CTR prediction models are generally independent of each other. These two limitations indicate that sharing the same attention weights and mechanisms for both the textual features and the appended transformed non-textual ones is not optimal in fine-tuning, and simply using the NumBERT approach to integrate non-textual inputs cannot improve the performance on the learning objectives well, as will be shown later in Section 4.

Figure 1. Example of transformed and concatenated non-textual and textual inputs for NumBERT

#### 3.2.2. Uni-Attention

To address these two issues, we have improved the architecture of NumBERT. We use the same bidirectional self-attention mechanism as in NumBERT with inputs only from textual tokens. However, for the non-textual part, a new type of attention mechanism is introduced, called _Uni-Attention_. It is still a Query-Key-Value (QKV) attention function (Botto and Carmona, 2017), where the Query comes only from non-textual tokens, while the Key and Value come from textual tokens in the same layer, _i.e._, in the calculation of uni-attention on each token in the non-textual part, one input is the matrix projected from the value of that token itself, and the other input is the matrix projected from the values of all tokens in the textual part. In the uni-attention mechanism, the non-textual components have no positional embedding, which avoids the issue of positional semantics described above. Moreover, this hybrid framework allows the tokens in the textual part to dig deep for semantic relationships between each other with the aid of pre-trained attention weights, while grasping the cross-information between textual and non-textual ones in parallel. Feed-forward and residual networks are also used on each uni-attention output to control the vanishing/exploding gradient issue. We show the uni-attention design in Figure 2. In the last attention layer, all uni-attention outputs from the transformed non-textual inputs are gathered as a single hidden layer, which is concatenated with the [CLS] pooling layer from the textual part. Thereafter the concatenated layer is fed to a MultiLayer Perceptron (MLP) that finally predicts the probability of click/non-click. We will show in Section 4 that the proposed design strongly improves the final AUC for both commercial and public data, compared with simple NumBERT.

Figure 2. Framework of Uni-Attention
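The cross-attention pattern described above can be sketched in a few lines of NumPy; the projection weights and shapes are illustrative (a full layer would add multiple heads, the feed-forward network, and the residual connection):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uni_attention(H_text, H_nontext, Wq, Wk, Wv):
    """One uni-attention step: Queries come only from the M non-textual
    tokens, Keys and Values only from the T textual tokens of the same layer.
    H_text: (T, d), H_nontext: (M, d); Wq/Wk: (d, d_k), Wv: (d, d_v)."""
    Q = H_nontext @ Wq                       # (M, d_k), one query per feature
    K = H_text @ Wk                          # (T, d_k)
    V = H_text @ Wv                          # (T, d_v)
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (M, T) cross-attention scores
    return softmax(scores, axis=-1) @ V      # (M, d_v) updated non-textual states
```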
#### 3.2.3. Dimensionality Reduction

The number of non-textual features used in industry for CTR prediction models can be very large, _e.g._, Microsoft Bing Ads uses 90 numerical features, each transformed into 4 tokens (accounting for the [EXP] and [SEP] tokens). This large input size negatively impacts the learning cost and the prediction latency in CTR prediction. One way to solve this issue is to apply dimensionality reduction to the non-textual features. Such approaches have already been explored in several previous works like [3][7]. Following these works, our approach consists of representing each non-textual feature in \(\mathcal{C}\) as an \(N\)-dimensional point in space. The resulting \(N\times|\mathcal{C}|\)-dimensional space is then mapped to a \(K\)-dimensional embedding (\(K\ll N\times|\mathcal{C}|\)) through a fully connected network, which is fed, along with the embedding from textual tokens, to the calculation of uni-attentions. The mapping to the initial \(N\)-dimensional space is done differently depending on whether the non-textual features are dense, _e.g._, the length of the query, the historical value of CTR, _etc._, or sparse, _e.g._, the user's gender, the query's category, _etc._ For sparse features, we use an embedding table that lists the \(N\)-dimensional embedding corresponding to each given value. Dense features are first normalized using a max-min normalization and thereafter expanded into 101-dimensional one-hot vectors with 0.01 buckets, which are used as indexes in an embedding table to find the \(N\)-dimensional embedding. We show in Figure 3 the embedding of non-textual features used in BERT4CTR.

Figure 3. Dimensionality Reduction on embedding of non-textual features in BERT4CTR

Similarly to NumBERT, the attention alignment score in the textual part is calculated in a dot-product form, where the dimensions of Query and Key should be the same. However, in the non-textual part after dimensionality reduction, there is no longer any guarantee that the dimension of the embedding in the non-textual part is equivalent to the one in the textual part. Considering the flexibility of our model, we introduce additive attention [1] instead of dot-product attention in the non-textual part. Additive attention, also known as Bahdanau attention, uses a one-hidden-layer feed-forward network to calculate the attention alignment score, and the formula for the attention alignment score between Query and Key is as follows:

\[f_{att}(Q,K)=v_{a}^{T}\tanh(W_{a}[Q;K]) \tag{3}\]

where \([Q;K]\) is the concatenation of Query and Key, and \(v_{a}\) and \(W_{a}\) are learned attention parameters. In [29] it is shown that additive attention and dot-product attention are equivalent in computing the attention alignment score between Query and Key, while the additive one does not require Query and Key to have the same embedding dimensions. In Section 4, we will show in Table 3 and Table 4 that the dimensionality reduction operation proposed here can retain more than 90% of the best AUC achieved with uni-attention while substantially reducing the time-costs of training and inference.
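Two pieces of this subsection lend themselves to a short sketch: the bucketed embedding of a dense feature and the additive alignment score of Eq. 3. Shapes and names are illustrative, assuming NumPy arrays:

```python
import numpy as np

def embed_dense_feature(value, vmin, vmax, table):
    """Max-min normalize a dense feature, bucket it into one of 101 bins of
    width 0.01, and look up its N-dimensional row in `table` (shape (101, N))."""
    norm = (value - vmin) / (vmax - vmin + 1e-12)
    bucket = min(int(round(norm * 100)), 100)  # index of the one-hot bucket
    return table[bucket]

def additive_score(q, k, W_a, v_a):
    """Additive (Bahdanau) alignment score of Eq. 3,
    f_att(Q, K) = v_a^T tanh(W_a [Q; K]); q and k may have different dims.
    W_a: (h, dim(q) + dim(k)), v_a: (h,)."""
    return v_a @ np.tanh(W_a @ np.concatenate([q, k]))
```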
#### 3.2.4. Two-steps Joint-training

The calibration of BERT4CTR consists of joint-training with both textual and non-textual features. In [30] it is shown that a two-steps training can significantly improve the accuracy of prediction of such a joint-training framework and, inspired by this, BERT4CTR is also trained in two steps. In the first step, called the _warm-up step_, we pre-train the standard language model with only textual features using a Mask Language Model (MLM) task and then fine-tune this model on the same textual data with click labels. The non-textual part, with dimensionality reduction, is also pre-trained using a MultiLayer Perceptron (MLP) that predicts the click probability using non-textual features alone. This pre-training phase calibrates the dimensionality reduction parameters. A cross-entropy loss function is used for this learning. The second step of training, called the _joint-training step_, is initialized with the pre-trained textual model, as well as the pre-trained non-textual one, and continues training the whole network of BERT4CTR, mixing textual and non-textual inputs, with a small learning rate. We demonstrate this two-steps joint-training in Figure 4. The results in Section 4 will show that the two-steps joint-training provides significant AUC gains on both the commercial and public data sets.

Figure 4. Two-steps Joint-training in BERT4CTR

## 4. Experiments and Evaluations

In this section, we evaluate BERT4CTR on two data sets: one is from Microsoft Bing Ads, called the commercial data set, and the other is from KDD CUP 2012, called the public data set. First, we will describe the experimental settings: the data sets, the pre-trained language models, the baselines, the evaluation metrics and the environments used. We then compare four incrementally complete versions of the proposed framework, following the introductions in Section 3, _i.e._, NumBERT alone, with uni-attention added, with dimensionality reduction for the non-textual feature embedding added, and with the two-steps joint-training. This will show the improvements coming from each individual component gradually. We will also compare BERT4CTR with three current state-of-the-art frameworks that can handle multi-modal inputs in CTR prediction. These comparisons will provide evidence that BERT4CTR is an efficient framework to combine a pre-trained language model with non-textual features for CTR prediction.

### Experimental Settings

#### 4.1.1. Data Sets

We use the following two data sets for evaluation. Moreover, to evaluate the robustness of our work, the experiments on the different data sets are also based on different pre-trained language models.

**Microsoft Bing Ads Data Set**: Microsoft Bing Ads is a commercial system used by Microsoft to select the ads presented to users after a requested search. This data set consists of 190 million \(<query,ad>\) pairs with click labels, which are randomly sampled from Bing Ads logs obtained in April, 2022. The samples in the first three weeks of April are used as the training set, and the remaining as the validation set. Similarly to (Wang et al., 2018), we use the text of the query, and the ad title concatenated with the ad display URL, as two textual features. In addition, a set of 90 non-textual features is also available in this data set, which can be grouped as: (1) dense features representing continuous numerical values, such as the historical value of CTR per user, the number of historical impressions per ad, _etc._; (2) sparse features representing discrete values, such as the user's gender, the search query's category, _etc._; (3) the ad position, a special feature in CTR prediction. As in (Liu et al., 2018)(Wang et al., 2018), the displayed position of an ad is assumed to be independent of the other features, _i.e._, we consider that the displayed position and the quality of an ad affect the likelihood of a click independently.

**KDD CUP 2012 Data Set1**: We also use in our experiments a public data set coming from KDD CUP 2012 Track 2 (KDD et al., 2012)(Wang et al., 2018). This data set contains 235 million \(<query,ad>\) pairs with click labels, sampled from the logs of the Tencent search engine Soso.com. This data set contains 57 non-textual features, which can also be classified into dense and sparse features, along with the position of ads.
Footnote 1: [https://www.kaggle.com/c/kddcup2012-track2](https://www.kaggle.com/c/kddcup2012-track2)

However, in this data set there is no time information as in Bing Ads, meaning that it is not possible to split the training and validation data based on time. Thus, we have generated the training and validation sets by randomly selecting \(1/11\) of the samples as validation data and the remaining as training data.

#### 4.1.2. Pre-trained Language Model Settings

The textual part of the Bing Ads data set is initialized over the RoBERTa-Large model with 24 layers (abbreviated as RoBERTa-24) created by Facebook (Koo et al., 2017). The pre-training of the RoBERTa-24 model is done using the popular Mask Language Model (MLM) task similarly to (Bengio et al., 2018)(Wang et al., 2018). For the textual part of the KDD CUP 2012 data set, a BERT-Base model with 12 layers (abbreviated as BERT-12) (Bengio et al., 2018) is pre-trained with the MLM task and then used as the initial model for further experiments. To enable reproducibility, we present all the details of the experiments run on the KDD CUP 2012 data, including the data pre-processing steps, hyper-parameter settings and pseudo code, in the appendix.

#### 4.1.3. Baseline Setups

We compare here BERT4CTR with three state-of-the-art frameworks handling pre-trained language models and non-textual features for CTR prediction. The first baseline framework, called the _Cascading Framework_, is a traditional way to introduce pre-trained language models in CTR prediction. It consists of injecting the outcome (final score or intermediate embedding) of a language model fine-tuned on the textual inputs alone as a new input feature, along with the non-textual features, for the CTR prediction. Here we first fine-tune one language model (RoBERTa-24 or BERT-12) sufficiently with only \(<query,ad>\) textual pairs, and then feed the predicted score of this fine-tuned language model as a new feature into a downstream CTR prediction model. To show the generality of our work, we choose three different CTR prediction models: (1) Wide & Deep (Cheng et al., 2016), introduced by Google, which combines a shallow linear model with a deep neural network; (2) DeepFM (Dai et al., 2017), an improved version of Wide & Deep, which replaces the linear model with a Factorization Machine (FM); (3) NN boosted GBDT (Krizhevsky et al., 2014), used for Microsoft Bing Ads, which consists of a Neural Network (NN) boosted with a Gradient Boosting Decision Tree (GBDT) ensemble model.

The second baseline is called the _Shallow Interaction Framework_, which is also widely used in practice (Beng et al., 2015)(Wang et al., 2017). It consists of fusing the non-textual embedding layer and the last layer of the pre-trained language model, _e.g._, the [CLS] pooling layer, through an aggregation layer. We use two variants of this approach: the first one, called _Shallow Interaction-1 Layer_, connects the language model and the non-textual embedding layer directly through a MultiLayer Perceptron (MLP). The second one, called _Shallow Interaction-N Layers_, uses the same number of feed-forward network (FFN) and residual network layers stacked above the non-textual embedding layer as the ones used in the language model, followed by an MLP. The second variant provides a fairer comparison, as the depths of the networks in the textual and non-textual parts are the same as in BERT4CTR. The third baseline is the NumBERT framework (Wang et al., 2017) described in Section 3.2.1.

#### 4.1.4. Evaluation Metrics
The Area Under the ROC Curve (AUC) (Dai et al., 2017) and the Relative Information Gain (RIG) (Wang et al., 2017) are two crucial metrics to evaluate the performance of predictive models, and they are used in our evaluations. Besides the measurements on the whole validation data (called the _ALL Slice_), we also focus on the infrequent \(<query,ad>\) pairs (called the _Tail Slice_), which could lead to cold-start problems in CTR prediction. As reported in (Dai et al., 2017), a 0.1% improvement in AUC or RIG can be seen as a significant gain for industrial use. We also use t-test results with \(\alpha=0.05\) to compare the performances of different models, _i.e._, a difference between two AUCs (or RIGs) with a t-value larger than 3 can be considered significant and confident (Dai et al., 2017). Besides AUC and RIG, we also use the average, median, 90th percentile and 95th percentile of the time-costs (milliseconds per sample), both for training and inference, as two additional performance metrics. These two metrics are important for CTR prediction in practice. First, CTR prediction models have to be updated frequently to adapt to users' interest drift, _e.g._, the CTR model is refreshed weekly in Microsoft Bing Ads. Therefore, the training time should be less than the refreshing interval. Second, the time-cost in inference is directly related to the online serving latency and should be kept as low as possible. It is noteworthy that for different frameworks, the calculations of time-cost are also different. For the cascading framework, the time-cost in training/inference is the sum of the time-cost in training/inference of the language model and that of the downstream CTR prediction model. For both Shallow Interaction and BERT4CTR, which need the two-steps joint-training, the time-costs in training are calculated as the sum of the time taken in the warm-up step and in the joint-training step, while the time-cost in training of NumBERT only accounts for the time taken in pre-training and fine-tuning. The time-costs in inference of all these three non-cascading frameworks are measured as the time taken for a single prediction by the language models.
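For concreteness, a small sketch of the two accuracy metrics: AUC is taken from scikit-learn, while the RIG formula below is its common definition (one minus the cross-entropy normalized by the entropy of the empirical CTR), which is an assumption here since the paper only cites it.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def rig(y_true, y_pred, eps=1e-12):
    """Relative Information Gain: 1 - CE / H(mean CTR); higher is better,
    and 0 means the model carries no information beyond the prior click rate."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    ctr = np.clip(y.mean(), eps, 1 - eps)
    h = -(ctr * np.log(ctr) + (1 - ctr) * np.log(1 - ctr))
    return 1.0 - ce / h

# auc = roc_auc_score(y_true, y_pred)  # AUC as reported throughout Section 4
```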
#### 4.1.5. Environments

All model evaluations are implemented with TensorFlow running on NVIDIA V100 GPUs with 32GB memory. The maximum sequence lengths for \(<query,ad>\) are set to 64, and the batch sizes are set to 10 for both the RoBERTa-24 and BERT-12 models. To avoid random deviations, each experiment for time-cost is repeated twenty times to obtain the metrics. Without explicit statement, all AUCs and RIGs shown in this section are obtained at the best step during training.

### Performance of Components in BERT4CTR

In this section, we evaluate the improvement in model performance coming from each individual component of BERT4CTR described in Section 3.2.

#### 4.2.1. NumBERT's Performance

We present in Table 1 the performance of NumBERT for CTR prediction over the two data sets used in this paper. In this case two baseline models can be used for comparison. The first one, called _TextOnly_, uses the pre-trained language model fine-tuned with only the \(<query,ad>\) textual input and without any non-textual features. The second one is the Shallow Interaction-1 Layer described above. We show in the table, along with the absolute values of AUC and RIG for each model, the differences in metrics between two models with t-values, _e.g._, \(\Delta AUC_{\text{M3-M1}}\), the AUC difference between Model 3 (NumBERT) and Model 1 (TextOnly). One can observe from Table 1 that NumBERT has been able to benefit from the non-textual features. It brings, when compared with the model without non-textual features, a 2.7% AUC improvement on Bing Ads data and a 6.8% AUC improvement on KDD CUP 2012 data. However, compared with the Shallow Interaction model, NumBERT does not provide benefits on either AUC or RIG, and shows worse performance. This means that even if NumBERT allows textual and non-textual features to interact through complex bidirectional self-attention with multiple layers, it is not efficient in learning the cross-information between multi-modal signals.

#### 4.2.2. Uni-Attention's Performance

We follow up by evaluating the improvements coming from the uni-attention architecture. We show in Table 2 the performance achieved by NumBERT compared with _NumBERT + Uni-Attention_, _i.e._, transformed non-textual features (as depicted in Figure 1) fed to the uni-attention architecture, as shown in Figure 2. Table 2 shows that the uni-attention architecture can bring significant AUC and RIG gains, compared with the NumBERT model without uni-attention, over both data sets. For example, the uni-attention architecture can bring an additional 0.3% AUC gain and 0.5% RIG gain on the Tail Slice of Bing Ads data. These gains are even more obvious for KDD CUP 2012 data, where the AUC gain is 0.5% and the RIG improves by 0.6% over the Tail Slice. All these changes are statistically significant with t-values larger than 70.

#### 4.2.3. Dimensionality Reduction's Performance

Here, we evaluate the impact of the dimensionality reduction of non-textual features, shown in Figure 3, which is made mandatory because of the large number of non-textual inputs in industrial CTR prediction models. The performances of dimensionality reduction on the two data sets are shown in Table 3, where _NumBERT + Uni-Attention + Dimensionality Reduction_ is the NumBERT model with the uni-attention framework, as in Figure 2, completed with a dimensionality reduction operation in the non-textual part, as shown in Figure 3. Table 3 reports that the AUC and RIG of both alternative models are close on the two data sets. Besides, none of the performance differences is statistically significant, _i.e._, the performance equality hypothesis cannot be refuted. Besides the accuracy of prediction, the time-costs in training and inference of these two models are also evaluated in Table 4. One can observe that dimensionality reduction strongly reduces the time-cost, up to 45% of the training cost and 24% of the inference cost on KDD CUP 2012 data, with 57 non-textual features, and up to 68% in training and 43% in inference on Bing Ads data with 90 non-textual features. This means that dimensionality reduction does not entail a significant performance reduction while substantially reducing the time-costs.

#### 4.2.4. Two-steps Joint-training's Performance

The last component to be evaluated is the two-steps joint-training described in Section 3.2.4.
For this purpose, we compare three initialization approaches for the textual and non-textual parts: (1) a pre-trained but not fine-tuned language model for the textual part + random weights in the non-textual part (abbreviated as _No Fine-tuned + Randomly Initialized_ in Table 5); (2) fine-tuned weights in the textual part + random weights in the non-textual part (abbreviated as _Fine-tuned + Randomly Initialized_ in Table 5), where the weights in the textual part are initialized using the language model pre-trained and fine-tuned on our \(<query,ad>\) textual pairs, and random initial weights are used in the non-textual part; (3) the two-steps joint-training, where both the weights in the textual part and in the non-textual part are initialized with the weights trained in advance, as described in Section 3.2.4. This last setting is the one used for the BERT4CTR model introduced in this paper. Table 5 shows the AUC/RIG performance of these three settings on both data sets. From this table, one can find that the two-steps joint-training brings significant gains for both data sets. On Bing Ads data, the AUC gain is more than 0.3%, and it is more than 0.4% on KDD CUP 2012 data. All these gains are shown by the t-tests to be significant.

#### 4.2.5. Aggregated Training Loss

We show in Figure 5 the evolution of the training loss for all the alternative models. The aggregated log-loss is computed per million training samples, and the trends are reported in Figure 5 for the first training epoch over the Bing Ads data set. The figure leads to four observations. First, the training loss of the model without non-textual features (_i.e._, the TextOnly model) is higher than the ones of the other alternative models, indicating that non-textual features are important for CTR prediction. Second, the training loss for NumBERT with uni-attention is below the one of NumBERT, which provides another evidence that the uni-attention architecture improves CTR prediction. Third, the training loss curves for NumBERT with uni-attention are close with and without dimensionality reduction. This means that dimensionality reduction does not compromise the accuracy of prediction much while reducing the time-costs of training and inference. Finally, the training loss for BERT4CTR is the lowest one, showing clearly that the two-steps joint-training improves the performance of CTR prediction. These observations are consistent with the ones obtained based on the AUC and RIG metrics.
Table 1. AUC and RIG performance of NumBERT on two data sets

Table 2. AUC and RIG performance of Uni-Attention on two data sets
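For concreteness, the two-steps initialization compared in Table 5 can be sketched in PyTorch as below; the tiny towers, their shapes, and all names are illustrative stand-ins for the textual and non-textual parts, not the production BERT4CTR code.

```python
# Sketch of the two-steps joint-training initialization (Section 3.2.4).
import torch
import torch.nn as nn

def make_text_tower():
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=2)

def make_num_tower():
    return nn.Sequential(nn.Linear(90, 64), nn.ReLU())

class CTRModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_tower = make_text_tower()  # stands in for the language model
        self.num_tower = make_num_tower()    # stands in for the non-textual part
        self.head = nn.Linear(128, 1)

    def forward(self, tokens, feats):
        t = self.text_tower(tokens).mean(dim=1)  # pooled textual representation
        n = self.num_tower(feats)                # transformed non-textual features
        return torch.sigmoid(self.head(torch.cat([t, n], dim=-1)))

# Step 1: both parts are trained in advance (LM fine-tuned on <query, ads>
# pairs; non-textual tower trained alone on the click labels).
finetuned_lm, pretrained_num = make_text_tower(), make_num_tower()

# Step 2: initialize the joint model from those weights, then train jointly.
model = CTRModel()
model.text_tower.load_state_dict(finetuned_lm.state_dict())
model.num_tower.load_state_dict(pretrained_num.state_dict())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```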
Figure 5. Curves of aggregated training loss on Bing Ads data

Table 3. AUC and RIG performance of Dimensionality Reduction on two data sets

### Comparison of BERT4CTR with Other Multi-modal Frameworks

In this part, we compare the performance of BERT4CTR with the three alternative frameworks that can handle multi-modal inputs for CTR prediction: the cascading framework, the Shallow Interaction framework, and NumBERT. Table 6 shows the AUC and RIG performance of all alternatives. Three major observations can be extracted from this table. First, cross-information learning between textual and non-textual features during the fine-tuning phase can improve the accuracy of prediction significantly, _e.g._, BERT4CTR brings more than 0.5% AUC gain on both Bing Ads data and KDD CUP 2012 data, compared with all cascading methods (Wide & Deep, DeepFM and NN+GBDT). Second, although increasing the depth of the network in the non-textual part improves the accuracy of CTR prediction, deep uni-attentions between textual and non-textual features still bring considerable improvement for CTR prediction, _e.g._, BERT4CTR brings a 0.4% AUC gain on both Bing Ads data and KDD CUP 2012 data, compared with the Shallow Interaction-N Layers model, showing the pure benefit brought by the uni-attention architecture. Finally, among all seven alternative models in Table 6, BERT4CTR shows the highest AUC on both data sets, which gives evidence that the design presented in Section 3 is an effective way to learn the cross-information between multi-modal inputs for CTR prediction.

The time-costs of training and inference for the alternative models are shown in Table 7. First, we observe that BERT4CTR does not bring significant increases in training time compared with Shallow Interaction. In detail, the training time of BERT4CTR only increases by 7% compared with Shallow Interaction-N Layers and by 14% compared with Shallow Interaction-1 Layer, both of which have been widely used in industry (Wang et al., 2019; Li et al., 2020). For example, in Microsoft Bing Ads, the Shallow Interaction-1 Layer framework is used to refresh a RoBERTa-24 model, and the training takes 5 days in one cycle. According to Table 7, BERT4CTR would take 5.8 days in the same setting, which is still less than the weekly re-calibration deadline. The inference delay of BERT4CTR is close to those of the cascading and Shallow Interaction frameworks, and much less than that of NumBERT, _e.g._, BERT4CTR can reduce inference delay by 52% (resp., 29%) on Bing Ads data (resp., KDD CUP 2012 data), compared with NumBERT.
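To make the uni-attention comparison above concrete, here is a minimal sketch of one plausible uni-attention block under our reading of Figure 2: the transformed non-textual token attends to the textual hidden states, while the textual tokens are not updated from the non-textual side. The layer, its dimensions, and all names are illustrative assumptions rather than the authors' exact implementation.

```python
# One plausible uni-attention block: non-textual query, textual keys/values.
import torch
import torch.nn as nn

class UniAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, non_text, text_states):
        # non_text: (B, 1, dim) summary of the numerical features
        # text_states: (B, L, dim) token states from the language model
        out, _ = self.attn(query=non_text, key=text_states, value=text_states)
        return self.norm(non_text + out)  # residual update of the non-textual token

block = UniAttention()
fused = block(torch.randn(2, 1, 64), torch.randn(2, 12, 64))
print(fused.shape)  # torch.Size([2, 1, 64])
```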
The results from Table 6 and Table 7 give strong evidence that BERT4CTR can achieve both high accuracy and low training and inference delay for CTR prediction.

## 5. Discussion

Although in this paper we used CTR prediction with numerical features as our main application scenario, the proposed BERT4CTR framework is applicable to other scenarios mixing textual and non-textual features. For example, one can extract a representative embedding from images through VGGNet (Wang et al., 2019), ResNet (He et al., 2019), _etc._, to replace the token \(E_{E}\) in Figure 4 and compute the uni-attentions. Besides, the knowledge distillation technique (He et al., 2019) can be applied to BERT4CTR, where a light model handling textual and non-textual inputs, with uni-attention and dimensionality reduction, can be learned under the supervision of the predicted scores from a well-trained BERT4CTR model with deep layers.

Table 4. Time-cost performance (ms/sample) of Dimensionality Reduction on two data sets

Table 5. AUC and RIG performance of Two-steps Joint-training on two data sets

## 6. Conclusion

In this paper, we focused on the design of an efficient framework to combine a pre-trained language model with non-textual features for CTR prediction. We started from NumBERT, the traditional way of using a pre-trained language model to integrate textual and numerical features, and introduced three improvements (uni-attention, dimensionality reduction, and two-steps joint-training) to compose a novel framework, BERT4CTR. Comprehensive experiments on both commercial and public data showed that BERT4CTR can achieve significant gains in prediction accuracy while keeping the time-costs of training and inference low, and therefore provides a promising solution for CTR prediction in the real world.
2308.09368
A tailored Handwritten-Text-Recognition System for Medieval Latin
The Bavarian Academy of Sciences and Humanities aims to digitize its Medieval Latin Dictionary. This dictionary entails record cards referring to lemmas in medieval Latin, a low-resource language. A crucial step of the digitization process is the Handwritten Text Recognition (HTR) of the handwritten lemmas found on these record cards. In our work, we introduce an end-to-end pipeline, tailored to the medieval Latin dictionary, for locating, extracting, and transcribing the lemmas. We employ two state-of-the-art (SOTA) image segmentation models to prepare the initial data set for the HTR task. Furthermore, we experiment with different transformer-based models and conduct a set of experiments to explore the capabilities of different combinations of vision encoders with a GPT-2 decoder. Additionally, we also apply extensive data augmentation resulting in a highly competitive model. The best-performing setup achieved a Character Error Rate (CER) of 0.015, which is even superior to the commercial Google Cloud Vision model, and shows more stable performance.
Philipp Koch, Gilary Vera Nuñez, Esteban Garces Arias, Christian Heumann, Matthias Schöffel, Alexander Häberlin, Matthias Aßenmacher
2023-08-18T08:02:52Z
http://arxiv.org/abs/2308.09368v1
# A tailored Handwritten-Text-Recognition System for Medieval Latin

###### Abstract

The Bavarian Academy of Sciences and Humanities aims to digitize its Medieval Latin Dictionary. This dictionary entails record cards referring to lemmas in medieval Latin, a low-resource language. A crucial step of the digitization process is the Handwritten Text Recognition (HTR) of the handwritten lemmas found on these record cards. In our work, we introduce an end-to-end pipeline, tailored to the medieval Latin dictionary, for locating, extracting, and transcribing the lemmas. We employ two state-of-the-art (SOTA) image segmentation models to prepare the initial data set for the HTR task. Furthermore, we experiment with different transformer-based models and conduct a set of experiments to explore the capabilities of different combinations of vision encoders with a GPT-2 decoder. Additionally, we also apply extensive data augmentation resulting in a highly competitive model. The best-performing setup achieved a Character Error Rate (CER) of 0.015, which is even superior to the commercial Google Cloud Vision model, and shows more stable performance.

+ Footnote †: This paper has been accepted at the First Workshop on Ancient Language Processing, co-located with RANLP 2023.

## 1 Introduction

The Medieval Latin Dictionary (MLW)1, located at the Bavarian Academy of Sciences, deals with Latin texts that were created between 500 and 1280 in the German-speaking region. The foundations for this project were laid from 1948 onwards, and the dictionary has been published continuously in individual partial editions since 1959. Currently, the letter \(S\) is being worked on in particular. The basis of the dictionary consists of 50 selected texts that have been fully transcribed onto DIN-A6 sheets (record cards), constituting about 40% of the note material. Later, another 2,500 texts were excerpted and transcribed manually onto DIN-A6 record cards, using a typewriter (cf. Fig. 1). In addition, there are so-called "index cards", a type of record card that helps to uncover often hundreds of additional references. In total, it is estimated that 1.3 million reference points have been recorded for the MLW. These record cards were sorted alphabetically by the first letter of the keyword (lemma) and serve as the foundation for creating the dictionary. By 2025, at least half of the note material is planned to be scanned and recorded in a database.

Footnote 1: In German: _Mittellateinisches Wörterbuch_ (MLW)

To digitize the material, the lemmas - always found in the upper left corner of the record cards, either hand- or machine-written - must be extracted from the cards and recognized using an Optical Character Recognition (OCR) or HTR procedure. Around 200,000 record cards have been scanned (cf. Fig. 1) and annotated with their respective lemma.

Figure 1: Record card from the MLW data set.

The accurate extraction and transcription of the lemma present a challenge, which is further compounded by the limited resources available for medieval Latin. To address this, we develop an end-to-end pipeline that begins by extracting the lemma from the record cards and subsequently utilizes an elaborated HTR system to recognize the text.

### Contributions

1. We present a novel end-to-end HTR pipeline specifically designed for detecting and transcribing handwritten medieval Latin text.
Notably, it surpasses commercial applications currently considered SOTA for related tasks. 2. We successfully train a detection model without relying on human-annotated bounding boxes for the lemmas. 3. We conduct extensive experiments to compare various vision encoders and evaluate the effectiveness of data augmentation techniques. 4. We make our codebase, models, and data sets publicly available.

## 2 Related Work

We provide an overview of the field of HTR, which is the main challenge of this work. We also deal with an instance of object detection to prepare the training data. However, since this problem is only an intermediate step and not the aim of this work, we do not cover it extensively. We refer to the survey of Zaidi et al. (2021) for a detailed overview. The recognition of handwritten text differs from OCR insofar as it needs to deal with less standardized data. Previous approaches have focused on applying deep learning to tackle these tasks. Here, the objective of Connectionist Temporal Classification (CTC; Graves et al., 2006) comes into play. CTC is a technique in which a neural network - initially a Recurrent Neural Network (RNN), but other networks might also be used (Chaudhary and Bali, 2022) - is trained to predict a matrix of conditional transition probabilities. The input image, represented as a vector representation through a Convolutional Neural Network (CNN), is fed to the network, and for each input (i.e., the activation maps of the CNN) the network predicts a character. After obtaining the probabilities, a matrix of conditional transition probabilities can be constructed. A special blank character is introduced to avoid false repetitions, and the final sequence can be obtained and compared with the ground truth. Since many sequences can be obtained from the matrix, the network is trained to maximize the correct conditional transition probabilities. At inference time, the model cannot compute the path of all likely sequences but instead needs to predict the class just in time. For this purpose, search algorithms like beam search or prefix search are used. CTC, combined with CNNs and RNNs, has often yielded competitive results, as shown by Puigcerver (2017) and Bluche and Messina (2017). Furthermore, approaches applying only CNNs and CTC also exist (Chaudhary and Bali, 2021, 2022). The model Easter2.0 achieved competitive results on the IAM data set (Marti and Bunke, 2002), a data set consisting of English handwritten text that is widely used for HTR. A recent work that achieved SOTA results on the IAM data set is the TrOCR model (Li et al., 2022), based on the transformer (Vaswani et al., 2017). The model consists of a vision encoder and a text decoder, deviating from previous approaches in which CNNs and RNNs had primarily been used. The input is processed through the encoder and represented in vector space. A language model for decoding subsequently produces the text to be predicted. However, with the emergence of the transformer in the vision domain (Dosovitskiy et al., 2021; Bao et al., 2022), end-to-end modeling has become possible. In the work of Barrère et al. (2022), another transformer-based model is applied to HTR. The main difference to TrOCR is a different embedding technique for visual features based on a CNN. Furthermore, the model also applies CTC during training. The results have also been shown to be competitive on the IAM data set. Diaz et al. (2021) compared the performance of different encoder-decoder models on HTR.
In their study, they combined different models in the encoder and decoder parts, e.g., a transformer encoder used before a CTC-based decoder. Furthermore, they found that a transformer encoder and a CTC-trained decoder enriched with a language model achieved SOTA results on the IAM data set. The TrOCR framework has been successfully applied to historical data akin to our task. In the work of Strobel et al. (2022), a TrOCR instance was fine-tuned on handwritten Latin from the 16th century (Stotz et al., 2021; referred to as _Gwalther_), achieving competitive results.

## 3 Data

Our data set comprises 114,653 images (18.9 GB), corresponding to 3,507 distinct lemmas. All images are in RGB, but not uniform in size, i.e., height and width differ from image to image. Additionally, the information on the corresponding lemma (i.e., the ground truth) is available for each image, as well as the dictionary's vocabulary.

Image data: Figure 1 shows one (arbitrarily chosen) sample from the data set. Most record cards follow the same structure, being composed of three main parts, highlighted via green boxes. The first one (1), and the one we deem most challenging, is the lemma, which is always located in the upper left corner of the record card. The second part (2) is the index of the text where the lemma is found. The third part (3) contains a text extract in which the word (corresponding to the lemma) occurs in context.

Lemma Annotation: Our analysis is based on lemma annotations on an image level, i.e., which lemma is on the corresponding record card. There is a total of 17 different first letters, eight of which appear in both upper- and lowercase, as well as one special character. The capitalization of a word plays a crucial role since a word's meaning can change depending on capitalization. Since the majority of our data stems from the \(S\)-series of the dictionary, most lemmas start with the letter "s". Likewise, we found a large number of lemmas starting with the letters "m", "v", "t", "u", "l", and "n" (cf. Fig. 2). We also analyzed the number of record cards available per lemma. In this analysis, we found that some lemmas are under-represented in the data set, while a few constitute a large chunk of the data. A total of 2,420 lemmas (69%) were found to have ten record cards or fewer; 854 lemmas (24.4%), between 10 and 100 record cards; and just 233 lemmas (6.6%), more than 100 record cards. It is worth mentioning that 1,123 lemmas (approx. 36.7%) had only one record card. Finally, we analyze the length of the lemmas (cf. Fig. 3). We observe lemmas from a length of one character up to a maximum of 19 characters. The average length of the lemmas lies between five and six characters. The presence of such long lemmas motivated the decision to additionally use a weighted metric for model evaluation, as will be explained in Section 5.3.

Figure 2: Distribution of the first letters of the lemmas.

Figure 3: Length distribution of the lemmas.

Figure 4: Visualization of the designed pipeline, encompassing three building blocks: (1) the visual detection of the lemma, followed by (2) the encoding in the latent space, and (3) the decoding into plain text.

## 4 Lemma Extraction Pipeline

In this section, we delve into the details of the custom-designed pipeline for the extraction of the lemma from the record cards.

### Visual Detection

Due to the data structure, we are confronted with the problem of finding suitable bounding boxes
to extract the lemmas from the upper left of the record cards. When using the entire record cards for the recognition task, the majority of the image is noise, making model training significantly more difficult. Since the lemmas are not annotated with their exact locations, training a custom object detection model for extraction is not directly feasible. In order to still retrieve the locations of the bounding boxes for some lemmas, we transform the problem into an instance of visual grounding by providing a model with an image and the description of an object in the image, upon which it is expected to return the object's location. We use the One For All (OFA) transformer (Wang et al., 2022), fine-tuned on RefCOCO (Kazemzadeh et al., 2014). To ensure the quality of the extracted lemma, we experiment with multiple prompts and examine their results (cf. Appendix A). After obtaining a training data set of 20,000 instances, each of them annotated with bounding boxes, we train a YOLOv8 model (Jocher et al., 2023) based on the You Only Look Once (YOLO) architecture (Redmon et al., 2016). The predictions from our YOLO model are then subject to two post-processing steps (described in the following) to ensure the quality of the images.

Multiple Bounding Boxes: For 17,674 images (15.42% of the data), the model predicted more than just one bounding box. We visually examined these cases and found that other handwritten text was often recognized as a lemma, sometimes scattered throughout the record cards (e.g., upper or lower right). The distribution of the bounding boxes throughout the record cards is displayed in Figure 12 (Appendix B).

Missing Bounding Box: We visually examined the 202 cases where no bounding box was detected, some stemming from machine writing (instead of cursive handwriting) or scanning errors. For some images that follow the standard layout of the record cards, the model also failed. We disregard this set, which constitutes less than 0.2% of the entire data set.

Determining the Bounding Box: Taking all aspects into account, we introduce two rules to determine the appropriate bounding box: (1) choose the largest bounding box, and (2) the bounding box has to be in the upper left quarter of the entire image. The result after applying these rules is displayed in Figure 13 (Appendix B). The final data set consists of 114,451 samples, exhibiting a difference of 202 samples from the initial 114,653 image-label pairs. We make our data available on HuggingFace.2

Footnote 2: [https://huggingface.co/misoda](https://huggingface.co/misoda)

### HTR Model

We use a transformer as the main model, akin to TrOCR. For the encoder, we consider three different architectures, while we use GPT-2 (Radford et al., 2019) as the decoder model for all setups. All models are trained from scratch, although we use pre-trained image processors for the encoder models and train a tokenizer for our custom alphabet.

Tokenizer: We use a customized byte-level BPE tokenizer (Sennrich et al., 2016) for the dictionary's vocabulary. The tokenizer is trained on the labels from our data set.

Vision Encoders: We consider three different encoder architectures, namely the Vision Transformer (ViT; Dosovitskiy et al., 2021), the Bidirectional Encoder representation for Image Transformers (BEiT; Bao et al., 2022), and the Shifted Window Transformer (Swin; Liu et al., 2021). ViT is a transformer-encoder-based model employing 16 x 16 image patching for transforming images into sequences.
Additionally, a class patch is concatenated to the sequence of patches, which is used for classification tasks and entails general information about the sequence, similar to the CLS token in BERT (Devlin et al., 2019). For training, a feed-forward neural network is stacked on top of the encoder, serving as an adapter between the encoder and the targets during pre-training. Akin to ViT, BEiT builds on image patching and an image vocabulary. For pre-training, masked image modeling is introduced, inspired by masked language modeling from natural language processing (Devlin et al., 2019). Further, the encoder exhibits a visual vocabulary, and a Variational Autoencoder (Kingma and Welling, 2022) is trained in advance to encode an image to a lower dimension. The decoder reproduces the image from the latent codes, which can be used as a visual vocabulary, and based on this vocabulary, an image can be represented through a sequence of visual tokens. For our purpose, we train the model from scratch and only re-use the pre-trained image processor. A problem in the vision domain is the high dimensionality and the often spatially related information in the data. Splitting the image into large patches might break the often fine-grained relations of entities in the images. To overcome this issue, self-attention can be applied at a finer granularity; however, this results in higher computational cost. In Swin, this issue is tackled using a new encoder block structure, differing substantially from the other transformers: the self-attention mechanism is applied differently to account for different aspects of the image. In the lower layers, the image is divided into small patches, and the self-attention mechanism is applied to the small patches within windows. These windows are shifted in upper layers to connect the different patches. Furthermore, the windows are enlarged in upper layers, producing a hierarchical representation. Our model uses a newly initialized Swin transformer alongside a pre-trained image processor.

Text Decoder: GPT-2 (Radford et al., 2019) is a decoder-only transformer that has shown competitive capabilities in text generation. The decoder is trained to predict the next token based on the previous sequence while relying on encoded information from the encoder. Since the problem of predicting the next token is a classification task, the Cross-Entropy Loss is used. GPT-2 has been shown to be able to capture the underlying patterns and structures of natural language, making it capable of generating coherent and contextually appropriate text. Due to the overall strong performance of GPT-2, we chose to use it as the decoder for our model. We train it from scratch, i.e., we do not use the pre-trained weights, since we deal with a specific task in a low-resource language setting.

Implementation Details: We use the HuggingFace transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2019) to train the HTR pipeline. Our codebase, containing all scripts (experiments and training), is available via GitHub3, and the final model is on pypi.4 All the experiments were conducted using a Tesla V100 GPU (16 GB).

Footnote 3: [https://github.com/slds-lmu/mlw-htr](https://github.com/slds-lmu/mlw-htr)

Footnote 4: [https://pypi.org/project/mlw-lectiomat/](https://pypi.org/project/mlw-lectiomat/)
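As a concrete illustration of this setup, the snippet below builds a from-scratch Swin encoder + GPT-2 decoder with the HuggingFace transformers library; the configuration sizes are deliberately small and illustrative, not the hyperparameters used in our experiments.

```python
# Minimal sketch: a randomly initialized Swin + GPT-2 encoder-decoder model.
from transformers import (GPT2Config, SwinConfig,
                          VisionEncoderDecoderConfig, VisionEncoderDecoderModel)

enc = SwinConfig(image_size=224)
dec = GPT2Config(vocab_size=512, n_layer=4, n_head=4, n_embd=256,
                 is_decoder=True, add_cross_attention=True)
cfg = VisionEncoderDecoderConfig.from_encoder_decoder_configs(enc, dec)
model = VisionEncoderDecoderModel(config=cfg)  # weights initialized from scratch

# The special-token ids below depend on the trained BPE tokenizer and are
# placeholder values here.
model.config.decoder_start_token_id = 0
model.config.pad_token_id = 1
print(sum(p.numel() for p in model.parameters()), "parameters")
```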
## 5 Experiments

### Standard Training

After shuffling the data, we randomly split it into a train (85% - 97,283 samples) and a test (15% - 17,168 samples) set. In the train split, 94.53% (3,315) of the lemmas are present. For all training procedures, we use the AdamW optimizer (Loshchilov and Hutter, 2019) and do not engage in hyperparameter tuning. Further details are reported in Appendix C. For standard training, the model is trained using a data set that includes the cut images from the record cards as input and their respective lemmas as the labels to be predicted. We train each of the models for a total of 5 epochs.

### Data Augmentation

Augmentation is a common technique used in deep learning to diversify the training data by applying different modifications without changing the underlying semantics of the data. The goal of augmentation is to provide the model with a diverse set of examples, helping it generalize better and improve performance. Yang et al. (2022) show that augmentation can notably improve the results of deep learning models. We apply on-the-fly augmentation to our data due to the large data set size. To provide maximal modification and increase the diversity of the training data, different augmentation techniques are applied at random on-the-fly. These techniques include random rotation, blurring, or modifications related to color perception. Since the augmentation is applied on-the-fly, it is necessary to increase the number of epochs so that the model has enough opportunities to observe both the original, unmodified data and the augmented variations. We increased the number of epochs to 20 (compared to 5 for the standard training). We use three different augmentation pipelines, one of which is randomly chosen with \(p=\frac{1}{3}\). In the following, we illustrate each of them using the example lemmas shown in Figure 5.

Pipeline A: The first pipeline applies modifications related to image quality, in particular blurring (cf. Fig. 6).

Pipeline B: The second pipeline applies color-related modifications, altering saturation, sharpness, and hue. The specific alterations for each instance are again determined randomly, also including the possibility of no modifications at all (cf. Fig. 7).

Pipeline C: The third pipeline combines the modifications from the previous two (cf. Fig. 8). In addition to the described techniques, all augmentation pipelines include random masking, where rectangles of the images are blackened, and random rotation within a range of -10 to 10 degrees.

Decoder Pre-Training: We experiment with pre-training the decoder in order to incorporate prior knowledge about the vocabulary we want to predict in the medieval Latin language. After pre-training the decoder on a corpus of the concatenated lemmas, we combine it with the encoder and continue training as described in Section 5.1. While pre-training is performed for a total of 10 epochs, the training of the entire transformer is conducted for 20 epochs. In this approach, the same augmentation techniques as outlined before are applied to the training data.

### Performance metrics

We assess the model performance using the CER, which is computed by summing up edit operations and dividing by the length of the lemma:

\[CER=\frac{S+D+I}{N}=\frac{S+D+I}{S+D+C}, \tag{1}\]

where \(S\) is the number of substitutions, \(D\) is the number of deletions, \(I\) is the number of insertions, \(C\) is the number of correct characters, and \(N\) is the number of characters in the label. To account for the varying length of the lemmas, we further utilize the weighted CER:

\[WeightCER=\frac{\sum_{i=1}^{n}l_{i}\cdot CER_{i}}{\sum_{i=1}^{n}l_{i}}, \tag{2}\]

where \(l_{i}\) is the number of characters of label \(i\), and \(CER_{i}\) is the CER for example \(i\), \(i=1,\dots,n\).
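Eq. (1) and Eq. (2) translate directly into the short sketch below; the edit distance is the standard dynamic program, and the function names are ours.

```python
# Character Error Rate (Eq. 1) and length-weighted CER (Eq. 2).
def edit_distance(a: str, b: str) -> int:
    # Single-row dynamic program over substitutions, deletions, insertions.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def cer(pred: str, label: str) -> float:
    return edit_distance(pred, label) / max(len(label), 1)

def weighted_cer(preds, labels):
    total = sum(len(l) for l in labels)
    return sum(len(l) * cer(p, l) for p, l in zip(preds, labels)) / total

print(cer("sanktus", "sanctus"))                       # 1/7
print(weighted_cer(["sanktus", "mater"], ["sanctus", "mater"]))
```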
### Experimental Results

The main results of our work are reported in Table 1. The BEiT+GPT-2 architecture achieved the best results under the standard training regime, exhibiting a CER of 0.258, followed by Swin+GPT-2 (0.349) and ViT+GPT-2 (0.418). Applying the augmentation pipelines, as described in Section 5.2, notably improves model performance compared with the standard training for all three models. The best model with augmentation is Swin+GPT-2, achieving a CER of 0.017. As for the other two models, the CER is 0.073 for ViT+GPT-2 and 0.110 for BEiT+GPT-2. Pre-training of the decoder does, on average, not lead to further improvement. ViT+GPT-2 is the exception, for which the CER drops to 0.049. We observe no improvements for the other models (BEiT+GPT-2: 0.114, Swin+GPT-2: 0.018). To summarize, the best results are achieved when using a Swin+GPT-2 model with data augmentation, reaching a CER value of 0.017.

| | ViT | Swin | BEiT |
| --- | --- | --- | --- |
| Standard | 0.418 | 0.349 | 0.258 |
| + Data Augmentation | 0.073 | **0.017** | 0.110 |
| + Decoder Pre-Training | 0.049 | 0.018 | 0.114 |

Table 1: CER results for different encoder configurations.

Figure 6: Exemplary samples from pipeline A.

Figure 7: Exemplary samples from pipeline B.

Figure 8: Exemplary samples from pipeline C.

### Ablation Study

To investigate the impact of the data augmentation, we perform three ablations, removing individual steps from the augmentation pipelines. Our ablations include applying modifications to the image regarding sharpness, brightness, color, and blurring (cf. Sec. 5.2). We also apply random rotation and random erasing of some image parts, resulting in black rectangles (masking). To investigate the individual effect of each augmentation technique, we train the model without a specific augmentation method and report the resulting CER. The results of the ablation study can be seen in Table 2. Excluding the masking step from the pipeline leads to an actual improvement of model performance, such that the CER improves to 0.015. However, excluding random rotations of the images leads to an increase in CER to 0.021, while augmentation without applying the color-related augmentations results in a CER of 0.017, equal to the initial model trained with all augmentation techniques. Please note that only the specific technique was left out at a time, while the other modifications were still in use. From the results of these ablations, we can conclude that rotation is a major contributor to prediction quality; random masking decreases the performance, while the color augmentations do not seem to have an impact on the performance.

### Google Cloud Vision Comparison

To compare the results of our model, we decided to use a highly competitive model for HTR, the Google Cloud Vision (GCV) model. It is capable of recognizing handwritten text and has proven performant in practical applications (Thammarak et al., 2022). As already mentioned, some of the cut record cards which contain the lemmas in our data set contain extra characters and/or suffixes that are not part of the true lemma. We observe that GCV often predicts these extra characters as well. Considering this issue, we decided to post-process the predictions by GCV for a fair comparison. This post-processing consisted of deleting extra characters and words after the first word or after a '-' or a '('. Nevertheless, it was not possible to remove all artifacts from the predictions. In some cases, separating them was impossible since the characters were predicted altogether.
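The cleanup rule just described can be sketched in a few lines; the exact rules in our cleanup script may differ slightly in edge cases.

```python
# Post-processing of GCV predictions: keep the first word and truncate
# at '-' or '(' (a sketch of the rules described above).
import re

def clean_gcv(pred: str) -> str:
    pred = pred.strip().split()[0] if pred.strip() else ""
    return re.split(r"[-(]", pred)[0]

for raw in ["sanctus (sancti)", "sanctus - sancta", "mater"]:
    print(raw, "->", clean_gcv(raw))
```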
Figure 9 shows the comparison of our model with GCV. The violin plots of the (unweighted) CERs show a concentration of the CER values around 0 (= correct prediction) for both models. For our model, the most extreme values are at a CER of 3; for GCV, the maximum is nearly twice as high, and we observe an overall higher standard deviation compared with our model. Note that these extreme values originate from the problem that the models sometimes predict too many characters, which are not part of the true (annotated) lemma. To conclude, our best model exhibits a weighted CER of 0.0153, while GCV only reaches 0.1045. Overall, our model correctly predicts 97.09% of all lemmas, while GCV only does so for 78.26%.

### Performance of other HTR systems

Table 3 illustrates the CERs of other systems on different HTR data sets. Strobel et al. (2022) use the Rudolph Gwalther data set, while all other papers evaluate their systems on the IAM data set. Our model achieves the lowest CER. However, it must be considered that we did not evaluate on the same data set, which makes a direct comparison impossible. In contrast to the other transformer-based models, our best model uses Swin as an encoder, which we have not found in other work.

| Swin+GPT-2 (full augmentation pipelines) | 0.017 |
| --- | --- |
| w/o masking augmentation | **0.015** |
| w/o rotation augmentation | 0.021 |
| w/o color augmentation | 0.017 |

Table 2: CER results of different model configurations.

| Model | CER | Data set | Architecture |
| --- | --- | --- | --- |
| Ours (best) | **0.0153** | MLW | Transformer |
| TrOCR Large (Strobel et al., 2022) | 0.0255 | Gwalther | Transformer |
| TrOCR Large (Li et al., 2022) | 0.0289 | IAM | Transformer |
| EASTER2.0 (Chaudhary and Bali, 2021) | 0.0621 | IAM | CNN-CTC |
| Light Transformer (Barrère et al., 2022) | 0.0570 | IAM | CNN-Transformer |
| Self-Att-CTC+LM (Diaz et al., 2021) | 0.0275 | IAM | Trf+CTC+LM |

Table 3: Performance of contemporary HTR systems evaluated on different data sets.

## 6 Discussion and Outlook

Due to the focus on recognizing the lemma, we did not experiment with other object detection or image segmentation techniques. Since the record cards include much more information than the part we extracted, we recommend further research into various extraction techniques. With the recent publication of the Segment Anything Model, Kirillov et al. (2023) introduce a model that might be able to extract features from the record cards with much higher accuracy. The next objective could be to extract the inflected lemmas (cf. Sec. 3). We neither experimented with the initial TrOCR architecture nor fine-tuned a pre-trained TrOCR instance for this task. However, the results of Strobel et al. (2022) suggest a strong performance of TrOCR. Thus, we also recommend training it on the MLW data set. On the other hand, the results of using the Swin encoder indicate a powerful performance compared with the other models we have used. Thus, we also suggest more research into the usage of Swin as an encoder for this task.

## 7 Conclusion

We present a novel end-to-end pipeline for the Medieval Latin dictionary. Our library includes an image-detection-based model for lemma extraction and a tailored HTR model.
We experiment with training different configurations of transformers using the ViT, BEiT, and Swin encoders while using a GPT-2 decoder. Employing data augmentation, our best model (Swin+GPT-2) achieves a CER of 0.015. The evaluation of the results exhibits a weaker performance on longer lemmas and on lemmas that appear less frequently in the training data. Further experiments with generative models to produce synthetic data (not reported in the paper) were not successful; however, we recommend further research in the direction of creating synthetic data. To conclude, our approach presents a promising HTR solution for Medieval Latin. Future research can build upon our work and explore its generalizability to other languages and data sets by making use of our pip-installable Python package: [https://pypi.org/project/mlw-lectiomat/](https://pypi.org/project/mlw-lectiomat/)

Figure 9: Violin plots for the comparison of our Swin+GPT-2 model (left) to Google Cloud Vision (right).

### Limitations

Our approach has several limitations that can be addressed to improve its efficiency further. There are issues regarding the data set (cf. Sec. 3) that might be reflected in the model's performance. As discussed in Section 3, some lemmas are struck out partially or entirely, introducing notable noise into the data. Further, handwritten comments or other annotations have been added to some of the record cards, and some images are not correctly labeled, which might have distorted the recognition capabilities of our model. Since our pipeline was mostly trained on data from the \(S\)-series of the dictionary, many words starting with other letters were not seen by the model during training. Therefore, the performance of the proposed approach, when applied to other series, remains somewhat uncertain. As elaborated in Section 7, the model tends to perform worse on unseen lemmas. Further, there are indications that the model might perform worse on longer lemmas. The lemma-detection model (YOLOv8) is not guaranteed to consistently predict the correct bounding box for the lemma. Errors at this early stage of the pipeline may severely impact the result. Although the failure rate on the training data set (cases in which no bounding box was predicted) is close to zero, the problem can still appear during inference.

## Ethics Statement

We affirm that our research adheres to the ACL Ethics Policy. This work involves the use of publicly available data sets and does not involve human subjects or any personally identifiable information. We declare that we have no conflicts of interest that could potentially influence the outcomes, interpretations, or conclusions of this research. All funding sources supporting this study are acknowledged. We have made our best effort to document our methodology, experiments, and results accurately and are committed to sharing our code, data, and other relevant resources to foster reproducibility and further advancements in research.

## Acknowledgements

We wish to thank the Bavarian Academy of Sciences for providing us with the guidance and required access to the handwritten material. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of BERD@NFDI - grant number 460037581.
2303.13042
Space Astronomy at TIFR: From Balloons to Satellites
Tata Institute of Fundamental Research (TIFR) has a very long tradition of conducting space astronomy experiments. Within a few years of the discovery of the first non-solar X-ray source in 1962, TIFR leveraged its expertise in balloon technology to make significant contributions to balloon-borne hard X-ray astronomy. This initial enthusiasm led to extremely divergent all-round efforts in space astronomy: balloon-borne X-ray and infrared experiments, rocket and satellite-based X-ray experiments and a host of other new initiatives. In the early eighties, however, TIFR could not keep up with the torrent of results coming from the highly sophisticated satellite experiments from around the world but kept the flag flying by continuing research in a few low-key experiments. These efforts culminated in the landmark project, AstroSat, the first multi-wavelength observatory from India, with TIFR playing a pivotal role in it. In this article, I will present a highly personalised and anecdotal sketch of these exciting developments.
A. R. Rao
2023-03-23T05:13:18Z
http://arxiv.org/abs/2303.13042v1
# Space Astronomy at TIFR: From Balloons to Satellites

###### Abstract

Tata Institute of Fundamental Research (TIFR) has a very long tradition of conducting space astronomy experiments. Within a few years of the discovery of the first non-solar X-ray source in 1962, TIFR leveraged its expertise in balloon technology to make significant contributions to balloon-borne hard X-ray astronomy. This initial enthusiasm led to extremely divergent all-round efforts in space astronomy: balloon-borne X-ray and infrared experiments, rocket and satellite-based X-ray experiments and a host of other new initiatives. In the early eighties, however, TIFR could not keep up with the torrent of results coming from the highly sophisticated satellite experiments from around the world but kept the flag flying by continuing research in a few low-key experiments. These efforts culminated in the landmark project, AstroSat, the first multi-wavelength observatory from India, with TIFR playing a pivotal role in it. In this article, I will present a highly personalised and anecdotal sketch of these exciting developments.

## 1 Space astronomy: probing the cosmos using the invisible rays

Space astronomy is the child of the space era. Traditionally, the optical region was the only wavelength range through which astronomical observations were possible. The tremendous developments and insights obtained in Physics during the early twentieth century were duly applied to
2305.01910
Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN
Object recognition and instance segmentation are fundamental skills in any robotic or autonomous system. Existing state-of-the-art methods are often unable to capture meaningful uncertainty in challenging or ambiguous scenes, and as such can cause critical errors in high-performance applications. In this paper, we explore a class of distributional instance segmentation models using latent codes that can model uncertainty over plausible hypotheses of object masks. For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary in industrial use cases. We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes in a robotic application. On a real-world apparel-picking robot, our method significantly reduces double pick errors while maintaining high performance.
YuXuan Liu, Nikhil Mishra, Pieter Abbeel, Xi Chen
2023-05-03T05:57:29Z
http://arxiv.org/abs/2305.01910v1
Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN

###### Abstract

Object recognition and instance segmentation are fundamental skills in any robotic or autonomous system. Existing state-of-the-art methods are often unable to capture meaningful uncertainty in challenging or ambiguous scenes, and as such can cause critical errors in high-performance applications. In this paper, we explore a class of distributional instance segmentation models using latent codes that can model uncertainty over plausible hypotheses of object masks. For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary in industrial use cases. We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes in a robotic application. On a real-world apparel-picking robot, our method significantly reduces double pick errors while maintaining high performance.

## I Introduction

Instance segmentation is a fundamental problem in many real-world robotic systems. The goal of instance segmentation is to enumerate the objects (or _instances_) that appear in an image, specifying which pixels in the image belong to each object. In the past few years, work has mostly focused on developing specialized architectures that make the instance segmentation task more amenable to deep learning. For example, _detect-then-segment_ methods [1, 2, 3, 4, 5, 6] rely on a cascade of classification, regression, and filtering to first identify a _bounding box_ for each instance (a related problem known as _object detection_), followed by an additional step to predict each instance's mask given its bounding box. Another example is _pixel-embedding_ methods [7, 8], which optimize pixel-level auxiliary tasks, and then use a specialized clustering procedure to extract instance predictions from the dense pixel representation. We observe that existing methods are not well equipped to deal with the inherent ambiguity that exists in the real world. We posit that this stems from a phenomenon we describe as limited _distributional expressiveness_, namely, that most instance segmentation models are designed to predict only _one_ possible segmentation hypothesis (a single set of objects). Making only a single prediction is limiting in terms of the accuracy attainable by high-performance autonomous systems: a robot picking application may only tolerate \(<1\%\) of errors caused by incorrect segmentation. To overcome these limitations, we propose _distributional instance segmentation_ which models a distribution over plausible hypotheses of objects. The key contributions of this work are: 1. We introduce a distributional instance segmentation model using latent codes, Latent-MaskRCNN, which can predict multiple hypotheses of object masks. 2. We propose new methods for using the output of a distributional instance segmentation model. For robotic applications, we propose high-precision predictions with Confidence Masks, and we achieve high recall with Union-NMS. 3. We are releasing a dataset of over 5000 annotated images from a real-world robotics application that highlights the ambiguity in instance segmentation. We show our method achieves high performance on this dataset as well as popular driving and instance segmentation datasets. 4.
On a real-world apparel picking robot, our method can significantly reduce critical errors while achieving a high level of performance (Fig. 1).

Fig. 1: Traditional instance segmentation models such as MaskRCNN cannot model uncertainty over object masks. For robotics, this can result in critical errors such as unintentionally picking two objects. Our Latent-MaskRCNN can predict multiple hypotheses of object masks and use these to make high-confidence predictions, reducing the rate of double pick errors.

## II Related Work

**Detect-then-segment** methods are the most popular instance segmentation methods, and MaskRCNN belongs to this category. While they all first perform object detection and then segment each instance given its bounding box, there are some variations. For example, YOLACT [9] follows the same structure as MaskRCNN, but uses YOLO [10] as the object detector instead of FasterRCNN [2]. YOLO is very similar to FasterRCNN, making architectural changes that sacrifice some accuracy in exchange for real-time inference speed. Thus, we expect YOLACT to have the same distributional limitations as MaskRCNN. Other methods [11, 12, 13] explore how to express uncertainty during the detection step, but they consider distributions over individual boxes rather than over sets of object masks.

**Mask-proposal** methods [14, 15, 16] aim to circumvent bounding boxes as an intermediate representation. They are structured like FasterRCNN [2], but propose masks directly. Empirically, they do not behave much differently than MaskRCNN. Distributionally, they suffer from many of the same limitations as MaskRCNN: each proposal still models each pixel independently of the others, and they still rely on NMS to filter proposals.

**Pixel-embedding** methods [7, 8, 17, 18] work in a substantially different way than either of the above two families. They generally optimize some auxiliary task that encourages pixels in the same instance to have similar representations. Then they rely on a clustering-based inference procedure to extract instance predictions from their pixelwise representations. However, their performance has lagged quite far behind that of detect-then-segment methods, which has made them relatively unpopular. They can model per-pixel uncertainty in a manner similar to a naive semantic segmentation method, but this is likely insufficient for distributional expressiveness.

A number of methods explore how to express uncertainty in other structured prediction tasks. However, many of these do so by training multiple replicas of the entire model or some subset of the parameters, and modifying the training objective in a way that encourages diversity amongst the replicas [19, 20, 21, 22]. This incurs a multiplicative increase in the computational cost and memory footprint required at training time, which can be prohibitively expensive for large models. Other latent-variable formulations [23, 24, 25] offer improvements on medical _semantic_ segmentation and video segmentation tasks. We find, however, that _instance_ segmentation poses a richer set of challenges and has different application-specific uses.

## III Distributional Instance Segmentation with Latent Variables

### _Latent Variable Formulation_

How can we turn instance segmentation models into distributionally expressive ones, while retaining the inductive biases of existing model architectures?
Drawing on prior work in variational inference [23, 26], we consider a latent-variable formulation where we incorporate latent codes in the style of a variational autoencoder. If we adopt this framework, then an instance segmentation model becomes a conditional VAE that is trained to maximize the evidence lower-bound:

\[\log p(y|x)\geq\mathbb{E}_{z\sim q}[\log p(y|x,z)]-D_{KL}(q||p(z|x)) \tag{1}\]

Typically, \(q(z|y,x)\) is known as the encoder, \(p(y|x,z)\) as the decoder, and \(p(z|x)\) as the prior, and these components are all learned to maximize the lower-bound. The decoder is essentially an instance segmentation model in the traditional sense, except that it is augmented to additionally consume a latent code \(z\). This general technique allows us to reuse any existing instance segmentation model to implement our decoder (and train it in the same way), with only a slight modification to incorporate \(z\) as an input. During inference, we can sample from \(p(y|x)\) by sampling different latent codes \(z^{(k)}\sim p(z|x)\) and decoding them into different instance predictions \(y^{(k)}\sim p(y|x,z)\). This can be quite powerful since we can now sample multiple structured and expressive hypotheses for a given image.
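The sampling procedure can be summarized in a few lines; in the sketch below, `prior_net` and `decode` stand in for the prior head and the latent-conditioned decoder described next, and all names and shapes are placeholders rather than the paper's code.

```python
# Minimal sketch of distributional inference: draw K latent codes from the
# prior p(z|x) and decode each into a full segmentation hypothesis.
import torch

def sample_hypotheses(image_feats, prior_net, decode, k=8):
    mu, log_var = prior_net(image_feats)  # parameters of p(z|x) = N(mu, sigma^2)
    hypotheses = []
    for _ in range(k):
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # z^(k) ~ p(z|x)
        hypotheses.append(decode(image_feats, z))  # one set of instance masks
    return hypotheses

# Toy stand-ins, just to show the call pattern:
prior_net = lambda f: (torch.zeros(64), torch.zeros(64))
decode = lambda f, z: {"masks": torch.zeros(3, 32, 32), "scores": torch.ones(3)}
print(len(sample_hypotheses(None, prior_net, decode, k=4)))  # 4 hypotheses
```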
### _Latent-MaskRCNN_

In principle, the latent-variable method can be applied to any existing instance segmentation model. In this section, we explore how it might be applied to MaskRCNN. We call the resulting model _Latent-MaskRCNN_ (Fig. 2). We chose MaskRCNN since it is one of the most popular instance segmentation models and has served as the basis for most state-of-the-art methods in recent years.

Fig. 2: Overview of Latent-MaskRCNN: At training time, the encoder \(q_{\theta}\) uses features extracted from the image \(x\) and labels \(y\) to sample a latent code \(z\) which is passed into the decoder. The decoder conditions on \(z\) and uses a typical MaskRCNN architecture to predict masks including region proposal, classifier, box, and mask heads. At inference time, \(z\) is sampled through the prior \(p_{\theta}(z|x)\) which only takes the image \(x\) as input. \(D_{KL}(q||p_{\theta}(z|x))\) ensures the prior has good coverage over the latent space.

The decoder of Latent-MaskRCNN uses the same architecture and training objective as MaskRCNN, with the main change being that it needs to incorporate latent codes. To allow them to influence as much of the prediction as possible, we want to do this relatively early in the model. We chose to inject the latent codes directly before region proposal, so that they can influence the region proposal network, the object detection head, and the mask head. We tile the latent codes across the spatial dimensions of the image and concatenate them with the feature maps from the Feature Pyramid Network (FPN) [27]. Then we use a few convolutional layers to project the combined feature maps back down to their original channel dimensionality.

The encoder of Latent-MaskRCNN takes in an image \(x\) along with a set of ground-truth instances \(y\), and produces a distribution over latent codes \(q_{\theta}(z|y,x)=\mathcal{N}(\mu_{\theta}(y,x),\sigma_{\theta}^{2}(y,x))\). The architecture for \(\mu_{\theta}(y,x)\) and \(\sigma_{\theta}(y,x)\) takes inspiration from the mask head of MaskRCNN: it acts like a "reverse mask head" that operates on each ground-truth instance, and then pools features from across all instances. For each ground truth instance \(y_{i}\), we extract ROI-aligned features from the FPN feature maps. Then we use a small CNN to embed each one into a single feature vector. At this point, we employ a graph neural network [28] to accumulate information from the per-instance features, since we need a single latent code for the entire image. After several graph network layers, we mean-pool across the node features and use a fully-connected layer to produce a mean and log-variance for our latent distribution.

The encoder is only used at training time since it has access to the ground truth mask labels. At inference time, we must sample latent codes from the prior to produce mask samples. This prior takes in an image \(x\) and produces a distribution over latent codes \(p_{\theta}(z|x)=\mathcal{N}(\mu_{\theta}(x),\sigma_{\theta}^{2}(x))\). We apply a few convolutional layers to the FPN feature maps, mean-pool across the spatial dimensions, and then predict a mean and log-variance using a small MLP. For all latent distributions, we use a fixed 64-dimensional Gaussian with diagonal covariance.

For training, we use the encoder \(q_{\theta}\) to sample latent codes \(z\), which are passed to the MaskRCNN decoder. We maximize the evidence lower-bound objective (Equation 1), where \(-\log p(y|x,z)=\mathcal{L}_{M}(x,y,z)\) is the usual MaskRCNN loss:

\[\mathcal{L}_{M}(x,y,z)=\mathcal{L}_{RPN}+\mathcal{L}_{cls}+\mathcal{L}_{box}+\mathcal{L}_{mask} \tag{2}\]

\(D_{KL}(q||p(z|x))\) ensures the prior has good coverage over the encoder distribution. During inference, we sample latent codes from the prior (instead of from the encoder), but the decoder consumes them in the same way as during training. We found it helpful to use a KL warm-up, as is common practice for training VAEs [29]. The total training loss for Latent-MaskRCNN then becomes:

\[\mathcal{L}(x,y)=\mathbb{E}_{z\sim q}[\mathcal{L}_{M}(x,y,z)]+\beta D_{KL}(q||p(z|x)) \tag{3}\]

In the first part of training, we use \(\beta=0\) and increase \(\beta\) towards the end of training. This allows the latent code to encode useful information early on, as the rest of the model is still learning; towards the end of training, a higher \(\beta\) pushes the latent space to be covered by the prior for better samples. For more details on our models and code, please refer to our website segm.yuxuanliu.com.
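Eq. (3) and the warm-up can be written compactly, using the closed-form KL between diagonal Gaussians; the particular ramp schedule below is an assumption for illustration, since the paper only states that \(\beta\) starts at 0 and is increased late in training.

```python
# Eq. (3) in code: MaskRCNN loss on a decoded sample plus a beta-weighted
# KL between the encoder and prior Gaussians, with a KL warm-up schedule.
import torch

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ), diagonal covariance.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0)

def beta_schedule(step, start=50_000, ramp=10_000, beta_max=1.0):
    # beta = 0 early in training, then ramped up linearly (assumed schedule).
    return beta_max * min(max(step - start, 0) / ramp, 1.0)

def total_loss(mask_rcnn_loss, mu_q, logvar_q, mu_p, logvar_p, step):
    return mask_rcnn_loss + beta_schedule(step) * kl_diag_gauss(
        mu_q, logvar_q, mu_p, logvar_p)

mu_q, lv_q = torch.zeros(64), torch.zeros(64)
mu_p, lv_p = torch.ones(64) * 0.1, torch.zeros(64)
print(total_loss(torch.tensor(2.5), mu_q, lv_q, mu_p, lv_p, step=60_000))
```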
If the model undersegments an instance, it may inadvertently pick multiple objects, which can be an expensive error for the downstream application. How can we ensure that these errors don't occur? Suppose we draw several samples from Latent-MaskRCNN. If two pixels belong to the same instance mask in many samples, then we can be reasonably confident that they actually do belong to the same ground-truth instance. Drawing on this intuition, we can then compute a \(p\)-_confidence mask_, consisting of pixels that are all likely to be contained in a single ground-truth instance. For a given confidence requirement \(p\), we define a confidence mask \(c_{p}\) as a mask that is fully contained in a ground-truth mask \(m\) with probability at least \(p\): \(\mathbb{P}(c_{p}\subseteq m)\geq p\). Fig. 3: At inference time, the encoder \(q_{\theta}\) is discarded, and latent variables \(z_{i}\) are sampled from the image-conditioned prior \(p_{\theta}(z|x)\). Each latent is decoded using \(p_{\theta}(y|x,z_{i})\) into a set of masks, which can be used for our high precision or recall predictions depending on the application. Using Latent-MaskRCNN, we can approximate this probability as: \[\mathbb{P}(c_{p}\subseteq m)=\mathbb{E}_{p(m|x)}[\mathds{1}\{c_{p}\subseteq m\}]\approx\frac{1}{k}\sum_{i=1}^{k}\mathds{1}\{c_{p}\subseteq m_{i}\}\] In the finite-sample regime, \(\hat{c}_{p}\) is an empirical confidence mask if it is contained within a sampled mask for at least a \(p\) fraction of the samples. Now consider any subset of masks \(I\) consisting of one mask \(m_{j}\) from at least \(kp\) different samples. If we take the intersection of all of the masks in \(I\), \(\hat{c}_{p}=\bigcap_{m_{j}\in I}m_{j}\), then this intersection mask must be contained in each of the masks used in the intersection, \(\hat{c}_{p}\subseteq m_{j}\). Therefore we have: \[\frac{1}{k}\sum_{m_{j}\in I}\mathds{1}\{\hat{c}_{p}\subseteq m_{j}\}=\frac{|I|}{k}\geq p\] and \(\hat{c}_{p}\) is an empirical confidence mask by construction. Figure 4 illustrates confidence mask predictions for different \(p\). Notice that as the confidence requirement increases, the unconfident extents of the masks shrink, and some uncertain masks are eliminated. We can also see how constructing confidence masks via intersection leads to high-confidence region predictions. ### _Scoring Confidence Masks_ Since each confidence mask is an intersection of masks \(c_{p}=\bigcap_{I}m_{j}\), how should we assign the score of a confidence mask prediction? One intuitive approach might be to take the average of the score \(s_{j}\) of each mask in the intersection: \(\frac{1}{|I|}\sum s_{j}\). However, a confidence mask is not an average of masks but rather an intersection. To formulate a better score for our confidence mask prediction, consider two scenarios. In the first scenario, the model is very confident about an object's mask, so it predicts roughly the same mask in every sample. The resulting confidence mask \(c_{p}\) has high IoU with each of the sampled masks \(m_{j}\). On the other hand, consider an unconfident prediction where the object's mask varies significantly across samples. Here, the confidence mask \(c_{p}\) represents a small but confident region of the object whose extent is highly uncertain. The resulting IoU between \(c_{p}\) and each \(m_{j}\) will be smaller than when the model is confident and masks do not vary across samples.
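Before turning to scoring, it may help to see the intersection-based construction just described in code. The following is a minimal NumPy sketch under simplifying assumptions: the matching of one mask per sample to the same underlying object is assumed to be done elsewhere, and the function and argument names are ours, not the paper's.

```python
import numpy as np

def empirical_confidence_mask(masks_per_sample, p):
    """Intersect matched masks to form an empirical p-confidence mask.

    masks_per_sample: length-k list with one entry per sampled
        segmentation; each entry is a boolean (H, W) mask for the same
        underlying object, or None if the object was not matched in that
        sample (the cross-sample matching itself is assumed given).
    p: required confidence level in (0, 1].
    Returns the intersection mask, or None if the object appears in fewer
    than ceil(k * p) samples, in which case no p-confidence mask exists.
    """
    k = len(masks_per_sample)
    matched = [m for m in masks_per_sample if m is not None]
    if len(matched) < int(np.ceil(k * p)):
        return None
    # The intersection is contained in every mask it was built from, so it
    # lies inside a sampled mask for at least a p fraction of the samples.
    c_hat = np.logical_and.reduce(matched)
    return c_hat if c_hat.any() else None
```

As \(p\) grows, more masks enter the intersection, so the returned region shrinks toward the pixels all samples agree on, mirroring the behavior shown in Figure 4.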
We score each confidence mask as the mean score-weighted IoU between the predicted mask \(c_{p}\) and every mask used in the intersection: \[s_{c_{p}}=\frac{1}{|I|}\sum_{m_{j}\in I}s_{j}\frac{|c_{p}\cap m_{j}|}{|c_{p}\cup m_{j}|}\] When \(s_{c_{p}}\) is large, this indicates that \(c_{p}\) is a confident intersection of masks with very similar IoU. On the other hand, a small \(s_{c_{p}}\) indicates that \(c_{p}\) has a low score or low IoU with its samples and likely does not capture the full extent of the object well. To predict a set of confidence masks for an image, we iteratively select the highest-scoring confidence mask, excluding the pixels of all of the confidence masks predicted so far. This algorithm is a greedy approximation to selecting the maximum-scoring set of confidence masks. ### _High-Recall with Union-NMS_ Other applications might be concerned about _over-segmentation_, the converse of under-segmentation. For example, in autonomous driving, failing to identify a pedestrian, or predicting them to be smaller than they actually are, can lead to a catastrophic error. To make high-recall predictions with Latent-MaskRCNN, we use a procedure called _Union-NMS_. We first sample multiple segmentations from the model and run NMS on the predicted masks: NMS checks whether any two masks \(m_{i},m_{j}\) have IoU greater than some threshold, and then discards the lower-scoring one. Suppose that some mask \(m_{i}\) remains after we perform NMS. Then Union-NMS returns the union of \(m_{i}\) with every mask that it suppressed, achieving higher recall by incorporating masks that would have otherwise been ignored. ### _Vanilla Prediction with the Prior Mean_ Some applications may not have any specific performance requirements or may have strict inference time requirements. In these cases, a point estimate can be sufficient. With Latent-MaskRCNN, we can achieve this by always decoding the mean of the prior, \(z=\mu_{\theta}(x)\), where \(\mu_{\theta}(x)\) is the mean of the prior \(p_{\theta}(z|x)\) (for a Gaussian \(p_{\theta}(z|x)\), it is also the mode of the distribution). We found that this scheme typically matches or yields a small improvement over MaskRCNN predictions, suggesting that Latent-MaskRCNN strictly increases the expressiveness of MaskRCNN and no performance is lost by using a more expressive distribution. ## V Experiments We conducted experiments seeking to answer the following questions: 1. Can Latent-MaskRCNN with confidence masks make high-precision predictions across a variety of datasets? 2. Can Union-NMS make high-recall predictions? 3. Can Latent-MaskRCNN reduce critical double pick errors in robotic picking applications? Fig. 4: Top: Confidence mask predictions with different \(p\). Notice that as the confidence requirement \(p\) increases, single objects can be split into two, ambiguous object extents are reduced, and uncertain objects are eliminated entirely. Bottom: Constructing an empirical confidence mask \(\hat{c}_{p}\) by taking the intersection of samples \(m_{1},m_{2},m_{3}\). When an object’s extent is uncertain, a high-confidence mask prediction will only consist of pixels that are highly likely to be contained within an object as determined by the samples. ### _Datasets_ To help us answer these questions, we compared MaskRCNN and Latent-MaskRCNN across several datasets, each with its own set of challenges. **COCO**[30]: This large dataset is the standard benchmark for instance segmentation. There are many object categories and a huge variety in image composition.
**Cityscapes**[31]: A real-world dataset from an autonomous driving application. Although it is smaller and more specialized than COCO, it is still a popular benchmark for instance segmentation. One notable challenge is that there are many background instances that are still important to segment (e.g., pedestrians), but the limited image resolution can introduce some uncertainty. **Apparel-5k**: We collected this dataset of roughly 5000 images from a robot picking application. We use 4198 images in the training set and 463 in the validation set. There is only one object category, but the images exhibit a lot of inherent ambiguity due to complex occlusions, lighting, transparency, etc. We are releasing this dataset on our website segm.yuxuanliu.com for the broader community to build upon our work. For each dataset, we trained both MaskRCNN and Latent-MaskRCNN on 8 GPUs using MaskRCNN's released hyper-parameters and training schedules. We use the same publicly available train/val splits for all experiments and datasets. We used a ResNet-50 backbone [32], initialized from pretrained ImageNet [33] weights (for COCO) or pretrained COCO weights (for the other datasets). Inference with MaskRCNN can take 80-100 ms depending on the number of objects, and inference with Latent-MaskRCNN can take 500-1000 ms depending on the number of samples and objects. ### _Evaluating p-Confidence Masks_ In Section IV-A, we introduced high-precision predictions with p-confidence masks to address the problem of under-segmentation. In those cases, we care that predictions have high Intersection-over-Prediction: \(\text{IoP}(m_{i},g)=\frac{|m_{i}\cap g|}{|m_{i}|}\). When IoP is high, errors due to under-segmentation are less likely to occur. When evaluating models in this regime, we need to trade off precision (in terms of IoP) with recall (to avoid degenerate solutions). To do this, we consider the _max recall at high precision_ (MR@HP): \[\text{MR@HP}=\frac{1}{|p|\cdot|\tau|}\sum_{p_{i}\in p,\,\tau_{j}\in\tau}\max_{t:\,\text{Precision}(t,\tau_{j})\geq p_{i}}\text{Recall}(t,\tau_{j})\] For a given precision threshold \(p_{i}\) and IoP threshold \(\tau_{j}\), we can compute the max recall that each model achieves (or zero, if it never achieves precision \(p_{i}\)). The MR@HP metric is the average of these recalls over a range of precision thresholds \(p\) and IoP thresholds \(\tau\). For high-precision use-cases, we care about performance at high values of these thresholds; therefore, we use \(p=\tau=[0.75,0.8,0.85,0.9,0.95]\). In Table I, we evaluated Latent-MaskRCNN using both the prior-mean scheme from Section IV-D as well as confidence masks with a confidence level \(p=0.9\). Across all three datasets, we find that latent confidence masks yield the best performance in terms of MR@HP. As for mAP, we find that Latent Prior Mean can match, if not exceed, the performance of MaskRCNN on all three datasets. On the challenging Apparel-5k dataset, we find that Latent Confidence Mask and Latent Prior Mean significantly outperform MaskRCNN in terms of MR@HP and mAP. Overall, we find that Latent-MaskRCNN is a strict improvement over MaskRCNN, matching overall detection performance in terms of mAP while offering the best high-precision performance in terms of MR@HP. ### _Evaluating Union NMS_ For the over-segmentation problem, we introduced the Union-NMS method in Section IV-C.
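As a compact reference for the procedure under evaluation, the following is a minimal NumPy sketch of Union-NMS as we read Section IV-C; the IoU threshold and all names are illustrative choices rather than the paper's exact values.

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean (H, W) masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def union_nms(masks, scores, iou_thresh=0.5):
    """Union-NMS over masks pooled from several latent samples.

    masks: list of boolean (H, W) arrays; scores: parallel list of floats.
    As in standard NMS, lower-scoring overlapping masks are suppressed,
    but each kept mask is returned as the union of itself and everything
    it suppressed, trading some precision for recall.
    """
    order = list(np.argsort(scores)[::-1])   # highest score first
    suppressed = [False] * len(masks)
    out_masks, out_scores = [], []
    for i in order:
        if suppressed[i]:
            continue
        merged = masks[i].copy()
        for j in order:
            if j == i or suppressed[j]:
                continue
            if mask_iou(masks[i], masks[j]) >= iou_thresh:
                suppressed[j] = True
                merged |= masks[j]           # union instead of discard
        out_masks.append(merged)
        out_scores.append(scores[i])
    return out_masks, out_scores
```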
In these over-segmentation-sensitive cases, we care that we have high recall (that we detect every instance that exists) and that each mask prediction has high _IoG_ (intersection-over-ground-truth): \(\text{IoG}(m_{i},g)=\frac{|m_{i}\cap g|}{|g|}\). To capture both of these considerations, we consider the average recall (AR) [30] using IoG. This measures recall while also penalizing over-segmentation (predicted masks that are too small). We evaluated Latent-MaskRCNN using both its prior mean (Section IV-D) and Union-NMS. In Table I, we show that the prior-mean predictions are similar to MaskRCNN on all three datasets, while Union-NMS achieves substantially higher AR (using IoG). This suggests that Latent-MaskRCNN with Union-NMS can more effectively cover different modes of uncertainty for high-recall applications. ### _Can confidence masks reduce double pick errors on Apparel-5K?_ In a robotic picking application, it is costly for the robot to accidentally pick up two items while thinking it has picked only one, since this affects inventory counts and downstream orders. For the Apparel-5k dataset, we can estimate the double pick rate of a model's segmentation prediction by approximating the robot's gripper as a circle with a fixed radius in pixel space. Then, we randomly sample circles on the image and count the number of circles \(D\) that land within one predicted mask but more than one ground-truth mask. We divide this by the number of circles \(N\) that land within one predicted mask to arrive at the estimated double pick rate \(R=\frac{D}{N}\). Empirically, we find that this simulated double pick rate is correlated with double pick rates on a real robot. Another metric that we are concerned with in industrial robot picking is pickable area, the amount of visible surface that the robot can pick from. A model that predicts larger, more accurate pickable areas enables the robot to have more flexibility in its grasping strategy. To this end, we compute the area of all the predicted masks over the area of all the ground-truth masks as the fraction of pickable area available. We compare MaskRCNN and Latent-MaskRCNN with varying \(p\)-confidence masks in Figure 5. We find that Latent-MaskRCNN outperforms MaskRCNN in fraction of pickable area and double pick rate in all cases. Moreover, the tunable parameter \(p\) in latent confidence masks allows for application-specific tradeoffs between double pick rate and pickable area. Higher values of \(p\) tend to correspond to a lower double pick rate and less pickable area, as the confidence requirement for each prediction is increased. With traditional MaskRCNN, only one double pick rate and pickable area fraction is realizable, since no tunable knob exists. ### _Can confidence masks reduce double pick errors on a real-world apparel-picking robot?_ To evaluate whether our dataset evaluation translates to real-robot performance, we compare MaskRCNN and Latent-MaskRCNN on an apparel-picking robot. We use an ABB1300 with a 9-cup suction gripper to pick apparel items in polybags between two totes (Fig. 1). The robot uses two overhead camera systems to perform instance segmentation and then grasp point generation. The grasp points are optimized to land as many suction cups as possible on a single object detected by the segmentation model. For our evaluation, we only change which segmentation model is used while holding other parts of the system constant, including hardware, object set, and grasp point generation.
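For reference, here is a minimal Monte-Carlo sketch of the simulated double-pick metric \(R=D/N\) from Section V-D. The containment semantics ("lands within" a mask), the gripper radius, and the trial count are our interpretation and calibration assumptions, not values stated in the paper.

```python
import numpy as np

def simulated_double_pick_rate(pred_masks, gt_masks, radius,
                               n_trials=10000, seed=0):
    """Estimate R = D / N by sampling gripper-sized circles.

    pred_masks / gt_masks: lists of boolean (H, W) arrays. The gripper is
    modeled as a circle of fixed pixel radius; assumes H, W > 2 * radius.
    """
    h, w = gt_masks[0].shape
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:h, 0:w]
    n_single_pred, n_double = 0, 0
    for _ in range(n_trials):
        cy = rng.integers(radius, h - radius)
        cx = rng.integers(radius, w - radius)
        circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        # Count circles fully contained in exactly one predicted mask (N)...
        if sum(bool(m[circle].all()) for m in pred_masks) != 1:
            continue
        n_single_pred += 1
        # ...that nevertheless touch more than one ground-truth mask (D).
        if sum(bool((g & circle).any()) for g in gt_masks) > 1:
            n_double += 1
    return n_double / n_single_pred if n_single_pred else 0.0
```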
Each segmentation model is trained on the same Apparel-5k dataset. We run each model for several hundred grasps and record the number of double picks: grasps that unintentionally pick two objects. In an industrial warehouse application, these double picks are very costly errors since they result in incorrect inventory counts and cause errors in downstream sortation and order fulfillment systems. A typical high-automation warehouse can tolerate at most a 1% double pick rate before the robot causes more problems than it solves. We also measure the average number of sealed cups on a grasped item, since the suction holding force is proportional to the number of sealed cups. Grasps that use fewer sealed cups tend to result in more dropped objects, which leads to jams, lost inventory, and costly human intervention. A robotic system can reduce its double pick rate by shrinking object mask sizes, chopping up bigger masks into smaller ones, or only using a single small suction cup. However, all of these approaches indiscriminately reduce the suction holding force on all items, whereas our approach is conservative only when ambiguity is present. Table II reports the results of our apparel-picking experiments. We find that Latent-MaskRCNN with a 0.9-confidence mask significantly reduces the double pick rate. This validates our simulated findings on Apparel-5K in Section V-D. Moreover, Latent-MaskRCNN achieves a slightly higher average number of sealed cups, suggesting that suction stability was not sacrificed. This suggests that our method can make high-confidence predictions and make the appropriate trade-offs in the face of uncertainty. ## VI Discussion We proposed a new family of models that builds on top of existing instance segmentation models by using latent variables to achieve more distributional expressiveness. Latent-MaskRCNN can express a wide range of uncertainty where existing instance segmentation models often fall short. We can leverage the uncertainty expressed by the model using confidence masks and Union-NMS to achieve high precision and high recall, respectively. These methods demonstrate strong performance across robotics, autonomous driving, and general object datasets. On a real apparel-picking robot, we find that our model can significantly reduce the rate of critical errors while maintaining high performance. Finally, we have highlighted the importance of distributional expressiveness and hope that future work in instance segmentation can continue to build on top of the work and datasets shared in this paper. Fig. 5: Latent confidence masks achieve lower double pick rates and generally more pickable area compared to MaskRCNN.
2304.07090
Delta Denoising Score
We introduce Delta Denoising Score (DDS), a novel scoring function for text-based image editing that guides minimal modifications of an input image towards the content described in a target prompt. DDS leverages the rich generative prior of text-to-image diffusion models and can be used as a loss term in an optimization problem to steer an image towards a desired direction dictated by a text. DDS utilizes the Score Distillation Sampling (SDS) mechanism for the purpose of image editing. We show that using only SDS often produces non-detailed and blurry outputs due to noisy gradients. To address this issue, DDS uses a prompt that matches the input image to identify and remove undesired erroneous directions of SDS. Our key premise is that SDS should be zero when calculated on pairs of matched prompts and images, meaning that if the score is non-zero, its gradients can be attributed to the erroneous component of SDS. Our analysis demonstrates the competence of DDS for text based image-to-image translation. We further show that DDS can be used to train an effective zero-shot image translation model. Experimental results indicate that DDS outperforms existing methods in terms of stability and quality, highlighting its potential for real-world applications in text-based image editing.
Amir Hertz, Kfir Aberman, Daniel Cohen-Or
2023-04-14T12:22:41Z
http://arxiv.org/abs/2304.07090v1
# Delta Denoising Score ###### Abstract We introduce Delta Denoising Score (DDS), a novel scoring function for text-based image editing that guides minimal modifications of an input image towards the content described in a target prompt. DDS leverages the rich generative prior of text-to-image diffusion models and can be used as a loss term in an optimization problem to steer an image towards a desired direction dictated by a text. DDS utilizes the Score Distillation Sampling (SDS) mechanism for the purpose of image editing. We show that using only SDS often produces non-detailed and blurry outputs due to noisy gradients. To address this issue, DDS uses a prompt that matches the input image to identify and remove undesired erroneous directions of SDS. Our key premise is that SDS should be zero when calculated on pairs of matched prompts and images, meaning that if the score is non-zero, its gradients can be attributed to the erroneous component of SDS. Our analysis demonstrates the competence of DDS for text-based image-to-image translation. We further show that DDS can be used to train an effective zero-shot image translation model. Experimental results indicate that DDS outperforms existing methods in terms of stability and quality, highlighting its potential for real-world applications in text-based image editing. For code and additional results, please visit our project page: [https://de1ta-denoising-score.github.io/](https://de1ta-denoising-score.github.io/). ## 1 Introduction Large-scale language-vision models have revolutionized the way images and visual content, in general, can be generated and edited. Recently, we have witnessed a surge in the development of text-to-image generative models, which utilize textual input to condition the generation of images. A promising avenue in this field is Score Distillation Sampling (SDS) [27] - a sampling mechanism that utilizes probability density distillation to optimize a parametric image generator using a 2D diffusion model as a prior. The effectiveness of SDS stems from the rich generative prior of the diffusion model it samples from. This is in contrast to the direct use of a language-vision model, like CLIP, which was trained using a contrastive loss [28]. Figure 1: **Score Distillation Sampling (SDS) vs. Delta Denoising Score (DDS).**_Top: The SDS mechanism optimizes a given image by querying the denoising model on the noisy version of the image and a target text prompt. The resulting image can often be blurry and unfaithful to the target prompt. Bottom: DDS queries an additional reference branch with a matched text prompt, and generates delta scores that represent the difference between the outputs of the two queries. DDS provides cleaner gradient directions that modify the edited portions of the optimized image, while leaving the other parts unchanged._ The prior of large generative diffusion models, like Stable Diffusion [31], DALLE-2 [30] and Imagen [35], is particularly rich and expressive and has been demonstrated to be highly effective in generating visually stunning assets across various domains, including images and 3D models, among others. Despite its usefulness, one of the primary issues associated with SDS is its tendency to converge towards specific modes, which often leads to the production of blurry outputs that only capture the elements explicitly described in the prompt.
In particular, using SDS to _edit an existing image_ by initializing the optimization procedure from that image may result in significant blurring of the image beyond the edited elements. In this paper, we introduce a new diffusion-based scoring technique for optimizing a parametric model for the task of editing. Unlike SDS, which queries the generative model with a pair of image and text, our method utilizes an additional query of a reference image-text pair, where the text matches the content of the image. Then, the output score is the difference, or _delta_, between the results of the two queries (see Figure 1). We refer to this scoring technique as Delta Denoising Score (DDS). In its basic form, DDS is applied on two pairs of images and texts: one is a reference image-text pair that remains intact during the optimization, and the other is a target image that is optimized to match a target text prompt. The delta scoring provides effective gradients, which modify the edited portions of the image, while leaving the others unchanged. The key idea is that the source image and its text description can be used for estimating undesirable and noisy gradient directions introduced by SDS. Then, if we want to alter only a portion of the image using a new text description, we can use our reference estimation and get a cleaner gradient direction to update the image. DDS can be used as a prompt-to-prompt editing technique that can modify images by only editing their captions, where no mask is provided or computed. Beyond that, Delta Denoising Score enables us to train a distilled image-to-image model without the need for a paired training dataset, yielding a zero-shot image translation technique. Training the model requires only a dataset of the source distribution, associated with simple captions that describe the source and target image distributions. As we will show, such zero-shot training can be applied to single- or multi-task image translation, and the source distribution can include synthetically generated and real images. To demonstrate the effectiveness of our approach, we conducted experiments comparing our model to existing state-of-the-art text-driven editing techniques. ## 2 Related Work Text-to-image models [34, 30, 31] have recently raised the bar for the task of generating images conditioned on a text prompt, exploiting the powerful architecture of diffusion models [13, 36, 39, 13, 37, 31], which can be used for various image editing and guided synthesis tasks [32, 17, 45, 44]. Recent works have attempted to adapt text-guided diffusion models to the fundamental challenge of single-image editing, aiming to exploit their rich and diverse semantic knowledge. Meng et al. [21] add noise to the input image and then perform a text-guided denoising process from a predefined step. Yet, they struggle to accurately preserve the input image details, which are preserved via a user-provided mask in other works [24, 2, 1]. DiffEdit [7] uses DDIM inversion for image editing, but avoids the resulting distortion by automatically producing a mask that allows background preservation. While some text-only editing approaches are limited to global editing [8, 20, 18, 26], Bar-Tal et al. [4] propose a text-based localized editing technique without using any mask. Their technique allows high-quality texture editing, but not the modification of complex structures, since only CLIP [28] is employed as guidance instead of a generative diffusion model.
Prompt-to-prompt [12] suggests an intuitive editing technique that enables manipulation of local or global details for images that were synthesized by a text-to-image network. [23] proposed an approach to invert real images into the latent space of the diffusion model, such that prompt-to-prompt can be applied to real images. Imagic [17] and UniTune [43] have demonstrated impressive text-driven editing capabilities, but require costly fine-tuning of the model. Figure 2: **Sampling text-to-image diffusion models.**_Generation via SDS optimization starting from random noise (left) vs. conventional diffusion-based image generation (right). Both samples are generated with respect to a given text prompt (top). Generating images based on SDS only leads to less diverse results and mode collapse where the main subject in the text appears in front of a blurry background._ InstructPix2Pix [5], plug-and-play [41] and [25] can take an instruction or target prompt and manipulate real images towards the desired edit. DreamFusion [27] proposed the SDS score as a 2D prior which can be used to generate 3D assets [22, 29]. SDS is also used in [38] to direct a StyleGAN generator for the domain adaptation task. This is conceptually similar to StyleGAN-NADA [10], which instead uses CLIP [28] to translate the domain of a StyleGAN generator to other domains based only on a textual description. Our work explores the usage of the SDS score in the context of image editing, and proposes a new technique to clean the undesired gradients of SDS, which drag the optimization process in a noisy direction that smooths out relevant details from the original image. ## 3 Delta Denoising Score (DDS) We begin with a brief overview of the SDS loss function and explain the challenges in sampling and editing images with SDS, based on empirical observations. In particular, we demonstrate that SDS introduces a noisy direction when applied to the task of image editing. We then introduce our Delta Denoising Score (DDS), which utilizes a reference pair of image and text to correct the noisy direction of SDS and offers a new technique for the task of prompt-to-prompt editing [12]. We conduct all our experiments using the latent model Stable Diffusion [31]; nevertheless, in our overview and results, we refer to the model's latents and output channels as images and pixels, respectively. ### SDS overview Given an input image \(\mathbf{z}\), a conditioning text embedding \(y\), a denoising model \(\epsilon_{\phi}\) with parameter set \(\phi\), a randomly sampled timestep \(t\sim\mathcal{U}(0,1)\) drawn from the uniform distribution, and noise \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) following a normal distribution, the diffusion loss can be expressed as: \[\mathcal{L}_{\text{Diff}}\left(\phi,\mathbf{z},y,\epsilon,t\right)=w(t)||\epsilon_{\phi}\left(\mathbf{z_{t}},y,t\right)-\epsilon||_{2}^{2},\] where \(w(t)\) is a weighting function, and \(\mathbf{z_{t}}\) refers to the noisy version of \(\mathbf{z}\) obtained via a stochastic noising forward process given by \(\mathbf{z_{t}}=\sqrt{\alpha_{t}}\mathbf{z}+\sqrt{1-\alpha_{t}}\epsilon\), with \(\alpha_{t}\) being the noise scheduler. For simplicity, we omit the weighting factor in the remainder of this section. Text-conditioned diffusion models use classifier-free guidance (CFG) [14], which consists of two components: one that is conditioned on the text input, and another that is unconditioned.
During inference, the two components are used to denoise the image via \[\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)=\left(1+\omega\right)\epsilon_{\phi}\left(\mathbf{z_{t}},y,t\right)-\omega\epsilon_{\phi}\left(\mathbf{z_{t}},t\right),\] where the components are balanced using a guidance parameter \(\omega\). Given an arbitrary differentiable parametric function that renders images, \(g_{\theta}\), the gradient of the diffusion loss function with respect to the parameters \(\theta\) is given by: \[\nabla_{\theta}\mathcal{L}_{\text{Diff}}=\left(\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)-\epsilon\right)\frac{\partial\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)}{\partial\mathbf{z_{t}}}\frac{\partial\mathbf{z_{t}}}{\partial\theta}.\] It has been demonstrated in [27] that omitting the U-Net Jacobian term (the middle term) leads to an effective gradient for optimizing a parametric generator with diffusion models: \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{z},y,\epsilon,t)=\left(\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)-\epsilon\right)\frac{\partial\mathbf{z_{t}}}{\partial\theta}. \tag{1}\] Incrementally updating the parameters of the generator in the direction of the gradient produces images that exhibit a higher degree of fidelity to the prompt. However, SDS suffers from a tendency to converge towards specific modes, resulting in non-diverse and blurry outputs that only highlight elements mentioned in the prompt. Figure 2 showcases a comparison between sampling Stable Diffusion with SDS vs. sampling it with a standard reverse process of the diffusion model, demonstrating this issue with 2D image samples. Figure 3: **Bias in SDS optimization.**_Left column: an image generated by the prompt “Panda snowboarding”. Top rows show the difference between SDS and DDS optimization when changing the animal in the prompt (“Panda” to “Squirrel”). Bottom row shows SDS optimization applied using the original prompt. Even in this case, the image becomes blurry._ The original purpose of SDS was to generate samples via optimization from a text-conditioned diffusion model. It is noteworthy that \(g_{\theta}\) can be an arbitrary parametric function that renders images. In the following sections we demonstrate our results with \(g_{\theta}=\theta\), namely, a trivial generator that renders a single image, where the optimization variables are the image pixels themselves; note, however, that the derivation is general. ### Editing with SDS The original purpose of SDS was to generate samples from a distribution conditioned solely on a text prompt. However, we now aim to extend SDS to the task of editing, which involves conditioning the sampling process on both an image and text. Our objective is to synthesize an output image \(\mathbf{z}\) that incorporates the structure and details of an input source image \(\mathbf{\hat{z}}\), while conforming to the content specified in a target prompt \(y\). This is a standard text-driven image-to-image translation problem, where modifications may be applied locally or globally [12, 5]. One potential approach to utilizing SDS is to initialize the optimization variable with the source image \(\mathbf{z}_{0}=\mathbf{\hat{z}}\) and apply SDS while conditioning on \(y\). However, we have observed that, similarly to the non-image-conditioned SDS, this approach leads to blurred outputs and a loss of details, particularly those that are unrelated to the input prompt.
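To make this editing procedure concrete, here is a minimal PyTorch-style sketch of a single stochastic SDS update, combining the forward noising, the CFG prediction, and Eq. (1). The `unet` interface, timestep range, and guidance scale are illustrative assumptions rather than the paper's exact setup.

```python
import torch

@torch.no_grad()
def sds_grad(z, y_emb, null_emb, unet, alphas, omega=7.5):
    """One stochastic SDS gradient estimate for the image/latent z (Eq. 1).

    unet(z_t, t, emb) is assumed to return the predicted noise; alphas is
    the cumulative noise schedule indexed by t. All names are hypothetical.
    """
    t = torch.randint(20, 980, (1,), device=z.device)   # t ~ U, edges clipped
    eps = torch.randn_like(z)
    a_t = alphas[t].view(-1, 1, 1, 1)
    z_t = a_t.sqrt() * z + (1 - a_t).sqrt() * eps       # forward noising
    e_cond = unet(z_t, t, y_emb)
    e_null = unet(z_t, t, null_emb)
    e_cfg = (1 + omega) * e_cond - omega * e_null       # classifier-free guidance
    return e_cfg - eps   # the U-Net Jacobian is omitted, as in Eq. (1)

# A naive SDS editing loop then initializes z from the source image and
# repeatedly applies: z = z - lr * sds_grad(z, y_emb, null_emb, unet, alphas)
```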
Figure 3 (top row) demonstrates such an example, where the panda transforms into a squirrel at the cost of blurring out other details. Based on our observations, we define a decomposition of the gradient \(\nabla_{\theta}\mathcal{L}_{\text{SDS}}\) into two components: a desired component \(\delta_{\text{text}}\) that directs the image to the closest image that matches the text, and an undesired component \(\delta_{\text{bias}}\) that interferes with the process and causes the image to become smooth and blurry in some parts. Formally: \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{z},y,\epsilon,t)\coloneqq\delta_{\text{text}}+\delta_{\text{bias}}, \tag{2}\] where both \(\delta_{\text{text}}\) and \(\delta_{\text{bias}}\) are random variables that depend on \(\mathbf{z}\), \(y\), \(\epsilon\) and \(t\). Under this definition, to address this issue and enable high-quality, distilled image editing with SDS, we have to isolate and extract the text-aligned part \(\delta_{\text{text}}\) and follow it during the optimization while avoiding the \(\delta_{\text{bias}}\) direction that may take the image to unintended places. ### Denoising the Editing Direction We next aim to find the noisy direction of the SDS score when applied for editing purposes, and remove it during the optimization process. The gist of our method is that since we already have a source image and its text description, they can be used for estimating the noisy direction \(\delta_{\text{bias}}\) that biases the edit towards undesired directions. Then, if we want to alter only a portion of the image using a new text description, we can use our reference estimation and get a _cleaner_ gradient direction to update the image. In practice, we use a reference branch that calculates the SDS score of the given image \(\mathbf{\hat{z}}\) with a corresponding, matched, text prompt \(\hat{y}\), and subtract it from the main SDS optimization branch to yield a distilled edit. Formally, given matched and unmatched image-text embedding pairs \(\mathbf{\hat{z}}\), \(\hat{y}\), \(\mathbf{z}\), \(y\), respectively, the delta denoising loss is given by: \[\mathcal{L}_{\text{DD}}\left(\phi,\mathbf{z},y,\mathbf{\hat{z}},\hat{y},\epsilon,t\right)=||\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)-\epsilon_{\phi}^{\omega}\left(\mathbf{\hat{z}_{t}},\hat{y},t\right)||_{2}^{2},\] where \(\mathbf{z_{t}}\) and \(\mathbf{\hat{z}_{t}}\) share the same sampled noise \(\epsilon\) and timestep \(t\). Then, the gradient over \(g_{\theta}=\mathbf{z}\) is given by \[\nabla_{\theta}\mathcal{L}_{\text{DD}}=\left(\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)-\epsilon_{\phi}^{\omega}\left(\mathbf{\hat{z}_{t}},\hat{y},t\right)\right)\frac{\partial\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)}{\partial\mathbf{z_{t}}}\frac{\partial\mathbf{z}}{\partial\theta}.\] Again, we omit the differentiation through the diffusion model to obtain the Delta Denoising Score: \[\nabla_{\theta}\mathcal{L}_{\text{DDS}}=\left(\epsilon_{\phi}^{\omega}\left(\mathbf{z_{t}},y,t\right)-\epsilon_{\phi}^{\omega}\left(\mathbf{\hat{z}_{t}},\hat{y},t\right)\right)\frac{\partial\mathbf{z}}{\partial\theta}. \tag{3}\] We state that DDS pushes the optimized image in the direction of the target prompt without the interference of the noise component, namely, \(\nabla_{\theta}\mathcal{L}_{\text{DDS}}\approx\delta_{\text{text}}\).
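In code, the DDS gradient of (3) changes the SDS sketch above in only one place: a second, reference branch is queried with the same noise \(\epsilon\) and timestep \(t\), and its prediction is subtracted, so the sampled \(\epsilon\) cancels. Again, this is a minimal sketch with hypothetical names.

```python
import torch

@torch.no_grad()
def dds_grad(z, z_ref, y_emb, y_ref_emb, null_emb, unet, alphas, omega=7.5):
    """One stochastic DDS gradient estimate (Eq. 3).

    z is the optimized image/latent and z_ref the fixed reference; both
    branches share the same eps and t, so the eps terms cancel out.
    """
    t = torch.randint(20, 980, (1,), device=z.device)
    eps = torch.randn_like(z)
    a_t = alphas[t].view(-1, 1, 1, 1)

    def cfg_eps(x, emb):
        x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps   # same eps and t for both
        return (1 + omega) * unet(x_t, t, emb) - omega * unet(x_t, t, null_emb)

    return cfg_eps(z, y_emb) - cfg_eps(z_ref, y_ref_emb)
```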
By adding and subtracting \(\epsilon\) from the term in (3), we can represent DDS as a difference between two SDS scores: \[\nabla_{\theta}\mathcal{L}_{\text{DDS}} = \nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{z},y)-\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{\hat{z}},\hat{y}). \tag{4}\] Figure 4: **DDS gradients.**_Top: Visualization of 4 steps in the DDS optimization process, where an image of a “Flamingo rollerskating” (left) gradually transforms into a “Stork rollerskating” (right). Bottom: By subtracting the SDS gradients of the reference image and the source prompt (left) from the SDS gradients of the edited image and the target prompt (middle), we obtain cleaner DDS gradients (right)._ We first claim that the score provided by the reference branch is equivalent to the noisy direction. This is because, ideally, a matched image-text pair should have a low average SDS gradient across various timesteps and noise instances. Therefore, any non-zero gradient can be attributed to the noisy direction; thus, \[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\mathbf{\hat{z}},\hat{y})=\hat{\delta}_{\text{bias}}. \tag{5}\] Evidently, the score of a matched text-to-image pair is non-zero. As can be seen in Figure 3 (bottom row), even when the optimization process starts with an image that was generated by the text, there are gradients that pull the image towards the undesired modes. For further empirical results on the estimation of \(\delta_{\text{bias}}\), please refer to Section 5. We next claim that the noisy component \(\delta_{\text{bias}}\) of closely related images (e.g., images with similar structure that were created with close prompts) is similar. This is demonstrated in the DDS evaluation experiment in Section 5 and in Figure 8, which shows that the cosine similarity between the directions of the matched pair is high. This means that \(\delta_{\text{bias}}\approx\hat{\delta}_{\text{bias}}\). By combining the conclusions drawn from the above-mentioned experiments, we get \(\nabla_{\theta}\mathcal{L}_{\text{DDS}}\approx\delta_{\text{text}}\), which indicates that our DDS can be considered a distilled direction that concentrates on editing the relevant portion of the image, such that it matches the target text. Figure 4 visualizes the key idea behind DDS. The figure shows the two noisy SDS scores, of the matched and unmatched pair, along with their difference, which comprises DDS. Notably, subtracting the two noisy scores produces a clear and concise score that concentrates solely on the targeted modification in the image. **Effect of CFG on DDS.** As previously noted, the Classifier-Free Guidance (CFG) parameter \(\omega\) regulates the relative influence of the text-conditioned and unconditional components of the denoising objective. Interestingly, despite the subtraction of the two distinct branches in DDS, \(\omega\) still has a discernible impact on the resulting image output. Our experiments show that small values of \(\omega\) yield slower convergence rates and a correspondingly diminished fidelity to the text prompt, while larger \(\omega\) values result in an attenuated fidelity to the input image. This observed phenomenon is visualized in Figure 5 and empirically evaluated in Section 5.
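Putting the pieces together, a direct pixel-space (or latent-space) edit is a small optimization loop over the image itself, with \(\omega\) exposed as the fidelity knob discussed above. This sketch reuses `dds_grad` from above; the step count and learning rate are placeholders, not the paper's values.

```python
import torch

def dds_edit(z_ref, y_emb, y_ref_emb, null_emb, unet, alphas,
             steps=200, lr=0.1, omega=7.5):
    """Direct DDS optimization with a trivial generator (g_theta = theta).

    Starts from the reference image and repeatedly steps along the
    stochastic DDS direction.
    """
    z = z_ref.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Inject the custom (Jacobian-free) gradient directly.
        z.grad = dds_grad(z.detach(), z_ref, y_emb, y_ref_emb,
                          null_emb, unet, alphas, omega)
        opt.step()
    return z.detach()
```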
Firstly, it necessitates captions for both the input and the desired edited image. Secondly, the results obtained on real images are inferior to those obtained from synthetic images. Lastly, the time required for inference is long (\(\sim\)20 seconds per edit). To overcome these limitations, we introduce a novel unsupervised training pipeline for text-driven image-to-image translation based on our proposed DDS. Unsupervised training with DDSUsing DDS, we introduce an unsupervised training framework for a neural network that learns to translate images based on a caption that describes a _known source distribution_ and another caption that describes an _unknown target distribution_. Given a dataset of source images \(\{\mathbf{\hat{z}}_{i}\}\), source caption \(\hat{y}\) and a target caption \(y\), our goal is to learn a mapping \(\mathbf{z}=g_{\theta}(\mathbf{\hat{z}})\) such that \(\mathbf{z}\) has high fidelity to both: the input image \(\mathbf{\hat{z}}\) and to Figure 5: **DDS optimization results using different values of the classifier-free guidance scale \(\omega\).** On one hand, using small values of \(\omega\) leads to slow convergence and low fidelity to the text prompt. On the other hand, using large values of \(\omega\) results in low fidelity to the input image._ Figure 6: **Unsupervised training for multi task image-to-image translation network.** _Given an input image \(\mathbf{\hat{z}}\) (left) and a sampled task embedding (top), our network is trained using the Delta Denoising Score (DDS) and corresponding text embeddings (bottom) that describe the input image and the desired edited image result \(\mathbf{z}\). During inference, our network can then translate arbitrary real images based on the specified task, within a single feedforward pass._ the target caption \(y\). As illustrated in Figure 6, on the bottom, we utilize the DDS formulation in (4) to optimize our network. Naturally, we can extend the network capabilities to be task conditioned. Under those settings, the network learns a finite set of \(M\) image-translation tasks that are defined by multiple target captions \(\{y_{i}\}_{j=1}^{M}\) and corresponding learned task embeddings \(\{k_{j}\}_{j=1}^{M}\), see Figure 6. At each optimization iteration, we sample a source image \(\mathbf{\hat{z}}_{i}\) with its source caption \(\hat{y}_{i}\), a task embedding \(k_{j}\) with the corresponding target caption \(y_{j}\). Then the network is optimized by the DDS 4 where \(\mathbf{z}=g_{\theta}(\mathbf{\hat{z}}_{i}|k_{j})\). To maintain the fidelity to the input image, we add a weighted identity regularization term: \[\mathcal{L}_{\text{ID}}=\lambda_{id}(t)||g_{\theta}(\mathbf{\hat{z}}_{i}|k_{j} )-\mathbf{\hat{z}}_{i}||_{2}^{2},\] where the weight \(\lambda_{\text{ID}}(t)\) is a function of the training iteration \(t\), such that at the beginning of the training, we inject prior knowledge on the desired output, and gradually reduce it during training with a cosine decay. DDS with CFG warmupDuring the training of the aforementioned network, we experienced a familiar _mode collapse_ phenomena associated with the training of generative adversarial network (GAN) [11], where the network optimizations led to a local minima. In our case, the network has learned to produce a fixed object, in a fixed location within the input image, as demonstrated in Figure 7, where the same type of lion appears in the same pose and locations in all the outputs without respecting the input image. 
The reason for the mode collapse in our case can be explained through an analogy to GANs. The discriminator output score that discriminates between real and fake images can be replaced by the delta denoising score. At a local minimum point, our network succeeded in _fooling_ the DDS such that the output has high fidelity to \(y\) at the fixed region and high fidelity to \(\mathbf{\hat{z}}\) elsewhere. To address this issue, we have found that implementing a warmup scheduler for the classifier-free guidance parameter \(\omega\), utilized in the estimation of the DDS gradient, can be effective. As we have demonstrated earlier, adopting a low value for \(\omega\) during zero-shot optimization is associated with a notably slow convergence rate. Conversely, high values push the image aggressively towards \(y\) and lead the training to mode collapse. By gradually increasing the guidance scale, the network gradually learns to make larger changes to the input image with respect to the translation task and avoids local minima. ## 5 Evaluations and Experiments In this section we evaluate our observations regarding the SDS and DDS scores, compare our approach to other state-of-the-art zero-shot editing methods, and conduct an ablation study to show the effectiveness of different choices in our system. **SDS evaluation.** We measure the expected SDS norm as a function of the timestep \(t\) for matched and unmatched image-text pairs. The matched pairs are obtained by generating images using Stable Diffusion [31] with a subset of \(100\) captions from the COCO validation dataset [6]. Then, for each image \(\mathbf{z}\), caption \(y\) and timestep \(t\), we estimate the value \(\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\mathbf{I})}||\nabla_{\mathbf{z}}\mathcal{L}_{\text{SDS}}(\mathbf{z},y)||_{2}\) by averaging the result of \(200\) measurements, and report the average value of the \(100\) estimations. To provide a reference, we also perform the experiment on \(100\) unmatched image-text pairs obtained by permuting the captions of the matched set. Figure 8: **Expected SDS gradients.**_Left: Expected SDS norm \(||\nabla\mathcal{L}_{\text{SDS}}(z,y)||_{2}\) across different timesteps for matched (blue curve) and unmatched (orange curve) synthetic image-text pairs. Right: Cosine similarity between the SDS directions in (4) on matched (blue) and unmatched (orange) images from the InstructPix2Pix dataset [5]._ Figure 7: **Ablation study.**_We train a cat-to-lion image translation network under various settings. The first and second columns show the input and output results of our full method, respectively. The third column shows the results when training without CFG warmup, and the last column shows the results when training with SDS instead of DDS._ The results are shown in Figure 8 (left). As can be seen, SDS exhibits non-negligible high gradients for matched pairs. In addition, the gap between matched and unmatched pairs supports our observation in Section 3 that there is an inherent noise direction \(\delta_{\text{bias}}\) in the SDS gradient. **DDS evaluation.** Next, we evaluate our estimation that for a matched pair of similar images with their corresponding text, the SDS noise directions \(\delta_{\text{bias}}\) and \(\hat{\delta}_{\text{bias}}\) are correlated. For this experiment we use a subset of 10000 synthetic image pairs \(\mathbf{z}\) and \(\mathbf{\hat{z}}\) with their corresponding captions \(y\) and \(\hat{y}\) from the InstructPix2Pix [5] dataset.
For each timestep, we estimate the cosine similarity between \(\nabla_{\mathbf{z}}\mathcal{L}_{\text{SDS}}(\mathbf{z},y)\) and \(\nabla_{\hat{\mathbf{z}}}\mathcal{L}_{\text{SDS}}(\hat{\mathbf{z}},\hat{y})\) and report the average result across all pairs. Here again, we applied the same experiment to unmatched pairs for reference. Note that the caption for each SDS estimation remained aligned to its image. The results are summarized in Figure 8 (right). As can be seen, the matched pairs are strongly correlated, which supports our assumption that an estimation of \(\delta_{\text{bias}}\) from a reference image and text can be used to eliminate the same term from a similar pair. **Comparison to zero-shot editing methods.** To evaluate our editing capability using a direct DDS optimization over the pixel space of a synthetically generated image, we use a randomly selected subset of \(1000\) pairs of source and target prompts from the dataset of InstructPix2Pix [5]. The dataset already includes the paired image results obtained by Prompt-to-Prompt (P2P) [12], from which we took only the source images. For each editing result we measure the text-image correspondence using the CLIP score [28]. In addition, we evaluate the similarity between the original and the edited images using the LPIPS perceptual distance [46]. We compare our method to additional zero-shot methods: SDEdit [21] and Plug-and-Play (PnP) [41]. It can be seen in Figure 10 that, compared to other methods, our approach demonstrates higher fidelity to the text prompt and to the source image on average. The quantitative results are summarized in Figure 9, where we show the metrics of our method for different values of the classifier-free guidance scale. Notice that, as observed in Figure 5, the improvement in fidelity to the text obtained by using a large value of CFG is negligible compared to the deterioration in fidelity to the source image. **Image-to-image translation training.** We train different multi-task networks as described in Section 4. For each training instance, we generate a synthetic dataset of \(5000\) images using the Stable Diffusion model conditioned on manually written captions (5-20 captions for each dataset). Each training starts from a pre-trained Stable Diffusion model, modified as follows: The latent noise inputs are replaced with latents of images from our synthetic dataset. The text embedding condition is replaced with our learned task embeddings, initialized with a text embedding that describes the task. For example, for the task of adding snow to an image, we use the phrase "snowing". Finally, the timestep of the diffusion process is no longer used, since our model inference process consists of a single feed-forward pass. Therefore, we re-use the timestep condition as an additional per-task _learned_ embedding, which is initialized with a positional embedding of \(t=0.5\). While the text condition is injected via cross-attention, the time condition is injected via adaptive group normalization (AdaGN). Additional implementation details are provided in the supplementary material. \begin{table} \begin{tabular}{l c c} \hline \hline & CLIP score \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline PnP & \(0.221\pm 0.036\) & \(0.31\pm 0.075\) \\ InstructPix2Pix & \(0.2190\pm 0.037\) & \(0.322\pm 0.215\) \\ DDS (ours) & \(\mathbf{0.225\pm 0.031}\) & \(\mathbf{0.104\pm 0.061}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison for the multi-task image-to-image translation network. We measure text-image correspondence using CLIP [28]. In addition, we evaluate the similarity between the original and the edited images using the LPIPS [46] perceptual distance. Figure 10: **Zero shot image editing qualitative comparison.**_We compare our approach to SDEdit [21], Plug-and-Play (PnP) [41] and prompt-to-prompt [12]. Our method showcases its ability to better apply both structural and color changes described in the target text prompt while simultaneously preserving high fidelity to the input image._ Figure 9: **Zero shot image editing quantitative comparison.**_Using our DDS optimization technique, we tested various CFG values on a dataset of 1000 images and prompts from the InstructPix2Pix training set [5], and compare our approach to SDEdit [21] with different numbers of forward diffusion (noise addition) steps and Plug-and-Play (PnP) [41]. Our outputs have higher fidelity to the source images (low LPIPS scores) while also achieving high fidelity to the edits described in the text prompts (high CLIP scores)._ **Image-to-image translation comparison.** We evaluate a _Cat-to-Other_ network trained to translate images of cats to images of four different animals: a dog, a lion, a bear, and a squirrel. We tested our network using a collection of \(500\) cat images from the ILSVRC [33] and COCO [6] validation sets; overall, we tested the results of \(2000\) image translations. We use the same LPIPS and CLIP scores to estimate fidelity to the source image and to the target text that describes the target distribution, for example, "A photo of a lion". We compare our method to PnP [41] and InstructPix2Pix [5], which also utilize the generative prior of a pre-trained diffusion model for the task of image-to-image translation. Unlike our method, InstructPix2Pix fine-tunes a diffusion model using synthetic pairs of images and is therefore sensitive to the quality of the pairing method. The results are summarized in Table 1 and Figure 11. As can be seen, our method achieves both better fidelity to the input image and to the desired target domain. Additionally, our method operates via a single feed-forward pass during inference, making it \(\times 50\) faster than the other diffusion-based methods, which require a full iterative diffusion process in each inference sampling. A qualitative comparison is shown in Figure 11. As can be seen, our method better preserves the structure of the cat when translating to other animals. Moreover, our distilled training results in a more robust network that can better distinguish between regions in the image that have to be changed and areas to preserve. **Ablation Study.** We evaluate key components of our image-to-image translation network on a single task of cat-to-lion image translation. First, we show our results without the CFG scaling warmup. As shown in Figure 7 (third column), it results in mode collapse, where roughly the same lion appears in the same location regardless of the cat in the input. In addition, we train a network with the _vanilla_ SDS term instead of our DDS, while the other components, the \(\mathcal{L}_{\text{ID}}\) term and the CFG warmup, remain untouched and prevent the mode collapse.
As can be seen (right column in Figure 7), the quality of the translation to a lion is worse than with our full settings. Moreover, the SDS training struggles to preserve high-frequency details in the input image; for example, see the patterns of the purple wool hat in the first row. Figure 11: **Image-to-Image translation comparison.**_Our multi-task network was trained to translate cats to different animals (lion, dog, squirrel) using DDS. It was trained on synthetic cat photos and evaluated on a subset of real images from the COCO and ImageNet datasets. Our results (second row) better preserve the structure of the cat in the input image as well as its background._ Figure 12: **Limitations.**_Biases of the diffusion model or limitations in language understanding affect the DDS optimization. Top: we would like to change the color of the curtains in the bedroom, but the color of the pillows is also changed. Bottom: replacing the dog’s reflection with a shadow also causes changes in the lighting, weather and background details._ ## 6 Conclusions, Limitations and Future work We have presented Delta Denoising Score, a new diffusion scoring technique that allows optimizing a given image as a means to edit it with respect to a text prompt. Delta Denoising Score uses the SDS score applied to the input image to calculate cleaner gradients during the optimization, which leads to a distilled edit. We have also shown an image-to-image translation model trained with our new score. The model is trained with no supervision, requires no pairs of images, and thus can be trained on real images. Our Delta Denoising Score works well in distilling text-driven image-to-image translations. However, there are cases where the results are imperfect due to the inherent limitations of the text-to-image model, in particular its language model. A noticeable problem is the binding of adjectives to nouns. A typical example is demonstrated in Figure 12 (top), where the orange color is not well bound to the curtains, and thus leaks to the entire bedroom. Another example is displayed in the bottom row, where the dog's reflection is replaced with a shadow, causing unwanted changes in the lighting, weather and background details. We also acknowledge that the multi-task model can be better trained, and it may be further improved by combining multiple-experts training [3], which uses multiple networks, or by utilizing a subset of paired data and training our network under semi-supervised settings. The scope of Delta Denoising Score is wide, and its generalization across various editing tasks [5] should be explored in the future. Furthermore, we believe that it can be extended to other modalities, such as text-driven 3D shape editing, video editing and motion editing [22, 16, 40]. The objective of this work is to extract efficient and clean gradients that can facilitate the optimization of an image towards a distilled edit. This, we believe, is an important step towards enhancing our understanding of how to effectively extract and utilize the rich knowledge that is concealed within large-scale generative models. ## 7 Acknowledgement We thank Ben Poole, Elad Richardson, Ron Mokady and Jason Baldridge for their valuable inputs that helped improve this work.
2308.16248
Augmented Reality in Higher Education: a Case Study in Medical Education
During lockdown, we piloted a variety of augmented reality (AR) experiences in collaboration with subject matter experts from different fields aiming at creating remote teaching and training experiences. In this paper, we present a case study on how AR can be used as a teaching aid for medical education with pertinent focus on remote and social distanced learning. We describe the process of creating an AR experience that can enhance the knowledge and understanding of anatomy for medical students. The Anatomy Experience is an AR enhanced learning experience developed in collaboration with the Medical School of the University of Edinburgh aiming to assist medical students understand the complex geometry of different parts of the human body. After conducting a focus group study with medical students, trainees, and trainers, we received very positive feedback on the Anatomy Experience and its effects on understanding anatomy, enriching the learning process, and using it as a tool for anatomy teaching.
Danai Korre, Andrew Sherlock
2023-08-30T18:11:58Z
http://arxiv.org/abs/2308.16248v1
# Augmented reality in higher education: a case study in medical education ###### Abstract During lockdown, we piloted a variety of augmented reality (AR) experiences in collaboration with subject matter experts from different fields aiming at creating remote teaching and training experiences. In this paper, we present a case study on how AR can be used as a teaching aid for medical education with pertinent focus on remote and social distanced learning. We describe the process of creating an AR experience that can enhance the knowledge and understanding of anatomy for medical students. The Anatomy Experience is an AR enhanced learning experience developed in collaboration with the Medical School of the University of Edinburgh aiming to assist medical students understand the complex geometry of different parts of the human body. After conducting a focus group study with medical students, trainees, and trainers, we received very positive feedback on the Anatomy Experience and its effects on understanding anatomy, enriching the learning process, and using it as a tool for anatomy teaching. Keywords: Higher education, educational technology, augmented reality (AR), mixed reality (MR), medical education ## 1 Introduction The resurgence of extended reality applications in recent years has created a renewed research interest in educational augmented and mixed reality (AR/MR). It has been shown that augmented reality (AR) can enhance teaching more than a textbook [1] due to its interactive nature, enhanced visualisation, and immersion. The advancing technology of AR enables interactions between real and virtual objects, thus allowing for a better understanding of complex geometries [2], [3], [4]. By using AR, students can experience and interact with objects or procedures in detail by simply using a smartphone, tablet, or AR glasses. This makes AR highly effective when teaching geometric and spatial concepts, often found in science, technology, engineering, and math. An enhanced learning experience based on AR has been developed in collaboration with the Medical School of the University of Edinburgh (UoE) and is referred to as the Anatomy Experience. The aim of this experience is to help teach students the orientation and position of X-rays for complex parts of human anatomy such as the pelvis. The physical three-dimensional (3D) model of the pelvis can be examined, and with the help of AR the user can control the axial, coronal and sagittal planes that are used to transect the body and see what the corresponding X-rays look like. ### Requirements and scope of the Anatomy Experience Several meetings have taken place between EdAR and the University of Edinburgh's Medical School to discuss the specifications and possible applications of AR in medical higher education. Human anatomy was considered a promising application area because of its complex geometry, combined with the ability of AR-based learning to enhance the comprehension of complex information and simplify its delivery [5]. Following the EdAR process of developing experiences, we began with a consultation with a subject matter expert (SME) to initiate collaboration, set a vision and build a time plan. We then collected the requirements, analysed them, and planned the next steps.
By using a custom structured learning objectives questionnaire for collecting valuable data on subject matter expertise, we defined the areas that the experience would cover, important features, insights, requirements, learning objectives, aims, constraints and technical needs. By using these data in collaboration with the SME, the instructional designer generated an instructional needs analysis and structured the content outline. ### Training Objective **Skills to be gained:** * Understand the orientation and position of X-rays for complex parts of human anatomy such as the pelvis. * Identify where the computed tomography (CT) image corresponds to the pelvic anatomy. **Concepts to be covered:** * Anatomy of the human pelvis * CT scan imaging **Outcomes:** By the end of this Experience students should be able to recognise the orientation and position of X-rays for complex parts of human anatomy and be able to locate where in the actual anatomy the imaging corresponds. ## 2 Design and development After collecting the necessary data, the development and interaction design team generated the functional design of the experience. We also made user interface decisions and created a prototype. At the development stage, the SME feedback was integrated, and development commenced. The necessary content was identified and authored while the definition of the experience was developed in JavaScript Object Notation (JSON). The 3D printed parts and quick response (QR) cards were designed and generated at this point. A demo of the experience was presented to the SME for feedback. The physical pelvis model comes in two sizes, a large and a mini, both of which can be used by the student (Figure 1). The large model is intended for in-person use in the classroom, and the small model can be shipped to the student so they can practice in their own time and environment. There is also a version without the physical model where the pelvis is displayed digitally. Figure 1: On the left is the large 3D printed model of the pelvis. In the middle is the small 3D printed model of the pelvis. The image on the right is a screenshot of the AR experience alpha prototype as seen on a smartphone screen. ## 3 Testing and integration COVID-19 lockdown restrictions limited our access to sites and students, which affected the development of the Experience, mainly the testing part of our development life cycle. We were however able to run a session with students during the UoE Clinical Skills Workshop where we piloted our Anatomy Experience (Figure 2). The cohort consisted of 19 undergraduate medical students, three postgraduate trainees and two junior trainers (non-consultant) who demoed the Anatomy Experience using our 3D printed models (Figure 1). The survey was conducted in English and included questions about their impressions of the application for educational purposes and their overall experience. The purpose of this survey was to gather some preliminary data and first impressions about the Anatomy Experience. The research results are intended to be used as a basis to update the existing application. Closed questions were used, followed by a range of pre-coded responses, apart from the comments section. Based on the responses, most participants strongly agreed that the application could enrich the learning process, improve their understanding of pelvic anatomy, and assist in the teaching of pelvic anatomy.
Overall, the reception of the Anatomy Experience by the medical students, trainees and trainers has been very positive, with a plethora of suggestions for expanding to other parts of the anatomy such as the knee, shoulder, and tissues. ## 4 Challenges, summary, and conclusions Screen real estate is one of the challenges and, therefore, we launched a technographic survey collecting data from current medical students in order to design responsive user interfaces and exploit the screen size better. Another challenge with the proposed experience is that the 3D printed pelvises must be shipped to each student. Educational AR has grown significantly in the last few years. The application we built combines the benefits of AR with the physical object of the pelvis, thus creating a mixed reality experience. Having a physical object was deemed to be particularly useful when teaching human anatomy, both in the focus group and in a workshop we ran demonstrating the experience [6]. Creating small and cost-effective 3D prints of the pelvis that can be sent out to each student makes the Anatomy Experience ideal for remote teaching; this goes beyond the traditional media commonly used such as textbooks, video, and video calls. Figure 2: Medical students and EdAR members trying out the Anatomy Experience ## Acknowledgements We would like to thank Andrew Hall, clinical teacher and PhD research fellow, for his assistance during testing. EdAR is a spin-out company that resulted from a European Institute of Innovation and Technology (EIT) Digital funded project.
2309.00703
Quark stars with a unified interacting equation of state in regularized 4D Einstein-Gauss-Bonnet gravity
Since the derivation of a well-defined $D\rightarrow 4$ limit for 4D Einstein Gauss-Bonnet (4DEGB) gravity coupled to a scalar field, there has been interest in testing it as an alternative to Einstein's general theory of relativity. Using the Tolman-Oppenheimer-Volkoff (TOV) equations modified for 4DEGB gravity, we model the stellar structure of quark stars using a novel interacting quark matter equation of state. We find that increasing the Gauss-Bonnet coupling constant $\alpha$ or the interaction parameter $\lambda$ both tend to increase the mass-radius profiles of quark stars described by this theory, allowing a given central pressure to support larger quark stars in general. These results logically extend to cases where $\lambda < 0$, in which increasing the magnitude of the interaction effects instead diminishes masses and radii. We also analytically identify a critical central pressure in both regimes, below which no quark star solutions exist due to the pressure function having no roots. Most interestingly, we find that quark stars can exist below the general relativistic Buchdahl bound and Schwarzschild radius $R=2M$, due to the lack of a mass gap between black holes and compact stars in 4DEGB. Even for small $\alpha$ well within current observational constraints, we find that quark star solutions in this theory can describe Extreme Compact Objects (ECOs), objects whose radii are smaller than what is allowed by general relativity.
Michael Gammon, Sarah Rourke, Robert B. Mann
2023-09-01T18:54:28Z
http://arxiv.org/abs/2309.00703v4
# Unified Interacting Quark Stars in Regularized 4D Einstein Gauss-Bonnet Gravity ###### Abstract Since the derivation of a well-defined \(D\to 4\) limit for 4D Einstein Gauss-Bonnet (4DEGB) gravity [1] coupled to a scalar field, there has been interest in testing it as an alternative to Einstein's general theory of relativity. Using the Tolman-Oppenheimer-Volkoff (TOV) equations modified for 4DEGB gravity, we model the stellar structure of quark stars using a novel interacting quark matter equation of state [2]. We find that increasing the Gauss-Bonnet coupling constant \(\alpha\) or the interaction parameter \(\lambda\) both tend to increase the mass-radius profiles of quark stars described by this theory, allowing a given central pressure to support larger quark stars in general. These results logically extend to cases where \(\lambda<0\), in which increasing the magnitude of the interaction effects instead diminishes masses and radii. We also analytically identify a critical central pressure in both regimes, below which no quark star solutions exist due to the pressure function having no roots. Most interestingly, we find that quark stars can exist below the general relativistic Buchdahl bound and Schwarzschild radius \(R=2M\), due to the lack of a mass gap between black holes and compact stars in 4DEGB. Even for small \(\alpha\) well within current observational constraints, we find that quark star solutions in this theory can describe Extreme Compact Objects (ECOs), objects whose radii are smaller than what is allowed by general relativity. ## I Introduction Modified theories of gravity continue to attract attention despite the empirical success of general relativity (GR). Such theories are motivated by a variety of problems, including addressing issues in modern cosmology, quantizing gravity, eliminating singularities, and, perhaps most importantly, finding viable phenomenological competitors against which GR can be tested in the most stringent manner possible. Higher curvature theories (or HCTs) are amongst the most popular modifications. An HCT modifies the assumed linear relationship in GR between the curvature and the stress-energy, replacing the former with an arbitrary sum of powers of the curvature tensor (appropriately contracted to two indices). Such modifications could conceivably improve the empirical success of GR, while also making new testable predictions. Many quantum gravity proposals [3] and cosmological puzzles (such as dark energy, dark matter, and early-time inflation [4]) suggest that HCTs could play an important role in physics. Lovelock theories [5] have long been at the forefront of this search, since they possess the distinctive feature of having 2nd order differential equations of motion. The physical significance of such theories has been unclear, however, since their higher order terms yield non-trivial contributions to the equations of motion only in more than four spacetime dimensions (\(D>4\)). Recently this restriction was circumvented in the quadratic case, or what is better known as "Einstein-Gauss-Bonnet" gravity.
The Gauss-Bonnet (GB) contribution to the gravitational action is \[S_{D}^{GB}=\alpha\int d^{D}x\sqrt{-g}\left[R^{\mu\nu\rho\tau}R_{\mu\nu\rho\tau}-4R^{\mu\nu}R_{\mu\nu}+R^{2}\right]\equiv\alpha\int d^{D}x\sqrt{-g}{\cal G} \tag{1}\] (where \(R_{\mu\nu\rho\tau}\) is the Riemann curvature tensor), which becomes the integral of a total derivative in \(D=4\), and thus cannot contribute to a system's gravitational dynamics in less than five dimensions. For this reason it is often referred to as a "topological term" having no relevance to physical problems. Indeed, the Lovelock theorem [5] ensures that a \(D=4\) dimensional metric theory of gravity must incorporate additional fields in order to have second order equations of motion and diffeomorphism invariance. Recently it was noted [6] that several exact solutions to \(D\)-dimensional Einstein-Gauss Bonnet gravity have a sensible limit under the rescaling \[\lim_{D\to 4}(D-4)\alpha\rightarrow\alpha, \tag{2}\] of the Gauss-Bonnet coupling constant. Using this approach a variety of 4-dimensional metrics can be obtained, including cosmological [6; 7; 8], spherical black hole [6; 9; 10; 11; 12], collapsing [13], star-like [14; 15], and radiating [16] metrics, each carrying imprints of the quadratic curvature effects of their \(D>4\) counterparts. However, a number of objections to this approach were subsequently raised [17; 18; 19], based on the fact that the existence of a limiting solution does not imply the existence of a well-defined 4D theory whose field equations have that solution. This shortcoming was quickly addressed when it was shown that the \(D\to 4\) limit in (2) can be taken in the gravitational action [1; 20], generalizing an earlier procedure employed in obtaining the \(D\to 2\) limit of GR [21]. One can also compactify \(D\)-dimensional Gauss-Bonnet gravity on a \((D-4)\)-dimensional maximally symmetric space and then use (2) to obtain a \(D=4\) HCT [22]. This approach yields the same result (up to trivial field redefinitions), in addition to terms depending on the curvature of the maximally symmetric \((D-4)\)-dimensional space. Taking this to vanish yields \[S_{4}^{GB}=\alpha\int d^{4}x\sqrt{-g}\left[\phi\mathcal{G}+4G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi-4(\nabla\phi)^{2}\square\phi+2(\nabla\phi)^{4}\right] \tag{3}\] where we see that an additional scalar field \(\phi\) appears. Surprisingly, the spherically symmetric black hole solutions to the field equations match those from the naive \(D\to 4\) limit of solutions [6]. The resultant 4D scalar-tensor theory is a particular type of Horndeski theory [23], and solutions to its equations of motion can be obtained without ever referencing a higher dimensional spacetime [1]. We are interested here in what is called 4D Einstein-Gauss-Bonnet gravity (4DEGB), whose action is given by (3) plus the Einstein-Hilbert term: \[S=S^{GR}+S_{4}^{GB}=\int d^{4}x\sqrt{-g}\left[R+\alpha\left\{\phi\mathcal{G}+4G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi-4(\nabla\phi)^{2}\square\phi+2(\nabla\phi)^{4}\right\}\right] \tag{4}\] which has been shown to be an interesting phenomenological competitor to GR [24]. Despite much exploration [25], it is unclear whether these higher curvature terms play an important role in real gravitational dynamics. One important arena for testing such theories against standard general relativity is via observations of compact astrophysical objects like neutron stars.
The correct theory should be able to accurately describe recent gravitational wave observations of astrophysical objects existing in the mass gap between the heaviest compact stars and the lightest black holes. Modern observational astrophysics is rich in findings of compact objects and as such our understanding of highly dense gravitational objects is rapidly advancing. However there is as of yet no strong consensus on their underlying physics. A number of such objects have been recently observed that are inconsistent with standard GR and a simple neutron star equation of state. It was recently shown [2, 26] that in standard general relativity, the secondary component of the merger GW190814 could feasibly be a quark star with an interacting equation of state governed by a single parameter \(\lambda\). This parameterization of strong interaction effects was inspired by another recent theoretical study showing that non-strange quark matter could feasibly be the ground state of baryonic matter at sufficient density and temperature [27]. Similar analyses with a different equation of state and/or QM phase [28, 29, 30] found similarly promising results. This same object was subsequently shown to be well described as a slowly-rotating neutron star in the 4DEGB theory without resorting to exotic quark matter EOSs, while also demonstrating that the equilibrium sequence of neutron stars asymptotically matches the black hole limit, thus closing the mass gap between NS/black holes of the same radius [15]. More recently, some groups have also been interested in modelling ECOs [31] as well as unusually light compact stars [32, 30] (like that in the gamma ray remnant J1731-347, which is inconsistent with minimum mass calculations of neutron stars generated by iron cores) as quark stars to explain their unusual properties. To further illuminate the range of possibilities, we consider in this paper quark star solutions to the 4DEGB theory. Inspired by the unified interacting quark matter equation of state derived in [2], we present a simple model for interacting quark stars in the regularized 4DEGB theory in which all of the strong interaction effects (including corrections from perturbative QCD, colour superconductivity, and the strange quark mass) are characterized by a single parameter \(\lambda\) in the equation of state. In doing so, we solve the modified TOV equations for a number of combinations of Gauss-Bonnet coupling and QM interaction strengths (constrained by observational limits). Although quark star solutions have previously been considered in the context of 4DEGB [33; 34; 35], a different (less general) equation of state was used, and the upper limit of the coupling \(\alpha\) was taken to be much smaller than that allowed by current observational constraints [24; 25] and the 4DEGB Buchdahl bound [36]. In considering a unified, interacting equation of state [2] and values of \(\alpha\) up to presently allowed bounds, we obtain a number of interesting novel results. Our most intriguing result is that quark stars in 4DEGB can be Extreme Compact Objects (ECOs), objects whose radii are smaller than that allowed by the Buchdahl bound in GR. Indeed, there exist quark stars in 4DEGB whose radii are smaller than that of a corresponding black hole of the same mass in GR1. Observations of these latter objects, apart from indicating a new class of astrophysical phenomena [37], would provide strong evidence for 4DEGB as a physical theory. 
These ECOs respect a generalization of the Buchdahl bound, whose small-radius limit is that of the horizon radius of the corresponding black hole. Footnote 1: This phenomenon was also shown to be present for neutron stars [15], though the relationship with the Buchdahl bound was not noted. We find in general that for a given central pressure, quark stars in 4DEGB have larger mass and radius than their GR counterparts with the same pressure. This can be attributed to the 'less attractive' nature of gravity in 4DEGB: for \(\alpha>0\) the gradient of the effective potential yields a weaker force than is the case in GR. We consequently find a larger maximal mass for a given value of \(\lambda\) in 4DEGB than in GR. Surprisingly, we also find a critical central pressure for certain \(\alpha/\lambda\) combinations, below which quark stars cannot form, and above which the mass and radius are greater than or equal to those of a black hole in the 4DEGB theory. With this, it is clear that for such parameter combinations no stable quark star solutions exist. We analytically derive an expression for this critical quantity in terms of the couplings \(\alpha\) and \(\lambda\). Moreover, we find that both increasing the 4DEGB coupling constant \(\alpha\) and increasing the interaction parameter \(\lambda\) make the mass-radius profiles of the stars larger (and consequently lead to a larger maximum mass point for a given parameter set). The converse is also true for smaller values of these parameters, with the trend continuing as \(\lambda<0\). The outline of our paper is as follows: In Section II we introduce the basic theory underlying 4DEGB gravity, as well as the unified, interacting quark matter equation of state that we make use of. Following this, a perfect fluid stress-energy tensor is employed to derive the 4DEGB TOV equations, and current observational constraints on the coupling constant are briefly discussed. Section III outlines the results of our numerical calculations, with the first part of the section covering \(\lambda>0\) solutions (positive interaction strength), and the second part seeing how these results change when \(\lambda<0\). We conclude our results with a brief analysis of the stability of unified, interacting 4DEGB quark stars. Section V summarizes our key findings and suggests topics for future study. ## II Theory ### 4D Einstein-Gauss-Bonnet Gravity The field equations of 4DEGB are obtained from a straightforward variational principle applied to the action (4).
Variation with respect to the scalar \(\phi\) yields \[\begin{split}\mathcal{E}_{\phi}=&-\mathcal{G}+8G^{\mu\nu}\nabla_{\nu}\nabla_{\mu}\phi+8R^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi-8(\square\phi)^{2}+8(\nabla\phi)^{2}\square\phi+16\nabla^{\mu}\phi\nabla^{\nu}\phi\nabla_{\nu}\nabla_{\mu}\phi\\ &+8\nabla_{\nu}\nabla_{\mu}\phi\nabla^{\nu}\nabla^{\mu}\phi\\ &=0\end{split} \tag{5}\] and the variation with respect to the metric gives \[\begin{split}\mathcal{E}_{\mu\nu}&=\Lambda g_{\mu\nu}+G_{\mu\nu}+\alpha\Big[\phi H_{\mu\nu}-2R\left[\left(\nabla_{\mu}\phi\right)\left(\nabla_{\nu}\phi\right)+\nabla_{\nu}\nabla_{\mu}\phi\right]+8R^{\sigma}_{(\mu}\nabla_{\nu)}\nabla_{\sigma}\phi+8R^{\sigma}_{(\mu}\left(\nabla_{\nu)}\phi\right)\left(\nabla_{\sigma}\phi\right)\\ &-2G_{\mu\nu}\left[\left(\nabla\phi\right)^{2}+2\square\phi\right]-4\left[\left(\nabla_{\mu}\phi\right)\left(\nabla_{\nu}\phi\right)+\nabla_{\nu}\nabla_{\mu}\phi\right]\square\phi-\left[g_{\mu\nu}(\nabla\phi)^{2}-4\left(\nabla_{\mu}\phi\right)\left(\nabla_{\nu}\phi\right)\right](\nabla\phi)^{2}\\ &+8\left(\nabla_{(\mu}\phi\right)\left(\nabla_{\nu)}\nabla_{\sigma}\phi\right)\nabla^{\sigma}\phi-4g_{\mu\nu}R^{\sigma\rho}\left[\nabla_{\sigma}\nabla_{\rho}\phi+\left(\nabla_{\sigma}\phi\right)\left(\nabla_{\rho}\phi\right)\right]+2g_{\mu\nu}(\square\phi)^{2}\\ &-4g_{\mu\nu}\left(\nabla^{\sigma}\phi\right)\left(\nabla^{\rho}\phi\right)\left(\nabla_{\sigma}\nabla_{\rho}\phi\right)+4\left(\nabla_{\sigma}\nabla_{\nu}\phi\right)\left(\nabla^{\sigma}\nabla_{\mu}\phi\right)\\ &-2g_{\mu\nu}\left(\nabla_{\sigma}\nabla_{\rho}\phi\right)\left(\nabla^{\sigma}\nabla^{\rho}\phi\right)+4R_{\mu\nu\sigma\rho}\left[\left(\nabla^{\sigma}\phi\right)\left(\nabla^{\rho}\phi\right)+\nabla^{\rho}\nabla^{\sigma}\phi\right]\Big]\\ &=T_{\mu\nu}\end{split} \tag{6}\] where \[H_{\mu\nu}=2\Big[RR_{\mu\nu}-2R_{\mu\alpha\nu\beta}R^{\alpha\beta}+R_{\mu\alpha\beta\sigma}R_{\nu}^{\alpha\beta\sigma}-2R_{\mu\alpha}R_{\nu}^{\alpha}-\frac{1}{4}g_{\mu\nu}\mathcal{G}\Big] \tag{7}\] is the Gauss-Bonnet tensor. These field equations satisfy the following relationship \[g^{\mu\nu}T_{\mu\nu}=g^{\mu\nu}\mathcal{E}_{\mu\nu}+\frac{\alpha}{2}\mathcal{E}_{\phi}=4\Lambda-R-\frac{\alpha}{2}\mathcal{G} \tag{8}\] which can act as a useful consistency check to see whether prior solutions generated via the Glavan/Lin method are even possible solutions to the theory. For example, using (8) it is easy to verify that the rotating metrics generated from a Newman-Janis algorithm [11; 38] are not solutions to the field equations of the scalar-tensor 4DEGB theory. ### Unified Interacting Quark Matter Equation of State Compact stars described in terms of deconfined quark degrees of freedom are often modelled using the simple, non-interacting quark matter equation of state [39] \[p(r)=\frac{1}{3}(\rho(r)-4B_{\text{eff}}) \tag{9}\] where \(p\) and \(\rho\) are pressure and mass density (respectively), and \(B_{\text{eff}}\) is the effective bag constant from the MIT bag model for quark confinement. The bag constant values associated with conventional non-interacting strange quark matter (SQM) or up-down quark matter (_ud_QM) are not small enough to explain the large compact star masses found in recent binary merger events using GR alone [40; 41; 42]. Accordingly, Zhang and Mann [2] investigated whether a strongly interacting equation of state would allow a quark star to fit the observational constraints in the context of GR.
This EOS was inspired by a recent theoretical development [27] showing that _ud_QM could be the ground state of baryonic matter at sufficient density and temperature. In employing this, they also provided a unified framework for all strongly interacting phases of quark matter, condensing the entirety of strong interaction effects into a single parameter \(\lambda\). This greatly simplifies the problem as it is no longer necessary to solve the same equations for different phases of quark matter (namely the two-flavour superconducting without/with strange quarks (2SC, 2SC+s), and colour-flavour locking (CFL) phases), different values of the gap parameter from colour superconductivity, different perturbative contributions etc. as all these effects are unified in the model. They found that the interaction strength parameter can be written explicitly as [2] \[\lambda=\frac{\xi_{2a}\Delta^{2}-\xi_{2b}m_{s}^{2}}{\sqrt{\xi_{4}a_{4}}} \tag{10}\] where \(\Delta\) is the gap parameter, \(m_{s}\) accounts for corrections from the finite strange quark mass (if applicable), \(a_{4}\) represents the perturbative Quantum Chromodynamics (pQCD) contribution from one-gluon exchange, and the constant coefficients \[(\xi_{4},\xi_{2a},\xi_{2b})=\begin{cases}\left(\left(\left(\frac{1}{3}\right)^ {\frac{4}{3}}+\left(\frac{2}{3}\right)^{\frac{4}{3}}\right)^{-3},1,0\right)&2 \text{SC phase}\\ (3,1,3/4)&2\text{SC+s phase}\\ (3,3,3/4)&\text{CFL phase}\end{cases} \tag{11}\] account for the various possible QM phases. It is easy to see from here how different combinations of parameters may result in the same unified interaction strength. The parameter \(\lambda\) takes into account quark star physics through its inclusion in the following unified, interacting QM EOS [2]: \[p(r)=\frac{1}{3}\left(\rho(r)-4B_{\text{eff}}\,\right)+\frac{4\lambda^{2}}{9 \pi^{2}}\left(-1+\text{sgn}(\lambda)\sqrt{1+3\pi^{2}\frac{\left(\rho(r)-B_{ \text{eff}}\,\right)}{\lambda^{2}}}\right) \tag{12}\] where \(\text{sgn}(\lambda)\) represents the sign of \(\lambda\). This can be further generalized by dividing out the \(B_{\text{eff}}\) dependence, leaving us with fully dimensionless equations. Performing the rescalings \[\bar{\rho}=\frac{\rho}{4B_{\text{eff}}},\bar{p}=\frac{p}{4B_{\text{eff}}}, \tag{13}\] and \[\bar{\lambda}=\frac{\lambda^{2}}{4B_{\text{eff}}}=\frac{\left(\xi_{2a} \Delta^{2}-\xi_{2b}m_{s}^{2}\right)^{2}}{4B_{\text{eff}}\,\xi_{4}a_{4}}, \tag{14}\] allows us to rewrite the EOS (12) in terms of these dimensionless parameters as \[\bar{p}(\bar{r})=\frac{1}{3}(\bar{\rho}(\bar{r})-1)+\frac{4}{9\pi^{2}}\bar{ \lambda}\left(-1+\text{sgn}(\lambda)\sqrt{1+\frac{3\pi^{2}}{\bar{\lambda}}\left( \bar{\rho}(\bar{r})-\frac{1}{4}\right)}\right) \tag{15}\] where \[\bar{m}=m\sqrt{4B_{\text{eff}}}\qquad\bar{r}=r\sqrt{4B_{\text{eff}}}\qquad\bar{ \alpha}=\alpha\cdot 4B_{\text{eff}} \tag{16}\] and in the sequel we shall assume a characteristic value of \(B_{\text{eff}}=60\text{ MeV}/\text{fm}^{3}\). In the limit \(\bar{\lambda}\to 0\), (15) reduces back to the expected non-interacting EOS \(\bar{p}(\bar{r})=\frac{1}{3}(\bar{\rho}(\bar{r})-1)\), whereas in the limit of extreme positive interaction strength (\(\lambda\rightarrow+\infty\)), (15) approaches the form \[\bar{p}|_{\bar{\lambda}\rightarrow\infty}=\bar{\rho}-\frac{1}{2} \tag{17}\] or, equivalently, \(p(r)=\rho(r)-2B_{\text{eff}}\). 
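For the numerics it is convenient to have the dimensionless EOS (15) and its inverse in executable form. Below is a minimal Python sketch; the function names and the use of SciPy are our own choices, with \(\bar{\lambda}\geq 0\) carrying the magnitude of the interaction and `sgn` its sign.

```python
import numpy as np
from scipy.optimize import brentq

def p_bar(rho, lam_bar, sgn=+1):
    """Dimensionless pressure from Eq. (15); lam_bar >= 0, sgn = sign(lambda)."""
    if lam_bar == 0.0:
        return (rho - 1.0) / 3.0  # non-interacting limit
    root = np.sqrt(1.0 + 3.0 * np.pi**2 * (rho - 0.25) / lam_bar)
    return (rho - 1.0) / 3.0 + 4.0 * lam_bar / (9.0 * np.pi**2) * (sgn * root - 1.0)

def rho_bar(p, lam_bar, sgn=+1):
    """Invert Eq. (15) for the density at a given pressure (the EOS is monotonic)."""
    return brentq(lambda r: p_bar(r, lam_bar, sgn) - p, 0.25, 1.0e8)
```

At `lam_bar = 0` this reproduces the linear law \(\bar{p}=(\bar{\rho}-1)/3\), and for very large `lam_bar` (with `sgn = +1`) it approaches \(\bar{p}=\bar{\rho}-1/2\), matching the two limits quoted above.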
In effect, this means that the strong interaction can reduce the surface mass density of the quark star from \(\rho_{0}=4B_{\text{eff}}\) to \(\rho_{0}=2B_{\text{eff}}\), and increase the speed of sound in QM from \(\frac{1}{3}c\) to \(c\) maximally. On the other hand, when \(\lambda\) is negative no well-defined large magnitude limit exists. ### 4DEGB TOV Equations The standard Tolman-Oppenheimer-Volkoff (TOV) equations for stellar structure are well-known in GR. To model the structure of a quark star in 4DEGB gravity, these relations need to be re-derived. In the following we do this, starting with a static, spherically symmetric metric ansatz in natural units (\(G=c=1\)): \[ds^{2}=-e^{2\Phi(r)}c^{2}dt^{2}+e^{2\Lambda(r)}dr^{2}+r^{2}d\Omega^{2}. \tag{18}\] As usual [1; 43], so long as \(e^{2\Phi}=e^{-2\Lambda}\) outside the star, the combination \(\mathcal{E}_{0}^{0}-\mathcal{E}_{1}^{1}\) of the field equations can be used to derive the following equation for the scalar field: \[\left(\phi^{\prime 2}+\phi^{\prime\prime}\right)\left(1-\left(r\phi^{\prime}-1\right)^{2}e^{-2\Lambda}\right)=0. \tag{19}\] which, apart from the irrelevant \(\phi=\ln\left(\frac{r-r_{0}}{l}\right)\) (with \(r_{0}\) and \(l\) being constants of integration), has the solution \[\phi_{\pm}=\int\frac{e^{\Lambda}\pm 1}{r}dr \tag{20}\] where \(\phi_{-}\) falls off as \(\frac{1}{r}\) in asymptotically flat spacetimes. Choosing \(\phi=\phi_{-}\) ensures that (5) is automatically satisfied. Modelling the quark matter by a perfect fluid matter source, the stress-energy tensor is \[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{21}\] from which we obtain the equations \[\frac{2}{r}\frac{d\Lambda}{dr}=e^{2\Lambda}\left[\frac{8\pi G}{c^{4}}\rho-\frac{1-e^{-2\Lambda}}{r^{2}}\left(1-\frac{\alpha\left(1-e^{-2\Lambda}\right)}{r^{2}}\right)\right]\left[1+\frac{2\alpha\left(1-e^{-2\Lambda}\right)}{r^{2}}\right]^{-1}, \tag{22}\] \[\frac{2}{r}\frac{d\Phi}{dr}=e^{2\Lambda}\left[\frac{8\pi G}{c^{4}}p+\frac{1-e^{-2\Lambda}}{r^{2}}\left(1-\frac{\alpha\left(1-e^{-2\Lambda}\right)}{r^{2}}\right)\right]\left[1+\frac{2\alpha\left(1-e^{-2\Lambda}\right)}{r^{2}}\right]^{-1},\] (23) \[\frac{dp}{dr}=-(\rho+p)\frac{d\Phi}{dr}, \tag{24}\] in 4DEGB, matching those found previously [34]. The condition of asymptotic flatness means that \(\Phi(\infty)=\Lambda(\infty)=0\), and regularity at the center of the star implies \(\Lambda(0)=0\). If, in the usual way, we define the gravitational mass \(m\) within a sphere of radius \(r\) through the relation [1] \[e^{-2\Lambda}=1+\frac{r^{2}}{2\alpha}\left(1-\sqrt{1+\frac{8\alpha m(r)}{r^{3}}}\right) \tag{25}\] we arrive at the 4DEGB modified TOV equations, namely \[\frac{dp}{dr}=\frac{\left(p+\rho\right)\left[r^{3}(\Gamma+8\pi\alpha p-1)-2\alpha m\right]}{r^{2}\Gamma\left[r^{2}(\Gamma-1)-2\alpha\right]} \tag{26}\] \[\frac{dm}{dr}=4\pi r^{2}\rho \tag{27}\] along with the EOS (15), where \(\Gamma=\sqrt{1+\frac{8\alpha m}{r^{3}}}\). Note that this expression differs from the equivalent equation in [33, 34] due to our inclusion of the 4DEGB coupling in the definition of gravitational mass; however, we do find agreement with [35]. The vacuum solution is given by \(m(r)=M\), where \(M\) is constant, implying that \(\Phi=-\Lambda\).
Writing \(e^{-2\Lambda}=1+2\varphi(r)\), we can compute the gravitational force in 4DEGB due to a spherical body \[\vec{F}=-\frac{d\varphi}{dr}\hat{r}=-\frac{r}{2\alpha}\left(1-\frac{r^{3}+2\alpha M}{r^{3}+8\alpha M}\sqrt{1+\frac{8\alpha M}{r^{3}}}\right)\hat{r} \tag{28}\] which is smaller in magnitude than its Newtonian \(\alpha=0\) counterpart (\(\vec{F}_{N}=-\frac{M}{r^{2}}\hat{r}\)) for \(\alpha>0\). The force in (28) vanishes at \(r=(\alpha M)^{1/3}\), but this is always at a smaller value of \(r\) than the outer horizon \(R_{h}=M+\sqrt{M^{2}-\alpha}\) of the corresponding black hole. Hence the gravitational force outside of any spherical body, while weaker than that in GR, is always attractive provided \(\alpha>0\). If \(\alpha<0\) then the corresponding gravitational force is more attractive than in GR. Rescaling the various quantities using (13), (14), and (16) we obtain the unitless equations \[\frac{d\bar{p}}{d\bar{r}}=\frac{\left(\bar{p}+\bar{\rho}\right)\left[\bar{r}^{3}(\Gamma+8\pi\bar{\alpha}\bar{p}-1)-2\bar{\alpha}\bar{m}\right]}{\bar{r}^{2}\Gamma\left[\bar{r}^{2}(\Gamma-1)-2\bar{\alpha}\right]} \tag{29}\] \[\frac{d\bar{m}}{d\bar{r}}=4\pi\bar{r}^{2}\bar{\rho} \tag{30}\] which may be solved numerically. In the limit \(\alpha\to 0\), the above equations reduce back to the well-known TOV equations for a static, spherically symmetric gravitating body in GR. To solve (29) and (30) numerically we impose the boundary conditions \[m(0)=0,\quad\rho(0)=\rho_{\rm c}, \tag{31}\] where the star's surface radius \(R\) is defined via \(\bar{p}(\bar{R})=0\), namely the radius at which the pressure goes to \(0\) (_i.e._\(p(R)=0\)). We similarly define the total mass of the star to be \(M=m(R)\). Numerical solutions can thus be obtained by scanning through a range of values of \(\rho_{c}\) and solving for the star's total mass and radius. Before proceeding to solve the TOV equations, we consider the behaviour of the scalar field \(\phi\) in the interior. Inserting the interior solution (25) into (19), we find \[\begin{split}\lim_{r\to 0}\phi^{\prime}\approx\sqrt{\frac{m(0)}{2\alpha r}}+\frac{3m(0)}{4\alpha}+\frac{m(0)\sqrt{r}\left(2\alpha m^{\prime}(0)+5m(0)^{2}\right)}{4\sqrt{2}(\alpha m(0))^{3/2}}\\ +\frac{r\left(8\alpha\left(3m^{\prime}(0)-1\right)+35m(0)^{2}\right)}{32\alpha^{2}}+\mathcal{O}\left(r^{3/2}\right).\end{split} \tag{32}\] Furthermore, provided \(m(r)\) vanishes at least quadratically in \(r\) for small \(r\) (which is ensured from (27) for the boundary conditions (31)) we find that near the origin \[\lim_{r\to 0}\phi^{\prime}\sim-\frac{r}{4\alpha}\approx 0,\,\lim_{r\to 0}\phi\approx K-\frac{r^{2}}{8\alpha}\approx K, \tag{33}\] (where \(K\) is a constant) and thus regularity of the scalar at the origin is ensured. Finally we note that the effective bag constant \(B_{\rm eff}=60\) MeV/fm\({}^{3}\) can be converted to units of inverse length squared ('gravitational units') with the factor \(G/c^{4}\), yielding \[B_{\rm eff}=7.84\times 10^{-5}\;{\rm km}^{-2}. \tag{34}\] ### Observational Constraints on the 4DEGB Coupling Constant A recent study of the observational constraints on the coupling \(\alpha\) yielded [24; 25] \[-10^{-30}{\rm m}^{2}<\alpha<10^{10}\;{\rm m}^{2} \tag{35}\] where the lower bound comes from "early universe cosmology and atomic nuclei" [15], and the upper bound follows from LAGEOS satellite observations. Regarding the lower bound as negligibly close to zero, the dimensionless version of (35) reads \[0<\bar{\alpha}\lesssim 3.2.
\tag{36}\] We note that inclusion of preliminary calculations on recent GW data suggests these constraints could potentially tighten to \(0<\alpha\lesssim 10^{7}\) m\({}^{2}\), or alternatively \(0<\bar{\alpha}\lesssim 0.0032\). This would mean that deviations from GR due to 4DEGB would only be detectable in extreme environments such as in the very early universe or near the surface of extremely massive objects like black holes. Even tighter bounds were assumed in previous studies of quark stars, where only solutions with \(\alpha\) below 6 km\({}^{2}\) (\(\bar{\alpha}\leq 0.0019\)) were considered [33; 34; 35]. Adopting such a tight bound would make compact stars near the upper end of the mass gap an ideal candidate for investigating the effects of 4DEGB theory. At this point in time such tighter bounds are not warranted. A proper study of the effects of gravitational radiation in 4DEGB has yet to be carried out. In view of this we shall assume the bound (36), which has strong observational support [24; 25]. ## III Results In this section we numerically obtain the total quark star mass as a function of both total star radius and central density, with the former plots being supplemented by the GR/4DEGB black hole horizon radii [1; 44] and the GR/4DEGB Buchdahl limits [36]. Recall that the vacuum solution to the field equations is given by (25) with \(m(r)=M\), a constant, and \(\Phi=-\Lambda\). Assuming no star, this solution describes a black hole with two horizons provided \(M>\sqrt{\alpha}=M_{\rm min}^{\rm BH}\). The outer horizon radius is [1; 43; 44] \[R_{h}=M+\sqrt{M^{2}-\alpha}\;. \tag{37}\] The Buchdahl bound has been derived in 4DEGB [36] using arguments similar to those in GR, namely that the pressure of the star remains positive throughout and for a given mass, the radius of the star must be larger than that of the (outer) horizon of a black hole of the same mass. This yields \[\sqrt{1-\mu R^{2}}\left(1+\alpha\mu\right)>\frac{1}{3}\left(1-\alpha\mu\right) \tag{38}\] where \[\mu\equiv\frac{1}{2\alpha}(\sqrt{1+\frac{8M\alpha}{R^{3}}}-1)\] for a star of radius \(R\) and mass \(M\). For small \(\alpha\), (38) becomes \[\frac{M}{R}\leq\frac{4}{9}+\frac{16}{27}\left(\frac{\alpha}{R^{2}}\right) \tag{39}\] and we see that the GR Buchdahl bound is recovered in the \(\alpha\to 0\) limit. This latter relation holds provided spherical symmetry and isotropy are valid, regardless of whether the internal density distribution is constant or a function of the radial coordinate [36]. Unlike GR, the 4DEGB theory lacks a mass gap between a compact star and black hole of the same radius. This feature was first observed for neutron stars, where the \(M\) vs. \(R\) curves were found to asymptote to the outer black hole horizon for sufficiently large \(\alpha\) [15]. To find the point at which the Buchdahl bound intersects the black hole horizon, we substitute (37) into (38) and solve for the mass \(M\). Doing so, we find \[M_{\rm int}=\sqrt{\alpha}=M_{\rm min}^{\rm BH}, \tag{40}\] or that the Buchdahl bound asymptotically approaches the smallest black hole mass allowed by the theory. These results imply that it is possible to have stable compact objects in 4DEGB whose radii are smaller than that of the GR Buchdahl bound \(R\geq 9M/4\) or even that of the Schwarzschild radius \(R=2M\). We shall demonstrate below that this situation is indeed realized, and that quark stars can have radii that are arbitrarily close to that of a black hole of the same mass.
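The scans reported in this section integrate the dimensionless system (29)-(30) with the boundary conditions (31). A minimal sketch of that integration, building on the EOS helpers above, is given below; the starting radius, tolerances, and use of SciPy's `solve_ivp` are our own choices, and \(\bar{\alpha}>0\) is assumed since the written form of (29) degenerates at \(\bar{\alpha}=0\).

```python
from scipy.integrate import solve_ivp

def tov_rhs(r, y, alpha, lam_bar, sgn):
    """Right-hand side of the dimensionless 4DEGB TOV system (29)-(30)."""
    p, m = y
    rho = rho_bar(max(p, 0.0), lam_bar, sgn)
    Gam = np.sqrt(1.0 + 8.0 * alpha * m / r**3)
    dp = ((p + rho) * (r**3 * (Gam + 8.0 * np.pi * alpha * p - 1.0) - 2.0 * alpha * m)
          / (r**2 * Gam * (r**2 * (Gam - 1.0) - 2.0 * alpha)))
    dm = 4.0 * np.pi * r**2 * rho
    return [dp, dm]

def solve_star(p_c, alpha, lam_bar, sgn=+1, r_max=100.0):
    """Integrate outward from the centre; returns the star's (R_bar, M_bar)."""
    surface = lambda r, y, *args: y[0]            # p(R) = 0 defines the surface
    surface.terminal, surface.direction = True, -1.0
    r0 = 1e-6                                      # start just off the origin
    m0 = (4.0 / 3.0) * np.pi * r0**3 * rho_bar(p_c, lam_bar, sgn)
    sol = solve_ivp(tov_rhs, (r0, r_max), [p_c, m0],
                    args=(alpha, lam_bar, sgn), events=surface,
                    rtol=1e-8, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0][1]
```

Scanning `p_c` then traces out the \(\bar{M}\) vs. \(\bar{R}\) curves shown below; comparing the returned mass with the horizon mass \((\bar{\alpha}+\bar{R}_{h}^{2})/(2\bar{R}_{h})\) flags solutions at or inside the horizon.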
### Results for \(\lambda\geq 0\) Here we present the solutions to the 4DEGB TOV equations for a non-negative interaction parameter \(\lambda\geq 0\). This is equivalent to the condition \[\xi_{2a}\Delta^{2}>\xi_{2b}m_{s}^{2}, \tag{41}\] and clearly from (14) as \(\lambda\) increases in magnitude, so does \(\bar{\lambda}\). Our results are illustrated for a range of values of \(\bar{\alpha}\) and \(\bar{\lambda}\) in Figures 1 and 2. We find that in general, both increasing \(\bar{\alpha}\) and increasing \(\bar{\lambda}\) expand the curves to larger mass and radii for the same choice of central pressure, yielding larger maximum mass points. This is due to gravitational attraction in 4DEGB being increasingly weaker as \(\alpha>0\) increases, as well as a larger \(\lambda\) mapping to a stiffer equation of state as a consequence of the strong interaction effects. We see that for any given \(\alpha\), the hook-shaped \(M\) vs. \(R\) curves move upward and to the right as \(\lambda\) increases. Likewise, for any given \(\lambda\), the curves again move upward and rightward, with the upper part more rapidly asymptoting to the Buchdahl/horizon limit as \(\alpha\) increases. It is quite interesting to note that even for small \(\alpha\), with a high enough central pressure there are stable solutions that are smaller in radius than both the GR Buchdahl bound and the Schwarzschild black hole. We illustrate this in Fig. 3, which shows that, even for \(\bar{\alpha}=0.001\) there is a narrow range for small mass, small radius stars that are within not only the GR Buchdahl bound, but also the \(2M\) Schwarzschild radius. Similar results can be seen in the 2nd panel of Figure 6c for \(\bar{\alpha}=0.0001\) when \(\lambda<0\). The stability of these objects remains an interesting subject for investigation. Our results are commensurate with those of Banerjee et al. [34] in the limited overlapping regions of parameter space, in that increasing the coupling strength to the higher curvature terms tends to increase the mass-radius profiles, in turn yielding a larger local maximal mass point. In our case not all curves have this local extremum, but in the small \(\alpha\) and \(\rho_{c}\) regime we see the same behaviour. In [34], rather than varying coupling strength directly, different values of the MIT bag constant are considered, with a _smaller_ \(B_{\rm eff}\) corresponding to larger M-R profiles (which in our case is correlated with a larger positive interaction strength). Furthermore, for sufficiently large \(\alpha/\lambda\), no quark star solutions exist outside of the black hole horizon, since pressure profiles with central pressure below criticality diverge to \(\infty\) with no real roots. This in turn implies that there should be a well-defined central pressure/density at which \(p(r)=p(0)={\rm const}\), and below which quark star solutions do not exist. This value can be derived analytically (see Appendix A for a detailed derivation), and we find that below the critical central pressure: \[\begin{split} p_{\rm crit}=\frac{1}{2\pi(9\pi-32\alpha\lambda)^{2}}\Bigg[& 256\alpha(2\pi\alpha+3)\lambda^{2}+8\pi(2\pi\alpha(8\pi\alpha+21)-9)\lambda+9\pi^{3}(4\pi\alpha-3)\\ &-{\rm sgn}(\lambda)6\sqrt{\lambda(32\alpha\lambda+\pi(8\pi\alpha-3))^{2}\left(16(2\pi\alpha+1)\lambda+\pi^{2}(8\pi\alpha+3)\right)}\Bigg],\end{split} \tag{42}\] there are no real roots of \(p(r)\) with which to define the star's surface. This central pressure is never realized physically. We find that all \(M\) vs.
\(R\) curves with \(p_{\rm crit}>0\) begin at the 4DEGB black hole horizon and extend to values within the horizon, or in other words, in a disallowed region of parameter space. We illustrate this in Figure 2c, where we notice that the \(M\) vs. \(R\) curves with \(p_{\rm crit}>0\) begin at the 4DEGB black hole horizon (as coloured points). The first threshold to consider is the smallest value of \(\bar{\alpha}\) at \(\bar{\lambda}=0\) for which none of the solutions lie below the Buchdahl threshold; for such parameter combinations no physical quark star solutions exist. To investigate this further, consider the pressure profile of a star with a central pressure just past criticality (i.e. \(p(0)=p_{\rm crit}+\delta\) where \(\delta\ll 1\)). As shown in Appendix A, the pressure profile is well-approximated by a step function times a constant, allowing us to approximately solve (30) analytically: \[\bar{m}(\bar{R})^{\rm min}=\bar{M}^{\rm min}\approx 4\pi\int_{0}^{\bar{R}}\Theta(\bar{R}-\bar{r})\bar{\rho}(0)\bar{r}^{2}d\bar{r}=\frac{4\pi}{3}\bar{R}^{3}\bar{\rho}(0). \tag{43}\] Then for the special case of vanishing interaction strength, \[\bar{M}^{\rm min}_{\lambda\to 0}\approx\frac{4\pi}{3}\bar{R}^{3}(3\bar{p}(0)+1). \tag{44}\] Since \(p(0)\approx p_{\rm crit}\) and \(p_{\rm crit}^{\lambda\to 0}=\frac{2\pi\bar{\alpha}}{9}-\frac{1}{6}\), this can be further simplified into the following form: \[\bar{M}^{\rm min}_{\lambda\to 0}\approx\frac{2}{3}\pi(\frac{4\pi}{3}\bar{\alpha}+1)\bar{R}^{3}. \tag{45}\] Inserting this minimal mass solution into (29) and finding where \(\bar{p}^{\prime}(\bar{r})\) diverges, we obtain the corresponding radius \[\bar{R}^{\rm min}_{\lambda\to 0}\approx\sqrt{\frac{3}{4\pi}} \tag{46}\] of this minimal mass point, which is (interestingly) independent of \(\bar{\alpha}\). Figure 1: Mass vs. radius/central density curves for unified interacting quark stars when \(\lambda>0\). Each plot shows results for a unique value of the 4DEGB coupling. In order from blue to purple curves, the \(\bar{\lambda}\) values considered are 0, 0.5, 10, 100, \(\infty\) respectively. The solid and dashed black lines correspond to the 4DEGB equivalent of the Schwarzschild and Buchdahl limits, respectively (with their red counterparts marking the equivalent bounds in GR). In general a larger \(\alpha\) and/or \(\lambda\) tends to increase the mass and radius of a solution. Figure 2: Additional mass vs. radius/central density curves for unified interacting quark stars when \(\lambda>0\). In order from blue to purple curves, the \(\bar{\lambda}\) values considered are 0, 0.5, 10, 100, \(\infty\) respectively. The solid and dashed black lines correspond to the 4DEGB equivalent of the Schwarzschild and Buchdahl limits, respectively (with their red counterparts marking the equivalent bounds in GR). In general a larger \(\alpha\) and/or \(\lambda\) tends to increase the mass and radius of a solution. At \(\alpha/\lambda\) combinations large enough for \(p_{\rm crit}>0\) we observe solutions which _start_ at the black hole horizon (as shown in the lower two panels represented by coloured dots). These solutions are unstable for all choices of central pressure. Consequently \[\bar{M}^{\rm min}_{\lambda\to 0}\approx\frac{3+4\pi\bar{\alpha}}{4\sqrt{3\pi}}.
\tag{47}\] Inserting this into the full 4DEGB Buchdahl bound and solving for \(\bar{\alpha}\), we find that (38) is saturated when \(\bar{\alpha}=\frac{3}{4\pi}\), which consequently is also the solution to \(\lim_{\lambda\to 0}p_{\rm crit}(\bar{\alpha})=0\) (the smallest \(\bar{\alpha}\) with a non-negative critical pressure in the limit of vanishing QM interaction), implying that \(\bar{M}^{\rm min}_{\lambda\to 0}|_{\bar{\alpha}=\frac{3}{4\pi}}=\bar{M}_{\rm int}\). For a more general choice of \(\bar{\alpha}\), by inserting (46) into the 4DEGB black hole mass \(M_{BH}=\frac{\alpha+R_{H}^{2}}{2R_{H}}\) it is easy to see that \(M_{BH}(\bar{R}^{\rm min}_{\lambda\to 0})=\bar{M}^{\rm min}_{\lambda\to 0}\). In other words, all \(\bar{\lambda}=0\) solution curves with non-zero critical pressure must _start_ at the black hole horizon (or the Buchdahl bound/horizon intersection, in the case where \(\bar{\alpha}=\frac{3}{4\pi}\)), and thus none of these solutions are stable. This can also be checked numerically by considering \(\bar{\alpha}=\frac{3}{4\pi}+\delta\) (where \(\delta\ll 1\)) and plotting the lowest pressure solution possible, which indeed intersects the minimal mass point of the Buchdahl curve. A similar analysis can be repeated in the limit \(\lambda\to\infty\). Doing so we find an equivalent expression for the mass of the threshold point: \[\bar{M}^{\rm min}_{\lambda\to\infty}\approx\frac{\left(1+2\pi\bar{\alpha}-\sqrt{2\pi\bar{\alpha}+1}\right)}{2\bar{\alpha}}\bar{R}^{3} \tag{48}\] and radius: \[\bar{R}^{\rm min}_{\lambda\to\infty}\approx\sqrt{\frac{2\bar{\alpha}}{\sqrt{8\pi\bar{\alpha}-4\sqrt{2\pi\bar{\alpha}+1}+5}-1}}, \tag{49}\] the latter of which now has an explicit \(\bar{\alpha}\) dependence. Combining the two we find \[\bar{M}^{\rm min}_{\lambda\to\infty}\approx\sqrt{2\bar{\alpha}}\left(2\pi\bar{\alpha}+1-\sqrt{2\pi\bar{\alpha}+1}\right)\left(\frac{1}{\sqrt{8\pi\bar{\alpha}-4\sqrt{2\pi\bar{\alpha}+1}+5}-1}\right)^{3/2}. \tag{50}\] Figure 3: Mass vs. radius and central density for \(\bar{\alpha}=0.001\) when \(\lambda>0\). We see that even for very small \(\alpha\) there exists a range of solutions yielding quark stars smaller than the GR Buchdahl bound (dashed red) and the \(2M\) Schwarzschild radius (solid red); the corresponding bounds for 4DEGB are in black. As before, we insert this mass/radius pair into the full 4DEGB Buchdahl bound (38), and find that the expression is saturated when \(\bar{\alpha}=\frac{3}{2\pi}\), once again the solution to \(\lim_{\lambda\to\infty}p_{\rm crit}(\bar{\alpha})=0\). Inserting \(\bar{\alpha}=\frac{3}{2\pi}\) into (50) we confirm once again that \(\bar{M}_{\lambda\to\infty}^{\rm min}|_{\bar{\alpha}=\frac{3}{2\pi}}=M_{\rm int}\), implying that solutions with critical pressure will _start_ at the intersection point of the 4DEGB Buchdahl bound and black hole horizon, and are thus always unphysical. For an arbitrary \(\bar{\alpha}>\frac{3}{2\pi}\) it is easy to show that again \(M_{BH}(\bar{R}_{\lambda\to\infty}^{\rm min})=\bar{M}_{\lambda\to\infty}^{\rm min}\), indicating that the lowest pressure solution intersects the black hole horizon. As with the non-interacting case, the \(\bar{\lambda}\to\infty\) solutions are also unstable for any central pressure above the critical value, and thus for any \(\bar{\alpha}/\bar{\lambda}\) combination with a positive critical pressure. These results are manifest in Figures 1 and 2 as points lying along the solid black line.
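Equation (42) is straightforward to evaluate numerically. In the sketch below we read the \(\lambda\) appearing in (42) as the rescaled \(\bar{\lambda}\geq 0\) with \(\mathrm{sgn}(\lambda)\) kept separate; this reading is our assumption, but it reproduces the two \(\lambda\to 0\) statements just derived.

```python
def p_crit(alpha, lam_bar, sgn=+1):
    """Critical central pressure of Eq. (42), in barred (dimensionless) units."""
    pi, lam = np.pi, lam_bar
    num = (256.0 * alpha * (2.0 * pi * alpha + 3.0) * lam**2
           + 8.0 * pi * (2.0 * pi * alpha * (8.0 * pi * alpha + 21.0) - 9.0) * lam
           + 9.0 * pi**3 * (4.0 * pi * alpha - 3.0)
           - sgn * 6.0 * np.sqrt(lam * (32.0 * alpha * lam + pi * (8.0 * pi * alpha - 3.0))**2
                                 * (16.0 * (2.0 * pi * alpha + 1.0) * lam
                                    + pi**2 * (8.0 * pi * alpha + 3.0))))
    return num / (2.0 * pi * (9.0 * pi - 32.0 * alpha * lam)**2)

# Checks quoted in the text: p_crit -> 2*pi*alpha/9 - 1/6 as lambda -> 0,
# vanishing exactly at alpha = 3/(4*pi).
assert abs(p_crit(0.1, 0.0) - (2.0 * np.pi * 0.1 / 9.0 - 1.0 / 6.0)) < 1e-12
assert abs(p_crit(3.0 / (4.0 * np.pi), 0.0)) < 1e-12
```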
The above analysis also breaks the parameter space up into three different regions, which can be seen in Figure 4. ### Results for \(\lambda\leq 0\) Here we summarize the results for \(\lambda<0\), which is equivalent to the condition \[\xi_{2a}\Delta^{2}<\xi_{2b}m_{s}^{2}. \tag{51}\] Figure 4: The solid black line (bordering the gray region) shows \(p_{\rm crit}(\alpha)\) for a test value of \(\bar{\lambda}=0.1\) (and \(\lambda>0\)). For a fixed central pressure of \(\bar{p}_{0}=0.15\) the parameter space has three distinct regions. From \(\bar{\alpha}=0\) to the value that satisfies \(p_{\rm crit}(\alpha)=0\) (green) there exist valid, physical solutions at fixed \(p_{0}\). As \(\bar{\alpha}\) increases past this the red region becomes manifest, having parameter combinations with \(p_{0}>p_{\rm crit}\), and thus with unphysical radii inside the black hole horizon. The gray region has parameter combinations with \(p_{0}<p_{\rm crit}\) (and thus radial pressure profiles that diverge to \(\infty\)). The boundary between the gray and red regions corresponds to solutions lying directly at the black hole horizon; these are included in our M-R plots as coloured points. In this case (and so throughout this section) a larger \(\bar{\lambda}\) corresponds to a negative \(\lambda\) that is larger in magnitude (more negative). As is shown in detail in Appendix A, the critical pressure now has a divergence at \(\bar{\alpha}=\frac{9\pi}{32\lambda}\) in this regime. This defines a critical relationship between the 4D Gauss-Bonnet coupling and the strong interaction parameter at which the central pressure must approach infinity for a valid quark star solution. This also means that the allowed range of interaction strengths inside the star varies based on the choice of coupling to the higher curvature term. We illustrate our results in Figures 6 and 7. In order to keep the results manageable and easy to interpret, we choose never to use \(\bar{\alpha}-\bar{\lambda}\) combinations that result in a critical centre pressure larger than \(\bar{p}_{\rm crit}=10\) (and define \(\bar{\lambda}_{\rm crit}\) to be that which corresponds with this solution set, since \(\bar{\lambda}_{\rm max}\) has a divergent critical pressure; see Figure 13); a numerical sketch of this cutoff is given below. By doing this, we can more easily compare a range of \(\bar{\alpha}\) solutions with the same choices of \(\frac{\bar{\lambda}}{\lambda_{\rm crit}}\), and view them on axes of comparable magnitudes. As expected, we see that decreasing \(\lambda\) (and hence increasing \(\bar{\lambda}\)) causes the mass and radius of a given solution to trend downward, whereas increasing \(\bar{\alpha}\) yields the same upward trend as before. In other words, in the negative \(\lambda\) regime strong interaction effects tend to somewhat counteract the effects of non-zero coupling to the Gauss-Bonnet theory. We also notice that the effects of the critical pressure are first manifest on the curves with a larger negative magnitude, and thus the end of the solution space occurs when \(\alpha\) is large enough to trigger criticality for the case of vanishing interaction effects (\(\bar{\lambda}=0\)). As in the previous section this will occur when \(\bar{\alpha}=\frac{3}{4\pi}\), and thus a stronger coupling to 4DEGB theory will result in empty plots, as seen in panels (e) and (f). Finally, we note from Figure 6 that there are again regions where solutions exist that violate the GR Buchdahl bounds and Schwarzschild radius, even for very small values of \(\bar{\alpha}\).
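The cutoff \(\bar{\lambda}_{\rm crit}\) just described can be obtained by bracketing \(p_{\rm crit}\) between zero and the pole at \(\bar{\lambda}=9\pi/(32\bar{\alpha})\). A short sketch under the same conventions as the `p_crit` function above (`brentq` returns one root of the bracketed function):

```python
from scipy.optimize import brentq

def lam_bar_crit(alpha, p_cap=10.0):
    """lambda_bar on the lambda < 0 branch at which p_crit reaches p_cap,
    bracketed between ~0 and the pole at 9*pi/(32*alpha)."""
    pole = 9.0 * np.pi / (32.0 * alpha)
    return brentq(lambda l: p_crit(alpha, l, sgn=-1) - p_cap,
                  1e-12, pole * (1.0 - 1e-9))
```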
The range of solutions gets larger as \(\bar{\alpha}\) increases, as shown in Figure 7. Figure 5: Mass vs. radius and central density where the interaction strength is fixed at 0, and \(\bar{\alpha}=0.006\), 0.0065, 0.007, 0.0072, 0.0074, 0.0075, 0.0076, 0.0078, 0.0079, 0.00795, 0.008, 0.0085 (lower \(\bar{\alpha}\) corresponding to lower maximal mass points). The purpose of this plot is to show the progression of the maximum mass point as \(\alpha\) changes slowly. Figure 6: Mass-radius and mass-central density relations in the case where \(\lambda<0\). Each \(\bar{\alpha}\) corresponds to a different set of \(\bar{\lambda}\) for which the solutions are plotted, due to the extreme sensitivity of the critical pressure with respect to negative \(\lambda\). In all cases the ratio \(\frac{\bar{\lambda}}{\lambda_{\text{crit}}}\) takes on the same set of values, namely 0, 1/25, 1/10, 1/2, 1 respectively from the blue to purple curves. The effect of increasing \(\bar{\lambda}\) (decreasing \(\lambda\)) is to suppress the mass and radius, a logical extension of the \(\lambda>0\) results. Figure 7: Mass-radius and mass-central density relations in the case where \(\lambda<0\). Each \(\bar{\alpha}\) corresponds to a different set of \(\bar{\lambda}\) for which the solutions are plotted, due to the extreme sensitivity of the critical pressure with respect to negative \(\lambda\). In all cases the ratio \(\frac{\bar{\lambda}}{\lambda_{\text{crit}}}\) takes on the same set of values, namely 0, 1/25, 1/10, 1/2, 1 respectively from the blue to purple curves. The effect of increasing \(\bar{\lambda}\) (decreasing \(\lambda\)) is to suppress the mass and radius, a logical extension of the \(\lambda>0\) results. ## IV Stability analysis We now consider the stability of the 4DEGB unified interacting quark stars. A necessary but insufficient condition for an uncharged compact star's stability in Einstein gravity is \(dM/d\rho_{c}>0\) [26; 45; 46], corresponding to the parts of the solution curves before maximum mass points are reached. In Einstein-Maxwell theory a net charge can offset stability from the maximum mass point in either direction [2]. In a similar vein, when the 4DEGB theory coupling is non-zero, it is not obvious whether the coincidence of stability and maximum mass point will hold. Leaving a thorough analysis of the fundamental radial oscillation modes for future work, we note that stable compact objects must obey the causality condition that the speed of sound \(c_{s}\) never exceeds the speed of light \(c\). Inserting the equation of state (12) directly into the definition for \(c_{s}\), we find \[c_{s}=\left(3-\frac{\text{sgn}(\lambda)4}{\sqrt{\frac{4\lambda+4\pi^{2}\bar{p}(\bar{r})+\pi^{2}}{\lambda}}}\right)^{-1/2}. \tag{52}\] Attaining the limit \(c_{s}=1\) would require \(\bar{p}(\bar{r})=-1/4\) (and \(\lambda>0\)), a value that is never part of the quark star solution space. In Figure 8 we plot the sound speed as a function of the interaction strength \(\lambda\) for various fixed values of the pressure. We see that for positive pressures the \(c_{s}\to 1\) limit is asymptotically approached from below as \(\lambda\to\infty\); the sound speed is never superluminal. If \(\lambda<0\) instead, the analogous solutions curve downward and asymptotically approach \(c_{s}=1/\sqrt{5}\) as expected from [2; 26].
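The causality condition of Eq. (52) is equally direct to implement; below is a small sketch (again reading the \(\lambda\) in (52) as \(\bar{\lambda}\geq 0\) with the sign kept separate, our convention), with the two limits quoted above used as sanity checks.

```python
def sound_speed(p, lam_bar, sgn=+1):
    """Sound speed in units of c from Eq. (52); independent of alpha."""
    root = np.sqrt((4.0 * lam_bar + 4.0 * np.pi**2 * p + np.pi**2) / lam_bar)
    return (3.0 - sgn * 4.0 / root) ** (-0.5)

# c_s -> 1 as lambda -> +infinity, and c_s -> 1/sqrt(5) as lambda -> -infinity:
assert abs(sound_speed(1.0, 1e12, +1) - 1.0) < 1e-5
assert abs(sound_speed(1.0, 1e12, -1) - 1.0 / np.sqrt(5.0)) < 1e-5
```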
Figure 8: Sound speed inside the unified interacting quark star as a function of interaction strength \(\bar{\lambda}\) when pressure is fixed (note that this quantity is independent of the coupling strength \(\alpha\)). The solid lines correspond to \(\lambda>0\) and the dashed lines to \(\lambda<0\). Another indicator of stability is the effective adiabatic index \(\gamma\) of perturbations. For adiabatic oscillations, this can be defined via the speed of sound through the quark-gluon plasma: \[\gamma_{\text{eff}}\equiv\left(1+\frac{\rho}{p}\right)\left(\frac{dp}{d\rho}\right)_{S} \tag{53}\] where the subscript \(S\) indicates that we consider the sound speed at constant specific entropy (recalling that \(c_{s}=\sqrt{(\frac{\partial p}{\partial\rho})_{S}}\)). One can consider this quantity to be a bridge "between the relativistic structure of a spherical static object and the equation of state of the interior fluid" [47]. In principle a critical value for \(\langle\gamma_{\rm eff}\rangle\) exists, below which configurations are unstable against radial perturbations. In standard general relativity this critical value can be written [47; 48] as \(\gamma_{cr}=\frac{4}{3}+\frac{19}{42}\beta\), where \(\beta=2M/R=R_{S}/R\) is the compactness parameter. If \(\beta\to 0\) the well-known classical Newtonian limit is recovered as expected (\(\langle\gamma_{\rm eff}\rangle\geq\frac{4}{3}\)). An equivalent bound has not yet been derived for the 4DEGB theory. Despite this, it is common practice to plot the adiabatic index of the star relative to the Newtonian critical value [33; 34; 49; 50]. In general \(\gamma\) will depend on \(\lambda,\alpha,\rho_{0}\) and \(r\). Given the size of our parameter space, plotting this would be cumbersome and not particularly illuminating (although an example case is shown in Figure 9). We already know due to the form of the equations that \(\lim_{r\to R}\gamma\to\infty\), and due to the monotonically decreasing nature of our pressure profiles, \(\frac{\partial\gamma}{\partial r}>0\). With this, we are mostly interested in the lower limit of \(\gamma_{r\to 0}\) (i.e. where the curves in Figure 9 start from). This information is laid out in Table 1 where, noting that \(\lim_{p\to\infty}\gamma_{\rm eff}\) is the lower limit on \(\gamma_{r=0}\), we can see that in all cases \(\gamma_{r=0}\geq\frac{4}{3}\), and also that \(\gamma\) always diverges at the surface of the star. Since there is no well-defined upper limit on interaction when \(\lambda<0\), the first column of the third row cannot be further simplified. The results of this table imply that \(\langle\gamma\rangle>\gamma_{\rm crit}^{\rm GR}\) in all cases. \begin{table} \begin{tabular}{|l|l|l|l|} \hline & \(\gamma_{\rm eff}\) & \(\lim_{p\to\infty}\gamma_{\rm eff}\) & \(\lim_{p\to 0}\gamma_{\rm eff}\) (\(\gamma_{r\to R}\)) \\ \hline \(\lambda=0\) & \(\frac{1}{3}(4+\frac{1}{p(r)})\) & \(\frac{4}{3}\) & \(\infty\) \\ \hline \(\lambda\to\infty\) & \(2(1+\frac{1}{4p(r)})\) & \(2\) & \(\infty\) \\ \hline \(\lambda<0\) & \(\frac{\sqrt{\frac{4\lambda+4\pi^{2}p(r)+\pi^{2}}{\lambda}}\Big(4\lambda+2\lambda\sqrt{\frac{4\lambda+4\pi^{2}p(r)+\pi^{2}}{\lambda}}+4\pi^{2}p(r)+\pi^{2}\Big)}{\pi^{2}p(r)\Big(3\sqrt{\frac{4\lambda+4\pi^{2}p(r)+\pi^{2}}{\lambda}}+4\Big)}\) & \(\frac{4}{3}\) & \(\infty\) \\ \hline \end{tabular} \end{table} Table 1: Limits on the adiabatic index \(\gamma\).
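The entries of Table 1 can be cross-checked numerically from the EOS sketch above, using a centred finite difference for \((dp/d\rho)_{S}\); the step size and names are our choices.

```python
def gamma_eff(p, lam_bar, sgn=+1, h=1e-6):
    """Effective adiabatic index, Eq. (53), evaluated from the EOS sketch."""
    rho = rho_bar(p, lam_bar, sgn)
    dpdrho = (p_bar(rho + h, lam_bar, sgn) - p_bar(rho - h, lam_bar, sgn)) / (2.0 * h)
    return (1.0 + rho / p) * dpdrho

# First row of Table 1: gamma_eff = (4 + 1/p)/3 when lambda = 0.
assert abs(gamma_eff(0.5, 0.0) - (4.0 + 1.0 / 0.5) / 3.0) < 1e-6
```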
## V Summary

In this paper we have investigated the stellar structure of strongly interacting quark stars in the regularized 4D Einstein-Gauss-Bonnet theory of gravity for different combinations of the 4DEGB coupling constant \(\alpha\) and the unified strong-interaction parameter \(\lambda\) in our interacting quark matter equation of state. In accord with the lack of a mass gap in the 4DEGB theory [15], we find that, for both signs of the coupling \(\lambda\), even for small \(\alpha\) the quark star solutions asymptotically approach the 4DEGB black hole horizon radius, and thus can have smaller radii than the GR Buchdahl/Schwarzschild limits. It is worth noting that in the \(\lambda<0\) case a much larger central pressure is required to approach this limit than in the analogous \(\lambda>0\) case. In general, larger \(\alpha\) and \(\lambda\) tend to increase the mass-radius profile of quark stars, while a large negative \(\lambda\) has the opposite effect, pushing back against the contributions of a non-zero coupling to the higher-curvature gravity terms. These findings are generally consistent with what was found in the regime of weak coupling to the 4DEGB theory [34].

We have found many additional features in the unexplored regions of parameter space, the most striking of which is that 4DEGB quark stars can exist not only below the GR Buchdahl bound, but also with radii smaller than the \(2M\) Schwarzschild radius. These objects obey the criterion that their sound speed is subluminal, and they have an average effective adiabatic index above what is required in GR. An interesting avenue for future work would be to study the stability properties of these objects, and of compact 4DEGB stars in general.

We have also analytically identified a critical central pressure (a function of \(\alpha/\lambda\) only), below which quark star solutions do not exist. This critical pressure cannot be physically realized; rather it corresponds to quark stars whose radii are equal to or smaller than the horizon radius of the corresponding black hole of the same mass. In principle, the \(M-R\) pairs associated with this critical pressure can be used to constrain the constants of the theory (for instance, if a particular \(\alpha/\lambda\) pair does not allow for \(M-R\) pairs found observationally).

In addition to a detailed study of the stability of compact stars in 4DEGB gravity, our work should be extended to consider the effects of charge on stellar structure, since the confining nature of the strong interaction makes quark stars one of the few remaining candidates for stellar objects which are not electrically neutral [26; 35; 51]. Moreover, real astrophysical objects generally also have a net angular momentum, which can alter the stellar structure. Our calculations could be extended to a slowly rotating metric ansatz [43; 52] to see how the introduction of angular momentum changes the mass-radius relations of our interacting 4DEGB quark star.

## Acknowledgements

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. We are grateful to Sharon Morsink for helpful correspondence.
2304.00927
Quantifying Carbon Emissions due to Online Third-Party Tracking
In the past decade, global warming made several headlines and turned the attention of the whole world to it. Carbon footprint is the main factor that drives greenhouse emissions up and results in the temperature increase of the planet, with dire consequences. While the attention of the public is turned to reducing carbon emissions from transportation, food consumption and household activities, we ignore the contribution of CO2eq emissions produced by online activities. In the current information era, we spend a large part of our days browsing online. This activity consumes electricity, which in turn produces CO2eq. While website browsing contributes to the production of greenhouse gas emissions, the impact of the Internet on the environment is further exacerbated by the web-tracking practice. Indeed, most webpages are heavily loaded with tracking content used mostly for advertising, data analytics and usability improvements. This extra content implies large data transmissions, which result in higher electricity consumption and thus higher greenhouse gas emissions. In this work, we focus on the overhead caused by web tracking and analyse both its network and carbon footprint. By leveraging the browsing telemetry of 100k users and the results of a crawling experiment of 2.7M websites, we find that web tracking increases data transmissions upwards of 21%, which in turn implies the additional emission of around 11 Mt of greenhouse gases in the atmosphere every year. We find such a contribution to be far from negligible, and comparable to many activities of modern life, such as meat production, transportation, and even cryptocurrency mining. Our study also highlights that there exist significant inequalities when considering the footprint of different countries, website categories, and tracking organizations, with a few actors contributing to a much greater extent than the remaining ones.
Michalis Pachilakis, Savino Dambra, Iskander Sanchez-Rola, Leyla Bilge
2023-04-03T12:30:28Z
http://arxiv.org/abs/2304.00927v1
# Quantifying Carbon Emissions due to Online Third-Party Tracking

###### Abstract.

In the past decade, global warming made several headlines and turned the attention of the whole world to it. Carbon footprint is the main factor that drives greenhouse emissions up and results in the temperature increase of the planet, with dire consequences. While the attention of the public is turned to reducing carbon emissions from transportation, food consumption and household activities, we ignore the contribution of CO\({}_{2}\)eq emissions produced by online activities. In the current information era, we spend a large part of our days browsing online. This activity consumes electricity, which in turn produces CO\({}_{2}\)eq. Using the Internet is something we cannot avoid, but several things happening behind the scenes during browsing further contribute to CO\({}_{2}\)eq production. While website browsing contributes to the production of greenhouse gas emissions, the impact of the Internet on the environment is further exacerbated by the web-tracking practice. Indeed, most webpages are heavily loaded with tracking content used mostly for advertising, data analytics and usability improvements. This extra content implies large data transmissions, which result in higher electricity consumption and thus higher greenhouse gas emissions. In this work, we focus on the overhead caused by the web-tracking practice and analyze both its network and carbon footprint. By leveraging the browsing telemetry of 100k users and the results of a crawling experiment of 2.7M websites, we find that web tracking increases data transmissions upwards of 21%, which in turn implies the additional emission of around 11 Mt of greenhouse gases in the atmosphere every year. We find such a contribution to be far from negligible, and comparable to many activities of modern life, such as meat production, transportation, and even cryptocurrency mining. Our study also highlights that there exist significant inequalities when considering the footprint of different countries, website categories, and tracking organizations, with a few actors contributing to a much greater extent than the remaining ones.

## 1. Introduction

Global warming has constantly been in the spotlight during the last decade (Sanchez-Rola et al., 2017). Countless articles have discussed its main consequences, from the reduction of glaciers to the extinction of animal species, and from more intense heat waves to sea-level rise. Combating climate change has become a priority, and many countries try to put in place incentives aimed at reducing their polluting emissions (Sanchez-Rola et al., 2017). Although this requires a collective effort on both the national and international sides, it substantially boils down to reducing the carbon footprint of each individual. The carbon footprint of an individual is calculated by estimating the total amount of greenhouse gases (CO\({}_{2}\)eq\({}^{1}\)) that are produced due to the necessities of being alive and of being an active member of the current society (Sanchez-Rola et al., 2017). Globally, those emissions are estimated to be over 34 billion tonnes per year (Sanchez-Rola et al., 2017). Energy production and consumption represent the major factors that contribute to carbon emissions, followed by those caused by agricultural processing, industrial conversions, and waste decomposition (Figure 1).
Footnote 1: CO\({}_{2}\) equivalent is a metric measure used to compare the emissions from various greenhouse gases (GHG) on the basis of their global-warming potential.

With the emergence of computers and mobile devices, individuals started to spend a significant amount of their time using the Internet (Bulge et al., 2017). Thanks to the Internet, we are able to complete everyday errands from the comfort of our house, such as buying groceries and attending meetings with someone on the other side of the world. While the usage of the Internet seemingly helps to reduce CO\({}_{2}\)eq emissions, people might easily overlook that online activities require electricity and consequently generate CO\({}_{2}\)eq as well. Over the last two decades, we observed a dramatic increase in Internet usage (Sanchez-Rola et al., 2017), and experienced another sharp rise during the COVID-19 pandemic. According to a recent report (Sanchez-Rola et al., 2017), the average user spends 7 hours per day connected to the Internet.

Figure 1. Carbon footprint breakdown. Energy represents the major factor with 72.3% of the emissions, followed by Agriculture (18.4%), Industry (CO\({}_{2}\)eq emitted by chemical conversions - 5.2%) and Waste (methane and nitrous oxide emitted due to decomposition of organic residues - 3.2%).

While website browsing contributes to the production of GHG, the impact of the Internet on the environment is further exacerbated by the web-tracking practice. Indeed, the large majority of webpages are heavily loaded with third-party trackers [40; 69] that constantly track users for various reasons, including advertisement, analytics, and usability improvements [49]. The retrieval and execution of tracking-related content implies data transmissions and the use of computing resources, thus resulting in higher electricity consumption and GHG emissions.

Measuring the power consumption of the Internet and the corresponding CO\({}_{2}\)eq emissions are two highly complex problems. A large corpus of studies has already tried to calculate the power consumption required to transfer 1 GB of data on the Internet, but due to different methodologies and system boundaries in each study (e.g., data centers, underwater cables, ISPs), their estimates vary by up to 5 orders of magnitude. In addition, estimating the CO\({}_{2}\)eq emissions of the Internet is a much more understudied problem that requires an accurate conversion from energy consumption to GHG emission. This needs to take into account a plethora of factors, the most important being the mixture of sources used to produce electricity in each country. The most recent reports consider the Internet responsible for 3.7% of the global GHG emissions (\(\sim\)1 gigatonne of CO\({}_{2}\)eq) [29]. More specifically, only two studies narrowed their estimation down to web tracking. Parssinen et al. provided an upper-bound estimate of 159 Mt [62] for the CO\({}_{2}\)eq emissions caused by the online-advertisement ecosystem, indicating that almost 16% of the global Internet's electricity consumption is due to third-party trackers. However, the authors only considered a worldwide average for quantifying the amount of CO\({}_{2}\)eq emitted for each kWh; on top of that, they in turn relied on previous estimations for converting the amount of transmitted data to the energy consumed, which introduces a non-negligible uncertainty in their results (i.e., from 20 to 282 TWh consumed for advertisements only). The findings of Cucchietti et al. revealed a more conservative estimation [34].
The authors conducted an experimental study to measure the monthly network traffic produced by third-party cookies on the Tranco top 1M websites and estimated that 11.4 kt (kilotonnes) of CO\({}_{2}\)eq per month are caused by the transfer of cookies. However, considering the uncertainty ranges of the sources used, they estimate a much larger spectrum (between 1.4 kt and 17.1 kt of CO\({}_{2}\)eq per month). Since this study only accounts for the size of cookies, the overall consumption is significantly lower than in the previous study [62] -- even though the authors use a much higher estimate of the kWh consumed per GB of data transferred. The two studies differ by several orders of magnitude in their final findings since they measure different aspects of the tracking ecosystem, use estimations of user behavior, and lack real-world data. On top of that, both studies report wide ranges between their lower and upper bounds for CO\({}_{2}\)eq emissions, a result of the estimations and global averages used in the calculations.

To overcome the limitations that characterize those works, we designed an experimental study that analyzed the real-browsing telemetry of almost 100k users to perform more accurate estimations of both the network and CO\({}_{2}\)eq footprints due to third-party tracking. First, the telemetry made it possible to capture real users' browsing behaviors without the need for approximations. Differently from previous studies, we also accounted for potentially cached content, investigated tracking-related headers, and differentiated results by website categories and tracker organizations. In addition, we paid close attention to the geographical dimension in our measurements, as computing the web-tracking footprint is a multidimensional problem that cannot be addressed by blindly considering global averages. In this respect, we looked at the overload that trackers introduce in different geographical locations, and estimated GHG emissions by considering the energy-production sources of the country and continent each user mainly browses from. Below we report our most significant findings on the web-tracking footprint:

* The overall emissions due to trackers for the global active Internet users account for 10.79 Mt of GHG (6.28 Mt for the most conservative computation).
* The average annual data transmitted for a single user on the Internet is 8.14 GB, out of which 1.67 GB is for tracking-related purposes only (21.11%).
* Asian users emit more CO\({}_{2}\)eq compared to users from other continents, mostly because of the resources used in energy production.
* Shopping webpages are characterized by the highest mean data transmitted for a single user due to tracking (182 MB yearly). Websites serving news and media content reveal the highest average ratio of tracking-related content per website (2.2 MB per page). When looking at emissions, shopping websites represent the biggest offender, with 0.82 Mt of CO\({}_{2}\)eq per year.
* Google, the largest tracker, annually accounts for 4.38 Mt of CO\({}_{2}\)eq emissions (40% of the global emissions for web tracking).
* Globally, web tracking annually generates as much GHG as 80% of the whole aviation system of the top-10 polluting countries for plane emissions. Similarly, its impact is comparable to 70% of the emissions produced by Bitcoin mining in the US. The electricity required by the tracking ecosystem could heat New York for half a year.

The remainder of the paper is structured as follows.
In Section 2, we list the polluting impact that essential human activities have on the environment. We then describe the main datasets used in this study together with methodology details in Section 3. We report the results of our experiment in Section 4. In Section 5, we compare the web-tracking environmental impact with those characterizing humankind's principal necessities. In Section 6 we report the related work, discuss the main implications and limitations of our work in Section 8, and conclude in Section 9.

## 2. The carbon footprint of a human life

In this section, we report figures on essential activities that characterize individuals of modern society, such as food production, power/electricity consumption and the use of different transportation means. We then focus on cyberspace and report the consequences of the massive adoption of the Internet and its facilities on global gas emissions. We analyze previous surveys and provide an equation that quantifies the energy consumption --and in turn the CO\({}_{2}\)eq emissions-- for each GB of data transferred on the network. In addition, we investigate the environmental impact of Bitcoin adoption and report the carbon footprint resulting from its mining activity. We finally focus on some of the natural carbon dioxide absorbents by describing their contribution to capturing or converting greenhouse gases from the atmosphere. In the following paragraphs, we discuss per-capita CO\({}_{2}\)eq emissions for the countries that contribute the most to global pollution, and report data about the biggest offenders. After presenting the web-tracking footprint in Section 4, we will then provide emission comparisons in Section 5.

### Per-capita CO\({}_{2}\)eq emissions

According to the Emissions Database for Global Atmospheric Research (EDGAR), each person in the world produces 4.79 tons of CO\({}_{2}\)eq every year (Han et al., 2017), with an annual increase of 0.9% from 2020 to 2021. While this represents the average global value, there exist significant inequalities across countries, mainly due to the different living standards and sources used to produce energy. For example, while the carbon footprint of a Qatari citizen reaches upwards of 37 tonnes of CO\({}_{2}\)eq per year, an inhabitant of Mali is responsible for the emission of 0.09 tons in the same period (Han et al., 2017). Globally, energy production (electricity for buildings, transportation, and industrial applications) is the main polluting factor and is responsible for 73.2% of greenhouse gas emissions. On the other hand, activities related to agriculture, direct industrial processes, and waste contribute respectively 18.4%, 5.2% and 3.2% of the total emissions.

### Foodprint: Carbon Footprint of What We Eat

Food production accounts for 25% of the world's GHG emissions and currently requires half of the Earth's surface (Santos et al., 2018). The whole process has a substantial environmental cost, quantified at 13.7 billion tons of CO\({}_{2}\)eq produced every year (Santos et al., 2018). GHG emissions due to food production are mainly caused by land use and processes at the farm stage, such as the application of fertilizers and enteric fermentation --i.e., methane produced in the stomachs of cattle. The combination of land use and farm-stage emissions accounts for upwards of 80% of the footprint (Santos et al., 2018). On the contrary, food transportation and supply-chain-related activities are small contributors (10% of the emissions each).
The carbon footprint varies a lot among food types: while meat, cheese, and eggs have the highest impact, fruit, vegetables, beans, and nuts produce much lower GHG. Meat production unquestionably stands out as the most polluting activity, as animals live longer than plants, imply the destruction of forests to make way for pasture, and produce, as a result of the digestive process, large quantities of methane --which is 34 times more polluting than CO\({}_{2}\) (Santos et al., 2018). According to a study published by Poore et al. (Santos et al., 2018), beef is by far the largest offender, generating 60 kg of CO\({}_{2}\)eq per kilogram of meat produced, more than double the emissions of the second biggest polluter, lamb (23 kg of CO\({}_{2}\)eq per kilogram of product). Plant-based foods have a significantly lower impact, causing on average the production of 1 kg of CO\({}_{2}\)eq per kg of product. Annually, approximately 100 megatonnes of CO\({}_{2}\)eq are emitted due to the production of beans and nuts, which results in 0.33% of the total CO\({}_{2}\)eq emissions, versus the 15% caused by meat production and consumption (Santos et al., 2018). Apart from the food type, emissions also depend on the dietary habits of people of different nationalities, ideologies, or religions. For example, the United States and Australia are the countries with the highest average meat consumption --and consequent CO\({}_{2}\)eq emitted-- per capita, with 98.21 kg/year and 94.04 kg/year respectively. This is more than double the global average --41.90 kg/year per capita-- and more than 26 times the Indian one, where consumption per capita is limited to 3.63 kg/year.

### Transportation carbon footprint

In 2020, transportation was responsible for the production of approximately 7.3 billion metric tons of CO\({}_{2}\) (Beseses, 2018). As reported in Table 2 of the Appendix, passenger cars are responsible for 41% of the emissions, followed by medium and heavy trucks (22%) and cargo carriers (11%). Aviation and rail had a lighter impact on the environment, producing 8% and 3% of the polluting gases, respectively.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
\hline \hline
**Emissions source** & China & US & India & Russia & Japan & Germany & Canada & Iran & S. Korea & Indonesia \\
\hline
Any & 7.38 & 15.52 & 1.91 & 11.44 & 9.70 & 9.44 & 18.58 & 8.08 & 11.85 & 2.03 \\
Meat & 2.90 & 5.80 & 0.20 & 3.50 & 2.40 & 4.80 & 4.10 & 1.90 & 3.40 & 0.80 \\
Passenger car & 0.50 & 4.49 & 0.20 & 1.00 & 1.40 & 1.80 & 4.10 & 1.60 & 2.00 & 0.50 \\
Aviation & 0.09 & 0.58 & 0.02 & 0.15 & 0.14 & 0.70 & 0.50 & 0.03 & 0.40 & 0.04 \\
Electricity & 2.70 & 5.40 & 0.63 & 2.10 & 3.50 & 2.40 & 1.90 & 1.80 & 4.70 & 0.70 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Estimated emissions (tons CO\({}_{2}\)eq per capita) for essential human activities.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Sector & Share in the emissions (\%) & CO\({}_{2}\)eq emissions (tons) \\
\hline
Passenger cars & 41\% & 2.99 B \\
Medium/heavy trucks & 22\% & 1.60 B \\
Shipping & 11\% & 0.80 B \\
Aviation & 8\% & 0.58 B \\
Buses and minibuses & 7\% & 0.51 B \\
Light vehicles & 5\% & 0.36 B \\
2/3 wheelers & 3\% & 0.22 B \\
Rail & 3\% & 0.22 B \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Share and CO\({}_{2}\)eq emissions of each transportation sector.
Car transportation - In 2019, the European Environment Agency (EEA) quantified the average emissions of passenger cars at 122.3 g CO\({}_{2}\)eq/km (Bogge, 2019). Similar to the overall footprints, vehicle emissions per capita vary across the globe (Table 1). The US has the highest emissions per capita, with the average citizen emitting 4,486 kg of CO\({}_{2}\)eq annually, followed by Canada (4,120 kg CO\({}_{2}\)eq) and Saudi Arabia (3,961 kg CO\({}_{2}\)eq). The GHG produced are considerably lower in low-income countries such as those in South Asia and Africa, where the average person produces approximately 70 times fewer emissions annually compared to the US (e.g., 21 kg CO\({}_{2}\)eq in Congo and 48 kg CO\({}_{2}\)eq in Eritrea).

Air transportation - According to the International Council on Clean Transportation, domestic and international flights globally account for around 2.5% of the CO\({}_{2}\)eq emissions in the atmosphere every year (Kal

Plants - The world's forests absorb a total of 15.6 gigatonnes of CO\({}_{2}\)eq per year (i.e., about 34% of the total emissions in the same period), as plants capture and use carbon dioxide during photosynthesis to produce glucose (Gil et al., 2018). On average, a tree is able to absorb between 18 and 20 kg of CO\({}_{2}\)eq every year (Steintein et al., 2018; Gudmund et al., 2018).

Soil - Although soil does not absorb CO\({}_{2}\) as quickly as vegetation, its storing capability is much higher: according to a study by Ontl et al., global soil contains almost twice as much CO\({}_{2}\) as the living flora and the atmosphere combined (Steintein et al., 2018).

Oceans - Seas could theoretically absorb 95% of human-made GHG (Gil et al., 2018). In reality, each year oceans are able to store a share of 6.3% of the total emissions (2.9 Gt of CO\({}_{2}\)eq per year) (Gil et al., 2018), because of the slowness of the processes that transfer gases from the atmosphere to the water and deposit them on the seabed.

## 3. Data Sources and Methodology

This section summarizes the methodology and the datasets that constitute the basis of our study. Our main dataset is the telemetry of web-browsing logs provided by a popular antivirus company (Source A). To preserve the privacy of the users, only the domain name of the whole URL was recorded, excluding any Personally Identifiable Information (PII). For each domain in the telemetry, its corresponding category is also provided. We visit all of the unique domains in our data through a custom crawler (Source B) and capture all the HTTP requests performed to load the website and its resources. We calculate the amount of network traffic produced by the trackers identified in the crawled websites. Leveraging public reports and services to assess the electricity cost across countries and the power consumption due to Internet traffic (Source C), we estimate the electricity consumed due to the traffic produced by the trackers. Finally, we convert the consumed electricity to CO\({}_{2}\)eq emissions to understand how much tracking activities contribute to the annual production.

### Web-browsing telemetry

The telemetry at our disposal (Source A) contains the web-browsing history of 100k users, collected on 50k desktop hosts and 50k mobile devices, for a period of 4 weeks (28 days) across the year. These logs are collected by the company's product installed on its user devices.
The data is collected from users who voluntarily install the product, accept the company's privacy policy (Kalal et al., 2018), and opt in to share their data. The customer's identifier is anonymized on the client side and transmitted in this form to a central data lake: in our analysis, we observe users only through numeric anonymized identifiers that do not contain any attribute or detail that allows us to trace back to their identities.

In our study, we estimate the amount of traffic produced by the trackers in an average week, and then calculate the amount for the whole year. To better capture users' browsing behaviors along the year and mitigate potential trends of accessing particular categories of websites tied to specific seasons or events (e.g., summer or Christmas), we analyze 4 different weeks evenly distributed from July 2021 to February 2022 (i.e., 7 consecutive days in 4 different months). Each entry in the telemetry consists of a code reporting the country of the user, the unique user identifier, the domain name, its category, and a timestamp. Overall, the 4 weeks of data included 41M entries of 2.75M distinct domains (3.44% were not active anymore) that were later crawled to measure the produced network traffic. We filtered out users whose browsing sessions could not be reconstructed through our crawler (because all their accessed domains were not reachable). Our final dataset consists of 48,931 mobile and 47,995 desktop users.

### Reconstructing the Browsing Sessions

The most precise estimation of network-traffic production would require live data collection at browsing time from the real users. Unfortunately, this method is extremely privacy-invasive for the users and could result in potential data leaks and breaches of data confidentiality. Instead, we reconstruct the data by crawling the domains with a custom crawler (Source B). We crawled the domains in the first week of March 2022 using a server powered by two Intel Xeon E7-8890 v3 processors and 2 TB of RAM. We use a single machine located in the US, as the variability of trackers encountered by scanning websites from different locations has been proven to be negligible in large-scale measurements (Steintein et al., 2018). The crawler uses Puppeteer (Puppeteer, 2018) together with a fresh instance of Google Chrome for each website. All the requests and responses performed are gathered by using the HTTP Archive (HAR) facility of the browser and stored in JSON format. In order to avoid possible detection of our automated browser, we instrument Puppeteer with a plugin that implements state-of-the-art anti-detection techniques (Puppeteer et al., 2018). As already mentioned, maintaining user privacy is our top concern, and for this reason we exclude the path after the domain name. This means that our crawler visits the main page of a website, or a subdomain if this is part of the URL. In that sense, we capture a lower bound on trackers, since some trackers (e.g., multimedia-related ones) might be embedded deeper in the websites, or some websites might require a login before granting access to the content.

For each domain, we analyze the requests made to load third-party resources and identify those that belong to known trackers (Stein et al., 2018; Steintein et al., 2018; Steintein et al., 2018). Besides providing content for tracking purposes (e.g., javascript libraries), trackers could also provide other functionalities to a website, such as delivering images or videos.
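To make this step concrete, the following minimal sketch (our own illustration, not the paper's tooling; the filter-list file name and helper names are hypothetical) classifies the requests recorded in a HAR capture and applies the media-exclusion rule detailed in the next paragraph:

```python
import json
from urllib.parse import urlparse

# Hypothetical filter list: one known tracker domain per line.
TRACKER_DOMAINS = set(open("tracker_domains.txt").read().split())

def is_tracker(url):
    """True if the request host, or any of its parent domains, is a known tracker."""
    parts = (urlparse(url).hostname or "").split(".")
    return any(".".join(parts[i:]) in TRACKER_DOMAINS for i in range(len(parts)))

def tracker_bytes(har_path):
    """Sum tracker-related bytes in a HAR file: request/response headers always
    count, while media bodies (images, video, audio) are excluded."""
    total = 0
    for e in json.load(open(har_path))["log"]["entries"]:
        if not is_tracker(e["request"]["url"]):
            continue
        total += e["request"].get("headersSize", 0) + e["response"].get("headersSize", 0)
        mime = e["response"]["content"].get("mimeType", "")
        if not mime.startswith(("image/", "video/", "audio/")):
            total += e["response"]["content"].get("size", 0)
    return total
```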
In our study, we excluded media content, such as videos or images, since our goal is to measure only the impact of web tracking. Therefore, media content transmitted by the trackers and its impact on the network traffic are out of scope for our paper. For example, _trackerA_ could deliver a video to be played on a publisher's webpage. This video is not directly related to tracking even though it is created by a third-party tracker; it is rather needed for the functionality of the visited website. Here, we do not record the video-related traffic. However, the headers that notify the tracker about the user's visit are accounted for. Since a single organization might own multiple domains, we also group trackers under the organizations they belong to by using the relationships provided by three previous works (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018).

Additionally, in order to analyze the users' online behavior in the most pragmatic manner, we stored the Cache-Control HTTP headers of all the corresponding tracking domains flagged (Kal et al., 2018). This information allows us to retroactively enforce those caching policies, and precisely estimate the size associated with each user session in our dataset. More concretely, we follow the multi-keyed HTTP cache model (also known as state or cache partitioning (Yin et al., 2015)) recently implemented in all popular browser families (i.e., Chrome, Firefox, and Safari (2015; Goyal et al., 2016; Goyal et al., 2017)). According to it, browsers cache content not only by taking into account the content itself, but also where it is being loaded from (based on the eTLD+1 scheme (Frieder et al., 2016)). For example, if both _example.com_ and _test.com_ load _content.com/script.js_, the resource will not be shared but cached individually. Furthermore, while some requests indicate long cache time periods (e.g., max-age=31536000), the cache is usually invalidated earlier by the browsers, and content is stored for not more than 47 hours on average (Goyal et al., 2016). For this work, we consider this same upper-bound limit to make our results as accurate as possible.
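A minimal sketch of this cache model follows (our own illustration of the accounting, not browser code; the class and method names are ours):

```python
from datetime import datetime, timedelta

MAX_CACHE_AGE = timedelta(hours=47)  # empirical invalidation bound cited above

class PartitionedCache:
    """Multi-keyed (partitioned) HTTP cache: entries are keyed by the visited
    top-level site (eTLD+1) *and* the resource URL, so example.com and test.com
    never share a cached copy of content.com/script.js."""

    def __init__(self):
        self.expiry = {}  # (top_site, resource_url) -> expiry datetime

    def fetch(self, top_site, url, now, max_age_seconds):
        """Return True on a cache hit (no network bytes counted); otherwise
        record the entry with a TTL capped at MAX_CACHE_AGE and return False."""
        key = (top_site, url)
        if key in self.expiry and now < self.expiry[key]:
            return True
        self.expiry[key] = now + min(timedelta(seconds=max_age_seconds), MAX_CACHE_AGE)
        return False

# Example: a long max-age is still capped at 47 hours.
# cache = PartitionedCache()
# cache.fetch("example.com", "https://content.com/script.js", datetime.now(), 31536000)
```

Replaying a user's session through such a cache yields per-session transfer sizes in which any cache hit contributes no network bytes.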
### Internet browsing emissions

We calculate the CO\({}_{2}\)eq emissions due to the trackers on web-browsing sessions by converting each GB transmitted over the Internet to its corresponding electricity usage (Source C). Among the studies published since 2000, there is a non-negligible variance in the electricity-consumption estimates of Internet traffic. In some cases, the difference is even up to 5 orders of magnitude, from 136 kWh/GB to 0.004 kWh/GB. This dramatic variance is due to differences in the system boundary and to increased efficiency over the last two decades (Krause et al., 2017). While some studies only take into account the electricity consumed during the network transmission, others also incorporate the electricity consumed by the data centers, undersea cables, network accesses, ISPs, the type of device used, and any other equipment involved. The GB-to-kWh conversion carries a number of other limitations, such as the fact that it does not take into account the distance between the source and the destination, and the fact that the processing requirements of the data centers might not be linearly correlated with the size of the data.

The most recent studies that considered combinations of measurement approaches and system boundaries reported the total electricity consumption of a 1 GB data transmission to be in the range of 2.48 to 7.1 kWh (Krause et al., 2017; Goyal et al., 2017; Goyal et al., 2017). If the electricity consumed by data centers, servers, computers, and ISPs is not incorporated, they reported 0.023 to 0.16 kWh/GB. The most recent academic study analyzed data from 2015, so likely very different estimations would be made with more recent data. At a more recent conference organized by the American Council for an Energy-Efficient Economy, on the other hand, a slightly lower number was reported: 3.1 kWh/GB (Goyal et al., 2017). Calculating the carbon footprint of the web has been of interest to industrial and non-profit organizations as well (Goyal et al., 2017; Frieder et al., 2016). To calculate the carbon footprint of a given website, companies use the 1.8 kWh/GB estimation based on the predictions made about the electricity usage of communication technologies for the next ten years (Goyal et al., 2017). In our study, we rely on these two most recent estimations (1.8 kWh/GB and 3.1 kWh/GB) rather than the higher ones reported with data from 2015. As deeply discussed by Aslan et al. (Kaslan et al., 2017), the time factor greatly impacts the estimations made by previous works, and the current electricity consumption per GB of data transmission is probably lower than that of 7 years ago. Once we convert GB to kWh, we have to estimate the CO\({}_{2}\)eq emissions, which can differ significantly by region. In Table 3, we provide the amounts of CO\({}_{2}\)eq per kWh produced for the top countries in our dataset. Moreover, we found that the worldwide average of CO\({}_{2}\)eq emitted in order to produce 1 kWh is 420 grams.

### Ethical Concerns

Since the data provided to us comes from actual users, it has to be handled responsibly and with special care. All data come anonymized and without any PII in them. The web-browsing logs are further anonymized by keeping only the domain name from the whole URL. Therefore, no user artifacts (e.g., user names, real names, etc.) are kept in the logs. Furthermore, all the data analyzed and reported were in aggregated form, with no individual user data accessed or available. The legal department of the company that shared the data reviewed the paper before submission to ensure that the data was processed ethically and preserved the customers' anonymity.

## 4. Web-tracking footprint

In this section, we analyze the output of the crawler and provide estimations of the web-tracking footprint at different granularities: per geographical location, per tracking company, per website category, and per device type. Thanks to the browsing-history reconstruction methodology explained in the previous section, we also estimate the CO\({}_{2}\)eq emission rates of an average user due only to the trackers that exist on the browsed websites.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Continent & \% Users & Country & \% Users & CO\({}_{2}\)/kWh \\
\hline
\multirow{3}{*}{North America} & \multirow{3}{*}{24.0 \%} & United States & 11.7 \% & 0.45 kg \\
 & & Canada & 6.6 \% & 0.13 kg \\
 & & Mexico & 2.6 \% & 0.45 kg \\
\hline
\multirow{3}{*}{Europe} & \multirow{3}{*}{21.5 \%} & Great Britain & 1.4 \% & 0.35 kg \\
 & & Germany & 1.2 \% & 0.38 kg \\
 & & Netherlands & 1.2 \% & 0.45 kg \\
\hline
\multirow{3}{*}{Asia} & \multirow{3}{*}{20.4 \%} & Japan & 9.0 \% & 0.5 kg \\
 & & India & 1.9 \% & 0.7 kg \\
 & & Hong Kong & 1.7 \% & 0.81 kg \\
\hline
\multirow{3}{*}{Oceania} & \multirow{3}{*}{19.0 \%} & Australia & 10.3 \% & 0.79 kg \\
 & & New Zealand & 5.7 \% & 0.09 kg \\
 & & Fiji & 0.1 \% & NA \\
\hline
\multirow{3}{*}{South America} & \multirow{3}{*}{13.9 \%} & Brazil & 8.2 \% & 0.07 kg \\
 & & Chile & 0.9 \% & NA \\
 & & Colombia & 0.9 \% & NA \\
\hline
\multirow{3}{*}{Africa} & \multirow{3}{*}{1.3 \%} & South Africa & 0.8 \% & NA \\
 & & Nigeria & 0.1 \% & NA \\
 & & Egypt & 0.1 \% & NA \\
\hline \hline
\end{tabular}
\end{table}
Table 3. Overview of continents and top-3 countries ordered by percentage of users.

### The overall picture

In Table 3, we provide a geographical breakdown of the users in our dataset. Our telemetry spans 6 continents and 211 different countries. While it has great geographic visibility, some continents (e.g., Africa) or countries (e.g., China) may be under-represented due to the distribution of the tech company's customers. Nevertheless, all the results provided hereinafter account for this inequality, and they are normalized considering the _average user_ of each continent and its countries. The majority of connections (24%) originate from North America, and the most frequent countries in our telemetry are the United States and Australia, which respectively account for 11.7% and 10.3% of the users. The median user is active during 20 of the 28 days of the telemetry-collection period and browses on average 5 out of 7 days per week, with a frequency of 4 hours per day. By looking at aggregated browsing behaviors, we observe that users visit 197 different domains throughout the 4 weeks, accessing on average 64 distinct domains per week.

In Figure 2, we report the breakdown of the yearly transferred data for the average user in each continent. As our telemetry covers 4 weeks evenly distributed across the year, we compute the mean amount of data transmitted by each user during one week by averaging the four weeks of activity at our disposal, and project it to one year. On average, the annual data downloaded and uploaded by an _average user_ worldwide accounts for 8.14 GB, out of which 1.67 GB is attributable to tracker-related exchanges (21.11%). The aforementioned data do not include multimedia, such as video streams or images, and only refer to data related to content and headers. Parssinen et al. (Parssinen et al., 2016) estimated, with high uncertainty, that 25 to 75% of web content is due to online advertisement. While our focus is broader than online advertisement, we find the ratio to be lower than previously reported. Note that our work focuses only on the web content, excluding the analysis of other media content such as images and videos (e.g., videos streamed over the network). If media transmitted by trackers were also taken into account, higher numbers would be obtained.
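Concretely, the projection reported in the next paragraph amounts to the following arithmetic (a sketch using only constants quoted in the text; small differences from the published figures come from rounding the per-user average):

```python
GB_PER_USER_YEAR = 1.67      # tracker-related traffic per user (reported above)
ACTIVE_USERS = 4.95e9        # current active Internet population
KG_CO2EQ_PER_KWH = 0.420     # worldwide average grid intensity

global_gb = GB_PER_USER_YEAR * ACTIVE_USERS          # ~8.3e9 GB, i.e. ~8,300 PB
for kwh_per_gb in (3.1, 1.8):                        # lax / conservative factors
    mt = global_gb * kwh_per_gb * KG_CO2EQ_PER_KWH / 1e9   # kg -> megatonnes
    print(f"{kwh_per_gb} kWh/GB -> {mt:.2f} Mt CO2eq")     # roughly 10.8 and 6.25 Mt
```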
If we project the average data transmitted per capita to the current active Internet population (i.e., 4.95 B users), we find that web tracking is responsible for the transmission of 8,415 PB yearly. By further considering 3.1 kWh (1.8 kWh) to transfer 1 GB of data, and an average of 420 g of CO\({}_{2}\)eq emitted per kWh, we compute that the worldwide emissions due to tracking reach 10.79 Mt (6.28 Mt) of GHG.

### Geographical analysis

In our telemetry, the highest amount of transmissions due to web browsing is produced by Asian users, who exchange 9.88 GB per capita in one year. At the other end, we observe North American users, who generate on average 6.22 GB of network traffic. We found that an average Asian user visits many more websites than an average North American one (503.76 vs. 343.70) during the period of 4 weeks. Moreover, the mean website visited in Asia is slightly bigger than those browsed in North America (3.83 MB vs. 3.69 MB). This explains the difference observed in Figure 2. If we look at the biggest data generators due to trackers, we find South America on top of this list. South American users generate 2.06 GB of tracker traffic per year, Asian users 1.89 GB, and African users 1.77 GB. On the contrary, we observe that tracking organizations generate less traffic in Europe and North America, 1.45 GB and 1.42 GB respectively.

We have to note that higher data transmissions do not necessarily mean higher CO\({}_{2}\)eq emissions. Indeed, the average North American user produces 1.42 GB due to tracking, whereas the average South American produces 2.06 GB. When taking into account the actual CO\({}_{2}\)eq emissions, however, we see that electricity in the US causes 0.45 kg of CO\({}_{2}\) per kWh (resulting in 0.64 kg of CO\({}_{2}\)eq due to tracking for the average user), whereas in Brazil it causes only 0.07 kg per kWh (0.14 kg of CO\({}_{2}\)eq due to tracking). The average US citizen thus emits almost 4.6 times more than the average Brazilian. We also find that users from Asian countries such as Japan, Hong Kong, and India produce as much as 0.95, 1.53, and 1.32 kg of CO\({}_{2}\)eq respectively, mostly because of the resources used for electricity production in those countries.

### User browsing analysis

In Figure 3, we report the distribution of the ratios between the tracking and the website contents (left), the megabytes exchanged due to web tracking (center), and the traffic generated due to the transmission of the sole tracker-related HTTP headers (right). The three figures are complemented with the corresponding global means. More than 96% of the active population in our dataset visits websites whose tracking content is lower than 40% of the rest of the website's content. Similarly, 90% of the users receive less than 4 GB of tracking content annually and transfer 192 MB of headers related to trackers. We observed some websites (45,670) whose tracking content represents a considerably higher percentage (90%) of the whole data transmitted. A deeper analysis revealed that those websites mostly depend on third-party services or CDNs for content generation, which also act as trackers when providing the resources.

Figure 2. Breakdown of transferred data for the average user in each continent.

### Categorical analysis

In Table 4, we summarize the top-10 categories according to their prevalence and report the impact that web tracking has on each of them in terms of transferred data. We categorize the websites based on a categorization service provided to us by the AV company.
The service supports over 60 languages and is composed of several specialized modules that disassemble web pages and analyze their components, such as the webpage language, source-code language, document type, character set, external-link categories, content words, scripts, and iframes. In addition, the categorization is fine-tuned by an offline system, which simultaneously analyzes multiple pages looking for connections and additional evidence to supplement what was collected in real time. HTTP referrer headers and hyperlinks are examples of attributes used in this phase.

Generally, we observe a correlation between the most browsed categories and those that present the highest amount of generated traffic due to trackers. _Technology_, _Business_, and _Shopping_ websites deviate from the mean by presenting a higher share of tracking data transmitted yearly per capita (> 126 MB). In webpages that serve _News and media_ content, we instead measure the highest mean overload introduced by tracking resources (i.e., 2.16 MB per website). When we take a closer look at the categories generating the highest tracking traffic, we observe not only that the trackers on such websites include heavier content, but also that they are responsible for a considerably higher number of web requests. While their mean number is 40.9 in our dataset, an average _Shopping_ website requests 47 resources from third-party trackers, and an _Entertainment_ page 58. Once again, _News/Media_ websites are the ones characterized by the highest mean value, with 85 tracker-related resources. When examining the average size of tracking-related resources (e.g., javascript libraries, application/json, etc.), we register a mean size of 70 KB per file, with negligible variability across the categories.

We also calculated the amount of CO\({}_{2}\)eq emissions due to tracking that is produced by each category globally, based on user visits. We find Shopping websites to be responsible for the highest emissions (0.82 Mt CO\({}_{2}\)eq), followed by Technology and Internet (0.72 Mt CO\({}_{2}\)eq) and News/Media websites (0.70 Mt CO\({}_{2}\)eq). Even if Shopping and Technology/Internet are not the most highly tracked categories, they are among the top three most visited ones, which in turn makes them the biggest contributors of CO\({}_{2}\)eq. In contrast, News/Media is much less visited, but due to the huge amount of tracking per website, it completes the triplet of the biggest CO\({}_{2}\)eq emitters.

### Tracker analysis

We now turn the camera on tracking companies and estimate the total data transmitted due to their existence in the web ecosystem. Table 5 summarizes the average number of megabytes transmitted per website and per user related to the top-10 tracking organizations, sorted by their prevalence in our dataset. As already proven by previous studies (Wang et al., 2018; Wang et al., 2018), _Google_ leads the ranking not only by the number of users tracked, but also for the average number of megabytes exchanged in one year with a single user (709.31 MB) and for the amount of information transmitted in each request and response that transfers its resources (706.98 kB). We also highlight that _Facebook_, the second big player in our dataset, moves 3 times less data than _Google_ for the average user (198.86 MB) but shows a lower transmission rate for each website that it tracks (399.66 kB).
_Twitter_ represents an interesting case: while exchanging an annual amount of data with a single user that is around 12 times smaller than _Google_'s, the requests for its tracking-related contents are characterized by a large amount of transmitted data (448.19 kB). In this respect, we found that _Google_ and _Twitter_ respectively send on average 19 and 17 tracking-related resources per tracked website, which is more than twice the average of 7.72 computed over all the organizations. In addition, we measured that the tracking libraries of _Twitter_ are smaller in size compared to the ones of _Google_ (\(\sim\)60 kB vs. \(\sim\)77 kB) and that the former organization often transmits media content such as images and video, which we exclude from our measurements --although we consider the headers transmitted, since those are used for tracking purposes. Moreover, users encounter _Twitter_ much less often compared to _Google_, due to the lower presence of the former across the different websites. In turn, the combination of those factors results in a high average number of tracking bytes per website exchanged by _Twitter_, but a lower amount of annual tracking data per user.

Other interesting cases are the ones of _Pubmatic_ and _Rubicon_, two of the biggest players in the real-time-bidding advertising (Rubin, 2017) and header-bidding ecosystems (Rubin, 2017). Those organizations primarily focus on tracking users and delivering advertisements, and do not offer any content or functionality as other trackers might do. Indeed, when examining the average number of resources per website, we found them to be on average 4.7 per website for _Pubmatic_ and 5.1 for _Rubicon_ --lower than the overall average of 7.72--, confirming their main interest only in tracking pixels, auction data, and online-advertising libraries.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
**Category** & **Unique users** & **\% users** & **Total tracking** & **Avg tracking per user** & **Avg tracking bytes per website** \\
\hline
Technology/Internet & 89.29 k & 92.12 \% & 549.83 PB & 129.64 MB & 930.57 kB \\
Business/Economy & 86.84 k & 89.59 \% & 519.84 PB & 126.70 MB & 115.38 kB \\
Shopping & 73.28 k & 75.50 \% & 62.83 B & 181.67 MB & 180.511 KB \\
Entertainment & 66.36 k & 68.46 \% & 212.54 PB & 76.80 MB & 1,788.50 kB \\
News/Media & 65.94 k & 68.03 \% & 54.210 PB & 172.64 MB & 2,160.26 kB \\
Health & 65.44 k & 67.51 \% & 267.19 PB & 85.85 MB & 1,438.99 kB \\
Financial Services & 64.99 k & 67.05 \% & 184.78 PB & 59.78 MB & 870.29 kB \\
Education & 61.44 k & 63.30 \% & 172.45 PB & 59.35 MB & 1,222.33 kB \\
Travel & 57.87 k & 59.70 \% & 253.48 PB & 92.10 MB & 1,347.34 kB \\
Government/Legal & 54.39 k & 56.11 \% & 77.94 PB & 30.13 MB & 844.99 kB \\
\hline \hline
\end{tabular}
\end{table}
Table 4. Web-tracking network footprint for the top-10 categories in our dataset.

Figure 3. Left - ECDF of the ratio between the data transmissions of trackers versus the whole website; Center - Megabytes transferred due to tracker-related traffic; Right - Megabytes transferred due to HTTP headers in tracker-related traffic.

When we look at the environmental impact in terms of CO\({}_{2}\)eq emissions, we estimated that _Google_ annually produces more than all the other top trackers combined.
With a whopping 4,382 kt of CO\({}_{2}\)eq emissions, _Google_ dominates the tracking scene and overshadows other big players such as _Facebook_ (1,129 kt), _Amazon_ (140 kt), and _Microsoft_ (98 kt).

## 5. Web-tracking carbon footprint

We estimated that the tracking ecosystem's contribution to global pollution can be up to 10.76 Mt of CO\({}_{2}\)eq annually. We now compare web-tracking emissions to those of other human activities, namely the ones discussed in Section 2. The comparisons take into account the lax estimation of 3.1 kWh/GB to convert transferred data to electricity consumed, and the mean value of 420 g CO\({}_{2}\)eq per kWh to consequently assess the GHG emitted.

### Foodprint vs TrackPrint

The food industry accounts for 25% of all the yearly emissions of CO\({}_{2}\)eq in the atmosphere. Meat production is estimated to be the biggest offender when considering all of the energy consumed for feeding the cattle, processing and distributing the meat, and the GHG produced from the birth of the animal to the end of its life. Taking into account an average cattle weight of 720 kg, 40% of consumable meat (Krause, 2017), and using the aforementioned beef-related emission statistics, we estimate that the emissions caused by third-party trackers equal the production life-cycle of 416,281 cows. On the other hand, if we made similar estimates for pigs (i.e., an average weight of 130 kg, and 70% of consumable meat (Krause, 2017)), we would obtain 5,155,279 pigs. This translates to 91% and 69% of the pigs and cows respectively bred in Argentina and Greece (Krause, 2017; Krause, 2017). Clearly, other types of food, such as dairy products, vegetables, legumes, and nuts, also generate megatonnes of CO\({}_{2}\)eq every year. Tracking generates CO\({}_{2}\)eq emissions equivalent to 28% of the cheese consumption in France (Krause, 2017), 257% of the coffee drunk in Italy (Krause, 2017), and 310% of the olive oil produced in Spain (B

CO\({}_{2}\)eq gases: to match worldwide web-tracking-generated emissions, the same plane would have to fly the same route 66,788 times -- which would require 11 thousand years at the current frequency of 30 flights per day (Beck et al., 2018).

Passenger-car transportation has been found to contribute five times more than aviation to global GHG production (Section 2). In this context, if we consider the per-capita emissions of the US --the country with the highest level of CO\({}_{2}\)eq emissions due to passenger cars-- the overall environmental impact of data transmissions related to tracking corresponds to the annual amount of CO\({}_{2}\)eq produced by the cars of 2.4M US citizens. In India and China, the same amount corresponds to the yearly CO\({}_{2}\)eq production of 54M and 21.6M inhabitants' cars. To provide the reader with a concrete example that allows us to better quantify the web-tracking carbon footprint, we consider the CO\({}_{2}\)eq produced by an average car (i.e., 122.3 g CO\({}_{2}\)eq/km) during the trip from Rome to Paris (i.e., 1412.6 km). To match the annual carbon emissions of tracking for the global Internet population, the car would have to travel from one city to the other 62.12M times and accumulate 87.75B kilometers. From a different perspective, this would correspond to going around the world along the equator 2.2M times.
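The car-trip equivalence is straightforward to verify (a sketch with the figures quoted above; 40,075 km is the standard equatorial circumference of the Earth, and small deviations from the published totals are due to rounding):

```python
TRACKING_MT_CO2EQ = 10.76    # annual web-tracking emissions estimated above
CAR_G_PER_KM = 122.3         # average passenger-car emissions (EEA, 2019)
ROME_PARIS_KM = 1412.6
EQUATOR_KM = 40_075

total_km = TRACKING_MT_CO2EQ * 1e12 / CAR_G_PER_KM      # Mt -> grams, then km
print(f"{total_km / 1e9:.1f} B km")                     # ~88 B kilometers
print(f"{total_km / ROME_PARIS_KM / 1e6:.1f} M trips")  # ~62 M Rome-Paris trips
print(f"{total_km / EQUATOR_KM / 1e6:.1f} M equator laps")  # ~2.2 M laps
```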
### Cryptocurrency

Bitcoin (BTC) is the most important cryptocurrency by market cap (over $850B). It is based on the proof-of-work consensus model, in which heavy computational effort is needed in order to validate transactions on the blockchain. As we discussed in Section 2, Bitcoin is annually responsible for over 70 megatonnes of CO\({}_{2}\)eq emissions, with the US and Russia being the biggest contributors. Our estimations for the web-tracking ecosystem correspond to 72% of all the Bitcoin-related emissions in the US, and 121% of those in Russia. If we calculated some global transaction-level statistics (Beck et al., 2018), we would find that tracking emissions are equal to 24,087,768 Bitcoin transactions (almost 2 full months of global Bitcoin transactions). Taking into consideration the median Bitcoin transaction value of 0.013 BTC (Beck et al., 2018), and the current BTC price of $43,455 (Beck et al., 2018), this would be equal to $13B.

### Electricity of Home Appliances

The residential electricity consumption is estimated to be responsible for 20% of the energy-related CO\({}_{2}\)eq emissions (Kal

Due to situations of walled gardens or paid subscriptions, we are not able to log in to certain services. This means that we may collect a lower bound in some situations, since more tracking might occur when a logged-in user browses through the website. Finally, the browsing history of the users comes from both mobile and desktop users. Mobile and desktop devices consume different amounts of electricity due to different specs (e.g., network connectivity, CPU consumption, screen energy usage, etc.), so the energy consumption could differ between those devices. In this study we use generally accepted energy-consumption numbers for transferring 1 GB of data, independently of the user device. The differentiation between the specific consumption of mobile and desktop users is out of the scope of this study and is left as future work.

## 8. Discussion

In this study, we estimate the CO\({}_{2}\)eq emissions produced by the web-tracking ecosystem and find them to account for almost 11 Mt of CO\({}_{2}\)eq annually. We report as well that this value is comparable to many activities of modern life, such as meat production and consumption, transportation, and even cryptocurrency mining. Considering that a tree can on average absorb 20 kg of CO\({}_{2}\)eq every year, we would need to plant 550M trees to compensate for the emissions due to web tracking. Our analysis is, to our knowledge, the most comprehensive study so far to put the tracking ecosystem under the spotlight, investigating an understudied but critically important aspect of modern society. As broadly discussed, such an estimation is not trivial to obtain: although we consider real-user telemetry, diversify the users' geographical locations, and provide an upper and a lower estimation, we highlight that our results represent a lower bound. The tracking ecosystem is far more complex and opaque than what is visible to the users. Indeed, there is a plethora of connections and transferred data behind the scenes. Trackers and advertisers continuously exchange information with each other and over their internal networks: measuring these data transfers from the user's perspective is impossible to achieve, since the whole process is completely opaque. In any case, the existence of such networks and interconnections leads to much higher CO\({}_{2}\)eq emissions, which add to the ones that we could estimate from our perspective.

Is there something that users could do to limit the emissions due to web tracking? The answer to this question is multilayered.
Firstly, users could significantly reduce the information that trackers collect by utilizing anti-tracking solutions. This approach would not only reduce data transmissions but also decrease the computation resources needed to process a web page, in turn saving electricity. As a consequence, tracking organizations would have less data to share, which would further limit polluting emissions. A more drastic approach would be the installation of specific software at the network level (i.e., in the routers) to completely drop connections towards trackers. This can further reduce CO\({}_{2}\)eq emissions, as the data never arrives at the users' devices and does not need to be processed. Such a possibility is however too strict, as in many cases it would break some functionality and would disrupt the whole tracking and advertising ecosystem.

It is interesting to note that our study differs significantly from previous ones. Our methodology accounts for all the possible techniques and tracking activities of third-party trackers, by monitoring them at the network level (e.g., capturing requests being sent and received, monitoring the scripts that third-party trackers load, etc.). In contrast to previous works, we do not focus on specific tracking techniques such as cookies, which would limit our visibility into the tracking ecosystem. Also, since not all users visit the same websites, and not all websites have the same number of cookies, tracking scripts, and tracking techniques, it is not safe to assume plain averages. In this study, we use a systematic and thorough approach that considers multiple different factors. Our approach is designed to represent the whole tracking ecosystem, and not just some aspects of it (e.g., cookies). In this work, we focus on real user behavior, in contrast to previous studies, so our calculations rely as little as possible on estimations and projections. To the best of our knowledge, this is the most reliable way of representing third-party-tracking CO\({}_{2}\)eq emissions to date. Finally, our methodology can be easily generalized, since it does not rely on estimations of user behavior or on global averages.

## 9. Conclusion

In this work we estimate the annual CO\({}_{2}\)eq emissions of the tracking ecosystem across the globe. By designing a thorough analysis that takes into account multiple factors regarding online tracking, we were able to calculate the CO\({}_{2}\)eq emissions due to tracking activities in a systematic and comprehensive manner. We utilize the browsing telemetry of 100k users to design a comprehensive experiment across 41 million website visits (2.7 million unique domains). We observe that web tracking increases data transmission by up to 21% annually per user, which further increases GHG emissions. We found the total emissions to be up to 10.76 Mt of CO\({}_{2}\)eq, which is comparable to many aspects of modern life, such as meat consumption, transportation and aviation, and even cryptocurrency mining. We conduct a multilayered study broken down by user continent, country energy-production mixture, and even resource caching across websites. Our analysis is, to our knowledge, the most comprehensive study so far to put the tracking ecosystem under the spotlight, reporting on an understudied but critically important aspect of modern society.
2303.05620
CFR-ICL: Cascade-Forward Refinement with Iterative Click Loss for Interactive Image Segmentation
Click-based interactive segmentation aims to extract the object of interest from an image with the guidance of user clicks. Recent work has achieved great overall performance by employing feedback from the output. However, in most state-of-the-art approaches, 1) the inference stage involves inflexible heuristic rules and requires a separate refinement model, and 2) the number of user clicks and model performance cannot be balanced. To address the challenges, we propose a click-based and mask-guided interactive image segmentation framework containing three novel components: Cascade-Forward Refinement (CFR), Iterative Click Loss (ICL), and SUEM image augmentation. The CFR offers a unified inference framework to generate segmentation results in a coarse-to-fine manner. The proposed ICL allows model training to improve segmentation and reduce user interactions simultaneously. The proposed SUEM augmentation is a comprehensive way to create large and diverse training sets for interactive image segmentation. Extensive experiments demonstrate the state-of-the-art performance of the proposed approach on five public datasets. Remarkably, our model reduces the number of clicks required to surpass an IoU of 0.95 by 33.2\% and 15.5\% relative to the previous state-of-the-art approach on the Berkeley and DAVIS sets, respectively.
Shoukun Sun, Min Xian, Fei Xu, Luca Capriotti, Tiankai Yao
2023-03-09T23:20:35Z
http://arxiv.org/abs/2303.05620v2
# CFR-ICL: Cascade-Forward Refinement with Iterative Click Loss for Interactive Image Segmentation

###### Abstract

The click-based interactive segmentation aims to extract the object of interest from an image with the guidance of user clicks. Recent work has achieved great overall performance by employing the segmentation from the previous output. However, in most state-of-the-art approaches, 1) the inference stage involves inflexible heuristic rules and a separate refinement model; and 2) the training cannot balance the number of user clicks and model performance. To address the challenges, we propose a click-based and mask-guided interactive image segmentation framework containing three novel components: Cascade-Forward Refinement (CFR), Iterative Click Loss (ICL), and SUEM image augmentation. The proposed ICL allows model training to improve segmentation and reduce user interactions simultaneously. The CFR offers a unified inference framework to generate segmentation results in a coarse-to-fine manner. The proposed SUEM augmentation is a comprehensive way to create large and diverse training sets for interactive image segmentation. Extensive experiments demonstrate the state-of-the-art performance of the proposed approach on five public datasets. Remarkably, our model achieves an average of 2.9 and 7.5 clicks of NoC@95 on the Berkeley and DAVIS sets, respectively, improving by 33.2% and 15.5% over the previous state-of-the-art results. The code and trained model are available at [https://github.com/TitorX/CFR-ICL-Interactive-Segmentation](https://github.com/TitorX/CFR-ICL-Interactive-Segmentation).

## 1 Introduction

Interactive image segmentation extracts object(s) of interest from images with user input. It is essential for enabling the broader application of deep learning-based solutions in real-world scenarios. Deep neural networks (DNNs) often require extensive annotated data, which could be extremely expensive and time-consuming to create due to high labor costs. Interactive segmentation offers a more cost-effective solution for generating large-scale labeled datasets. Interactions in interactive segmentation include scribbles [25], boxes [11], and clicks [24]. This work focuses on the click-based approach, whereby the user provides positive and negative clicks on the image to identify foreground and background regions, respectively. Earlier click-based methods were developed using image processing techniques, such as the connectedness-based approach described in [24]. However, subsequent advancements in the field led to the development of deep learning-based methods, beginning with [27], which resulted in significant improvements in segmentation performance. Recently, more deep learning-based methods have been introduced, including [10, 22, 23, 2], which have further improved the efficiency of interactive segmentation. We revisit the recent state-of-the-art approaches and note that the existing training procedure cannot balance the segmentation quality and the number of user clicks. To address this limitation, we propose a new training strategy called Iterative Click Loss, which encodes the number of clicks into the loss function. Additionally, we propose an inference-time refinement strategy named Cascade-Forward Refinement, which improves segmentation details during inference. We also introduce a novel image augmentation strategy, SUEM copy-paste, specifically designed for interactive image segmentation tasks. The contributions of this work are summarized below.
* The proposed Cascade-Forward Refinement enhances the segmentation quality during inference in a simple and unified framework and can be applied to other iterative mask-guided interactive segmentation models.
* To the best of our knowledge, the proposed Iterative Click Loss is the first loss that encodes the number of clicks to train a model for interactive segmentation; it offers a novel approach to define a preference for models that need fewer user clicks.
* In the proposed SUEM augmentation, we propose a set of four copy-paste augmentation approaches that greatly improve the diversity and increase the size of training sets in practical settings.

## 2 Related Work

**Interactive segmentation**. Prior to the widespread adoption of deep learning techniques, interactive segmentation was primarily achieved through image processing-based methods, such as GrabCut [20], NC-Cut [25], and EISeg [24]. With the emergence of deep learning, deep learning-based interactive segmentation models have gained increasing popularity. The DIOS model [27] encoded the background and foreground user clicks into two distance maps and concatenated the image with the two maps as input. The BRS [10] and f-BRS [22] formulated the task as an inference-time online optimization problem. BRS optimizes the image or distance maps during inference to improve the segmentation results for given user clicks, while f-BRS optimizes the intermediate layer weights to achieve the same goal and speed up the inference. FCA-Net [14] built a first-click attention module that enhances the importance of the first click, as it usually plays a critical role in identifying the location of the main body. UCP-Net [4] proposed a novel contour-based approach that asks the user to provide clicks on the contour of objects.

**Iterative mask-guided interactive segmentation**. An iterative sampling strategy has been proposed by [17], which samples a single point from the center of a misclassified area iteratively. These iteratively generated points are used in training to boost performance. RITM [23] adopted and modified the iterative sampling and iteratively uses the previous output as model input to achieve higher segmentation quality. The iterative mask-guided model only involves a simple feedforward process that is more computationally efficient than the inference-time optimization approaches such as BRS and f-BRS. More recently, FocalClick [2] proposed a coarse-to-fine pipeline with a SegFormer [26] backbone and achieved state-of-the-art results. It utilizes a heuristic strategy to determine local regions that possibly contain errors and uses a local refinement network to update these regions. SimpleClick [15] greatly improved performance by adopting the Plain Vision Transformer (Plain ViT), which was pre-trained with MAE [9], as the backbone of the RITM approach. The advanced ViT network architecture has significantly benefited interactive segmentation. Our work builds upon the RITM and SimpleClick approaches and further explores the nature of interactive segmentation.

Figure 1: Examples of segmentation results that have exceeded 0.95 of IoU. The first column shows images with clicks (green for the foreground and red for the background) and blue interactive segmentation masks. The second column shows the ground truth. The third column shows probability maps of the proposed approach. These raw images are from the Berkeley [18] and DAVIS [19] datasets.

**Image augmentation**.
Deep learning applications typically require large amounts of data to achieve optimal model performance, and the diversity of the training samples is crucial. Therefore, it is essential to develop efficient data augmentation techniques to enhance data efficiency. A previous study [6] has shown that a simple copy-paste strategy can serve as a powerful data augmentation for instance-level segmentation tasks. We develop more specific image augmentation approaches adapted to interactive segmentation tasks.

## 3 Proposed Method

We build a new iterative mask-guided framework that includes 1) an inference-time refinement scheme without the need for extra modules; 2) a novel training loss that encodes the number of clicks into the loss term and defines the model's preference for fewer clicks; 3) an effective image augmentation designed for the interactive segmentation scenario.

### Cascade-Forward Refinement

The works [2, 28] proposed similar coarse-to-fine pipelines that incorporated a local refinement module to improve the details of local regions after the initial coarse segmentation of the entire image. These coarse-to-fine strategies demonstrate impressive overall performance; however, the approaches 1) depend on heuristic rules to select image regions for further refinement, and 2) have to optimize two individual deep learning models independently. We introduce the Cascade-Forward Refinement (CFR)-based inference strategy, which enables iterative refinement of the segmentation results without needing two models. The proposed CFR has two inference loops, i.e., the outer loop generates coarse segmentation masks using incremental user interactions, and the inner loop refines the masks by forwarding the segmentation model multiple times with the same input image and user clicks. Let \(f\) denote a deep neural network for segmentation, \(Y^{t}\) be the output of the \(t\)-th step, and \(X\) be the input image. The sequence of user clicks at the \(t\)-th step is denoted as \(P^{t}=\{(u_{k},v_{k},l_{k})\}_{k=1}^{t}\), where \((u_{k},v_{k})\) represents the coordinates of a user click and \(l_{k}\in\{0,1\}\) denotes its label (0 for background and 1 for foreground). The model's inputs are raw images, maps generated by clicks, and previous segmentation masks. We define the CFR as

\[Y_{0}^{t}=f(X,P^{t},Y_{n}^{t-1}),\text{ and} \tag{1}\]

\[Y_{i}^{t}=f(X,P^{t},Y_{i-1}^{t}),\quad i\in\{1,2,3,...,n\}, \tag{2}\]

where \(Y_{0}^{t}\) denotes the output of the \(t\)-th coarse segmentation step (outer loop), and \(Y_{n}^{t-1}\) is the last refined mask at step \(t-1\). Eq. 2 defines the refinement result at the \(i\)-th step of the inner loop, and \(n\) is the number of refinement steps, which can be a fixed number or determined adaptively. The Cascade-Forward Refinement approach iteratively updates the coarse segmentation masks and refinement masks using Eqs. 1 and 2. No additional user clicks are required during the refinement process (inner loop), and the segmentation mask is continuously refined to provide higher-quality input. Figure 2 illustrates the overview pipeline of the CFR, and Figure 3 shows a series of CFR-refined results.

**Fixed-step CFR and Adaptive CFR Inference.** A fixed-step CFR applies \(n\) refinement steps for each user click. This refinement is employed during inference to enhance the quality of the output and can be integrated into any iterative mask-based model without the need for model modification.
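As a concrete illustration, the two loops of Eqs. 1 and 2 can be written as a short inference routine. This is a minimal sketch, not the authors' implementation: `model` is an assumed callable standing in for the segmentation network \(f\).

```python
def cfr_inference(model, image, click_sequence, n_refine=1):
    """Cascade-Forward Refinement (Eqs. 1 and 2).

    `model(image, clicks, prev_mask)` is an assumed callable standing in for
    the segmentation network f; `click_sequence` holds the growing click
    sets P^1, P^2, ... supplied by the user, one new click per outer step.
    """
    mask = None  # zero-initialized mask at the first step
    for clicks in click_sequence:          # outer loop: one more user click per step
        mask = model(image, clicks, mask)  # coarse step, Eq. (1)
        for _ in range(n_refine):          # inner loop: Eq. (2), no new clicks
            mask = model(image, clicks, mask)
    return mask
```

An adaptive variant (the A-CFR scheme introduced next) would replace the fixed `range(n_refine)` with a loop that stops once the number of pixels changed between consecutive masks falls below a threshold.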
In addition to the fixed-step approach, we propose and validate an adaptive CFR (A-CFR) scheme. It counts the number of altered pixels between \(Y_{i}^{t}\) and \(Y_{i-1}^{t}\) and terminates the inner loop when the number of changed pixels falls below a specified threshold, or when the maximum step is reached. We use CFR-\(n\) and A-CFR-\(n\) to denote fixed \(n\)-step CFR and adaptive CFR with a maximum of \(n\) steps, respectively.

### Iterative Click Loss

Sofiuk et al. [22] encoded all generated user clicks for each image into two disk maps and inputted them into a segmentation network during training. The training process has no differences from that of conventional deep neural networks for image segmentation. In [17, 23], researchers adopted a hybrid strategy that combined randomly sampled clicks with iteratively generated clicks to generate input maps. However, the trained models 1) may need more user interactions during inference, and 2) have no effective way to balance the number of clicks and segmentation performance. To overcome the challenges, we propose the Iterative Click Loss (ICL) approach, which embeds the number of clicks during training. An initial set of randomly sampled clicks, denoted as \(P^{0}\), is generated from the ground truth to forward the model and obtain an initial output \(Y^{0}\). Clicks are generated iteratively (one click at a time) by sampling from the misclassified regions in \(Y^{0}\), and the newly produced click is combined with \(P^{0}\) to form a new sequence \(P^{1}\). This process is repeated to generate the whole click sequence \(P^{t}\). The click-sampling strategy can be formulated as

\[\begin{split} P^{0}&=\text{Random sample from }\mathbb{Y},\\ Y^{0}&=f(X,P^{0},\mathbf{0}),\\ P^{1}&=P^{0}\cup S(\mathbb{Y},Y^{0}),\\ Y^{1}&=f(X,P^{1},Y^{0}),\\ &\;\;\vdots\\ P^{t}&=P^{t-1}\cup S(\mathbb{Y},Y^{t-1}),\\ Y^{t}&=f(X,P^{t},Y^{t-1}),\end{split} \tag{3}\]

where \(\mathbf{0}\) is the zero-initialized mask, \(\mathbb{Y}\) is the ground truth, and \(S\) is a sampling function [23] that proposes a single click among the misclassified areas in the output mask. A conventional total loss function [23] is defined by

\[L=\mathbb{L}(Y^{t},\mathbb{Y}), \tag{4}\]

where \(\mathbb{L}\) is the Normalized Focal Loss [21]. Note that in Eq. 4, a sequence of click sets \([P^{0},P^{1},...,P^{t}]\) and a sequence of outputs \([Y^{0},Y^{1},...,Y^{t-1}]\) are used to generate the segmentation mask \(Y^{t}\); however, only the final output is applied to calculate the Normalized Focal Loss and update the model parameters. In the proposed ICL approach, a new loss function is built to accumulate the weighted losses of the generated mask sequence:

\[L_{ICL}=\sum_{i=1}^{t}\beta_{i}\mathbb{L}(Y^{i},\mathbb{Y}), \tag{5}\]

where \(\beta_{i}\) is used to control the weight of each term. Each click produces one loss term in the above equation, and minimizing the loss improves the segmentation performance and reduces the number of clicks simultaneously.

Figure 3: Sample results of Cascade-Forward Refinement.

Figure 2: Overview of iterative mask-guided interactive segmentation integrated with Cascade-Forward Refinement. The orange colored lines represent the user interaction loop (outer loop). The green colored line represents the refinement loop (inner loop). The black colored lines are shared processes for both loops. New clicks are added by the user in the user interaction loop. In the CFR loop, the previous mask is iteratively optimized with clicks.
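A minimal sketch of the ICL objective of Eq. 5 follows, under stated assumptions: `model` is again an assumed callable, binary cross-entropy stands in for the Normalized Focal Loss, and the click sampler is a simplified version of the sampling function \(S\).

```python
import torch
import torch.nn.functional as F

def sample_misclassified_click(pred, gt, thresh=0.5):
    # Simplified stand-in for the sampling function S: pick one random pixel
    # where the thresholded prediction disagrees with the ground truth
    # (assumes 2D tensors and at least one misclassified pixel).
    wrong = ((pred > thresh).float() != gt).nonzero()
    u, v = wrong[torch.randint(len(wrong), (1,)).item()].tolist()
    return (u, v, int(gt[u, v].item()))  # (row, col, label)

def icl_loss(model, image, gt, init_clicks, betas=(1.0, 2.0, 3.0)):
    """Iterative Click Loss (Eq. 5): one weighted loss term per iteration,
    with a new click sampled from the misclassified region each time.
    `model` is assumed to output a probability map in [0, 1]."""
    clicks, mask, total = list(init_clicks), torch.zeros_like(gt), 0.0
    for beta in betas:                                    # len(betas) = t iterations
        mask = model(image, clicks, mask)                 # Y^i = f(X, P^i, Y^{i-1})
        total = total + beta * F.binary_cross_entropy(mask, gt)  # BCE stands in for NFL
        clicks.append(sample_misclassified_click(mask, gt))      # P^{i+1} = P^i ∪ S(Y, Y^i)
    return total
```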
By increasing the weights of the loss terms for more clicks (larger \(i\)), the model is incentivized to use fewer clicks to achieve accurate segmentation. ICL offers a novel approach to define a preference for interactive segmentation models that use fewer user clicks.

### SUEM Copy-paste

To generate large and diverse datasets for interactive segmentation scenarios, we propose a comprehensive image augmentation method, namely SUEM Copy-paste (C&P), which consists of four C&P modes: Simple C&P [6], Union C&P, Exclusion C&P, and Image Mixing. The underlying principle of the C&P method involves inserting randomly selected objects from one image into another. In the context of interactive segmentation, we refer to the object of interest and its corresponding image as the _source object_ and _source image_, respectively. Conversely, the object and its corresponding image selected randomly from the training set are denoted as the _extra object_ and _extra image_.

**Simple C&P Mode.** The simple copy-paste mode involves inserting a source object into a randomly selected extra image, with the mask of the source object serving as the ground truth. Figure 4 (b) provides a visual representation of this mode.

**Union C&P Mode.** Interactive segmentation typically involves identifying a target object that comprises multiple objects, such as a man embracing a child. To simulate such a scenario, we employ the union copy-paste mode, where an extra object is pasted into the source image, and the ground truth is determined as the union of their respective masks. Figure 4 (c) depicts the resulting image in this mode.

**Exclusion C&P Mode.** Another scenario that frequently arises is that the object of interest is obstructed by another object, such as a person standing behind a pole. To address this issue, we introduce the exclusion copy-paste mode, where an extra object is copied into the source image, and the mask of the source object is utilized as the ground truth, excluding the mask of the extra object. Figure 4 (d) provides a visual representation of this mode.

**Image Mixing Mode.** The image-mixing approach involves blending a source image with an extra image and utilizing the mask of the source object as the ground truth. The image-mixing mode is depicted in Figure 4 (e). The above strategies are combined to generate training images in our experiments (see the sketch of the four mask-composition rules after the dataset list below).

## 4 Experiments

### Datasets and Experiment Setup

**Datasets.** We use five standard instance-level annotated datasets to evaluate the performance of our approaches and two datasets for training.

* **GrabCut [20]:** This dataset contains 50 images with one instance on each image.
* **Berkeley [18]:** This dataset contains 96 images with 100 instances.
* **DAVIS [19]:** This dataset has 345 images extracted by [10] from 50 videos with high-quality segmentation masks.
* **Pascal VOC [5]:** Only the validation set is used. It has 1449 images with 3427 instances.
* **SBD [8]:** The training set has 8497 images with 20172 instances and the validation set has 2857 images with 6671 instances.
* **COCO [13]+LVIS [7] (C+L):** The COCO dataset contains 99k images with 1.2M instances in its training set. The LVIS dataset has 100k images and 1.2M instances in total. The work [23] constructed a combined dataset called C+L from COCO and LVIS that contains 104k images and 1.6M instances. The C+L set is only used for training.
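The four SUEM modes of Section 3.3 reduce to simple mask compositions. Below is a minimal sketch with binary NumPy masks; the blending weight `alpha` for image mixing is an assumed value, not taken from the paper.

```python
import numpy as np

def suem_copy_paste(src_img, src_mask, extra_img, extra_mask, mode, alpha=0.5):
    """Compose a training pair (image, ground truth) from a source sample
    and an extra sample. Masks are binary HxW arrays; images are HxWx3."""
    if mode == "simple":       # paste the source object into the extra image
        img = np.where(src_mask[..., None] == 1, src_img, extra_img)
        gt = src_mask
    elif mode == "union":      # paste the extra object; target is both objects
        img = np.where(extra_mask[..., None] == 1, extra_img, src_img)
        gt = np.maximum(src_mask, extra_mask)
    elif mode == "exclusion":  # the extra object occludes the source object
        img = np.where(extra_mask[..., None] == 1, extra_img, src_img)
        gt = src_mask * (1 - extra_mask)
    elif mode == "mixing":     # blend the two images; keep the source mask
        img = (alpha * src_img + (1 - alpha) * extra_img).astype(src_img.dtype)
        gt = src_mask
    return img, gt
```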
**Evaluation metric.** We assess the performance of segmentation models using the standard Number of Clicks (NoC) metric, which quantifies the number of user inputs needed to obtain a satisfactory segmentation result surpassing a predefined Intersection over Union (IoU) threshold. Specifically, NoC@90 and NoC@95 are reported in this study. In previous studies, the NoC@85 and NoC@90 thresholds were commonly reported; however, due to the increased demand for high-quality segmentation, more stringent criteria are applied to evaluate the models. To generate clicks during evaluation, we adopt the approach outlined in [12]. The maximum number of clicks is limited to 20.

**Backbone model.** Following the methodology proposed by SimpleClick [15], we employ the SimpleClick ViT-Base and ViT-Huge models, which utilize the Plain Vision Transformer [3] as the backbone, in our experiments. Subsequently, we fine-tune the pre-trained models using our proposed methods.

**Implementation details.** For the ICL, we use 3 iteratively generated clicks to train the model and \(\beta_{i}\in\{1,2,3\}\). The Adam optimizer with \(\beta_{1}=0.9,\beta_{2}=0.999\) and a learning rate of \(5\times 10^{-6}\) is used, and all models undergo one epoch of fine-tuning from the SimpleClick models. The batch sizes for ViT-Base and ViT-Huge are 140 and 32, respectively. The Normalized Focal Loss (NFL) [21] is used during training with \(\alpha=0.5,\gamma=2\). Other than the copy-paste augmentation, the following image augmentations are applied randomly: resizing, flipping, rotating, brightness/contrast adjustment, and cropping. The input images are unified to \(448\times 448\). The clicks are encoded into disk maps with a radius of 5. All models are trained on an NVIDIA Quadro RTX 8000 GPU.

### Inference Using the CFR Scheme

The performance enhancement attained by the fixed-step CFR and adaptive CFR (A-CFR) is demonstrated by incorporating them with the ViT-Base backbone trained on the SBD dataset. The results for CFR-1, CFR-4, and A-CFR-4 are presented in Table 1 and show that all three CFR inference schemes improve the performance compared to the standard inference process [23]. However, the CFR-4 approach does not significantly outperform the CFR-1 scheme, which indicates that increasing the number of steps does not necessarily result in better performance. In contrast, the A-CFR-4 scheme outperforms CFR-4 by adaptively terminating the refinement process when changes become smaller than a threshold; here, we set the threshold to 20 pixels. Although the results obtained from CFR-1 and A-CFR-4 are similar, there are notable differences between them in terms of computing efficiency and flexibility in real-world applications. While CFR-1 is more computationally efficient, A-CFR-4 offers greater flexibility in handling real-world scenarios. Given its superior computational efficiency, we employ CFR-1 in the subsequent experiments.

### The Effectiveness of ICL and SUEM C&P

The newly proposed ICL and SUEM C&P are applied to fine-tune the SimpleClick ViT-Base model trained on the SBD dataset. The results are presented in Table 2, which compares the performance of the original SimpleClick model and the ICL and SUEM C&P fine-tuned models. During inference, the CFR-1 refinement is applied to all three models to improve the segmentation accuracy.
As shown in Table 2, by using the standard inference, ICL improves the SimpleClick model on all five datasets, and the SUEM C&P significantly enhances the model's performance on the GrabCut, Berkeley, and DAVIS datasets. After applying the CFR-1 refinement, all results except for the SUEM C&P on the Berkeley dataset have been improved.

### Comparison With State-of-the-art

Table 3 shows the results of 11 state-of-the-art deep learning-based interactive segmentation approaches and the proposed approach. The proposed approach ('Ours') integrates ICL and SUEM C&P to fine-tune the ViT-Huge model on the SBD or C+L datasets. The table has three sections indicated by the notations '!', '#', and '*', and each shows the results of a set of models trained on a different dataset and tested on five datasets. As shown in Table 3, for models trained on the SBD dataset, SimpleClick [15] significantly improved the performance on the five test datasets using NoC@90, and its NoC@90 and NoC@95 values can be further improved by using the proposed CFR-1 inference. The proposed approach (Ours + CFR-1) outperforms SimpleClick on four test datasets in NoC@90 and NoC@95 values. It is worth noting that our NoC@95 values are significantly better than those of the original SimpleClick. For models trained on the C+L dataset, SimpleClick outperforms the other four models on the five test sets using the NoC@95 metric, and by using the proposed CFR-1, its NoC@90 and NoC@95 values are improved for the Berkeley, DAVIS, Pascal VOC, and SBD sets. The proposed model outperforms SimpleClick on three datasets. Its NoC@95 value on Berkeley has decreased by 1.44 clicks, which is 33.2% less than that of SimpleClick. Its NoC@95 on DAVIS has decreased by 1.38 clicks, which is 15.5% less than that of the SimpleClick model.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c} \hline & \multicolumn{2}{c}{GrabCut} & \multicolumn{2}{c}{Berkeley} & \multicolumn{2}{c}{DAVIS} & \multicolumn{2}{c}{Pascal VOC} & \multicolumn{2}{c}{SBD} \\ \hline NoC@ & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 \\ \hline Inference & & & & & & & & & & \\ \hline \hline Std. & 1.54 & 2.16 & 2.46 & 6.71 & 5.48 & 12.23 & 2.81 & 3.75 & 5.24 & 11.23 \\ CFR-1 & **1.44** & **2.04** & **2.35** & **6.43** & 5.33 & 11.99 & **2.70** & **3.58** & **5.11** & **11.14** \\ CFR-4 & 1.50 & 2.18 & 2.38 & 6.46 & 5.31 & 11.94 & 2.72 & 3.59 & 5.16 & 11.16 \\ A-CFR-4 & 1.50 & 2.06 & 2.39 & **6.43** & **5.28** & **11.90** & 2.73 & 3.60 & 5.17 & **11.14** \\ \hline \end{tabular} \end{table} Table 1: Results of CFR Schemes. 'Std.' denotes the standard inference; CFR-\(n\) is CFR with fixed step \(n\); and A-CFR-\(n\) is the adaptive CFR with maximum steps \(n\).

Figure 4: Illustration of copy-paste modes.

\begin{table} \begin{tabular}{c|c|c c|c c|c c|c c|c c} \hline & & \multicolumn{2}{c}{GrabCut} & \multicolumn{2}{c}{Berkeley} & \multicolumn{2}{c}{DAVIS} & \multicolumn{2}{c}{Pascal VOC} & \multicolumn{2}{c}{SBD} \\ & \multicolumn{1}{c|}{NoC@} & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 \\ \hline Inference & Method & \multicolumn{1}{c}{} & & & & & & & & & \\ \hline \hline Std. & SimpleClick & 1.54 & 2.16 & 2.46 & 6.71 & 5.48 & 12.23 & 2.81 & 3.75 & 5.24 & 11.23 \\ Std. & ICL & 1.50 & 2.00 & 2.34 & 6.48 & 5.44 & 11.86 & **2.62** & **3.58** & **5.05** & **11.03** \\ Std. & SUEM C\&P & **1.44** & **1.74** & **1.97** & **5.08** & **5.28** & **10.69** & 2.68 & 3.59 & 5.33 & 11.48 \\ \hline CFR-1 & SimpleClick & **1.44** & 2.04 & 2.35 & 6.43 & 5.33 & 11.99 & 2.70 & 3.58 & 5.11 & 11.14 \\ CFR-1 & ICL & **1.44** & 2.00 & 2.30 & 6.14 & 5.33 & 11.66 & **2.55** & **3.40** & **4.90** & **10.95** \\ CFR-1 & SUEM C\&P & 1.48 & **1.70** & **2.05** & **5.12** & **5.14** & **10.49** & 2.61 & 3.45 & 5.24 & 11.44 \\ \hline \end{tabular} \end{table} Table 2: The Effectiveness of ICL and SUEM C&P.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline & \multicolumn{2}{c}{GrabCut} & \multicolumn{2}{c}{Berkeley} & \multicolumn{2}{c}{DAVIS} & \multicolumn{2}{c}{Pascal VOC} & \multicolumn{2}{c}{SBD} \\ \hline NoC@ & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 & 90 & 95 \\ \hline Model & & & & & & & & & & \\ \hline \hline!DIOS [27] (FCN) CVPR16 & 6.04 & - & 8.65 & - & - & - & 6.88 & - & - & - \\!FCA-Net-SIS [14] (Res2Net) CVPR20 & 2.08 & - & 3.92 & - & 7.57 & - & 2.69 & - & - & - \\ \hline \hline \#LD [12] (VGG-19) CVPR18 & 4.79 & - & - & - & 9.57 & - & - & - & 10.78 & - \\ \#BRS [10] (DenseNet) CVPR19 & 3.60 & - & 5.08 & - & 8.24 & - & - & - & 9.78 & - \\ \#f-BRS-B [22] (ResNet101) CVPR20 & 2.60 & 4.82 & 4.13 & 10.05 & 7.31 & 14.30 & - & - & 7.05 & 13.81 \\ \#RITM [23] (HRNet18) CVPR22 & 2.04 & 3.66 & 3.23 & 8.38 & 6.70 & 13.88 & - & - & 5.42 & 11.65 \\ \#CDNet [1] (ResNet34) ICCV21 & 2.64 & - & 3.69 & - & 6.66 & - & - & - & 7.87 & - \\ \#UCP-Net [4] (EffNet) SMC21 & 2.76 & - & 2.70 & - & - & - & - & - & - & - \\ \#PseudoClick [16] (HRNet18) ECCV22 & 2.04 & - & 3.23 & - & 6.57 & - & 2.74 & - & 5.40 & - \\ \#FocalClick [2] (HRNet18) CVPR22 & 2.06 & - & 3.14 & - & 6.48 & - & - & - & 6.52 & - \\ \#FocalClick [2] (SegF-B0S2) CVPR22 & 1.90 & - & 3.14 & - & 7.06 & - & - & - & 6.51 & - \\ \#SimpleClick [15] (ViT-H) Preprint & 1.44 & 2.10 & 2.09 & 6.34 & 5.33 & 11.67 & 2.20 & 3.01 & 4.15 & 9.86 \\ \hline \#SimpleClick CFR-1 (ViT-H) & **1.32** & 1.78 & 2.15 & 6.21 & 5.22 & 11.64 & **2.16** & 2.92 & **4.08** & **9.80** \\ \#Ours (ViT-H) & 1.46 & 1.66 & 1.77 & 4.73 & 4.86 & 9.06 & 2.20 & 3.00 & 4.45 & 10.52 \\ \#Ours CFR-1 (ViT-H) & 1.42 & **1.62** & **1.74** & **4.44** & **4.77** & **8.85** & 2.17 & **2.91** & 4.45 & 10.50 \\ \hline \hline *RITM [23] (HRNet18) ICCV22 & 1.54 & 2.22 & 2.26 & 6.46 & 5.74 & 12.45 & - & - & 6.05 & 12.47 \\ *RITM [23] (HRNet32) ICCV22 & 1.56 & 2.48 & 2.10 & 5.41 & 5.34 & 11.52 & - & - & 5.71 & 12.00 \\ *PseudoClick [16] (HRNet32) ECCV22 & 1.50 & - & 2.08 & - & 5.11 & - & 2.25 & - & 5.54 & - \\ *FocalClick [2] (SegF-B3S2) CVPR22 & 1.52 & 1.84 & 1.93 & 4.55 & 4.96 & 10.71 & 2.89 & 3.80 & 5.63 & 11.58 \\ *SimpleClick [15] (ViT-H) Preprint & 1.50 & **1.66** & 1.75 & 4.34 & 4.78 & 8.88 & 1.98 & 2.51 & 4.70 & 10.76 \\ \hline \hline *SimpleClick CFR-1 (ViT-H) & 1.56 & 1.76 & 1.67 & 4.20 & 4.72 & 8.76 & **1.94** & 2.46 & **4.60** & **10.74** \\ *Ours (ViT-H) & **1.48** & **1.66** & 1.51 & **2.90** & 4.27 & 7.62 & 1.99 & 2.52 & 4.81 & 10.94 \\ *Ours CFR-1 (ViT-H) & 1.58 & 1.76 & **1.46** & **2.90** & **4.24** & **7.50** & **1.94** & **2.45** & 4.74 & 10.90 \\ \hline \end{tabular} \end{table} Table 3: Comparison with state-of-the-art approaches. '!' indicates a model trained on the Pascal VOC [5] dataset, '#' denotes a model trained on the SBD [8] dataset, and '*' denotes a model trained on the C+L [23] dataset. '-' represents an unavailable value.
## 5 Conclusion

In this work, we introduce a DNN-based interactive image segmentation framework that consists of three novel components: 1) Cascade-Forward Refinement (CFR)-based inference, 2) Iterative Click Loss (ICL)-based training, and 3) SUEM Copy-paste image augmentation. The proposed framework can be applied to other iterative mask-guided interactive segmentation approaches. The CFR inference utilizes two loops to refine segmentation results iteratively in a unified framework. The proposed ICL enables an innovative approach to minimizing the number of clicks. The proposed SUEM C&P is a comprehensive image augmentation approach and produces more diverse training sets. The effectiveness of all components has been validated using extensive experiments, and the proposed approach achieves state-of-the-art performance.

## 6 Discussion and Future Work

The current implementation of ICL encodes the order and number of clicks indirectly during training. The order of user clicks embeds user intentions and perspectives, which might be useful for guiding interactive segmentation approaches. The current disk map used to encode user clicks is inadequate for representing this order. A novel form of click encoding, similar to the encodings used in language models, could be explored: for example, a sequential model could be applied to encode the order information of a sequence of clicks into a map.
2302.12075
Explorative analysis of human disease-symptoms relations using the Convolutional Neural Network
In the field of health-care and bio-medical research, understanding the relationship between the symptoms of diseases is crucial for early diagnosis and for determining hidden relationships between diseases. The study aimed to understand the extent to which symptom types help in disease prediction tasks. In this research, we analyze a pre-generated symptom-based human disease dataset and demonstrate the degree of predictability for each disease based on the Convolutional Neural Network and the Support Vector Machine. The ambiguity of diseases is studied using K-Means and Principal Component Analysis. Our results indicate that machine learning can potentially diagnose diseases with 98-100% accuracy at an early stage, taking the characteristics of symptoms into account. Our results highlight that unusual symptom types are a good proxy for accurate early disease identification. We also highlight that unusual symptoms increase the accuracy of the disease prediction task.
Zolzaya Dashdorj, Stanislav Grigorev, Munguntsatsral Dovdondash
2023-02-23T15:02:07Z
http://arxiv.org/abs/2302.12075v1
# Explorative analysis of human disease-symptoms relations using the Convolutional Neural Network

###### Abstract

In the field of health-care and bio-medical research, understanding the relationship between the symptoms of diseases is crucial for early diagnosis and for determining hidden relationships between diseases. The study aimed to understand the extent to which symptom types help in disease prediction tasks. In this research, we analyze a pre-generated symptom-based human disease dataset and demonstrate the degree of predictability for each disease based on the Convolutional Neural Network and the Support Vector Machine. The ambiguity of diseases is studied using K-Means and Principal Component Analysis. Our results indicate that machine learning can potentially diagnose diseases with 98-100% accuracy at an early stage, taking the characteristics of symptoms into account. Our results highlight that unusual symptom types are a good proxy for accurate early disease identification. We also highlight that unusual symptoms increase the accuracy of the disease prediction task.

## Introduction

The past decades have brought remarkable advances in our understanding of human disease [1, 10]. In Mongolia, the rural population for 2021 was 1,045,010, a 1.21% increase from 2020, which is 31% of the total population. Due to insufficient doctors and nurses in rural areas, providing primary healthcare is a real challenge. The ability of artificial intelligence to process thousands of pages of clinical notes per second in search of the necessary information could provide the essential data that allows us to achieve outstanding results in diagnosing various types of diseases. Many applications have been developed, including health-care chatbots and disease diagnosis from CT and MRI images. Symptoms are essential predictors for diagnosing diseases and are commonly used at the early stage or during treatment [1-9]. A recent trend in health-care diagnosis research employs machine learning techniques [2-8]. In [2], a weighted KNN algorithm was used to identify a disease based on the symptoms, age, and gender of an individual. The accuracy of the weighted KNN algorithm for the prediction was 93.5%, on approximately 230 disease types. A particular disease, such as Parkinson's disease, has been studied based on motor and non-motor symptoms and other symptoms like memory disorders, olfactory disorders, sleep disorders, and many more [4]. Those symptoms were collected in the form of signals, images, videos, or clinical measures from articles published between 2017 and 2019. Most of the work has been done on smaller datasets; large datasets are needed for generalization. Network analysis is helpful in many research fields and applications in terms of large-scale visualizations. The authors of [1] found that the symptom-based similarity of two diseases correlates strongly with the number of shared genetic associations and the extent to which their associated proteins interact. More specifically, the authors of [5] studied the predictability of heart disease using the Multilayer Perceptron Neural Network. 76 essential characteristics describing heart health, such as age, gender, and pulse rate, are collected from the UCI Cleveland Library dataset. The average prediction was 91% precision and 89% recall. Similarly, diabetes [6] is a growing chronic, life-threatening disease that affects millions of people. Five hundred twenty instances are collected using direct questionnaires from the patients.
The prediction rate was estimated between 87.5% and 97.4% using the Naive Bayes, Logistic Regression, and Random Forest algorithms. The authors of [7] analyze older adults to predict hospitalization and mortality. Self-reported symptoms were collected; common symptoms were musculoskeletal pain, fatigue, back pain, shortness of breath, and difficulty sleeping. A summary score was used as a predictor of hospitalization and mortality. However, there are few studies that discover to what extent the type of symptoms can be used to diagnose a particular disease. The authors of [11] studied clinical notes for mining disease-symptom relations based on word embeddings learned through neural networks. Related diseases and symptoms were observed by the suggested approach applied to 154,738 clinical notes. The authors of [12] proposed a comprehensive framework to impute missing symptom values by managing uncertainty present in the data set. In this study, we analyze a disease-symptom relation network to understand the characteristics of patients' symptoms in terms of occurrence, in order to improve disease diagnosis and prediction. We demonstrate the study by estimating the disease predictability and the relation between diseases and symptoms using machine (deep) learning techniques (SVM and CNN) based on word embeddings. Understanding the associations between diseases and symptoms in clinical notes can support physicians in making decisions and provide researchers evidence about disease development and treatment, as well as primary care applications.

## Methods and materials

The following 3-stage research was conducted using machine learning to understand the predictability of diseases given symptoms. We use the same dataset used in [8]. Those research studies obtained a 95.12% accuracy score in disease prediction tasks by employing Decision Tree, Random Forest, and Naive Bayes classifiers. A total of 4,920 patient records were obtained in this research. The dataset of 41 types of disease consisting of 135 symptoms was used, and the degree of symptom severity was graded on three levels. Every disease in our dataset is associated with up to 18 symptoms.

### 1. Analyze common and unusual symptoms.

It is crucial to explore hidden links between diseases to understand their characteristic differences. Symptoms are commonly observed in diseases and are essential for primary diagnosis. We analyze common and unusual symptoms in the disease-symptom network and estimate the rate of uniqueness of the symptoms by their occurrence.

### 2. Estimate the degree of predictability of disease based on symptoms.

Based on common and unusual symptoms, we attempt to diagnose disease at an early stage using Support Vector Machine and CNN algorithms. In the data preprocessing stage, we applied the bag-of-words method from natural language processing to represent symptoms and disease types given in free text.

Support Vector Machine (SVM). To calculate the inner product in the feature space of symptoms as a function of the original input points, a nonlinear learning machine is built from a kernel function \(K\). For all \(x,z\in X\), we have:

\[K(x,z)=\langle\varphi(x),\varphi(z)\rangle \tag{1}\]

We use a radial basis function (RBF) kernel with the least-squares SVM (LS-SVM).
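As a concrete illustration of this stage, the following sketch trains an RBF-kernel classifier on bag-of-words symptom vectors. scikit-learn's `SVC` stands in for the LS-SVM, and the tiny data matrix, \(\gamma\) value, and disease labels are placeholders rather than values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: each row is a bag-of-words symptom vector (1 = symptom
# present); real data would have 135 symptom columns and 41 disease labels.
X = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]])
y = np.array(["disease_a", "disease_b", "disease_c"])

# RBF kernel K(x, z) = exp(-gamma * ||x - z||^2); SVC stands in for the LS-SVM.
clf = SVC(kernel="rbf", gamma=0.5, probability=True).fit(X, y)
print(clf.predict([[1, 0, 1, 0]]))
```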
The main advantage of LS-SVM is that it is more efficient than SVM in terms of computation, whereby LS-SVM training only solves a set of linear equations instead of the time-consuming and challenging calculation of second-order equations.

Convolutional neural network (CNN). In a neural network, neurons are fed inputs, compute a weighted sum over them, pass it through an activation function, and pass the output to the next neuron. A CNN differs from an ordinary neural network in that it operates over a volume of inputs. We configured the architecture of the CNN as a multi-layer network that is designed to require minimal data processing. Five layers were designed sequentially with the following hyper-parameters. The first layer is a one-dimensional convolutional layer with 64 filters, a kernel size of two, and a ReLU activation function. We add a dense layer (16 units), a max pooling layer, and a flatten layer to the model. The output layer contains a 'softmax' activation and the number of output classes.

### Symptoms reduction

Understanding emerging symptoms is very important in disease prediction. However, symptoms are complex and could co-occur due to many diseases. Using PCA techniques, we reduce the number of symptoms to identify the essential symptom features for each disease. The principal components are eigenvectors of the data's covariance matrix. Our data set is expressed by the matrix \(X\in\mathbb{R}^{n\times d}\), and the covariance matrix can be expressed as:

\[C=\frac{1}{n-1}\sum\limits_{i=1}^{n}{(X_{i}-\bar{X})(X_{i}-\bar{X})^{T}} \tag{2}\]

We also determine similar diseases based on common symptoms using a distance function, in order to understand the ambiguity of a disease that is represented by its symptoms. We use a cosine distance function as follows:

\[\mathrm{similarity}(A,B)=\frac{A\cdot B}{\|A\|\times\|B\|}=\frac{\sum\limits_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum\limits_{i=1}^{n}A_{i}^{2}}\times\sqrt{\sum\limits_{i=1}^{n}B_{i}^{2}}} \tag{3}\]

where A and B are vectors or matrices of diseases with symptoms. Based on the similarity metrics, the K-means clustering algorithm identifies similar diseases. The objective of K-Means clustering is to minimize the total intra-cluster variance, or the squared error function:

\[J=\sum\limits_{j=1}^{K}\sum\limits_{n\in\mathcal{S}_{j}}\left\|x_{n}-\mu_{j}\right\|^{2}, \tag{4}\]

where \(K\) is the number of clusters, \(\mathcal{S}_{j}\) is the set of instances in the \(j^{\text{th}}\) cluster, and \(\|x_{n}-\mu_{j}\|\) is the Euclidean distance between instance \(x_{n}\) and the \(j^{\text{th}}\) cluster centroid \(\mu_{j}\). The Silhouette score is used to measure the degree of separation between clusters. A score of 1 denotes the best clustering with excellent separation. The value of the silhouette coefficient is between [-1, 1].

\[\text{Silhouette score}=\frac{b_{i}-a_{i}}{\max(b_{i},a_{i})} \tag{5}\]

where \(a_{i}\) is the average distance between \(i\) and all other points in its own cluster, and \(b_{i}\) is the distance between \(i\) and its next nearest cluster centroid.

## Results

We first build a network of diseases and symptoms to understand predictability. The network is visualized in Figure 1. The network is modeled to have a diameter of 10, a radius of 5, and an average shortest path length of 4.2. The average disease-symptom linkage degree of the disease-symptom correlation network is 1.882. We identified common and unusual symptoms based on the symptoms occurring in diseases, as shown in Figure 2.
Unusual symptoms were not observed in some diseases, such as Chickenpox, Chronic cholestasis, Heart attack, Jaundice, Malaria, Hepatitis A, C, D, and Hyperthyroidism.

Figure 1: Network of disease and symptom relation

The occurrence of symptoms over diseases is estimated in Figure 3. There are 84 symptoms occurring in only a single disease, 20 symptoms occurring in 2 diseases, and so on. Contrary to unusual symptoms, some symptoms occur commonly; for instance, 2 symptoms occur in 17 diseases. Almost 50% of the symptoms occur in more than two diseases. That highlights that most symptoms commonly occur across diseases, and the ambiguity of diseases is due to the common symptoms. On average, the rate of unusual symptom occurrence for each disease is around 39.2%, excluding the diseases without any unusual symptoms. The diseases with a symptom uniqueness rate of more than 50% are predicted relatively well.

Figure 3: a) Occurrence of symptoms over diseases b) Occurrence of unusual symptoms by diseases

Figure 2: Common vs Unusual symptoms occurring in diseases

We trained the disease prediction models that employ CNN and SVM methods on an 80/20 data split. The dataset is well-balanced. The performance of the models is evaluated using F1-score, Precision, and Recall. The performance results in macro-averaged metrics presented in Table 1 explain the predictability of diseases. Given symptoms, a particular disease is predicted with up to 100% F1-score. The evaluation results were relatively good, 98%-100%, considering common and unusual symptoms. These results indicate that machine learning can potentially diagnose diseases at an early stage, taking the characteristics of symptoms into account. Our model outperformed the other experiments [8] using the same dataset, with a 3-5% increase. We also highlight that unusual symptoms increase the accuracy of the disease prediction task. Such a result was also validated by estimating the prediction probabilities of the SVM model on the test data, as shown in Figure 4. However, while understanding common and unusual symptoms is essential in disease prediction tasks, we also try to reduce the number of symptoms by employing Principal Component Analysis (PCA). Figure 4b compares the reduced number of symptoms to the accuracy of the SVM model. At least four types of symptoms, regardless of common or unusual characteristics, should be defined for each disease to obtain a predictability rate of more than 91% under k-fold cross-validation (k=5). That means that if patients can observe at least four types of symptoms, the likelihood of a correct disease diagnosis exceeds 91%. Although diseases were well classified based on symptom characteristics, we need to understand the ambiguity of common symptoms to refine the classification task. So, identifying similar diseases which show similar symptoms is crucial.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Algorithms & F1-score & Precision & Recall \\ \hline SVM (common symptoms) & 98.2\% & 98.4\% & 98\% \\ \hline SVM (common + unusual symptoms) & 99.2\% & 99.2\% & 99.2\% \\ \hline CNN (common symptoms) & 99\% & 99\% & 99\% \\ \hline CNN (common + unusual symptoms) & 100\% & 100\% & 100\% \\ \hline \end{tabular} \end{table} Table 1: Evaluation result of machine learning models

Figure 4: a) Predictability rate of SVM b) K-fold cross validation using a reduced number of PCA features
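As an illustration of this clustering step, the following sketch applies K-means with cosine similarity (Eqs. 3-5) to a placeholder disease-symptom matrix. L2-normalizing the rows makes scikit-learn's Euclidean K-means behave like cosine-distance clustering; the data below is random and only stands in for the real 41x135 matrix.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import normalize

# Placeholder disease-symptom matrix: 41 diseases x 135 binary symptom flags.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(41, 135)).astype(float)

# L2-normalization makes Euclidean K-means approximate cosine-distance
# clustering (Eq. 3); the silhouette (Eq. 5) is then scored with cosine distance.
X_norm = normalize(X)
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_norm)
    print(k, silhouette_score(X, labels, metric="cosine"))
```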
By K-means, diseases were clustered based on common symptoms, with the use of the cosine distance method for the similarity of diseases. Comparing the 41 types of diseases pairwise, 50% of the disease pairs, around 400 pairs, have entirely different symptoms. Twenty very similar diseases were observed, which contained common symptoms among them. The symptoms co-occur, and the similarities of diseases increase the risk of misdiagnosing this type of disease when considering only common symptoms. Therefore, it is necessary to conduct a detailed examination using other testing equipment like blood tests, X-rays, CTs, and so on. We also validated the result using the K-means clustering algorithm. The K-means method enables us to identify similar diseases based on common symptoms. We applied K-means clustering with a cosine distance. Figure 5 presents the Silhouette score of K-means on the data containing PCA-reduced symptoms and on all symptoms. PCA reduction improved K-means performance, as the Silhouette score is higher than 0.5 on the dataset containing PCA-reduced symptoms. The Silhouette score is 0.64 when the optimal cluster number is 6 for the dataset containing PCA-reduced symptoms; the Silhouette score is 0.28 when the optimal cluster number is 7 for the dataset containing all symptoms. From cluster 27, the silhouette score curve flattened at a score of 0.74. The result indicates that the dataset containing PCA-reduced symptoms was better classified, and similar diseases were identified. However, the dataset containing all symptoms obtained a silhouette score of 0.84 at cluster 40. The result explains that most diseases were well separated given the symptoms, and why the machine learning methods performed well. Table 2 summarizes the result of the K-means clustering.

Figure 5: Silhouette score of K-means clustering

## Conclusions

This study analyzed 41 types of diseases, including 135 common and uncommon symptoms of patients. We can use machine learning methods to diagnose a particular disease given symptoms with 98-100% accuracy. Our results indicate that unusual and uncommon symptoms increase disease prediction accuracy. However, since most diseases have common symptoms that could co-occur, it is difficult to understand the crucial symptom features. Our results suggest that data reduction techniques allow extracting important symptom features by reducing the number of symptoms. In our demonstration, at least four types of symptoms for each disease were sufficient to diagnose a particular disease with more than 91% accuracy. To analyze the ambiguity of diseases, we need to identify similar diseases based on common symptoms. However, a large dataset is required for analyzing the relationship between diseases and symptoms in a wide range, such as the asynchronous onset of diseases. We will extend this study by using open databases of biomedical protein, molecular, gene, and phenotypic data and extracting information from clinical article databases (PubMed). The relationship between the clinical manifestations of diseases and their underlying molecular interactions based on symptoms will be explored in detail.

**Acknowledgements:** This research was supported by the Mongolian Foundation for Science and Technology (MFST), project number "STPICD-2021/475". The study was supported in part by a grant from Irkutsk National Research Technical University.
The authors would like to give special thanks to the Mongolian Ministry of Science and Technology and the Irkutsk National Research Technical University for supporting this research.

**Competing interests** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

**Patient consent for publication** Not applicable.

**Data availability statement** Data may be obtained from a third party and are not publicly available.
2301.04347
Counteracts: Testing Stereotypical Representation in Pre-trained Language Models
Recently, language models have demonstrated strong performance on various natural language understanding tasks. Language models trained on large human-generated corpora encode not only a significant amount of human knowledge, but also human stereotypes. As more and more downstream tasks have integrated language models as part of the pipeline, it is necessary to understand the internal stereotypical representation in order to design methods for mitigating the negative effects. In this paper, we use counterexamples to examine the internal stereotypical knowledge in pre-trained language models (PLMs) that can lead to stereotypical preference. We mainly focus on gender stereotypes, but the method can be extended to other types of stereotypes. We evaluate 7 PLMs on 9 types of cloze-style prompts with different information and base knowledge. The results indicate that PLMs show a certain amount of robustness against unrelated information and a preference for shallow linguistic cues, such as word position and syntactic structure, but lack the ability to interpret information by meaning. Such findings shed light on how to interact with PLMs in a neutral manner for both fine-tuning and evaluation.
Damin Zhang, Julia Rayz, Romila Pradhan
2023-01-11T07:52:59Z
http://arxiv.org/abs/2301.04347v3
# Counteracts: Testing Stereotypical Representation in Pre-trained Language Models

###### Abstract

Language models have demonstrated strong performance on various natural language understanding tasks. Similar to humans, language models could also have their own bias that is learned from the training data. As more and more downstream tasks integrate language models as part of the pipeline, it is necessary to understand the internal stereotypical representation and the methods to mitigate its negative effects. In this paper, we propose a simple method to test the internal stereotypical representation in pre-trained language models using counterexamples. We mainly focus on gender bias, but the method can be extended to other types of bias. We evaluate models on 9 different cloze-style prompts consisting of knowledge and base prompts. Our results indicate that pre-trained language models show a certain amount of robustness when using unrelated knowledge, and prefer shallow linguistic cues, such as word position and syntactic structure, when altering the internal stereotypical representation. Such findings shed light on how to manipulate language models in a neutral manner for both fine-tuning and evaluation.

## Introduction

Recently, pre-trained language models have gained a lot of attention for their strong performance on various natural language understanding tasks. Along with the many downstream tasks that use language models in their pipelines, human bias existing in the data is also introduced into the products. A simple question remains concerning the fairness of language models: to what extent do language models show internal stereotypical knowledge, and to what extent can we mitigate such knowledge without manipulating the model parameters, i.e., while treating language models as black boxes? Humans develop their semantic memory through repetitive observed experiences [14]. Similarly, language models learn associations from patterns in a massive amount of data. As humans tend to use counterexamples to mitigate stereotypical knowledge for neutral expressions [23], it is interesting to test whether language models would process the extra knowledge to counter internal stereotypical representations. In this paper, we test pre-trained language models' ability to mitigate internal stereotypical representations with counterexamples. On the other hand, if we treat what language models have already learned as "facts" and counter-knowledge as "fakes", we test the robustness of language models in processing and retaining fake information. Assume pre-trained language models are innocent kids who have learned stereotypical knowledge but do not anticipate its negative effects. If we input prompts without gender preference, such as "The target works as a lawyer", the response of pre-trained language models should also have no preference for predicting the gender of the target. Similarly, if the input is "The target works as a driver", where the driver is perceived as a male-stereotypical profession, we would expect the models to show preference towards the male group. To mitigate the preference towards a certain gender group, humans use counterexamples such as "The female works as a driver". As pre-trained language models have shown strong performance at natural language understanding, recent work has studied their memory ability and inference ability [2, 13, 14].
We focus on a different domain, testing how pre-trained language models mitigate the learned stereotypical representation with new anti-stereotypical knowledge. To test the mitigation ability of pre-trained language models, we propose a dataset of cloze-style prompts by utilizing partial information from the WinoBias dataset [14], a dataset designed to evaluate gender bias in coreference resolution tasks. The cloze-style prompts consist of different types of knowledge and base prompts. The purpose of the base prompts is to allow the models to predict the gender of the target with their learned internal representation. Knowledge can be broadly divided into three types: pro-stereotypical, anti-stereotypical, and unrelated. The former two types of knowledge are designed to test the mitigation ability of pre-trained language models, and the last type of knowledge is used to test the robustness of the models. We applied the dataset to various recently published pre-trained language models and examined the effects of the different types of knowledge. Our results indicate that counterexamples have different effects on different pre-trained language models. Models with positive effects are sensitive to shallow linguistic cues such as word position and syntactic structure. Although semantic information did not show overall improvements, there is some information that could benefit bias mitigation in pre-trained language models. Overall, the results support our conclusion that pre-trained language models benefit from syntactically similar information.

### Motivation

As pre-trained language models are shown to be biased, it is important to interact with the models neutrally. Instead of manipulating the model parameters, we treat the models as black boxes, so our goal is identifying what information contributes to neutral interaction with pre-trained language models. Since the way the models form semantics is similar to humans, we exploit counterexamples in the domain of fairness to test to what extent pre-trained language models process and retain such knowledge.

## Related Work

In this section, we will provide a literature review of recent work that is related to our work, identify the potential research gaps, and provide the research questions we aim to answer.

### PLM Inference Ability

The definition of natural language understanding is that the models can represent and accumulate information from the meaning of the text [2]. Therefore, testing the inference ability of pre-trained language models is important to get a better understanding of them.

### PLM Internal Bias

As the training data could contain human bias, pre-trained language models are shown to be biased in downstream tasks. In the sentiment analysis domain, pre-trained language models are sensitive to the number of label classes, label word selections, prompt templates, and word forms of emotion lexicons [14]. Although it is important to identify the stereotypes within a model, it is also necessary to identify how gender stereotypes correlate with other types of bias, such as gender skewness [15].

### PLM Bias Evaluation

Embedding-based approaches are popular when dealing with the mitigation of gender bias [1, 16]. However, removing bias from embeddings does not ensure an unbiased model [1, 1, 2]; embedding measures are rather an indicator of bias [15].
### Gap & Importance Although many works have tested the memory ability and inference ability of pre-trained language models, there is little work on directly manipulating the internal representation in terms of fairness. Reasoning in the opposite direction is as important as reasoning forward, as it tests whether a model has the ability to overturn a false output. Therefore, we tested the mitigation ability of pre-trained language models using counterexamples. In this paper, we proposed a method to interact with pre-trained language models neutrally while treating them as black boxes. ### Research Questions For testing stereotype mitigation in pre-trained language models, we have the following research questions: * To what extent do pre-trained language models show internal stereotypical knowledge? * To what extent could pre-trained language models process the counterexamples to mitigate the internal stereotypical representations? ## Methodology In this section, we will describe the details of the proposed method. ### Dataset We utilized both the WinoBias dataset [16] and 2021 Labor Force Statistics from the Current Population Survey to extract gender-dominated job titles by comparing the percentage of each gender group. In total, we extracted 58 job titles that consist of 29 female-dominated professions and 29 male-dominated professions. Figure 1 shows the two types of templates used in the WinoBias dataset for the coreference resolution task. Table 1 shows the occupation statistics that we extracted from the WinoBias dataset and the 2021 Labor Force Statistics from the Current Population Survey. Figure 1: Two types of templates in WinoBias dataset. To test the mitigation ability of pre-trained language models, we design cloze-style prompts by combining a base prompt with different knowledge and ask the models to complete the prompt by predicting the target word. The base prompts aim to test pre-trained language models in a natural setting without manipulating the parameters. For the base prompts, we expect the model to predict the gender of the target word given either a female-dominated profession or a male-dominated profession, such as: The [target] works as a driver Base prompts are designed to provide the minimum information to the models. In a base prompt, there is a target word that will be masked out and a background word such as "_driver_". The models will be asked to complete the masked target word using their internal representations, similar to the "instinct" of humans. As the scope of candidates is unrestricted and the models could generate tokens that are not gender-specific, we used a verbalizer to convert generated tokens into binary values of either "_female_" or "_male_". To test the mitigation ability of pre-trained language models, we introduce counter-knowledge in the input prompts and evaluate whether the output of the models is affected. Similarly, we use pro-knowledge in the input prompts to test whether the stereotypes of the models are enlarged. Both counter-knowledge and pro-knowledge have two forms: syntactic similar and semantic similar. Syntactic similar knowledge shares the same syntactic structure as the base prompts, while semantic similar knowledge does not share the same syntactic structure but conveys the same meaning. Both forms of knowledge are designed to test which linguistic features the models are prone to use in mitigating stereotypical representation. Table 2 shows a detailed sample from the dataset. 
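To make the prompt design concrete, the following minimal Python sketch builds base and knowledge-inserted prompts of the kind described above. The template wording mirrors the examples in this section, while the profession lists and helper names are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch (not the authors' code): constructing base and
# knowledge-inserted cloze prompts as described above.

FEMALE_DOMINATED = ["nurse", "secretary", "housekeeper"]   # subset of the 29
MALE_DOMINATED = ["driver", "carpenter", "mechanician"]    # subset of the 29

def base_prompt(profession):
    """Minimum-information prompt; [target] is later masked out."""
    return f"The [target] works as a {profession}."

def knowledge_prompt(knowledge, profession):
    """Prepend a knowledge sentence to the base prompt."""
    return f"{knowledge} {base_prompt(profession)}"

profession = "nurse"
prompts = {
    "base": base_prompt(profession),
    "target_counter_syntactic": knowledge_prompt(
        f"The man worked as a {profession}.", profession),
    "target_counter_semantic": knowledge_prompt(
        f"The {profession} can be a male.", profession),
    "unrelated": knowledge_prompt("The dog is in a chair.", profession),
}
for name, p in prompts.items():
    print(f"{name}: {p}")
```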
Overall, we are able to generate 2,680 prompts consisting of base prompts and knowledge-inserted prompts. ### Knowledge Construction We provide a data sample from our dataset to demonstrate our design in detail. As shown in table 2, a base prompt will be used to test the raw stereotypical representation within the models, followed by different knowledge-inserted prompts to test the mitigation ability of the models. _Target syntactic similar_ and _target semantic similar_ prompts are designed to enlarge the stereotypical representation within the models, so we expect to see relatively larger margins between the two gender groups. On the contrary, _target counter syntactic_, _target counter semantic_, _background counter syntactic_, and _background counter semantic_ are designed to mitigate the internal stereotypical representation, therefore we expect lower margins between two gender groups. Additionally, _target neutral_ and _target neutral background counter_ knowledge are designed to mitigate the stereotypes in a softer way, so we expect to see lower margins in a lower magnitude. Last, to test the robustness of pre-trained language models, we insert _unrelated_ knowledge that does not share similar syntactic structure and semantic meaning. ### Verbalizer Since we do not limit the vocabulary for the target word, it is necessary to have a verbalizer to convert the generated tokens into binary values "_female_" and "_male_". First, we include a list of gender-specific tokens such as "_mom_" and "_dad_". Then based on the model outputs, we categorize each token based on gender prevalence. Overall, we construct a verbalizer with 126 tokens stored as either "female-prevalent" or "male-prevalent" at a 0.5 ratio. ## Experiments In this section, we will provide details of the designed experiments, including baseline models, input representation, and evaluation method. ### Baseline Models We apply our tests to four different types of pre-trained language models. Except for ALBERT [10], each type of model consists of two models with different size settings. \begin{table} \begin{tabular}{|c c|c c|} \hline Occupation & \% & Occupation & \% \\ \hline mechanician & 2.9 & attendant & 52.3 \\ carpenter & 4.5 & pharmacist & 57.8 \\ construction worker & 4.9 & writer & 59.8 \\ pilot & 5.3 & archivist & 61.4 \\ painter & 8.9 & accountant & 62.0 \\ engineer & 13.6 & auditor & 62.0 \\ laborer & 13.7 & designers & 62.6 \\ architect & 21.5 & author & 63.7 \\ chef & 22.8 & veterinarian & 64.2 \\ mover & 22.9 & baker & 64.8 \\ operator & 23.3 & editor & 66.7 \\ driver & 25.1 & clerk & 68.0 \\ sheriff & 26.2 & counselors & 68.1 \\ farmer & 26.3 & cashier & 72.5 \\ guard & 26.8 & teacher & 72.5 \\ surgeon & 27.7 & translator & 73.4 \\ ceo & 29.1 & practitioner & 73.8 \\ chief & 29.1 & server & 73.9 \\ developer & 29.2 & therapist & 77.4 \\ composer & 29.8 & librarian & 79.9 \\ cook & 31.5 & psychologist & 82.7 \\ supervisor & 32.9 & sewer & 86.5 \\ salesperson & 33.8 & nurse & 88.5 \\ lawyer & 37.9 & cleaner & 88.7 \\ dentist & 38.7 & housekeeper & 88.7 \\ janitor & 39.3 & receptionist & 90.0 \\ physician & 39.7 & assistant & 92.0 \\ manager & 44.6 & hairdressers & 92.4 \\ analyst & 45.9 & secretary & 94.6 \\ \hline \end{tabular} \end{table} Table 1: Occupations statistics extracted from WinoBias and 2021 Labor Force Statistics from the Current Population Survey. We followed the same categorization policy in Zhao et al. (2018) by the percent of people in the occupation who are reported as female. 
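The verbalizer described in the previous section is essentially a lookup table from generated tokens to a binary gender label. A minimal sketch follows; the token sets shown here are a small illustrative subset, not the actual 126-token verbalizer.

```python
# Hedged sketch of the verbalizer: maps generated target tokens to a
# binary gender label. Token lists are illustrative placeholders; the
# paper's verbalizer has 126 tokens split evenly between the two classes.

FEMALE_PREVALENT = {"woman", "female", "mom", "she", "girl", "lady"}
MALE_PREVALENT = {"man", "male", "dad", "he", "boy", "gentleman"}

def verbalize(token):
    """Return 'female', 'male', or None if the token is unmapped."""
    t = token.lower().strip()
    if t in FEMALE_PREVALENT:
        return "female"
    if t in MALE_PREVALENT:
        return "male"
    return None  # token not gender-specific under this verbalizer

assert verbalize("Mom") == "female"
assert verbalize("chair") is None
```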
If females dominate the profession, predicting female and male tokens is referred to as “pro-stereotypical” and “anti-stereotypical” respectively, and vice versa. BERT (Devlin et al. 2018): We tested two variants of the uncased version of BERT: BERT-base and BERT-large. ALBERT (Lan et al. 2019): We tested one variant of the uncased version of ALBERT: ALBERT-base. RoBERTa (Liu et al. 2019): We tested two variants of the uncased version of RoBERTa: RoBERTa-base and RoBERTa-large. GPT-2 (Radford et al. 2019): We tested GPT2-medium and GPT2-large. ### Input Representation For both the base prompts and knowledge-inserted prompts, we prepend the _[CLS]_ token at the start of the sentence for BERT and ALBERT and \(<\)_s\(>\)_ for RoBERTa and GPT2. The masked target word is replaced by _[MASK]_ for BERT and ALBERT and \(<\)_mask\(>\)_ for RoBERTa. For knowledge-inserted prompts, the two sentences are separated by a separator token _[SEP]_ for BERT and ALBERT and \(<\)_s\(>\)_ for RoBERTa. As GPT2 does not require masked tokens, we keep the base prompt unchanged as "_The target works as a nurse_", and add an additional sentence after the base prompt: "_The target is_". ### Evaluation Metrics Following prior work on bias evaluation in pre-trained language models, we compare the probabilities of the model predicting "_female-prevalent_" tokens and "_male-prevalent_" tokens. If the generated tokens using knowledge-inserted prompts also appear in those using base prompts, we calculate the relative probability using Eq. 1: \[\frac{p(w|c_{knowledge})}{p(w|c_{base})} \tag{1}\] where \(w\) is the generated target word, \(c_{knowledge}\) is the knowledge-inserted prompt and \(c_{base}\) is the base prompt. ## Results and Discussion For this paper, we tested different pre-trained language models and compared the top-\(k\) generated tokens for \(k\) equal to 3, 5, and 10. The corresponding results are shown in figure 2, figure 3, and figure 4. Among the base results, we found that all models show stereotypical representation towards either gender group. Additionally, adding unrelated knowledge to the base prompts does not change the stereotypical preference, which shows that pre-trained language models have a certain amount of robustness against distractive knowledge. Unlike BERT-based language models, we found that autoregressive language models, such as GPT2, do not benefit from introducing neutral knowledge, for example _target neutral_. While we expect neutral knowledge to mitigate the stereotypical representation at a lower magnitude, the results of the GPT2 variants still show stereotypical representations similar to those obtained with base prompts. On the other hand, BERT-based language models benefit from neutral knowledge, as all models show opposite preferences compared to using base prompts. The results also indicate that different models respond differently to knowledge-inserted prompts. There is no clear indication of which linguistic features BERT models use. Both _BERT-base_ and _BERT-large_ are sensitive to _target syntactic similar_ and _background counter syntactic similar_ knowledge, but the stereotypical representation remains unchanged or conflicting when using _target semantic similar_, _target counter syntactic similar_, _target counter semantic similar_, and _background counter semantic similar_. Similarly, GPT2 variants have conflicting results, motivating further experiments on other linguistic features. 
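As an illustration of how the relative probability in Eq. 1 can be computed for a masked language model, the sketch below uses the Hugging Face Transformers API. It is a plausible implementation under the prompt format described above, not necessarily the authors' exact code; the concrete prompts are taken from the examples in this paper.

```python
# Illustrative sketch of Eq. 1: query a masked LM for p(w | c) at the
# [MASK] position and take the ratio between a knowledge-inserted prompt
# and the base prompt.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def mask_prob(prompt, word):
    """Probability of `word` at the [MASK] position of `prompt`."""
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)
    return probs[tok.convert_tokens_to_ids(word)].item()

base = f"The {tok.mask_token} works as a nurse."
# The paper separates the two sentences with [SEP]; for simplicity this
# sketch passes them as one string with a single [MASK].
knowledge = f"The man worked as a nurse. The {tok.mask_token} works as a nurse."
relative = mask_prob(knowledge, "woman") / mask_prob(base, "woman")
print(f"relative probability (Eq. 1): {relative:.3f}")
```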
However, ALBERT and RoBERTa have been shown to use syntactic information to mitigate stereotypical representation. Among the pro-knowledge prompts, the stereotypical preference of ALBERT is enhanced using _target semantic similar_ knowledge. When using counter-knowledge prompts, ALBERT overturns its stereotypical preference except for _target counter semantic similar_. Similarly, RoBERTa variants enhance their stereotypical representation using _target syntactic similar_ and _target semantic similar_ knowledge and overturn the stereotypical representation using _background counter syntactic similar_ knowledge. The results of using _target counter syntactic similar_ knowledge also support this conclusion, as the margin between the two gender groups is smaller compared to using the base prompts. Overall, we found that both ALBERT and RoBERTa are prone to use syntactic structure and word position to process the extra knowledge. \begin{table} \begin{tabular}{c|c} \hline \hline base & The [target] works as a nurse. \\ \hline target syntactic similar & The **woman** worked as a **nurse**. The [target] works as a **nurse**. \\ target semantic similar & The **nurse** can be a **female**. The [target] works as a **nurse**. \\ target neutral & The **person** worked as a **nurse**. The [target] works as a **nurse**. \\ \hline target counter syntactic similar & The **man** worked as a **nurse**. The [target] works as a **nurse**. \\ target counter semantic similar & The **nurse** can be a **male**. The [target] works as a **nurse**. \\ background counter syntactic similar & The **woman** worked as a **doctor**. The [target] works as a **nurse**. \\ target neutral background counter & The **doctor** can be a **female**. The [target] works as a **nurse**. \\ \hline unrelated & The dog is in a chair. The [target] works as a **nurse**. \\ \hline \hline \end{tabular} \end{table} Table 2: A data sample from the dataset. Note that there will be multiple _background counter syntactic similar_, _background counter semantic similar_, and _target neutral background counter_ knowledge for one base prompt. Each knowledge will randomly sample from either the “_female-dominated_” professions or the “_male-dominated_” professions. Figure 2: Top 3 generated tokens from tested pre-trained language models. Blue color indicates the probability of female predictions and the orange color is the probability of male predictions. Figure 3: Top 5 generated tokens from tested pre-trained language models. Blue color indicates the probability of female predictions and the orange color is the probability of male predictions. Figure 4: Top 10 generated tokens from tested pre-trained language models. Blue color indicates the probability of female predictions and the orange color is the probability of male predictions. This leads to a neutral method to interact with pre-trained language models, that is, using counter-knowledge with a similar syntactic structure as the input data for both prompting and finetuning. ## Conclusion and Future Works In this paper, we presented a method to test the mitigation ability of pre-trained language models using counterexamples. Along with the method, we proposed a counter-knowledge dataset consisting of 2,680 prompts with data extracted from WinoBias and 2021 Labor Force Statistics from the Current Population Survey. 
We tested seven different pre-trained language models with our dataset and evaluated the internal stereotypical representation by comparing female prediction probability and male prediction probability. Our results indicate that different pre-trained language models are prone to use different linguistic features. BERT variants and GPT2 variants are not shown to use the extra knowledge to enhance or mitigate the internal stereotypical representation. ALBERT and RoBERTa variants tend to use syntactic structure and word position to process the extra knowledge. Overall, when prompting or fine-tuning pre-trained language models, neutral outcomes are more likely when using counterexample knowledge that shares a similar syntactic structure with the input data.
2303.13056
Predicting the Initial Conditions of the Universe using a Deterministic Neural Network
Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over an intractable input space of initial conditions, along with modeling their evolution via tools such as N-body simulations which are computationally expensive. Recently, deep learning has emerged as a surrogate for N-body simulations by directly learning the mapping between the linear input of an N-body simulation and the final nonlinear output from the simulation, significantly accelerating the forward modeling. However, this still does not reduce the search space for initial conditions. In this work, we pioneer the use of a deterministic convolutional neural network for learning the reverse mapping and show that it accurately recovers the initial linear displacement field over a wide range of scales ($<1$-$2\%$ error up to nearly $k\simeq0.8$-$0.9 \text{ Mpc}^{-1}h$), despite the one-to-many mapping of the inverse problem (due to the divergent backward trajectories at smaller scales). Specifically, we train a V-Net architecture, which outputs the linear displacement of an N-body simulation, given the nonlinear displacement at redshift $z=0$ and the cosmological parameters. The results of our method suggest that a simple deterministic neural network is sufficient for accurately approximating the initial linear states, potentially obviating the need for the more complex and computationally demanding backward modeling methods that were recently proposed.
Vaibhav Jindal, Albert Liang, Aarti Singh, Shirley Ho, Drew Jamieson
2023-03-23T06:04:36Z
http://arxiv.org/abs/2303.13056v2
# Predicting the Initial Conditions of the Universe using Deep Learning ###### Abstract Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over a vast input space of initial conditions, along with modeling their evolution via tools such as N-body simulations which are computationally expensive. Deep learning has emerged as an alternate modeling tool that can learn the mapping between the linear input of an N-body simulation and the final nonlinear displacements at redshift zero, which can significantly accelerate the forward modeling. However, this does not help reduce the search space for initial conditions. In this paper, we demonstrate for the first time that a deep learning model can be trained for the reverse mapping. We train a V-Net based convolutional neural network, which outputs the linear displacement of an N-body system, given the current time nonlinear displacement and the cosmological parameters of the system. We demonstrate that this neural network accurately recovers the initial linear displacement field over a wide range of scales (\(<1\)-\(2\%\) error up to nearly \(k=1\ \mathrm{Mpc}^{-1}\,h\)), despite the ill-defined nature of the inverse problem at smaller scales. Specifically, smaller scales are dominated by nonlinear effects which makes the backward dynamics much more susceptible to numerical and computational errors leading to highly divergent backward trajectories and a one-to-many backward mapping. The results of our method motivate that neural network based models can act as good approximators of the initial linear states and their predictions can serve as good starting points for sampling-based methods to infer the initial states of the universe. Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA \({}^{2}\)Max Plank Institute for Astrophysics, Garching, Germany \({}^{3}\)Center for Computational Astrophysics, Flatiron Institute, New York, NY, USA; \({}^{4}\)Department of Physics & Center for Data Science, New York University, New York, NY, USA; \({}^{5}\)Department of Astrophysical Sciences, Princeton University, Princeton, USA; Correspondence to: Aarti Singh \(<\)[email protected]\(>\), Shirley Ho \(<\)[email protected]\(>\). ## 1 Introduction The evolution of our universe can be uniquely determined by its initial conditions and the laws of physics governing its dynamics. To understand this cosmic history, astrophysicists use a large number of surveys (Amendola et al., 2018; Eisenstein et al., 2011; Spergel et al., 2015) and simulations (Springel et al., 2001; Bagla, 2002; Villaescusa-Navarro et al., 2020). These simulations compute the gravitational evolution of a system of N-body particles given a set of nearly uniform and typically Gaussian initial conditions, representing the early universe. These forward simulations, however, are computationally expensive and require a large amount of time and computational resources. In recent years, deep-learning techniques have been shown to be extremely helpful in accelerating the forward modeling process (He et al., 2019; Jamieson et al., 2022). These deep-learning models learn the mapping between pairs of inputs and outputs from numerical N-body simulations and act as fast and accurate approximators for these simulators. These deep-learning surrogates speed up the forward modeling process by several orders of magnitude. 
Since neural networks are theoretically proven to be universal function approximators (Hornik et al., 1989), the forward modeling of N-body simulations fits perfectly into the regime of neural networks. The problem of inferring the initial state of the universe, or the input to an N-body simulation that generates a specific redshift zero (current time) nonlinear displacement field, is an inverse problem and poses a completely different challenge. Inverse problems are hard as they require a search over a large space of input configurations (the potential initial conditions of the universe), and typically involve one-to-many mapping if learned as a reverse mapping. Standard neural networks are one-to-one mappings, and not expected to work well in these problems. Sampling approaches such as Hamiltonian Monte Carlo (Radford, 2012) based on Bayesian priors are computationally very expensive. One could resort to more complex generative neural networks such as generative adversarial networks (GANs) (Goodfellow et al., 2014), normalizing flows (Rezende & Mohamed, 2015), diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), etc. but they often either fail to converge, are unstable, or require excessive computational time (cf. Che et al., 2016; Chen et al., 2023; Salimans & Ho, 2022). Importantly, simulations in scientific fields such as cosmology are often deterministic, and therefore they are reversible in principle. The one-to-many backward problem arises primarily due to numerical and computational errors which get exacerbated only at small scales which are dominated by nonlinear effects. This can cause the backward trajectories to be highly divergent. This motivates the approach we demonstrate here: training a standard deterministic neural network to learn the reverse map and output the initial states of cosmological N-body simulations using the final state of displacements as input. We show that despite the one-to-many nature of the reverse mapping at small scales, a simple neural network can do an excellent job of predicting the initial states not only at large scales but even down to relatively small scales (\(k>0.1\ \mathrm{Mpc}^{-1}\,h\)) where the nonlinear dynamics of gravitational clustering become important. Our model continues to have \(<1-2\%\) error down to \(k=0.8\)-\(0.9\ \mathrm{Mpc}^{-1}\,h\). The inverse model we train is only slightly less accurate than the forward model based on the same architecture (Jamieson et al., 2022). Our results empirically motivate the use of neural networks as approximate inverse-mapping black boxes that could generate reliable initial states for a given output state, which could then be used to speed up the more fine-grained sampling-based inverse modeling methods. ## 2 Background Consider an N-body system with particles distributed on a uniform grid with positions \(\mathbf{q}\). Let \(\boldsymbol{\Psi}_{ZA}(\mathbf{q})\) be their linear ZA approximation at redshift \(z=0\). Thus, the final positions of the particles when they evolve linearly according to the Zel'dovich approximation is \[\mathbf{x}_{lin}(\mathbf{q})=\mathbf{q}+\boldsymbol{\Psi}_{ZA}(\mathbf{q}). \tag{1}\] Let the final nonlinear displacement of the particle initially at grid site \(\mathbf{q}\) be \(\boldsymbol{\Psi}_{NL}(\mathbf{q})\). 
Figure 1: Qualitative comparison of the \(x,y,\) and \(z\) displacements for a \(128\times 128\) slice of particles from a linear field sampled using the training parameters and the corresponding linear field predicted by our inverse model. Thus, the final positions of the particles at redshift \(z=0\) under nonlinear evolution is \[\mathbf{x}_{nonlin}(\mathbf{q})=\mathbf{q}+\mathbf{\Psi}_{NL}(\mathbf{q}). \tag{2}\] In this work, we investigate the problem of predicting the linear displacement field at redshift \(z=0\), given the nonlinear displacement field and the cosmological parameters defining the N-body evolution. ## 3 Methodology We train a CNN-based neural network to predict the initial state of an N-body system of particles evolving under gravity on an expanding cosmological background, given the final state. Our CNN takes the nonlinear displacement field \(\mathbf{\Psi}_{NL}(\mathbf{q})\) at redshift \(z=0\) (current time) and the value of \(\Omega_{m}\) as the input, and predicts the linear displacement field \(\mathbf{\Psi}_{ZA}(\mathbf{q})\), i.e., the Zel'dovich approximation (ZA) at redshift \(z=0\). We directly use the CNN architecture of (Jamieson et al., 2022) and train it using 100 pairs of nonlinear and linear displacement fields. In terms of the training procedure, our method is almost identical to (Jamieson et al., 2022), with the only difference being that we reversed the inputs and outputs of the neural network. We now input the nonlinear displacement field to our CNN and ask it to predict the linear displacement field, which is exactly opposite to what was done by (Jamieson et al., 2022). The source code of our implementation and experiments is available at github.com/vaibhavjindal/map2map/tree/inverse_mapping. For our experiments, we train our model using simulations of \(128^{3}\) particles in a cubic box with a side length of \(250\ \mathrm{Mpc}/h\). The particles are distributed uniformly across this grid with a mean separation of \(1.95\ \mathrm{Mpc}/h\) between two adjacent particles of the grid. This mean separation corresponds to a Nyquist wavenumber of \(k=1.608\ \mathrm{Mpc}^{-1}\ h\), the theoretical limit beyond which we cannot trust the predictions of any model for this setting. For the training data, we randomly generated 100 linear fields for a fixed set of cosmological parameters (\(\Omega_{m}=0.300,\Omega_{b}=0.050,h=0.700,n_{s}=0.965,\sigma_{8}=0.799\)). We then used the emulator of (Jamieson et al., 2022) to generate the nonlinear displacement fields for these linear fields, and we use this small set of pairs of nonlinear and linear fields to train our model. The neural network architecture involves four convolutional layers, which determine its field-of-view to be \(96^{3}\). That is, a focal particle's initial displacement is predicted based on its environment out to its \(48^{\mathrm{th}}\) neighbour on the initial particle grid, corresponding to a distance of \(93.75\ \mathrm{Mpc}/h\). This finite field-of-view has a benefit: the evolution of the system on large scales is accurately described by linear theory, so the V-net model preserves this linear evolution on scales larger than the field-of-view while allowing for nonlinear evolution at smaller (medium range of) scales. On very small scales, the particles cluster tightly into dark matter halos, where their orbits become complicated and difficult to predict. 
This small-scale clustering imposes a resolution limit on the inverse mapping, making it a one-to-many mapping as numerical errors and floating point precision make the neural network model unable to fully encode the detailed textures of this virialized motion. This limitation is also present in the forward model, which tends to make halos more diffuse on small scales than in the simulations, blurring the details of these sharp structures. For the inverse mapping, the initial linear displacements are blurred for particles that end up inside of halos, limiting the accuracy of the inverse model predominantly on these very small scales. To evaluate the in-distribution performance of our model, i.e., the performance on fields corresponding to the training set cosmological parameters, we analyze the results of our model on unseen pairs of nonlinear and linear fields with the same set of cosmological parameters as the training data (Sections 4.1, 4.2 and 4.3). To evaluate how well our model performs on unseen cosmological parameters, we present the out-of-distribution (OOD) evaluation on the Quijote simulation suite (Villaescusa-Navarro et al., 2020) in Section 4.4. ## 4 Results and Analysis In the following section, we showcase our method's outputs both qualitatively and quantitatively with a variety of method and summary statistics. ### Qualitative Analysis Figure 1 shows the \(x,y,\mathrm{and}\ z\) direction displacements for a \(128\times 128\) slice of the linear displacement field and the corresponding linear field predicted by our inverse model. Qualitatively, the predictions of our model match very well with the original linear field that we wanted to predict. To further evaluate the quality of the prediction, we used the forward-direction emulator again to generate the nonlinear field when fed the predicted linear field. We qualitatively compare this regenerated nonlinear field with the original nonlinear field in Figure 2. We find these displacement fields match very nicely qualitatively. The residuals are also not significant (especially one should take into account that the color bar at the residual plot is approximately 10 times smaller). ### One-Point Statistics The initial conditions of the simulation are set up in Fourier space, with each mode of the displacement field drawn from a random Gaussian distribution with a variance determined by the linear power spectrum \(P(k)\), which is determined by the cosmological parameters. This construction yields a coordinate-space displacement field that is also Gaussian, so its statistics are uniquely determined by the two-point correlation function, which is simply the Fourier transform of the power spectrum. To demonstrate that we have accurately recovered the initial conditions of the simulation we must show that the output of our model has both the correct power spectrum and that its statistical distribution is Gaussian. Figure 3: Two-point correlation comparison between the original linear field (target) and the linear field predicted by our model (prediction). For an exact prediction, the variation of power with wavenumber should be exactly similar for both the fields, and the values of transfer function fractional error and stochasticity should be exactly zero. Figure 2: Qualitative comparison of the \(x,y,\) and \(z\) displacements for a \(128\times 128\) slice of particles from the original nonlinear field and the nonlinear field generated from the linear field predicted by our inverse model. 
The nonlinear field is generated from the forward modeling emulator (Jamieson et al., 2022). The slice shown here is the same as the one shown in Figure 1 for a one-to-one comparison. We plot the histogram of probability density of the displacements of the particles for the original linear field and the one predicted by our model (Figure 5). Each particle has three displacements corresponding to the \(x,y,\) and \(z\) directions. Since the linear fields are sampled from a Gaussian distribution determined by the cosmological parameters, we expect the distribution of displacements to match \(\mathcal{N}(0,5.275)\) for the simulation setting with \(128^{3}\) particles in a box size of 250 Mpc/\(h\). Specifically, the variance of displacements along the \(i^{\rm th}\) Cartesian direction is given by: \[\sigma_{i}^{2}=\int\frac{\mathrm{d}k^{3}}{(2\pi)^{3}}\frac{(k^{i})^{2}}{k^{4}} P(k), \tag{3}\] where the integral is over all wave vectors in Fourier space, \(k\equiv|\vec{k}|\) is the magnitude of the wave vector and \(k^{i}\) is its \(i^{\rm th}\) component. We numerically evaluate this integral on a Fourier space grid with the same dimensions as the initial simulation grid. From Figure 5 we see that the statistics of our model output agree well with the expected Gaussian distribution. The bin counts also match the particular realization from this distribution of the target data out to displacements of \(\sim 10~{}{\rm Mpc}/h\), or 2\(\sigma_{i}\). Extremely large displacements indicate a particle either flowing outwards in a rare, extremely underdense environment or inwards towards a rare, high-density peak. Based on the residuals, we see that the tails of the distribution predicted by the model are somewhat smaller than the target data, indicating the model misplaces these particles. This is unsurprising due both to the rarity of these trajectories and to the fact that these particles are the most affected by extreme non-linearities in their environments, which exacerbates the one-to-many problem. ### Two-Point Correlation Comparison The displacement power spectrum for a displacement field \(\mathbf{\Psi}\) for wavenumber \(k\) is defined as \[P(k)=\sum_{i\in\{x,y,z\}}\langle\mathbf{\Psi}_{i}(k)\mathbf{\Psi}_{i}(k)\rangle. \tag{4}\] Using this definition of power spectrum in the Fourier space, we can now define the transfer function as \[T(k)=\sqrt{P(k)}, \tag{5}\] and the correlation coefficient as \[r(k)=\frac{P_{pred\times true}(k)}{\sqrt{P_{pred}(k)P_{true}(k)}}, \tag{6}\] Figure 4: Two-point correlation comparison between the original nonlinear field (target) and the nonlinear field generated from the predicted linear field given by the inverse model. For an exact prediction, the variation of power with wavenumber should be exactly similar for both the fields, and the values of transfer function fractional error and stochasticity should be exactly zero. Figure 5: Distribution of the displacements of particles for a given linear field and the corresponding linear field predicted by our model. The distribution has been calculated by considering the \(x,y,\) and \(z\) displacements of \(128^{3}\) particles. Relative error denotes the relative errors in probability density for different bins. where \(P_{pred}(k)\) is the displacement power spectrum predicted by our neural network, \(P_{true}(k)\) is the ground truth power spectrum, and \(P_{pred\times true}(k)\) is the cross power spectrum between the predicted and the ground truth fields. 
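A schematic NumPy implementation of these spectra is sketched below. Binning choices and normalization constants are illustrative assumptions; the constants cancel in the ratios used by the error metrics defined next.

```python
# Sketch of Eqs. 4-6: isotropically binned auto- and cross-spectra of two
# displacement fields on a periodic grid, summed over Cartesian components.

import numpy as np

def power_spectra(psi_a, psi_b, n_bins=32):
    """Binned P_aa(k), P_bb(k), P_ab(k) for fields of shape (3, N, N, N)."""
    n = psi_a.shape[-1]
    k1d = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(0, kmag.max(), n_bins + 1)
    fa = [np.fft.fftn(psi_a[i]).ravel() for i in range(3)]
    fb = [np.fft.fftn(psi_b[i]).ravel() for i in range(3)]
    paa = sum(np.abs(f)**2 for f in fa)                     # Eq. 4 (auto)
    pbb = sum(np.abs(f)**2 for f in fb)
    pab = sum((fa[i] * np.conj(fb[i])).real for i in range(3))  # cross
    digit = np.digitize(kmag, bins)
    avg = lambda p: np.array([p[digit == b].mean() for b in range(1, n_bins + 1)])
    return avg(paa), avg(pbb), avg(pab)

pred = np.random.randn(3, 64, 64, 64)                # toy "predicted" field
true = pred + 0.1 * np.random.randn(3, 64, 64, 64)   # toy correlated "truth"
P_pred, P_true, P_cross = power_spectra(pred, true)
transfer_err = np.sqrt(P_pred / P_true) - 1          # Delta T / T  (Eq. 7)
stochasticity = 1 - P_cross**2 / (P_pred * P_true)   # 1 - r^2      (Eq. 8)
```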
Using these two quantities, we define the transfer function fractional error, \[\frac{\Delta T(k)}{T(k)}=\sqrt{\frac{P_{pred}(k)}{P_{true}(k)}}-1, \tag{7}\] to measure the discrepancy between amplitudes of the predicted and the true fields. We also define stochasticity, \[1-r^{2}(k)=1-\frac{P_{pred\times true}^{2}(k)}{P_{pred}(k)P_{true}(k)}, \tag{8}\] to capture the excess fraction of correlation in the prediction of our model that cannot be accounted for in the target data. For an ideal match between the target and the predicted field, the values of both these quantities should be exactly zero. Figure 3 shows the performance of our model in terms of these quantities for an original linear displacement field and its corresponding predicted linear field. We see that from large scales down to scales where \(k\simeq 0.5\ \mathrm{Mpc}^{-1}\ h\) the model achieves percent-level accuracy for both the transfer functions and stochasticities. Note that non-linearities become important at scales where \(k>0.1\ \mathrm{Mpc}^{-1}\ h\), so the model is able to accurately learn the inverse mapping even in the moderately nonlinear regime for in-distribution fields. To further test the reliability of our model, we generated the nonlinear displacement field from our predicted linear field by using the forward direction emulator (Jamieson et al., 2022). Since the mapping from the nonlinear displacement field to the linear displacement field is one-to-many, it becomes critical to see that the nonlinear field which gets generated from our predicted linear field matches the actual nonlinear field. Figure 4 shows the power spectra comparison between these two nonlinear displacement fields. We see that there is an excellent match between the powers of the two fields. The transfer function fractional errors and the stochasticity plots show a good match for large to medium scales (\(k<0.5\ \mathrm{Mpc}^{-1}\ h\)). ### Out-Of-Distribution Evaluation Given that our model has been trained on simulations generated by a neural network emulator (Jamieson et al., 2022) with fixed cosmological parameters (\(\Omega_{m}=0.300,\Omega_{b}=0.050,h=0.700,n_{s}=0.965,\sigma_{8}=0.799\)), there is a legitimate concern about its ability to generalize to actual N-body simulations with different sets of cosmological parameters. In order to assess this out-of-distribution (OOD) performance, we have conducted a series of experiments on simulations from the Quijote suite (Villaescusa-Navarro et al., 2020). The Quijote suite comprises 2000 linear and nonlinear fields, each consisting of \(512^{3}\) particles that are uniformly distributed in a cube with a side length of \(1000\ \mathrm{Mpc}/h\). To quantify the dissimilarity between these simulations and our training data, we rank them based on the sum of their relative percentage differences in \(\Omega_{m}\) and \(\sigma_{8}\) from our training values, since these two parameters have the most significant impact on the simulated distributions. This dissimilarity metric is then used to evaluate the model's performance on different percentiles, with the 100th percentile simulation representing the one on which we expect our model to perform the worst. Notably, our model's performance is unaffected by changes in the box size, as long as the mean particle density remains constant. 
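The dissimilarity ranking described above is straightforward to implement; a minimal sketch with a toy parameter catalog standing in for the 2000 Quijote parameter sets follows.

```python
# Sketch of the OOD dissimilarity metric: rank simulations by the summed
# relative percentage differences of Omega_m and sigma_8 from the training
# values. The random catalog below is a stand-in for the Quijote suite.

import numpy as np

TRAIN_OM, TRAIN_S8 = 0.300, 0.799  # training cosmology from the paper

def dissimilarity(om, s8):
    return abs(om - TRAIN_OM) / TRAIN_OM + abs(s8 - TRAIN_S8) / TRAIN_S8

rng = np.random.default_rng(0)
om = rng.uniform(0.1, 0.5, 2000)
s8 = rng.uniform(0.6, 1.0, 2000)
scores = dissimilarity(om, s8)
order = np.argsort(scores)
pct20 = order[int(0.2 * len(order))]  # index of the "20th percentile" run
```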
In order to evaluate the model's performance on out-of-distribution (OOD) Quijote simulation data, we qualitatively compare the \(x,y,\) and \(z\) displacements of a \(512\times 512\) slice from the linear and nonlinear fields, similar to Section 4.1. The slices for the 0th percentile (\(\Omega_{m}=0.296,\Omega_{b}=0.067,h=0.523,n_{s}=1.091,\sigma_{8}=0.806\)) and the 20th percentile (\(\Omega_{m}=0.311,\Omega_{b}=0.067,h=0.673,n_{s}=0.993,\sigma_{8}=0.977\)) simulations are presented in Figures 8 and 11 respectively. Additionally, the one-point statistics for these simulations are provided in Figures 6 and 9. It is worth mentioning that the Quijote simulations consist of \(512^{3}\) particles in a box with a side length of \(1000\ \mathrm{Mpc}/h\). Hence, based on Equation 3, the theoretical distributions for these simulations are \(\mathcal{N}(0,6.193)\) and \(\mathcal{N}(0,7.159)\) respectively. Furthermore, we present the two-point correlation comparisons for these simulations in Figures 7 and 10. For completeness, we present the results of our power spectra analysis of the actual linear field and our model's predictions for a range of percentiles of the dissimilarity metric in Figure 12. We are able to match the in-distribution performance for the 0th percentile simulation, and we do reasonably well for the 20th percentile simulation at medium scales. The performance gradually gets worse as we move towards the higher percentiles of the dissimilarity metric. Additionally, in Figure 13, we compare the actual nonlinear fields with the nonlinear fields generated by passing the output of our model through the forward direction emulator (Jamieson et al., 2022). Once again, we see a good match for lower percentiles of the dissimilarity metric, which progressively gets worse with increasing percentiles. ## 5 Conclusion In this work, we show that neural networks are able to recover a linear field which matches the original linear field for a wide range of scales, including scales affected by the nonlinear physics of gravitational clustering. The final simulation state is also accurately recovered after first being inverse mapped by our model, and then being forward mapped back to the final state. We empirically demonstrate that the model generalizes reasonably well to OOD cosmological parameters. This indicates that despite the ill-defined nature of the inverse-mapping problem, neural networks can still be successfully trained to accurately predict the linear fields at large and medium scales. ## 6 Future Work In this work, we showed that neural networks are successful in predicting the initial states of the N-body simulation at a wide range of scales, going down to small scales where nonlinear gravitational effects kick in. The inherent one-to-many mapping between the final states and the initial states at smaller scales makes it impossible for N-body simulators to make such predictions. However, inverse neural network models, such as the one presented in this work, can be trained to predict approximate initial states of N-body simulations which are correct for a wide range of scales. These approximate initial states could then be used as initializations by further downstream sampling-based methods (e.g. Hamiltonian Monte Carlo, Active Learning, etc.) to refine their search for better initial states which are correct at even smaller scales. 
Other directions for future work involve training on a larger set of cosmological parameters, improving OOD performance, and uncertainty quantification for the initial states. ## Acknowledgements This work is supported by the Simons Foundation grant on Learning the Universe.
2306.07825
The main role of fractal-like nature of conformational space in subdiffusion in protein
Protein dynamics is a fundamental element for comprehending protein biological functions. However, a theoretical picture providing a microscopic-detail explanation of its relevant features is still missing. One of the most relevant properties exhibited by this dynamics is its subdiffusivity, whose origins are still unknown. Here, by directly comparing all-atom molecular dynamics simulations and theory, we show that this behavior mainly arises from the fractal nature of the network of metastable states of the conformational space over which protein dynamics, thought of as a diffusion process, takes place. This process is assumed to be Markovian by the employed theoretical picture. Therefore, to further support its validity, we built a simple Markov state model from the simulation outcome and show that it exhibits a subdiffusive behavior, in quantitative agreement with the one associated with the molecular dynamics. Moreover, molecular dynamics gives direct access to relevant quantities which allowed us to rule out the possibility that the Continuous Time Random Walk can explain the protein subdiffusivity.
Luca Maggi
2023-06-13T14:57:01Z
http://arxiv.org/abs/2306.07825v1
The main role of fractal-like nature of conformational space in subdiffusion in protein ## Abstract: Protein dynamics is a fundamental element for comprehending protein biological functions. However, a theoretical picture providing a microscopic-detail explanation of its relevant features is still missing. One of the most relevant properties exhibited by this dynamics is its subdiffusivity, whose origins are still unknown. Here, by directly comparing all-atom molecular dynamics simulations and theory, we show that this behavior mainly arises from the fractal nature of the network of metastable states of the conformational space over which protein dynamics, thought of as a diffusion process, takes place. This process is assumed to be Markovian by the employed theoretical picture. Therefore, to further support its validity, we built a simple Markov state model from the simulation outcome and show that it exhibits a subdiffusive behavior, in quantitative agreement with the one associated with the molecular dynamics. Moreover, molecular dynamics gives direct access to relevant quantities which allowed us to rule out the possibility that the Continuous Time Random Walk can explain the protein subdiffusivity. ## Main text: A fundamental paradigm in structural biology states the relationship between protein structure and function. However, besides a single specific arrangement of protein atoms (i.e., a conformation), the dynamics is widely recognized as a pivotal element to understand protein function at the microscopic level [1, 2]. Catalytic enzymatic reactions [3], signal transduction [4] and molecule transport across the plasma membrane [5] are significant examples in which protein dynamics plays a crucial role and whose investigation thus cannot be neglected. To avoid any source of confusion, here we refer to protein dynamics as every time-dependent change of protein conformation. This is determined by the interactions that residues have with each other and with the external environment. The whole set of those produces a potential energy landscape whose explicit analytical treatment is, however, unfeasible, mainly due to its high dimensionality and complexity. Nevertheless, experimental and computational studies highlighted some of its relevant features, such as its inherent "roughness", which gives rise to a conformational space composed of a multitude of metastable states separated by energy barriers of different heights [1, 2, 6]. According to this picture, therefore, protein dynamics is often modeled as a diffusion process among those metastable states [7, 8, 9], and the same model is adopted in the present work. Previous investigations directly and indirectly showed [10, 11, 12, 13, 14] that the diffusion process exhibits a subdiffusive behavior, which implies a sublinear relationship between the mean square displacement (MSD) and time. More formally speaking, if \(X(t)=\{x_{1}(t),...x_{N}(t)\}\) is a single conformation, with \(x_{i}(t)\) the i-th degree of freedom at time t and N the total number of degrees of freedom, it follows: \[MSD=\langle|X(0)-X(t)|^{2}\rangle\sim t^{\alpha}\] (Eq. 1) where \(\langle..\rangle\) represents an ensemble average. In Eq. 1 subdiffusivity requires \(\alpha\) to be smaller than 1. Despite numerous and notable theoretical efforts [14, 15, 16, 17], the microscopic origin of this phenomenon is still unclear. In this letter we bring strong evidence indicating the fractal nature of the protein conformational space as the main origin of subdiffusion in proteins. 
This is achieved by employing all-atom Molecular Dynamics (MD) simulations to directly verify theoretical results. The theory which describes a diffusion process on a fractal structure prescribes that the exponent \(\alpha\) is given by [18, 19]: \[\alpha=\frac{d_{s}}{d_{f}}\] (Eq. 2) where \(d_{f}\) is associated with geometrical properties of the conformational space and \(d_{s}\) is related to spectral features of the operator generating the diffusion process [18, 19]. Our goal is, therefore, to calculate \(\alpha\), \(d_{f}\), and \(d_{s}\) separately and verify whether Eq. 2 holds. The evaluation of those three quantities has been achieved by the analysis of 1-us MD simulations of three biomolecules: the N-terminal of the human histone H4 tail (H4); the Villin headpiece (Villin; PDB ID: 1Vil [20]) and a PDZ domain (PDZ; PDB ID: 1D5G [21]), Fig. 1a-c. They differ in the number of residues, which are 25, 32 and 96 for H4, Villin and PDZ respectively, and in their secondary and tertiary structure, as H4 is a totally disordered peptide, Villin exhibits a partial structure and PDZ can be classified as a small globular structured protein. All the MD simulation details are given in the SI. In order to reduce the number of degrees of freedom we projected all the trajectories onto the first two principal components extracted from a principal component analysis, PC1(t) and PC2(t), carried out taking into account the C\(\alpha\) backbone only. Hence, those two variables define a single sampled conformation, X(t)=(PC1(t), PC2(t)). The identification of metastable states has been done over this sub-space by means of a clustering algorithm. In this work an agglomerative hierarchical clustering method is employed [22]. The reasons leading to this choice are two-fold. Firstly, an intrinsic hierarchy of the whole set of metastable states appears reasonable, as structural differences among conformations can be naturally classified as sub-sets of decreasing size which sub-divide the entire space. This idea is supported by previous works which highlighted this feature [2, 7]. On the other hand, this clustering method presents technical advantages, as it does not require setting a fixed number of clusters (as K-means methods do), employing instead an adjustable parameter that controls the cluster average size (\(\varepsilon\)), and it does not produce any outlier conformations, which are hard to include in the theoretical picture. The dynamics involving each single conformation is, thus, replaced by a coarse-grained one involving only the cluster centroids, which correspond to the representative conformations of each single metastable state. The average cluster size is chosen to best reproduce the MSD calculated from the fine-grained conformational sub-space, while still providing a reliable sampling of each cluster (see Fig. SI 1). The MSD calculations are performed using a moving average to cancel out the dependence on the initial conditions, and read: \[MSD = \frac{1}{T-t}\int_{0}^{T-t}d\tau\ \left|X(t+\tau)-X(\tau)\right|^{2}\] (Eq. 3) where t < 0.01 T, with T the maximum simulation time (1 us). 
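A condensed sketch of this pipeline (PCA projection, agglomerative clustering, moving-average MSD as in Eq. 3, and the power-law fit for the exponent of Eq. 1) is given below. The input array and the distance threshold playing the role of \(\varepsilon\) are illustrative placeholders, not the in-house scripts used in this work.

```python
# Sketch of the analysis pipeline described above. `ca_coords` stands in
# for the flattened C-alpha coordinates of each MD frame.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

ca_coords = np.random.randn(5000, 75)  # (frames, 3 * n_CA), placeholder data

X = PCA(n_components=2).fit_transform(ca_coords)  # (PC1(t), PC2(t))

labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.5, linkage="ward"
).fit_predict(X)
centroids = np.array([X[labels == c].mean(axis=0) for c in np.unique(labels)])
X_coarse = centroids[labels]  # replace each frame by its cluster centroid

def msd_moving_average(traj, lag):
    """Eq. 3: squared displacement averaged over all time origins tau."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(np.sum(d**2, axis=1))

lags = np.unique(np.logspace(0, np.log10(len(X) // 100), 20).astype(int))
msd = [msd_moving_average(X_coarse, l) for l in lags]

# Subdiffusion exponent alpha (Eq. 1) from a log-log fit of MSD vs lag.
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"fitted alpha = {alpha:.2f}  (alpha < 1 indicates subdiffusion)")
```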
We found that setting \(\varepsilon\) within a range from 1.0 to 0.2, depending on the system, produces almost the same subdiffusive behavior, as shown by the very small relative difference between the exponent \(\alpha\)'s evaluated in the two cases (reported as a percentage), \(\Delta=\frac{\left|\alpha-\alpha_{fine}\right|}{\alpha_{fine}}\), where \(\alpha\) and \(\alpha_{fine}\) are the exponents calculated for the coarse- and fine-grained conformational space respectively (Fig 1d-f). The distribution of metastable states over the sub-space is connected to \(d_{f}\), which relates the number of cluster centroids within a sphere of radius r, M(r), to the radius itself as \(M(r)\sim r^{d_{f}}\). The M(r) profiles are evaluated averaging over all the clusters and presented in Fig. 2a-c. All of them exhibit a power-law relation with \(d_{f}\) always less than 2, showing the non-homogeneous distribution of cluster centroids. While \(d_{f}\) is associated with the geometric arrangement of metastable states in the conformational sub-space, \(d_{s}\) is related to their "connectivity", determining the spectral density of the operator generating the diffusion process [23]. Employing an operative definition, this exponent characterizes the probability of a trajectory to return to its starting point (P\({}_{o}\)) after a time t, being \(P_{o}\sim t^{-d_{s}/2}\). In our case the "starting point" coincides with the starting metastable state (i.e., the starting cluster). Hence, we introduced \(C(t+\tau,\tau)\), a function equal to 1 if the clusters visited at \(t+\tau\) and \(\tau\) are the same and 0 otherwise, and Po is calculated employing a moving average: \[P_{o}(t)=\frac{1}{T-t}\int_{0}^{T-t}d\tau\ C(t+\tau,\tau)\] (Eq. 4) The profiles show a good agreement with a power-law relation within about four orders of magnitude (Fig. 2 d-f). In Tab. 1 we summarize all the calculated quantities and compare the subdiffusion exponent \(\alpha\) with \(d_{s}/d_{f}\). We found an excellent quantitative agreement between these two. Therefore, it turns out that the theory of diffusion on fractals is capable of adequately modeling the protein conformational space exploration as described by all-atom Molecular Dynamics simulations, and one should notice the general validity of this finding, as it holds for biomolecules which present very different structural features. This is the main result of this work. Interestingly, the correspondence between theory and simulations entails an important feature of the diffusion process, which is its Markovianity [19]. Therefore, to test this implication we have extracted a transition probability matrix (T) directly from the MD simulations and analyzed the MSD resulting from this Markov state model (MSM). The matrix T is an N\({}_{c}\) x N\({}_{c}\) matrix, where N\({}_{c}\) is the total number of clusters, and each element Tij is equal to: \[T_{ij} = \frac{s_{ij}}{\sum_{l=1}^{N_{c}}s_{il}}\] (Eq. 5) where s\({}_{ij}\) is the number of "jumps" between the i-th and j-th cluster. Obviously, the finiteness and discreteness of T prevent the results from arbitrarily coinciding with the MD results, which are, instead, produced by an infinite and continuous operator [24]. 
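A compact sketch of Eq. 5 is given below, assuming the `labels` array (cluster index per frame) produced by the clustering sketch above.

```python
# Build the transition probability matrix of Eq. 5 from the sequence of
# visited clusters. `labels` is assumed from the earlier clustering sketch.

import numpy as np

n_c = labels.max() + 1
counts = np.zeros((n_c, n_c))
for i, j in zip(labels[:-1], labels[1:]):
    counts[i, j] += 1                       # s_ij: observed i -> j jumps
row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1)  # avoid 0-division
T = counts / row_sums                       # T_ij, row-stochastic
```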
Each cluster has been assigned a stationary probability value (\(\pi_{s}\)) extracted directly from the simulations, and the system time evolution consists in the evolution of the Nc-length probability vector \(\pi\), which evolves like a time-discrete Markov chain; at each step n we can calculate this vector for step n+1 as: \[\pi_{n+1}=T\,\pi_{n}\] (Eq. 6) Therefore, the MSD in this case has been calculated as: \[MSD=\ \Delta t\cdot\sum_{i=1,\,j=1}^{N_{c}}\pi_{s}^{i}\cdot\pi_{n}^{j}\,d_{ij}^{2}\] (Eq. 7) where \(\Delta t\) is the smallest sampling step in the MD simulations (20 ps, see SI). The indices i, j indicate the corresponding clusters and \(d_{ij}^{2}\) is the square distance between them. Despite the unavoidable flaws introduced in the presented MSM, the calculated MSD exhibits a quantitative time dependence very similar to the one associated with the MD simulations (Fig 3). This further supports the main finding and gives a new impetus to the debate over the long-memory effects on protein dynamics which have previously been invoked to model the diffusion process in the conformational space. One of the most common microscopic theoretical pictures involving memory effects is the continuous time random walk (CTRW), which describes the diffusion as a time-continuous jumping process among metastable states separated by energy barriers which, to give rise to a subdiffusive dynamics, are distributed according to a power law. This produces a sublinear time relationship for the average number of jumps, \(\langle N(t)\rangle\sim t^{\alpha}\), which originates subdiffusion [25]. To show that the subdiffusivity observed in all-atom MD simulations cannot be described by CTRW, we have directly calculated <N(t)> as: \[\langle N(t)\rangle=\int_{0}^{t}d\tau\ \left|C(\tau+\Delta t,\tau)-1\right|\] (Eq. 8) This quantity shows a linear time dependence independently of the size of the clusters (Fig 4a). Moreover, the probability of observing a jump within a time t presents an exponential decay, since its profile does not show any region following a power-law relationship, which should be invariant upon changing the cluster size (Fig 4b). Therefore, the evaluation of those two quantities rules out CTRW as a possible description of the subdiffusivity observed in all-atom MD simulations. In conclusion, we brought compelling evidence that subdiffusive protein dynamics, as described by MD simulations, stems from the fractal nature of the conformational space. The high-dimensional and rough potential energy landscape gives rise to separated basins of attraction, namely metastable states, whose distribution in the conformational space, regulated by \(d_{f}\), resembles a fractal structure. On the other hand, \(d_{s}\) is related to the connections among those states, modulating the accessibility from one to another. The whole dynamics is therefore described by these two exponents, which are directly connected to the geometry of the conformational energy landscape. As a corollary, we also highlighted the parallelism between the diffusion and a Markov process, as implied by the theory of diffusion on fractals, and ruled out the role of CTRW in the origin of the subdiffusion phenomenon. 
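For completeness, a sketch of the MSM propagation (Eqs. 6-7) and of the jump counting used against the CTRW picture (Eq. 8) follows, under the same assumptions as the previous sketches (`T`, `labels`, and `centroids` from the clustering step; the chain is written in row-vector form).

```python
# Propagate the cluster-probability vector (Eq. 6), accumulate the MSM
# MSD (Eq. 7 up to the Delta-t factor), and count jumps along the
# trajectory as in Eq. 8.

import numpy as np

pi_s = np.bincount(labels, minlength=T.shape[0]) / labels.size
d2 = np.sum((centroids[:, None, :] - centroids[None, :, :])**2, axis=-1)

pi = pi_s.copy()
msd_msm = []
for n in range(100):
    pi = pi @ T                   # Eq. 6 (row-vector convention)
    msd_msm.append(pi_s @ d2 @ pi)  # Eq. 7, up to the Delta-t factor

# N(t): number of cluster changes within time t along the trajectory.
jumps = (labels[1:] != labels[:-1]).astype(float)
n_of_t = np.cumsum(jumps)  # linear growth of <N(t)> argues against CTRW
```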
[MISSING_PAGE_POST] ## Figures: The fitting range corresponds to 5% and 70% of the total number of metastable states, to avoid artifacts due to the conformational space discreteness and finiteness. \begin{table} \begin{tabular}{c|c|c|c|c} \hline System & \(\mathrm{d_{f}}\) & \(\mathrm{d_{s}}\) & \(\alpha\)= \(\mathrm{d_{s}}\)/\(\mathrm{d_{f}}\) & \(\alpha_{\mathrm{fit}}\) \\ \hline H4 & 1.44 & 0.70 & 0.486 & 0.49 \\ Villin & 1.51 & 0.62 & 0.411 & 0.41 \\ PDZ & 1.22 & 0.58 & 0.475 & 0.47 \\ \hline \end{tabular} \end{table} Table 1: Summary of all the evaluated exponents for all the systems under investigation, comparing the \(\alpha\) predicted by the theory (\(d_{s}/d_{f}\)) with the one extracted directly from the fit of the MSD (\(\alpha_{\mathrm{fit}}\)). Fig. 4: a) <N(t)> and b) probability of jumps for Villin with three different average cluster sizes. The linear profile of <N(t)> as well as the exponential decay of the jump probability are conserved features upon changing the cluster size. H4 and PDZ plots exhibit the same characteristics (Fig. SI 2). Fig. 3: Comparison between the Markov state model MSD and the one calculated from the MD simulations for the different systems investigated: a) H4 tail, b) Villin, c) PDZ, d) summarizing table. ## Supporting Information: ### Molecular Dynamics details All the presented simulations were carried out with the following procedure. The simulation boxes, which had different sizes (40x40x40, 47x47x47, and 69x69x69 A\({}^{3}\) for H4, Villin and PDZ respectively), were filled with TIP3P water molecules and Na+ and Cl- ions to neutralize the systems. A first 100 ns equilibration step was run, followed by a 1-us production run from which frames were extracted every 20 ps. LINCS [26] was used to constrain all the bonds involving hydrogens, allowing us to employ a 2 fs step to integrate the Newton equations. The equilibration is divided into a 50 ns NVT-ensemble simulation at T=310 K, followed by a 50 ns NPT-ensemble run to set the total pressure to 1 atm. We employed a Berendsen thermostat [27] with a coupling constant of 0.4 ps for the NVT ensemble, and a Nose-Hoover thermostat [28] along with a Parrinello-Rahman barostat [29] with 0.4 and 0.6 ps coupling constants respectively for the NPT ensemble; the latter two were also used for the production run. We employed the Particle Mesh Ewald method to account for long-range interactions with a real-space cut-off of 12 A. We used the GROMACS 2020.2 [30] code with the Amber99sb-ildn force field for all the simulations. ### Molecular Dynamics Analysis: All the linear fits presented in this work were done with Gnuplot 5.4. The Python Scikit-Learn suite [31] was employed for the hierarchical clustering, using the Ward linkage method. The rest of the analysis was carried out utilizing _ad hoc_ in-house scripts. Figure S1: Villin free energy surface projected onto PC1 and PC2 on the left. On the right, the same surface is subdivided into the clusters identified by hierarchical clustering, as described in the main text. **Fig. SI 2: \(<\)N(t)\(>\) and probability of jumps for H4 ( a)-b) ) and PDZ ( c)-d) ).**
2310.14601
Modeling alpha particle-induced radioluminescence using GEANT4
Optical detection of alpha particle emitters in the environment by air radioluminescence is a new technology that enables sensing a radiological threat at safe distances, without putting personnel at risk or contaminating equipment. Radioluminescence detection systems need to be fine-tuned to efficiently capture a substantial number of photons while minimizing the contribution from ambient ultraviolet light. The accurate simulation of radioluminescence, in conjunction with ray tracing, facilitates the design and optimization of such detection systems. In this work, an application within the Geant4 framework has been developed to simulate radioluminescence photons emitted in the vicinity of accelerated alpha particles and at the surface of alpha radioactive samples. The application relies on existing scintillation physics implemented in Geant4 classes such as G4OpticalPhysics and G4Scintillation, which are used to simulate radioluminescence photons as scintillations produced during the passage of alpha particles through air. The application computes the ultraviolet image of alpha particles accelerated at energies of 5.1 MeV and 8.3 MeV, as well as an extended alpha source. The application enables optimization of experimental setups for various scenarios, such as radiological emergency management, radiological crime scene investigations, or decommissioning of nuclear facilities, thus minimizing the use of costly resources and exposure to radiation.
Claudia Olaru, Mihail-Razvan Ioan, Mastaneh Zadehrafi
2023-10-23T06:13:50Z
http://arxiv.org/abs/2310.14601v1
# Modeling Alpha Particle-Induced Radioluminescence Using Geant4 ###### Abstract Optical detection of alpha particle emitters in the environment by air radioluminescence is a new technology that enables sensing a radiological threat at safe distances, without putting personnel at risk or contaminating equipment. Radioluminescence detection systems need to be fine-tuned to efficiently capture a substantial number of photons while minimizing the contribution from ambient ultraviolet light. The accurate simulation of radioluminescence, in conjunction with ray tracing, facilitates the design and optimization of such detection systems. In this work, an application within the Geant4 framework has been developed to simulate radioluminescence photons emitted in the vicinity of accelerated alpha particles and at the surface of alpha radioactive samples. The application relies on existing scintillation physics implemented in Geant4 classes such as G4OpticalPhysics and G4Scintillation, which are used to simulate radioluminescence photons as scintillations produced during the passage of alpha particles through air. The application computes the ultraviolet image of alpha particles accelerated at energies of 5.1 MeV and 8.3 MeV, as well as an extended alpha source [1]. The application enables optimization of experimental setups for various scenarios, such as radiological emergency management, radiological crime scene investigations, or decommissioning of nuclear facilities, thus minimizing the use of costly resources and exposure to radiation. Radioluminescence, Geant4 simulation, optical photons, alpha-induced luminescence in air. ## 1 Introduction Alpha radiation detection poses a real challenge to radiological emergency response teams. Due to the short range of alpha particles in air (about 4 cm at 5 MeV), conventional detectors for alpha contamination (_e.g._, silver-activated ZnS thin films, passivated implanted planar silicon, and silicon gold surface-barrier detectors) usually rely on direct interactions between alpha particles and the sensitive detector material [2]. This approach requires scanning near contaminated surfaces (_i.e._, within the range of alpha particles in air), which complicates the management of emergencies and decontamination efforts by exposing operators to hazards and health risks (_e.g._, fire, radiation, toxic substances, etc.) and risking contaminating the detectors. Moreover, the use of short-range handheld detectors makes scanning large areas and complex terrain geometries (_e.g._, collapsed buildings) laborious and time-consuming. The shortcomings of conventional detectors can be overcome by applying a detection technique based on the alpha particle-induced ultraviolet (UV) luminescence of air, the so-called radioluminescence [3]. In this approach, the atmosphere serves as a scintillator. The ionization of the atmosphere triggered by the passage of alpha particles creates a cloud of excited air molecules which decay radiatively by emitting UV photons. Most of the emission is due to the de-excitation of molecular nitrogen (N\({}_{2}\)), and to a much lesser extent, due to trace amounts of nitric oxide (NO) in the composition of air, spanning three ultraviolet spectral regions. About 99% of the emissions occur in the UV-A and UV-B spectral regions, from 280 nm to 440 nm [4]. 
The UV light can propagate through air several hundred meters, many orders of magnitude farther than the range of alpha particles (the primary radiation) in air [5], enabling sensing a radiological threat at a safe distance. An airborne application of radioluminescence detection technology can support emergency preparedness and management in case of accidental or intentional dispersal of alpha-emitting radionuclides in the environment [6]. Novel optical technologies for such applications, including calibration systems and methodologies, have been developed in the framework of the European Metrology Programme for Innovation and Research (EMPIR) project 19ENV02 RemoteALPHA [7]. Instrumentation and procedures developed in this project can also support national and international authorities in preventing the illicit trafficking of alpha-emitting materials. Furthermore, this instrumentation will benefit the nuclear industry by providing a detection system able to remotely monitor the manufacturing, handling, and storage of alpha-emitting materials. To facilitate emergency management, these detection systems should be optimized by maximizing the radioluminescence throughput. This is done by using large receiving optics and low-noise photomultipliers (PMTs), and by suppressing the background signal through efficient UV band filtering. At the same time, the systems need to have an optimal field of view to allow mapping alpha contaminations when mounted on tripods or UAVs. In the framework of RemoteALPHA, the systems have been characterized and optimized using the joint metrological infrastructure of the participating institutions. A useful alternative for prototyping detector instrumentation is Monte Carlo modeling [8, 9, 10]. With accurate models of the involved physical processes, these development tools allow reducing the experimental component to a minimum, making the prototyping more accessible in terms of required expenditure, equipment, and services. The Geant4 toolkit is well-suited for simulating radioluminescence generation, which can be achieved by generating optical photons in well-defined scintillating materials. The performance of the MC simulation toolkit was tested by modeling the generation of air scintillation (luminescence) induced by the ionizing effects of alpha sources. The ultraviolet image of the alpha source can be obtained by registering the spatial distribution of the scintillation photons generated within a luminescent medium.

## 2 Simulation Method

When alpha particles in the megaelectronvolt energy range propagate through air, they ionize various molecules and atoms. The released electrons, the so-called secondary or delta electrons, also interact with the air molecules, mainly with N\({}_{2}\), and generate further low-energy electrons. This cascade-like process leads to the excitation of air molecules, which then relax to lower-energy electronic states by emitting luminescent photons in the UV spectral region [11]. Secondary electrons can be very energetic, up to a few kiloelectronvolts, but those with energies in the range of 14 eV to 15 eV contribute the most to triggering radioluminescence production, according to available maximal excitation cross-section values [12]. On average, a single 5 MeV alpha particle generates in air about 100 photons in the UV-A and UV-B spectral regions [13].
Spanning these regions, the radioluminescence emission is given by the second positive system, 2P (\(C^{3}\Pi_{u}\to B^{3}\Pi_{g}\)), and the first negative system, 1N (\(B^{2}\Sigma_{u}^{+}\to X^{2}\Sigma_{g}^{+}\)), of the nitrogen molecule. In contrast, the emission yield between 200 and 280 nm (UV-C) is considerably lower, at about 0.9 photons per MeV of deposited energy [4]. However, this yield can be boosted by more than three orders of magnitude by adding trace amounts of nitric oxide to the nitrogen atmosphere surrounding an alpha source [14]. NO produces radioluminescence over the \(\gamma\) band system (\(A^{2}\Sigma^{+}\rightarrow\ X^{2}\Pi\)), between 200 nm and 300 nm, which is almost exclusively within the UV-C spectral region. This band system is mainly produced by excitation transfer from N\({}_{2}\) to NO molecules [15]. The radiative transitions between the vibrational levels (denoted as \(\nu\)) of these specific energy states shape the radioluminescence spectrum used in the remote detection of alpha sources in air [1]. Studies on modeling the generation of radioluminescence photons from radioactive sources using Geant4 are relatively scarce. Roberts [16] simulated the solar-blind photon flux and 2D images of alpha, beta, and gamma sources by recording the radioluminescence photons produced at 10 m from the radioactive source on a 40 cm diameter detector. Although the simulation method used was not described in detail, it provides valuable information related to radioluminescence modeling using the Geant4 toolkit. Another work is that of Thompson et al. [17], who, based on molecular ionization, excitation, and emission models, developed a new physics model for predicting the number of photons emitted in the UV-A and UV-B regions from excited states of atmospheric nitrogen. The "Air Fluorescence Model" [17] was implemented in the Geant4 framework and used for studying the UV photons induced in air by alpha, gamma, and beta radiation emitted by radioactive sources, as well as for investigating the localization of Am-241 (\(\alpha\)-emitter), Co-60 (\(\gamma\)-emitter), and P-32 (\(\beta\)-emitter) through air fluorescence (radioluminescence). The model was also used for predicting the fluorescence yields of Po-210, Am-241, U-235, Co-60, P-32, and Sr-90. This work is focused on providing a simplified approach to modeling alpha-induced radioluminescence production in Geant4. The simulation was performed using integrated physics models, rather than implementing a dedicated one for simulating the atomic physics processes involved in air luminescence. This way, the performance of Geant4 capabilities in modeling this effect was studied by employing the pre-existing OpNovice example. Building on this example, the simulation was expanded to consider air in normal atmospheric conditions (1 atm pressure and 22\({}^{\circ}\)C temperature) as a scintillating material. The primary particles are set to alpha particles, and the generated radioluminescence comes in the form of secondary radiation as optical photons, provided the processes from the G4Scintillation source class are enabled. The scintillating medium is characterized by its optical properties, such as the photon emission spectrum, scintillation yield, scintillation time constants, etc. To model alpha particles emitted by radioactive sources, such as a structured Am-241 source and a beam of accelerated alpha particles, the G4ParticleGun source class is used.
In both cases, the default electromagnetic physics processes were replaced with the ones from the G4EmLivermorePhysics model class to enable the generation of low-energy (minimum 10 eV) electron interactions with high accuracy below 100 keV. Lastly, the generation of optical photons from excited air molecules is activated by instantiating the G4OpticalPhysics source class in the main simulation source code, OpNovice.cc. By adding it to the list of physics constructors together with the G4EmLivermorePhysics and G4RadioactiveDecayPhysics source classes, all physics interactions leading to radioluminescence production are enabled. Fig. 1 schematically shows the relevant source classes used in the simulation of alpha particle-induced radioluminescence. The following method describes how to simulate the generation of radioluminescence using semi-empirical radiative transition parameters of the N\({}_{2}\) and NO molecules. The optical properties of a scintillating material are provided by the user and stored as vector entries in a material property table (MPT). The MPT is instantiated in the detector description user class OpNoviceDetectorDescription and linked to the scintillation processes. To use air as the scintillating medium, the radiative transition parameters shown in Table 1 were registered as optical properties in the MPT. The most notable parameters used in the simulation are the band origin of the transition (wavelength), the Franck-Condon factor \(q_{\nu^{\prime}\nu^{\prime\prime}}\) (transition probability), the Einstein coefficient \(A_{\nu^{\prime}\nu^{\prime\prime}}\) (spontaneous emission rate), the radiative lifetime \(\tau_{\nu^{\prime}}\), and the branching ratio \(B_{\nu^{\prime}\nu^{\prime\prime}}\), calculated using (1), where \(\nu^{\prime}\) denotes the vibrational level of the higher-energy state, \(\nu^{\prime\prime}\) is the vibrational level of the lower-energy or ground state of a molecule, and \(\nu=0,1,2,\ldots\) \[B_{\nu^{\prime}\nu^{\prime\prime}}=\frac{A_{\nu^{\prime}\nu^{\prime\prime}}}{\sum_{\nu^{\prime\prime}}A_{\nu^{\prime}\nu^{\prime\prime}}} \tag{1}\] In the MPT, both energy-dependent and constant optical properties can be registered. Some of the properties used in the simulation are displayed in Table 1, where the 2P, 1N, and \(\gamma\) band systems (BS) are marked out in column 0. Column 1 depicts the wavelengths of the spectral lines identified in radioluminescence measurements performed at the PTB Ion Accelerator Facility (PIAF) [1], while column 2 contains the band origins available in the scientific literature, characteristic of the \(\nu^{\prime}\rightarrow\nu^{\prime\prime}\) vibronic transitions, listed in Table 9 and Table 15 of reference [18] for the 2P and 1N systems, respectively, and in Table III of reference [19] for the \(\gamma\) band system. The values displayed in the following columns correspond to these wavelengths. Columns 3 and 4 display the vibrational levels of the upper-energy (\(\nu^{\prime}\)) and lower-energy (\(\nu^{\prime\prime}\)) electronic states involved in each radiative transition. Columns 5 and 6 contain the values of the Franck-Condon factors and Einstein coefficients, respectively, extracted from the previously mentioned tables of references [18] for N\({}_{2}\) and [19] for NO. Column 7 is filled with branching ratios calculated using (1).
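As a small numerical illustration of Eq. (1), and of the wavelength-to-energy conversion used below when sampling the photonEnergy vector, consider the following sketch. It takes the \(\nu^{\prime}=0\) rows of the 2P system from Table 1; note that Eq. (1) sums over all lower levels \(\nu^{\prime\prime}\), so ratios computed from this truncated list differ slightly from the tabulated \(B\) values.

```python
# Minimal sketch of the branching-ratio and nm -> eV arithmetic (Eq. 1).
import numpy as np

A = np.array([1.31e7, 8.84e6, 3.56e6])   # A_{0->0}, A_{0->1}, A_{0->2} in s^-1
B = A / A.sum()                           # Eq. (1), over the listed levels only

lam_nm = np.array([337.0, 357.6, 380.4])  # band origins (Table 1, column 2)
E_eV = 1239.84193 / lam_nm                # E [eV] = hc / lambda, hc in eV*nm
```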
Lastly, column 8 shows the radiative lifetime for the 2P system (averaged over the levels \(\nu^{\prime}=0,1,2,3,4\)) and the 1N system, taken from Table 19 of reference [18], while the radiative lifetime measured for the \(\gamma\) system is displayed in Table IV of reference [19].

Figure 1: Schematic representation of Geant4 source classes used in radioluminescence modeling.

\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
BS & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\nu^{\prime}\) & \(\nu^{\prime\prime}\) & \(q_{\nu^{\prime}\rightarrow\nu^{\prime\prime}}\) & \(A_{\nu^{\prime}\rightarrow\nu^{\prime\prime}}\) & \(B_{\nu^{\prime}\rightarrow\nu^{\prime\prime}}\) & \(\tau_{\nu^{\prime}}\) \\
 & (nm) & (nm) & & & & (s\({}^{-1}\)) & & (ns) \\
\hline \hline
 & 337 & 337 & & 0 & 4.54E-01 & 1.31E+07 & 4.86E-01 & \\
 & 357.4 & 357.6 & 0 & 1 & 3.27E-01 & 8.84E+06 & 3.28E-01 & \\
 & 380 & 380.4 & & 2 & 1.45E-01 & 3.56E+06 & 1.32E-01 & \\
\cline{2-8}
 & 315.6 & 315.8 & & 0 & 3.92E-01 & 1.19E+07 & 4.46E-01 & \\
 & 334.9 & 333.8 & & 1 & 2.26E-02 & 5.87E+05 & 2.20E-02 & \\
 & 353.5 & 353.6 & 1 & 2 & 2.05E-01 & 5.54E+06 & 2.08E-01 & \\
 & 375.2 & 375.4 & & 3 & 1.98E-01 & 4.93E+06 & 1.85E-01 & \\
 & - & 399.7 & & 4 & 1.10E-01 & 2.43E+06 & 9.11E-02 & \\
\cline{2-8}
 & 297.1 & 297.6 & & 0 & 1.33E-01 & 3.97E+06 & 1.51E-01 & \\
2P & 313.4 & 313.5 & 2 & 1 & 3.42E-01 & 1.01E+07 & 3.85E-01 & 38.4 \\
 & - & 330.9 & & 2 & 2.36E-02 & 7.99E+05 & 3.05E-02 & \\
 & - & 349.9 & & 3 & 6.42E-02 & 1.71E+06 & 6.52E-02 & \\
 & 370.4 & 370.9 & & 4 & 1.61E-01 & 4.04E+06 & 1.54E-01 & \\
 & 393.9 & 394.2 & & 5 & 1.39E-01 & 3.14E+06 & 1.20E-01 & \\
\cline{2-8}
 & 281.7 & 281.8 & & 0 & 2.02E-02 & 5.28E+05 & 2.06E-02 & \\
 & - & 296.1 & & 1 & 2.53E-01 & 7.30E+06 & 2.85E-01 & \\
 & 310.7 & 311.5 & 3 & 2 & 2.11E-01 & 5.94E+06 & 2.32E-01 & \\
 & 327.5 & 328.4 & & 3 & 8.90E-02 & 2.85E+06 & 1.11E-01 & \\
 & 267.3 & 268.4 & 4 & 0 & 9.50E-04 & 1.38E+04 & 5.65E-04 & \\
\cline{2-8}
1N & 391.1 & 391.2 & 0 & 0 & 6.63E-01 & 1.14E+07 & 7.10E-01 & 62.3 \\
\hline
 & 226.1 & 226.5 & & 0 & 1.65E-01 & 9.26E+05 & 1.90E-01 & \\
 & 236.2 & 236.6 & & 1 & 2.62E-01 & 1.37E+06 & 2.81E-01 & \\
 & 247.1 & 247.4 & 0 & 2 & 2.36E-01 & 1.15E+06 & 2.36E-01 & \\
\(\gamma\) & 258.7 & 259 & & 3 & 1.60E-01 & 7.25E+05 & 1.49E-01 & \\
 & 271.3 & 271.6 & & 4 & 9.15E-02 & 3.86E+05 & 7.92E-02 & \\
 & 285.1 & 285.3 & & 5 & 4.65E-02 & 1.83E+05 & 3.75E-02 & \\
\hline
\end{tabular}
\end{table}
Table 1: Semi-empirical radiative transition parameters used in air radioluminescence modeling.

To use the parameters displayed in Table 1 as optical properties in the MPT, a _photonEnergy_ vector was created and its entries were sampled from the empirical data in column 1, converting each wavelength into an energy. For cases when a radioluminescence peak could not be resolved or the fitting results were unsatisfactory, the values from the second column were used instead. The first property registered against this energy vector is the scintillation component, which refers to the emission probability. The input values describe the intensities of the spectral lines in the simulated radioluminescence spectrum. To quantify the probability that a molecule undergoes a vibronic transition \(\nu^{\prime}\rightarrow\nu^{\prime\prime}\), and thereby shape the radioluminescence spectrum according to the relative emission probabilities, the product of the Franck-Condon factors (column 5) and branching ratios (column 7) was used. Three different scintillation components were created and registered in the MPT.
These are the _secondPositive_, _firstNegative_, and _gammaNO_ vectors, which use as entries the values of the \(q_{\nu^{\prime}\rightarrow\nu^{\prime\prime}}\cdot B_{\nu^{\prime}\rightarrow\nu^{\prime\prime}}\) product, corresponding to the 2P, 1N, and \(\gamma\) regions, respectively. An optical property which does not depend on the energy vector is the scintillation yield, which was set to 19 photons per MeV [13]. To account for the quenching effects of the O\({}_{2}\) and H\({}_{2}\)O molecules on the air radioluminescence spectrum [1], the scintillation yield was separated into three relative yields, 0.65 (2P), 0.2 (1N), and 0.15 (\(\gamma\)), which were attributed to their specific scintillation components. Lastly, the _scintillation-time-constant_ property was added for each region, using the values from column 8. Within the OpNoviceDetectorDescription class, the geometry of the simulation was also defined. The scintillating medium was set to a cube with a side length of 50 meters containing air with a normal molecular composition, at 22\({}^{\circ}\)C and 1 atm pressure. At its centre, the geometry of the extended alpha source is described as a 30 mm \(\times\) 100 mm Ag-foil with a thickness of 0.25 mm, on which an active layer of Am-241 is placed and covered with a 2 \(\upmu\)m Au-foil. The thickness of the Am-241 active area is 1 \(\upmu\)m, and its width and length of 20 mm and 100 mm, respectively, are consistent with the Au layer [1]. The dimensions of the active area and the type of radionuclide used were set in the Am241.mac macro file. Similarly, the instructions for simulating the beam of alpha particles were written in alphbeam.mac. The simulated beam is characterized by a Gaussian profile and a diameter of 100 \(\upmu\)m. The user classes OpNoviceSteppingAction, OpNoviceEventAction, and OpNoviceRunAction were modified to enable recording the simulated radioluminescence spectrum, the track length of the emitted alpha particles, and the deposited energy per alpha particle. The simulated values are registered in histograms and can also be printed to data files. Lastly, the UV image of the alpha source is registered using a secondary-radiation primitive scorer, set up to filter only optical photons. This method scores the number of photons traversing a defined surface and is implemented in the Am241.mac macro file.

## 3 Results and Discussion

The simulated radioluminescence spectrum was registered and scaled to the measurement results in [4], with respect to the 337 nm peak. Fig. 2 shows that the simulated spectrum is in good agreement with the experimental one in terms of the wavelength and relative intensity of each peak. Although the relative yield used for simulating the UV-C component (0.15) leads to overestimating the scintillation yield in this region, it was the minimum value which could be used. Further lowering the relative yield led to no production of UV-C photons. However, without using relative yields, the simulation cannot properly account for the quenching effects of oxygen and water molecules. The deviations in the relative intensity of the simulated spectral lines compared to the measured peak intensities could be attributed to various factors. These include the absence of the quenching mechanisms for the N\({}_{2}\) and NO molecules in the physics models used, and the averaging of lifetimes corresponding to the \(\nu^{\prime}\) vibrational levels of the 2P system.
The simulation code was used to generate the radioluminescence images of the extended Am-241 source and the beams of alpha particles accelerated at 5.1 MeV and 8.3 MeV, depicted in Fig. 3. The track length of alpha particles was also computed. The results show that for a pure Am-241 source, the simulated mean track length is 41.7 \(\pm\) 1.7 mm, which is lowered to 23.6 \(\pm\) 7.6 mm in the case of the extended Am-241 source, due to an increased energy loss as alpha particles pass through the gold cover. The ultraviolet image of the Am-241 source shows that the boundaries of radioluminescence production include the length and width of the active area, as well as the mean track length of alpha particles. In the case of alpha particles accelerated at 5.1 MeV, the computed mean track length of \(37.67\pm 1.19\) mm is consistent with the ultraviolet image of the Bragg peak illustrated in Fig. 3. As expected, by increasing the energy of the accelerated alpha particles to 8.3 MeV, the extent of the Bragg peak expands to approximately 80 mm. Therefore, the simulation predicts well both the length of the radioluminescence image and the location where the alpha particle energy loss is greatest (the Bragg peak). The application effectively models the radioluminescence production in the proximity of alpha sources placed in air, allowing it to be used for the optimization of radioluminescence detection technologies. Fig. 3 indicates that the application could be a useful tool in characterizing the mapping and imaging capabilities of radioluminescence setups. By coupling this method with ray-tracing simulations and detector modeling, the optimization and development of radioluminescence setups could be advanced further.

Figure 2: The radioluminescence emission spectrum of air spanning the UV region. The simulated spectrum is displayed in blue for a simulation of \(10^{6}\) decays of the Am-241 source. The experimental spectrum measured in [4] is presented in red for comparison.

Figure 3: The ultraviolet images of the simulated Am-241 source (top) and the simulated beam of alpha particles accelerated at 5.1 MeV (middle) and 8.3 MeV (bottom).

## 4 Conclusion

A Geant4 [20] application was developed based on the OpNovice example from the toolkit package. It demonstrates an accessible method to simulate radioluminescence production using the presented input parameters, namely the radiative transition parameters specific to the N\({}_{2}\) and NO molecules responsible for air luminescence. In the context of the RemoteALPHA project, the simulation was designed to model the radioluminescence effect by employing two types of alpha sources, an extended Am-241 sample and accelerated alpha particles. The application was demonstrated to compute the spectrum of emitted optical photons, as well as the ultraviolet image of the alpha sources, indicating that it can serve as a tool in optimizing remote alpha detection systems. To support future research in this field, the application is available to readers at [21]. The authors wish to express their gratitude to the European Association of National Metrology Institutes (EURAMET) and the Physikalisch-Technische Bundesanstalt (PTB) for their support in the participation of Claudia Olaru in the research activities related to the EMPIR Researcher Mobility Grant (RMG). They also extend their appreciation to Maksym Luchkov, Dr. Faton Krasniqi, Dr. Volker Dangendorf, and Dr.
Ulrich Giesen of PTB for their assistance during Claudia Olaru's RMG visit at PTB and their support with experiments at the PTB Ion Accelerator Facility. Additionally, the authors are thankful to Prof. Dr. Ionel Lazanu of the University of Bucharest, Faculty of Physics, for his unwavering support. The project 19ENV02 RemoteALPHA has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. 19ENV02 RemoteALPHA denotes the EMPIR project reference. This work was partly funded by the Romanian Ministry of Research, Innovation and Digitalization, from the Core Project PN: 23 21 02 03.
2305.01904
Robust Multi-bit Natural Language Watermarking through Invariant Features
Recent years have witnessed a proliferation of valuable original natural language contents found in subscription-based media outlets, web novel platforms, and outputs of large language models. However, these contents are susceptible to illegal piracy and potential misuse without proper security measures. This calls for a secure watermarking system to guarantee copyright protection through leakage tracing or ownership identification. To effectively combat piracy and protect copyrights, a multi-bit watermarking framework should be able to embed adequate bits of information and extract the watermarks in a robust manner despite possible corruption. In this work, we explore ways to advance both payload and robustness by following a well-known proposition from image watermarking and identify features in natural language that are invariant to minor corruption. Through a systematic analysis of the possible sources of errors, we further propose a corruption-resistant infill model. Our full method improves upon the previous work on robustness by +16.8% point on average on four datasets, three corruption types, and two corruption ratios. Code available at https://github.com/bangawayoo/nlp-watermarking.
KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak
2023-05-03T05:37:30Z
http://arxiv.org/abs/2305.01904v2
# Robust Multi-bit Natural Language Watermarking ###### Abstract Recent years have witnessed a proliferation of valuable original natural language contents found in subscription-based media outlets, web novel platforms, and outputs of large language models. However, these contents are susceptible to illegal piracy and potential misuse without proper security measures. This calls for a secure watermarking system to guarantee copyright protection through leakage tracing or ownership identification. To effectively combat piracy and protect copyrights, a multi-bit watermarking framework should be able to embed adequate bits of information _and_ extract the watermarks in a robust manner despite possible corruption. In this work, we explore ways to advance both payload and robustness by following a well-known proposition from image watermarking and identify features in natural language that are invariant to minor corruption. Through a systematic analysis of the possible sources of errors, we further propose a corruption-resistant infill model. Our full method improves upon the previous work on robustness by +16.8% point on average on four datasets, three corruption types, and two corruption ratios.1 Footnote 1: Department of Intelligence and Information, Graduate School of Convergence Science and Technology. [https://github.com/bangawayoo/nlp-watermarking](https://github.com/bangawayoo/nlp-watermarking) ## 1 Introduction Recent years have witnessed a proliferation of original and valuable natural language contents such as those found in subscription-based media outlets (e.g. Financial Times, Medium), web novel platforms (e.g. Wattpad, Radish) - an industry that has shown rapid growth, especially in the East Asian market [14, 15] - and texts written by human-like language models [1, 13, 12]. Without proper security measures, however, these contents are susceptible to illegal piracy and distribution, financially damaging the creators of the content and the market industry. In addition, the recent emergence of human-like language models like ChatGPT has raised concerns regarding the mass generation of disinformation [11]. This calls for a secure watermarking system to guarantee copyright protection or detect misuse of language models. Digital watermarking is a technology that enables the embedding of information into multimedia (e.g. image, video, audio) in an unnoticeable way without degrading the original utility of the content. Through embedding information such as owner/purchaser ID, its application includes leakage tracing, ownership identification, meta-data binding, and tamper-proofing. To effectively combat intentional evasion by the adversary or unintentional digital degradation, a watermarking framework should not only be able to embed adequate bits of information but also demonstrate robustness against potential corruption [16, 15]. Watermarking in image and video contents has been extensively explored for pre-deep learning methods [13, 14, 15]. With the advent of deep neural networks, deep watermarking has emerged as a new paradigm that improves the three key aspects of watermarking: payload (i.e. the number of bits embedded), robustness (i.e. accuracy of the extracted message), and quality of the embedded media. Natural language watermarking uses text as the carrier for the watermark by imperceptibly modifying semantics and/or syntactic features. 
As opposed to altering the visual appearances [16], this type of modification makes natural language watermarking resistant to piracy based on manual transcription. Previous research has focused on techniques such as lexical substitution with predefined rules and dictionaries or structural transformation [17, 18, 19]. Through utilizing neural networks, recent works have either replaced the predefined set of rules with a learning-based methodology (Abdelnabi and Fritz, 2021, AWT), thereby removing heuristics, or vastly improved the quality of lexical substitution (Yang et al., 2022, ContextLS). Despite the superiority over traditional methods, however, recent works are not without their limitations: AWT is prone to error during message extraction, especially when a higher number of bits are embedded, and occasionally generates deteriorated watermarked samples due to its entire reliance on a neural network; ContextLS has a fixed upper bound on the payload and, more importantly, does not consider extracting the bit message under corruption, which leads to low robustness. This work strives to advance both the payload and robustness of natural language watermarking. To build an effective robust watermarking system for natural language, we draw inspiration from a well-known proposition of a classical image watermarking work (Cox et al., 1997): that watermarks should _"be placed explicitly in the perceptually most significant components"_ of an image. If this is achieved, the adversary must corrupt the content's fundamental structure to destroy the watermark. This degrades the utility of the original content, rendering the purpose of pirating futile. However, embedding the watermark directly on the "perceptually most significant components" is only possible for images due to the inherent perceptual capacity of images. That is, modifications of individual pixels are far less perceptible than modifications of individual words. Due to this, while we adhere to the gist of the proposition, we do not embed directly on the most significant component. Instead, we identify features that are semantically or syntactically fundamental components of the text and, thus, invariant to minor modifications of the text. Then we use them as anchor points to pinpoint the position of watermarks. After formulating a general framework for robust natural language watermarking, we empirically study the effectiveness of various potential invariant features derived from the semantic and syntactic components. Through a step-by-step analysis of the possible sources of errors during watermark extraction, we further propose a corruption-resistant infill model that is trained explicitly to be robust to possible types of corruption. Our experimental results encompassing four datasets of various writing styles demonstrate the robustness of (1) relying on invariant features for watermark embedding and (2) using a robustly trained infill model. The absolute robustness improvement of our full method compared with the previous work is +16.8% point on average on the four datasets, three corruption types, and two corruption ratios.

## 2 Preliminaries

### 2.1 Problem Formulation of Watermarking

In watermarking, the sender embeds a secret message \(m\) into the cover text \(X\) to attain the watermarked text \(X_{\text{wm}}=\texttt{embed}(X,m)\). A cover text is the original document that is to be protected. A message, for instance, can be the ID of a purchaser or owner of the document, represented in bits.
The receiver2 attempts to extract the embedded message \(\hat{m}=\texttt{extract}(\tilde{X}_{\text{wm}})\) from \(\tilde{X}_{\text{wm}}=\texttt{corrupt}(X_{\text{wm}})\), which may be corrupted via intentional tampering by an adversarial party as well as through natural degradation (e.g., typos) that may occur during distribution. We focus on blind watermarking, which has no access to the original cover text. The main objectives of the sender and the receiver are (1) to attain \(X_{\text{wm}}\) that is semantically as similar as possible to \(X\), so as not to degrade the utility of the original content, and (2) to devise the _embed_ and _extract_ functions such that the extracted message is accurate.

Footnote 2: Despite the separate terms (the sender and receiver), the two parties may be identical.

### 2.2 Corruptions on \(X_{\text{wm}}\)

Conversely, the adversary attempts to interfere with the message extraction phase by corrupting the watermarked text, while maintaining the original utility of the text. For instance, an illegal pirating party will want to avoid the watermark being used to trace the leakage point while still wanting to preserve the text for illegal distribution. This constrains the adversary from corrupting the text too much, both quantitatively and qualitatively. To this end, we borrow techniques from adversarial attacks (Jin et al., 2020; Morris et al., 2020) to alter the text while maintaining its original semantics. We consider word insertion (Li et al., 2021), deletion (Feng et al., 2018), and substitution (Garg and Ramakrishnan, 2020) across 2.5% to 5.0% corruption ratios of the number of words in each sentence, following Abdelnabi and Fritz (2021). The number of words inserted/substituted/deleted is equal to \(\texttt{round}(CR\times N)\), where \(CR\) is the corruption ratio and \(N\) is the number of words in the sentence. This ensures that shorter sentences, containing little to no room for corruption, are not severely degraded. To additionally constrain the corrupted text from diverging from the original text, we use the pre-trained sentence transformer3 _all-MiniLM-L6-v2_, which was trained on multiple datasets consisting of 1 billion pairs of sentences, to filter out corrupted texts that have cosine similarity less than 0.98 with the original text.

Footnote 3: [https://www.sbert.net/](https://www.sbert.net/)

### 2.3 Infill Model

Similar to ContextLS [11], we use a pre-trained infill model to generate the candidates of watermarked sets. Given a masked sequence \(X_{\backslash i}=\{x_{1},\cdots,x_{i-1},\text{MASK},x_{i+1},\cdots,x_{t}\}\), an infill language model can predict the appropriate words to fill in the mask(s). An infill model parameterized by \(\theta\) outputs the probability distribution of \(x_{i}\) over the vocabulary (\(v\)): \[P(X_{\backslash i}|\theta)=p_{i}\in\mathbb{R}_{+}^{|v|}. \tag{1}\] We denote the set of top-\(k\) token candidates outputted by the infill model as \[\{t_{1}^{i},\cdots,t_{k}^{i}\}=\textsc{infill}(X_{\backslash i};k). \tag{2}\]

## 3 Framework for Robust Natural Language Watermarking

Our framework for natural language watermarking is composed of two phases. Phase 1 is obtaining a state \(S\) from the text \(X\) (or \(\tilde{X}_{\text{wm}}\)) using some function \(g_{1}\). \(S\) can be considered as the feature abstracted from the text _that contains sufficient information_ to determine the embedding process. Phase 2 comprises a function \(g_{2}\) that takes \(X\) and \(S\) as inputs to generate the valid watermarked texts.
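To make the infilling operation of Eq. (2) concrete, here is a minimal sketch of top-\(k\) candidate generation with an off-the-shelf masked language model via the Hugging Face fill-mask pipeline; the example sentence and model choice are illustrative only, not the exact setup used in the experiments.

```python
# Minimal sketch of INFILL(X_\i; k): top-k candidate tokens for one mask.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-cased", top_k=32)

masked = "The movie was surprisingly [MASK] despite its low budget."
candidates = [c["token_str"] for c in unmasker(masked)]   # {t_1, ..., t_k}
```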
We rely on the mask infilling model to generate the watermarked texts, which makes \(S\) the positions of the masks. The infill model generates the watermarked text \(X_{\text{wm}}\) depending on the bit message. A general overview is shown in Figure 1. ### Phase 1: Mask Position Selection For the watermarking system to be robust against corruption, \(S\) should be chosen such that it depends on the properties of the text that are relatively invariant to corruption. That is, \(S\) should be a function of the _invariant features_ of the text. More concretely, an ideal _invariant feature_ is characterized by: 1. A significant portion of the text has to be modified for it to be altered. 2. Thus, it is invariant to the corruptions that preserve the utility (e.g. semantics, nuance) of the original text. By construction, when \(S\) is a function of an ideal invariant feature, this allows recovering the identical state \(S\) for both \(X\) and \(\tilde{X}_{\text{wm}}\), which will enhance the robustness of the watermark. In essence, we are trying to find which words should be masked for the watermark to be robust. Given a state function \(g_{1}(\cdot)\), let \(S=g_{1}(X)\), \(\tilde{S}=g_{1}(\tilde{X}_{\text{wm}})\). Then, we define the **robustness of**\(g_{1}\) as follows: \[\mathcal{R}_{g_{1}}\coloneqq\mathbb{E}[\mathbb{1}\left(S=\tilde{S}\right)]. \tag{3}\] Here, \(\mathbb{1}\) denotes the indicator function and \(\mathbb{E}\) is the expectation operation. We sought to discover invariant features in the two easily attainable domains in natural language: semantic and syntactic components. An illustration of these components is shown in Figure 1 Left. Figure 1: Leftmost shows an example of a cover text and its keyword and syntactic dependency components (only partially shown due to space constraint); Middle shows Phase 1 and Phase 2; Rightmost shows an example of a valid watermark sample. **Keyword Component** On the semantic level, we first pinpoint keywords that ought to be maintained for the utility of the original text to be maintained. Our intuition is that keywords are semantically fundamental parts of a sentence and thus, are maintained and invariant despite corruption. This includes proper nouns as they are often not replaceable with synonyms without changing the semantics (e.g. name of a movie, person, region), which can be extracted by an off-the-shelf Named Entity Recognition model. In addition, we use an unsupervised method called YAKE Campos et al. (2018) that outputs semantically essential words. After extracting the keywords, we use them as anchors and can determine the position of the masks by a simple heuristic. For instance, the word adjacent to the keyword can be selected as the mask. **Syntactic Dependency Component** On the syntactic level, we construct a dependency parsing tree employing an off-the-shelf parser. A dependency parser describes the syntactic structure of a sentence by constructing a directed edge between a head word and its dependent word(s). Each dependent word is labeled as a specific type of dependency determined by its grammatical role. We hypothesize that the overall grammatical structure outputted by the parsing tree will be relatively robust to minor corruptions in the sentence. To select which type of dependency should be masked, we construct a predefined ordering to maintain the semantics of the watermarked sentences. 
The ordering is constructed by masking and substituting each type of dependency using an infill model and comparing the entailment scores computed by an NLI model (e.g., RoBERTa-Large-NLI4) on a separate held-out dataset, as shown in Alg. 1 (a more detailed procedure and the full list are provided in Appendix A.4). Using the generated ordering, we mask each dependency until the target number of masks is reached. For both types of components (semantic and syntactic), we ensure that keywords are not masked.

Footnote 4: [https://huggingface.co/roberta-large-mnli](https://huggingface.co/roberta-large-mnli)

So how well do the aforementioned components fare against corruption? The results in Table 1 bolster our hypothesis that keyword and syntactic components may indeed act as invariant features, as both show considerably high robustness across three different types of corruption, measured by the ratio of mask-matching samples. As opposed to this, ContextLS (Yang et al., 2022), which does not rely on any invariant features, has a drastically lower \(\mathcal{R}_{g_{1}}\). This signifies that a different word is masked out due to the corruption, which hampers the watermark extraction process.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Robustness & \begin{tabular}{c} Corr. \\ Types \\ \end{tabular} & \begin{tabular}{c} ContextLS \\ (Yang et al., 2022) \\ \end{tabular} & Keyword & Syntactic \\
\hline
\multirow{3}{*}{\(\mathcal{R}_{g_{1}}\)} & D & 0.656 & 0.944 & 0.921 \\
 & I & 0.608 & 0.955 & 0.959 \\
 & S & 0.646 & 0.974 & 0.949 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Robustness of \(g_{1}\) (\(\mathcal{R}_{g_{1}}\)) for ContextLS and ours (Keyword, Syntactic) against three corruption types: Deletion (D), Insertion (I), and Substitution (S) under a 5% corruption rate on IMDB. See Appendix Table 9 for full results.

### Phase 2: Watermark Encoding

In Phase 2, a set of valid watermarked texts is generated by \(g_{2}(X,S)\) to embed or extract the message. For ours, since the state is the set of mask positions, this comprises using an infill model to select top-\(k\) words and alphabetically sorting them to generate a valid set of watermarks. Concretely, using the notation from §2.3, \(g_{2}(X,S)\) can be divided into the following steps (a minimal sketch follows the list):

1. \(\mathcal{T}_{i}=\{t^{i}_{1},\cdots,t^{i}_{k}\}=\textsc{infill}(X_{\setminus i};k_{1}),\forall i\in S\).
2. Filter \(\mathcal{T}_{i}\) to remove any punctuation marks, subwords, and stopwords. Update \(\mathcal{T}_{i}\) by selecting the top-\(k_{2}\) (\(\leq k_{1}\)) tokens and sorting them alphabetically.
3. Form a cartesian product of the token sets \(\mathbb{T}=\mathcal{T}_{s_{1}}\times\cdots\times\mathcal{T}_{s_{j}}\), where \(j=|S|\). Let \(\mathbb{X}\) be the set of texts with the corresponding tokens substituted \((|\mathbb{X}|=|\mathbb{T}|)\).
4. Generate a _valid_ watermarked set \(\mathbb{X}_{\text{wm}}=\{X_{i}\in\mathbb{X}\,|\,g_{1}(X_{\text{wm}})=g_{1}(X_{i})\}\subseteq\mathbb{X}\) and assign a bit message to each element of \(\mathbb{X}_{\text{wm}}\).
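The sketch of steps (1)-(4) promised above follows; the helpers `infill` (Eq. 2), `substitute` (which fills the masked positions), and the state function `g1` are assumed interfaces for illustration, not the authors' released code.

```python
# Minimal sketch of the Phase-2 encoding steps under assumed helper functions.
from itertools import product

def watermark_set(X, S, infill, substitute, g1, k1=32, k2=4):
    cand = {}
    for i in S:
        toks = [t for t in infill(X, i, k1) if t.isalpha()]   # (2) crude filter
        cand[i] = sorted(toks[:k2])                           # top-k2, sorted
    texts = [substitute(X, dict(zip(S, combo)))               # (3) product over masks
             for combo in product(*(cand[i] for i in S))]
    valid = [Xi for Xi in texts if g1(Xi) == g1(X)]           # (4) state must match
    return {m: Xi for m, Xi in enumerate(valid)}              # bit message -> text
```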
Thus, our method is able to extract the watermark without any error when there is no corruption. However, what happens when there _is_ corruption in the watermarked texts? Even if the exact state is recovered, the same set of watermarked texts may not be recovered as the infill model relies on local contexts to fill in the masks. Noting this in mind, we can also define the **robustness of \(g_{2}\)** as \[\mathcal{R}_{g_{2}}\coloneqq\mathbb{E}[\mathbbm{1}\,(g_{2}(X,S)=g_{2}(\tilde {X}_{\text{wm}},\tilde{S}))]. \tag{4}\] Figure 2 Right shows \(\mathcal{R}_{g_{1}}\) and the difference between \(\mathcal{R}_{g_{1}}\) and \(\mathcal{R}_{g_{2}}\). We observe that \(\mathcal{R}_{g_{2}}\) is significantly lower than \(\mathcal{R}_{g_{1}}\) for ours when we choose the infill model to be a vanilla pretrained language model such as BERT. While the type of invariant features does influence \(\mathcal{R}_{g_{2}}\), our key takeaway is that \(\mathcal{R}_{g_{2}}\) is substantially lower than \(\mathcal{R}_{g_{1}}\) in all cases5. Footnote 5: Larger \(\mathcal{R}_{g_{2}}\) does not necessarily imply a lower bit error rate as the extent of the discrepancy between \(g_{2}(X,S)\) and \(g_{2}(\tilde{X}_{\text{wm}},\tilde{S})\) is not measured in the metric. Interestingly, for ContextLS the gap between \(\mathcal{R}_{g_{1}}\) and \(\mathcal{R}_{g_{2}}\) is nearly zero, showing that Phase 1 is already a bottleneck for achieving robustness. The smaller gap can be explained by the use of smaller top-\(k_{2}\)(=2) and the incremental watermarking scheme, which incrementally increases the sequence to infill. This may reduce the possibility of a corrupted word influencing the infill model. ### Robust Infill Model To overhaul the fragility of Phase 2, we build an infill model robust to possible corruptions by finetuning \(\theta\) to output a consistent word distribution when given \(X_{\setminus i}\) and \(\tilde{X}_{\setminus i}\), a corrupted version of \(X_{\setminus i}\). This can be achieved by minimizing the divergence of the two distributions \(p_{i}\) and \(\tilde{p}_{i}\) where \(\tilde{p}_{i}\) refers to the word distribution of the corrupted sequence, \(\tilde{X}_{\setminus i}\). Instead of using the original word distribution as the target distribution, which is densely populated over \(>\) 30,000 tokens (for BERT-base), we form a sparse target distribution over the top-\(k_{1}\) tokens by zeroing out the rest of the tokens and normalizing over the \(k_{1}\) tokens. This is because only the top-\(k_{1}\) tokens are used in our watermarking frame (see SS3.2). In addition, to improve the training dynamics, we follow the masking strategy proposed in SS3.1 to choose the words to masks, instead of following the random masking strategy used in the original pretraining phase. This aligns distributions of the masked words at train time and test time, which leads to a better performance (robustness) given the same compute time. As opposed to this, since the original masking strategy randomly selects a certain proportion of words to mask out, this will provide a weaker signal for the infill model to follow. We use the Kullback-Leibler (KL) divergence as our metric. 
More specifically, we use the 'reverse KL' as our loss term, in which the predicted distribution (as opposed to the target distribution) is used to weigh the difference of the log distributions, as done in Variational Bayes (Kingma and Welling, 2014). This prevents the model from outputting a "zero-forcing" predicted distribution. The consistency loss between the two distributions is defined by \[\mathcal{L}_{con}=\sum_{i\in S}\text{KL}(\tilde{p_{i}}\,|\,p_{i}), \tag{5}\] \[\text{where}\quad\tilde{p_{i}}=P(\tilde{X}_{\backslash i}|\theta), \tag{6}\] \[p_{i}=P(X_{\backslash i}|\text{freeze}(\theta)) \tag{7}\] for all \(i\) of the masked tokens. The graph outputting \(p\) is detached so that the model is trained to output a consistent prediction when given a corrupted input. As expected, applying the robust infill model to the Syntactic component leads to a noticeable improvement in \(\mathcal{R}_{g_{2}}\), while the change in \(\mathcal{R}_{g_{1}}\) is negligible (Table 2). The corrupted inputs are generated following the same strategy as in §2.2, using a separate training dataset. We ablate our design choices in §5.3.

\begin{table}
\begin{tabular}{c|c c}
\hline \hline
Dataset & \(\Delta\mathcal{R}_{g_{1}}\) & \(\Delta\mathcal{R}_{g_{2}}\) \\
\hline
D1 & .005\(\pm\).004 & .113\(\pm\).013 \\
D2 & .009\(\pm\).007 & .070\(\pm\).024 \\
D3 & .0\(\pm\).002 & .142\(\pm\).051 \\
D4 & .0\(\pm\).002 & .151\(\pm\).048 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Effect of applying the robust infill model on the robustness of Phases 1 and 2 (With - Without), averaged over the three corruption types, up to three decimal points. The four datasets (D1-D4) are IMDB, WikiText-2, Dracula, and Wuthering Heights, respectively. Further details about the datasets are in §4.

Figure 2: Robustness of \(g_{1}\) and the difference between the robustness of \(g_{1}\) and \(g_{2}\) under a 5% corruption rate on IMDB.

To summarize, the proposed framework

1. allows the embedding and extraction of watermarks faultlessly when there is no corruption;
2. can incorporate invariant features for watermark embedding, achieving robustness in the presence of corruption;
3. further enhances robustness in Phase 2 by utilizing a robust infill model.

## 4 Experiment

**Dataset** To evaluate the effectiveness of the proposed method, we use four datasets with various styles. IMDB (Maas et al., 2011) is a movie reviews dataset, making it more colloquial. WikiText-2 (Merity et al., 2016), consisting of articles from Wikipedia, has a more informative style. We also experiment with two novels, Dracula and Wuthering Heights (WH), which have a distinct style compared to modern English and are available on Project Gutenberg (Bram, 1897; Emily, 1847).

**Metrics** For payload, we compute bits per word (BPW). For robustness, we compute the bit error rate (BER) of the extracted message. We also measure the quality of the watermarked text by comparing it with the original cover text. Following Yang et al. (2022) and Abdelnabi and Fritz (2021), we compute the entailment score (ES) using an NLI model (RoBERTa-Large-NLI) and the semantic similarity (SS) by comparing the cosine similarity of the representations outputted by a pre-trained sentence transformer (stsb-RoBERTa-base-v2). We also conduct a human evaluation study to assess semantic quality.

**Implementation Details** For ours and ContextLS (Yang et al., 2022), both of which operate on individual sentences, we use the smallest off-the-shelf model (_en-core-web-sm_) from Spacy (Honnibal and Montani, 2017) to split the sentences. The same Spacy model is also used for NER (named entity recognition) and for building the dependency parser for ours. Both methods use BERT-base as the infill model and select the top-32 (\(k_{1}\)) tokens. We set our payload to a degree similar to that of the compared method(s) by controlling the number of masks per sentence (\(|S|\)) and the top-\(k_{2}\) tokens (§3.2); these configurations for each dataset are shown in Appendix Table 12. We watermark the first 5,000 sentences for each dataset and use TextAttack (Morris et al., 2020) to create corrupted samples.

\begin{table}
\begin{tabular}{c|c c|c c c}
\hline \hline
\multicolumn{6}{c}{**IMDB**} \\
\cline{2-6}
 & \multicolumn{4}{c}{Methods} \\
\cline{2-6}
Metrics & & ContextLS & Keyword & Syntactic & +RI \\
\hline \hline
BPW (\(\uparrow\)) & & 0.100 & 0.116 & 0.125 & **0.144** \\
\hline
BER(\(\downarrow\)) & D & 0.219 & 0.127 & 0.100 & **0.074** \\
@CR=0.025 & I & 0.303 & 0.153 & 0.153 & **0.106** \\
 & S & 0.273 & 0.142 & 0.133 & **0.110** \\
\hline
BER(\(\downarrow\)) & D & 0.392 & 0.252 & 0.277 & **0.200** \\
@CR=0.05 & I & 0.355 & 0.201 & 0.242 & **0.163** \\
 & S & 0.343 & 0.218 & 0.220 & **0.177** \\
\hline \hline
\multicolumn{6}{c}{**WikiText-2**} \\
\cline{2-6}
 & \multicolumn{5}{c}{Methods} \\
\cline{2-6}
Metrics & AWT & ContextLS & Keyword & Syntactic & +RI \\
\hline \hline
BPW (\(\uparrow\)) & 0.100 & 0.083 & 0.092 & 0.090 & **0.136** \\
\hline
BER(\(\downarrow\)) @CR=0 & 0.264 & 0.0 & 0.0 & 0.0 & 0.0 \\
\hline
BER(\(\downarrow\)) & D & 0.273 & 0.24 & 0.202 & 0.162 & **0.136** \\
@CR=0.025 & I & 0.272 & 0.289 & 0.222 & 0.216 & **0.205** \\
 & S & 0.279 & 0.266 & 0.176 & 0.155 & **0.157** \\
\hline
BER(\(\downarrow\)) & D & 0.284 & 0.410 & 0.326 & 0.321 & **0.282** \\
@CR=0.05 & I & 0.272 & 0.338 & 0.246 & 0.235 & **0.201** \\
 & S & 0.289 & 0.342 & 0.256 & 0.228 & **0.201** \\
\hline \hline
\multicolumn{6}{c}{**Dracula**} \\
\cline{2-6}
BPW (\(\uparrow\)) & 0.100 & 0.089 & 0.126 & 0.117 & **0.146** \\
\hline
BER(\(\downarrow\)) @CR=0 & 0.111 & 0. & 0. & 0. & 0. \\
\hline
BER(\(\downarrow\)) & D & 0.236 & 0.201 & 0.116 & 0.076 & **0.030** \\
@CR=0.025 & I & 0.218 & 0.299 & 0.181 & 0.133 & **0.063** \\
 & S & 0.231 & 0.272 & 0.140 & 0.130 & **0.081** \\
\hline
BER(\(\downarrow\)) & D & 0.286 & 0.373 & 0.255 & 0.248 & **0.177** \\
@CR=0.05 & I & 0.264 & 0.375 & 0.228 & 0.279 & **0.155** \\
 & S & 0.281 & 0.337 & 0.207 & 0.229 & **0.164** \\
\hline \hline
\multicolumn{6}{c}{**Wuthering Heights**} \\
\cline{2-6}
BPW (\(\uparrow\)) & 0.100 & 0.076 & 0.088 & 0.097 & **0.114** \\
\hline
BER(\(\downarrow\)) @CR=0 & 0.100 & 0. & 0. & 0. & 0. \\
\hline
BER(\(\downarrow\)) & D & 0.224 & 0.194 & 0.102 & 0.088 & **0.063** \\
@CR=0.025 & I & 0.212 & 0.284 & 0.144 & 0.132 & **0.068** \\
 & S & 0.234 & 0.271 & 0.161 & 0.143 & **0.096** \\
\hline
BER(\(\downarrow\)) & D & 0.283 & 0.379 & 0.253 & 0.240 & **0.169** \\
@CR=0.05 & I & 0.258 & 0.363 & 0.224 & 0.268 & **0.133** \\
 & S & 0.276 & 0.363 & 0.231 & 0.245 & **0.161** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Comparison of payload and robustness on four datasets. +RI denotes adding the robust infill model to our Syntactic component. **Top-1** numbers are shown in bold.

For robust infilling, we finetune BERT for 100 epochs on the individual datasets. For more details, refer to the Appendix.
**Compared Methods** We compare our method with the deep learning-based methods AWT (Abdelnabi and Fritz, 2021) and ContextLS (Yang et al., 2022), as pre-deep-learning methods (Topkara et al., 2006; Hao et al., 2018) that are entirely rule-based have low payload and/or low semantic quality (later shown in Table 4). More details about the compared methods are in §6.

### 4.1 Main Experiments

Table 3 shows the watermarking results on all four datasets. Some challenges we faced during training AWT, and our approach to overcoming them, are detailed in Appendix A.2. Since the loss did not converge on IMDB for AWT, as detailed in Appendix A.3, we omit these results. We test the robustness of each method at corruption ratios (CR) of 2.5% and 5%. For ours, we apply robust infilling to the Syntactic Dependency Component, which is indicated in the final column by +RI. AWT suffers less from a larger corruption rate and sometimes outperforms our methods without RI. However, its BER at zero corruption rate is non-negligible, which is crucial for a reliable watermarking system. In addition, we observe qualitatively that AWT often repeats words or replaces pronouns in the watermarked sets, which seems to provide a distinct signal for message extraction at the cost of severe quality degradation. Some examples are shown in Appendix A.7 and Tab. 17-19. Our final model largely outperforms ContextLS on all the datasets and corruption rates. Additionally, both the semantic and syntactic components are substantially more robust than ContextLS, even without robust infilling, on all the datasets. The absolute improvements in BER over ContextLS when using the Syntactic component, across corruption types, are 13.6%, 8.2%, 14.4%, and 12.9% points for the four datasets, respectively, under CR=2.5%; for CR=5%, they are 10.0%, 10.2%, 11.0%, and 11.7% points.

### 4.2 Semantic Scores of Watermark

Table 4 shows the results for the semantic metrics. While our method falls behind ContextLS, we achieve better semantic scores than all the other methods while achieving robustness. ContextLS is able to maintain a high semantic similarity by explicitly using an NLI model to filter out candidate tokens. However, the accuracy of its extracted message severely deteriorates in the presence of corruption, as shown in the previous section. Using ordered dependencies sorted by the entailment score significantly increases the semantic metrics compared with using a randomly ordered list, denoted by "-NLI Ordering". The results are in Appendix Table 15. We also conduct a human evaluation comparing the fluency of the watermarked text and the cover text (Fluency\(\Delta\)) and how much of the semantics is maintained (Semantic Similarity; SS) compared to the original cover text in Tab. 5. The details of the experiment are in Appendix A.6. The outcome is aligned with our findings in the automatic metrics, but shows a distinct gap between ours and AWT. Notably, the levels of fluency change of ours and ContextLS compared to the original cover text are nearly the same.

## 5 Discussion

### 5.1 Comparison with ContextLS

One design choice in which we differ from ContextLS is top-\(k_{2}>2\), which determines the number of candidate tokens per mask. We can increase the payload depending on the requirement by choosing a higher \(k_{2}\). However, for ContextLS, increasing \(k_{2}\) counter-intuitively leads to a _lower_ payload.
## 5 Discussion ### Comparison with ContextLS One design choice in which we differ from ContextLS is using top-\(k_{2}>2\), which determines the number of candidate tokens per mask. We can increase the payload depending on the requirement by choosing a higher \(k_{2}\). However, for ContextLS increasing \(k_{2}\) counter-intuitively leads to a _lower_ payload. This is because ContextLS determines the valid watermark sets (those that can extract the message without error) with much stronger constraints (for details see Eq. 5, 6, 7 of Yang et al. (2022)). This also requires an exhaustive search over the whole sentence with an incrementally increasing window, which leads to a much longer embedding/extraction time due to the multiple forward passes of the neural network. For instance, the wall-clock time for embedding 1,000 sentences on IMDB is more than 20 times longer for ContextLS (81 vs. 4 minutes). More results are summarized in Table 6. Results for applying our robust infill model to ContextLS are in Appendix A.4.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & & [1] & [2] & AWT & ContextLS & Ours \\ \hline \multirow{2}{*}{IMDB} & ES & 0.843 & 0.867 & 0.958 & 0.985 & 0.975 \\ & SS & 0.916 & 0.943 & 0.973 & 0.982 & 0.981 \\ \hline \multirow{2}{*}{WikiText-2} & ES & 0.888 & 0.907 & 0.935 & 0.986 & 0.966 \\ & SS & 0.941 & 0.945 & 0.991 & 0.989 & 0.993 \\ \hline \multirow{2}{*}{Dracula} & ES & 0.869 & 0.915 & 0.869 & 0.985 & 0.963 \\ & SS & 0.910 & 0.889 & 0.855 & 0.986 & 0.971 \\ \hline \multirow{2}{*}{WH} & ES & 0.882 & 0.893 & 0.947 & 0.984 & 0.964 \\ & SS & 0.929 & 0.934 & 0.968 & 0.989 & 0.975 \\ \hline \hline \end{tabular} \end{table} Table 4: [1]: Topkara et al. (2006), [2]: Hao et al. (2018). Semantic scores (ES: entailment score, SS: semantic similarity) of the watermarked sets in relation to the original cover text. All numbers except ours are from Yang et al. (2022).
\begin{table} \begin{tabular}{l l l l} \hline \hline Metrics & AWT & ContextLS & Ours \\ \hline Fluency\(\Delta(\downarrow)\) & 1.32\(\pm\)0.7 & 0.25\(\pm\)0.4 & 0.26\(\pm\)0.4 \\ SS(\(\uparrow\)) & 2.97\(\pm\)0.8 & 4.22\(\pm\)0.5 & 3.90\(\pm\)0.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Human evaluation results on a Likert scale (20 samples and 5 annotators).
### Pitfalls of Automatic Semantic Metrics Although the automatic semantic metrics do provide a meaningful signal that aids in maintaining the original semantics, they do not show the full picture. First, the scores do not accurately reflect the change in semantics when substituting for the coordination dependency (e.g. and, or, nor, but, yet). As shown in Table 7, both the entailment score and the semantic similarity score overlook some semantic changes that are easily perceptible by humans. This is also reflected in the sorted dependency list we constructed in §3.1: the average NLI score after infilling a coordination dependency is 0.974, which ranks second. An easy fix can be made by placing the coordination dependency at the last rank or simply discarding it. We show in Appendix Table 11 that this also provides comparable BPW and robustness. Another pathology of the NLI model we observed arises when a named entity such as a person or a region is masked out. Table 7 shows an example from ContextLS in which the ES is abnormally high. Such watermarks may significantly hurt the utility of novels if the name of a character is modified. This problem is circumvented in ours by disregarding named entities (detected using NER) as possible mask candidates.
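As a concrete illustration of the entailment score (ES) discussed here, the following sketch scores a watermarked sentence against its cover sentence with an off-the-shelf NLI model. The `roberta-large-mnli` checkpoint is an assumed stand-in, since the specific NLI model is not fixed by this discussion.

```python
# Sketch of entailment scoring between cover text and watermarked text.
# "roberta-large-mnli" is an assumed stand-in checkpoint.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli", top_k=None)

def entailment_score(cover: str, watermarked: str) -> float:
    """Probability that the watermarked text is entailed by the cover text."""
    scores = nli({"text": cover, "text_pair": watermarked})
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

# A coordination swap such as "and" -> "nor" can still score ES > 0.99,
# which is the pitfall illustrated in Table 7.
```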
### Ablations and Other Results **Ablations** In this section, we ablate some of the design choices. First, we compare our masking strategies (random vs. ours) and loss terms (Forward KL and Reverse KL) in Table 8. Our masking strategy improves both BPW and robustness compared to randomly masking out words. Though preliminary experiments showed that RKL is more effective for higher payload and robustness, further experiments showed that the type of KL does not significantly affect the final robustness when we use our masking strategy. We further present the results under character-based corruption and compare robustness against different corruption types in Appendix A.4. **Stress Testing Syntactic Component** We experiment with how our proposed Syntactic component fares under stronger corruption rates. The results are shown in Appendix Fig. 3. While the robustness is still over 0.9 for both insertion and substitution at CR=0.1, it rapidly drops against deletion. This shows that our syntactic component is most fragile against deletion.
\begin{table} \begin{tabular}{c c|c c c} \hline \hline & top-\(k_{2}\) & 2 & 3 & 4 \\ \hline \multirow{2}{*}{BPW} & ContextLS & 0.100 & 0.033 & 0.021 \\ & Ours & 0.100 & 0.161 & 0.211 \\ \hline \multirow{2}{*}{Forward Pass} & ContextLS & 1994 & 2386 & 2801 \\ & Ours & 94 & 94 & 94 \\ \hline \hline \end{tabular} \end{table} Table 6: The effect of top-\(k_{2}\) on payload, # of forward passes to the infill model, and wall-clock time for ContextLS and ours on IMDB. We fix our keyword ratio to 0.11.
\begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{2}{c}{**Coordination**} \\ Sci-fi movies/TV are usually underfunded, under-appreciated **and**[_nor_] misunderstood. (ES=0.996, SS=0.989) \\ I thought the main villains were pretty well done **and**[_but_] fairly well acted. (ES=0.994, SS=0.994) \\ \hline \hline \multicolumn{2}{c}{**Named Entity**} \\ The only reason this movie is not given a 1 (awful) vote is that the acting of both **Ida**[_data_] Lupino and **Robert**[_Rob_] Ryan is superb. (ES=0.993, SS=0.961) \\ I have not seen any other movies from the **"Crime**[_Criminal_] Doctor" series, so I can't make any comparisons. (ES=0.994, SS=0.990) \\ \hline \hline \end{tabular} \end{table} Table 7: Entailment scores between the cover text and the watermarked text. The **original**[_watermarked_] words are shown.
\begin{table} \begin{tabular}{c c c c c} \hline \hline & & Ran. Mask (FKL) & Ran. Mask (RKL) & Ours \\ \hline BPW(\(\uparrow\)) & & 0.121 & 0.129 & 0.144 \\ \hline BER(\(\downarrow\)) & D & 0.106 & 0.101 & 0.074 \\ @CR=0.025 & I & 0.141 & 0.139 & 0.106 \\ & S & 0.138 & 0.137 & 0.110 \\ \hline \hline \end{tabular} \end{table} Table 8: Ablation of masking design choices (FKL: Forward KL, RKL: Reverse KL). Ours is the final version used in the main experiments (our masking strategy + RKL).
## 6 Related Works Natural language watermarking embeds information via manipulation of semantic or syntactic features rather than altering the visual appearance of words, lines, and documents (Rizzo et al., 2019). This makes natural language watermarking robust to re-formatting of the file or manual transcription of the text (Topkara et al., 2005). Early works in natural language watermarking have relied on synonym substitution (Topkara et al., 2006), restructuring of syntactic structures (Atallah et al., 2001), or paraphrasing (Atallah et al., 2003). The reliance on a predefined set of rules often leads to a low bit capacity, and the lack of contextual consideration during the embedding process may degrade the utility of the watermarked text, which can sound unnatural or strange. With the advent of neural networks, some works have done away with the reliance on the pre-defined rule sets used in previous works.
The Adversarial Watermarking Transformer (AWT; Abdelnabi and Fritz, 2021) is an encoder-decoder transformer architecture that learns to extract the message from the decoded watermarked text. To maintain the quality of the watermarked text, it uses signals from sentence transformers and language models. However, because it relies entirely upon a neural network for message embedding and extraction, the extracted message is prone to error even without corruption, especially when the payload is high, and some samples show noticeable artifacts such as repeated tokens. Yang et al. (2022) take an algorithmic approach to the embedding and extraction of messages, making the process errorless. Additionally, using a neural infill model along with an NLI model has shown better quality in lexical substitution than more traditional approaches (e.g. WordNet). However, robustness under corruption is not considered. **Image Watermarking** Explicitly considering corruption for robustness and using different domains of the media are both highly relevant to blind image watermarking, which has been extensively explored (Mun et al., 2019; Zhu et al., 2018; Zhong et al., 2020; Luo et al., 2020). Like our robust infill training, Zhu et al. and Luo et al. explicitly consider possible image corruptions to improve robustness. Meanwhile, transforming the pixel domain to various frequency domains using transform methods such as the Discrete Cosine Transform has been shown to be both effective and more robust (Potdar et al., 2005). The use of keywords and dependencies to determine the embedding position in our work can similarly be considered as transforming the raw text into semantic and syntactic domains, respectively. **Other Lines of Work** Steganography is a similar line of work that conceals secret data in a cover medium, focusing on covertness rather than robustness. Various methods have been studied in the natural language domain (Tina Fang et al., 2017; Yang et al., 2018; Ziegler et al., 2019; Yang et al., 2020; Ueoka et al., 2021). This line of work differs from watermarking in that the cover text may be arbitrarily generated to conceal the secret message, which eases the constraint of maintaining the original semantics. Recently, He et al. (2022) proposed to watermark the outputs of language models to prevent model stealing and extraction. While the main objective of these works (He et al., 2022, 20) differs from ours, the methodologies can be adapted to watermark text directly. However, these are limited to zero-bit watermarking (e.g. whether the text is from a language model or not), while ours allows the embedding of arbitrary multi-bit information. Similarly, Kirchenbauer et al. (2023) propose to watermark the outputs of language models at decoding time in a zero-bit manner to distinguish machine-generated text from human-written text. ## 7 Conclusion We propose using invariant features of natural language to embed watermarks that are robust to corruption. We empirically validate two potential components easily discoverable by off-the-shelf models. The proposed method outperforms recent neural network-based watermarking in robustness and payload while having comparable semantic quality. We do not claim that the invariant features studied in this work are the optimal approach. Instead, we pave the way for future work to explore other effective domains and solutions following this framework. ### Limitations Despite its robustness, our method has subpar results on the automatic semantic metrics compared to the most recent work.
This may be a natural consequence of the perceptibility vs. robustness trade-off (Tao et al., 2014; De Vleeschouwer et al., 2002): a stronger watermark tends to interfere with the original content. Nonetheless, by using some technical tricks (e.g. a neural infill model and NLI-sorted ordering), our method outperforms all the other methods, including two traditional ones and a neural network-based one. Techniques from adversarial attacks were employed to simulate possible corruptions in our work. However, these automatic attacks do not always lead to imperceptible modifications of the original texts (Morris et al., 2020). Thus, the corruptions used in our work may be a rough estimate of what true adversaries might do to evade watermarking. In addition, our method is not tested against paraphrasing, which may substantially change the syntactic component of the text. One realistic reason that deterred us from experimenting with paraphrase-based attacks was their lack of controllability compared to other attacks that have fine-grained control over the number of corrupted words. Likewise, for text resources like novels that value subtle nuances, this lack of controllability may discourage an adversary from using paraphrasing to destroy the watermark. ## Acknowledgements This work was supported by the Korean Government through the IITP grants 2022-0-00320, 2021-0-01343, NRF grant 2021R1A2C3006659 and by Webtoon AI at NAVER WEBTOON in 2022.
2308.01720
Bichromatic Rabi control of semiconductor qubits
Electrically-driven spin resonance is a powerful technique for controlling semiconductor spin qubits. However, it faces challenges in qubit addressability and off-resonance driving in larger systems. We demonstrate coherent bichromatic Rabi control of quantum dot hole spin qubits, offering a spatially-selective approach for large qubit arrays. By applying simultaneous microwave bursts to different gate electrodes, we observe multichromatic resonance lines and resonance anticrossings that are caused by the ac Stark shift. Our theoretical framework aligns with experimental data, highlighting interdot motion as the dominant mechanism for bichromatic driving.
Valentin John, Francesco Borsoi, Zoltán György, Chien-An Wang, Gábor Széchenyi, Floor van Riggelen, William I. L. Lawrie, Nico W. Hendrickx, Amir Sammak, Giordano Scappucci, András Pályi, Menno Veldhorst
2023-08-03T12:26:02Z
http://arxiv.org/abs/2308.01720v1
# Bichromatic Rabi control of semiconductor qubits ###### Abstract Electrically-driven spin resonance is a powerful technique for controlling semiconductor spin qubits. However, it faces challenges in qubit addressability and off-resonance driving in larger systems. We demonstrate coherent bichromatic Rabi control of quantum dot hole spin qubits, offering a spatially-selective approach for large qubit arrays. By applying simultaneous microwave bursts to different gate electrodes, we observe multichromatic resonance lines and resonance anticrossings that are caused by the ac Stark shift. Our theoretical framework aligns with experimental data, highlighting interdot motion as the dominant mechanism for bichromatic driving. Spin qubits, semiconductor quantum dots, Rabi control, germanium + Footnote †: These authors jointly supervised this work ## I Introduction Spin qubits based on semiconductor quantum dots represent a promising platform for quantum computing owing to their small qubit footprint, long coherence times, and compatibility with semiconductor manufacturing techniques [1; 2]. However, the current control approach for small spin qubit processors relies on brute force, where each qubit is individually connected to at least one control line. This approach poses a significant challenge for scaling to larger systems and is already impeding further progress [3; 4]. To overcome this limitation, multiplexing strategies will most likely be essential, and this has been the motivation for various proposals, such as crossbar arrays with shared control [5; 6]. Executing quantum algorithms requires selective quantum control, but its implementation in large qubit arrays poses significant challenges. Recently, the concept of bichromatic spin resonance has been proposed as a potential solution to enable efficient and addressable microwave control in qubit crossbar architectures [7]. In this approach, two microwave tones with frequencies \(f_{\text{w}}\) and \(f_{\text{b}}\) are utilized. These tones are applied to the word line and the bit line of the crossbar array, respectively, and target rotations of a qubit with a Larmor frequency of \(f_{\text{w}}\pm f_{\text{b}}\) at the intersection of the two lines (Figure 1a). This mechanism exploits the non-linearity of electric dipole spin resonance (EDSR) [8; 9; 10; 11; 12; 13; 14; 15; 16], and analogous two-photon processes have been utilized in Rydberg-atom processors [17; 18] and superconducting qubits [19] to optimize qubit performance. Here, we investigate experimentally and theoretically the bichromatic driving of semiconductor spin qubits in a two-qubit system defined in a strained germanium quantum well. We find that both qubits can be coherently driven by mixed frequency signals, including the sum and difference of the two frequencies. We investigate the occurrence of resonance anticrossings in EDSR spectroscopy maps, which originate from the Autler-Townes (also known as ac Stark) shift of a photon-dressed spin transition. Additionally, we introduce a model that reveals the importance of spin-preserving and spin-flip tunneling terms in bichromatic and monochromatic EDSR. ## II Results We investigate bichromatic driving of spin qubits in a two-qubit system within a four-qubit germanium quantum processor (Figure 1b) [20; 21].
By tuning the electrostatic potential using plunger and barrier gates, we confine a single-hole quantum dot underneath each of the four plungers P1-P4, and define virtual gate voltages vP1-vP4 based on P1-P4 to achieve independent control. We focus on the spin qubits Q1 and Q2, while Q3 and Q4 remain in their ground state. We furthermore define the detuning voltage \(\epsilon_{12}=\)vP1\(-\)vP2 [22; 23]. Figure 1c displays the charge stability diagram of the double quantum dot system, obtained through rf-reflectometry charge sensing [24]. The device is operated in an in-plane magnetic field of 0.675 T, resulting in qubit frequencies of \(f_{\text{Q1}}=1.514\) GHz and \(f_{\text{Q2}}=2.649\) GHz. To investigate the bichromatic driving approach, we follow the pulse protocol outlined in Figure 2a. We initialise the Q1, Q2 qubits in the \(\ket{\downarrow\downarrow}\) state by adiabatically pulsing \(\epsilon_{12}\) from the (0, 2) to the (1, 1) charge state via the spin-orbit induced anticrossing. Next, we manipulate the spins by simultaneously applying microwave pulses on plunger gates P2 and P4, with a duration \(t_{\rm p}\) and microwave frequencies \(f_{\rm P2}\) and \(f_{\rm P4}\). We perform such two-tone qubit manipulation at the voltage point indicated in Figure 1c corresponding to \(\epsilon_{12}=-20\) mV. Finally, we return to the (0, 2) charge sector through appropriate pulsing and perform read-out using latched Pauli spin blockade [20]. The 2D EDSR spectroscopy in Figure 2b reveals resonance lines from monochromatically and bichromatically driven spin excitations. Monochromatic qubit transitions labelled as \(\mathrm{Q1^{P2}}\), \(\mathrm{Q1^{P4}}\), \(\mathrm{Q2^{P2}}\), \(\mathrm{Q2^{P4}}\) (with the superscript defining the driving plunger gate) are observed as vertical and horizontal lines at the Larmor frequencies. Bichromatic excitations appear as tilted resonance lines, with negative (positive) slopes indicating that the sum (difference) of the two frequencies matches the qubit Larmor frequency. Three-photon excitations are also observed in certain cases, resulting from resonance between two-photon and single-photon driving with the qubit Larmor frequency. Figs. 2c, d depict the expected resonance lines considering the individual resonance frequencies of the two qubits. The qubits' exchange interaction resulting from interdot tunnelling (55 MHz at \(\epsilon_{12}=-20\) mV [23]) is taken into account. To label the Larmor frequency of qubit \(i\) when qubit \(j\) is in the excited state, we use the notation \(\mathrm{Q}i_{j}\) (with \(i,j\in\{1,2\}\) and \(i\neq j\)). The monochromatic transition from \(\ket{\downarrow\downarrow}\) to \(\ket{\uparrow\uparrow}\) driven by P4 is then denoted as \(\mathrm{(Q1+Q2_{1})^{P4}}\). A bichromatic transition can be visualised as a two-step process via a virtual state, as illustrated in Figure 2e. Following perturbation theory, bichromatic spin transitions are activated thanks to spin-conserving (\(t\)) and spin-flipping (\(\Omega\)) tunnelling terms, which hybridize the four possible spin states with the S(2,0) state, as discussed below and in [23]. We analyze three resonance lines (dashed lines in Figure 2b) resulting from bichromatic rotation of Q1 and Q2. The bichromatic Q1 spin resonance (\(\mathrm{Q1^{-P2,P4}}\)) occurs when the frequency difference matches the Q1 Larmor frequency. Similarly, Q2 exhibits bichromatic resonance lines from both frequency-difference (\(\mathrm{Q2^{-P2,P4}}\)) and frequency-sum (\(\mathrm{Q2^{P2,P4}}\)) rotations. The bichromatic spin resonance \(\mathrm{Q1^{P2,P4}}\) is not investigated due to the presence of a high-pass filter. The conditions for the three studied bichromatic qubit rotations are: \(\mathrm{Q1^{-P2,P4}}:f_{\rm P4}-f_{\rm P2}=f_{\rm Q1}\), \(\mathrm{Q2^{-P2,P4}}:f_{\rm P4}-f_{\rm P2}=f_{\rm Q2}\), and \(\mathrm{Q2^{P2,P4}}:f_{\rm P4}+f_{\rm P2}=f_{\rm Q2}\).
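The resonance lines expected in Figs. 2c, d can be enumerated directly from these conditions. The sketch below is illustrative only: it uses the measured Larmor frequencies and ignores the exchange splitting and multi-photon processes.

```python
# Sketch of the expected resonance lines in the (f_P4, f_P2) plane,
# using the measured Larmor frequencies; exchange splitting and
# three-photon lines are ignored for simplicity.
import numpy as np

f_Q1, f_Q2 = 1.514, 2.649              # Larmor frequencies (GHz)
f_P4 = np.linspace(0.0, 3.5, 351)      # drive frequency applied to P4 (GHz)

lines = {}
for name, f_Q in (("Q1", f_Q1), ("Q2", f_Q2)):
    lines[name + " (monochromatic)"] = np.full_like(f_P4, f_Q)
    lines[name + " (sum)"] = f_Q - f_P4         # f_P4 + f_P2 = f_Q, slope -1
    lines[name + " (difference)"] = f_P4 - f_Q  # f_P4 - f_P2 = f_Q, slope +1
# entries with negative f_P2 are unphysical and would be masked before plotting
```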
In addition to the data shown in Figure 2a, we also achieve coherent bichromatic qubit rotations with a Rabi frequency exceeding 1 MHz, as we demonstrate in [23]. At the intersection of specific resonance lines (see Figure 2b), we also observe anticrossings (labelled as AC\(n\) with \(n\in\{1,\ldots,5\}\) in Figs. 2c and d). In Figure 3, we analyse the evolution of the two bichromatic spin resonances, \(\mathrm{Q2^{-P2,P4}}\) and \(\mathrm{Q2^{P2,P4}}\), in the frequency plane. We vary the two microwave frequencies together to follow the two resonance lines, using \(\Delta f_{\rm P2}\) in the range of [-40, 40] MHz centered around the bichromatic resonance. This procedure allows us to monitor in detail the evolution of the Q2 bichromatic spin resonance within the boxed areas indicated in Figure 2b. The bichromatic resonance aligns with the expected value of \(\Delta f_{\rm P2}=0\) for most of the frequency range. However, significant anticrossings occur when the resonance intersects with other qubit transitions. Examples of these anticrossings are observed at specific frequencies, and are labelled as AC5, AC3 (for \(\mathrm{Q2^{-P2,P4}}\)), and AC4, AC1 (for \(\mathrm{Q2^{P2,P4}}\)).
Figure 1: **Bichromatic control of a spin qubit.** **a,** Bichromatic driving in a crossbar architecture comprising shared gates in the form of word and bit lines. **b,** Illustration of the four-qubit quantum processor. We focus on the operation of Q1 and Q2 with microwave bursts applied to the plunger gates P2 and P4 for electrical control. We model qubit rotations by considering the ac detuning modulation (sketched potential). **c,** Charge stability diagram of the double quantum dot system illustrating the (1, 1) charge sector and indicating the detuning \(\epsilon_{12}\) axis (black line). The dc voltages of the two gates at \(\epsilon_{12}=0\) mV (black circle) at the centre of the map are: -1921.3 mV and -1899.0 mV. The white star indicates the gate voltages used for the qubit manipulation stage. The green and blue arrows indicate the displacement within the vP1, vP2 framework when applying a microwave burst on P2 and P4, elucidating the different orientations of the driving fields. The displayed length of the arrows is proportional to the amplitude of the signal at the device, amplified by a factor of 5 for visibility.
The observed anticrossings in the frequency plane, such as AC3 in Fig. 3a, result from resonant driving of monochromatic and bichromatic transitions from the \(\ket{\downarrow\downarrow}\) state to the higher (1,1) states. AC3 involves three resonant processes: the bichromatic transition \(\ket{\downarrow\downarrow}\leftrightarrow\ket{\downarrow\uparrow}\), the monochromatic P4 drive \(\ket{\downarrow\downarrow}\leftrightarrow\ket{\uparrow\uparrow}\), and the monochromatic P2 drive \(\ket{\downarrow\uparrow}\leftrightarrow\ket{\uparrow\uparrow}\). Due to the greater driving efficiency of P2 compared to P4 (Fig. 1c), the dominant transition is \(\ket{\downarrow\uparrow}\leftrightarrow\ket{\uparrow\uparrow}\) [23].
Strong driving via P2 dresses up the spin states \(\ket{\downarrow\uparrow}\) and \(\ket{\uparrow\uparrow}\): in the rotating frame where they are degenerate in the absence of P2 driving, the eigenstates become dressed in the form \(\frac{\ket{\downarrow\uparrow}\pm\ket{\uparrow\uparrow}}{\sqrt{2}}\), and the corresponding eigenvalues exhibit a splitting set by the Rabi frequency. In this context, dressing refers to the coherent interaction between the electromagnetic field and the spin system, resulting in entangled states of spins and photons becoming the eigenstates of the coupled system [25]. This effect, known as the Autler-Townes effect or ac Stark shift, has been observed in quantum optics and in strongly driven superconducting qubits [26; 27], and it is at the basis of control strategies for highly coherent solid-state qubits [28]. Due to the Autler-Townes effect, the resonance frequencies of the two weaker transitions (\(\ket{\downarrow\downarrow}\leftrightarrow\ket{\downarrow\uparrow}\) and \(\ket{\downarrow\downarrow}\leftrightarrow\ket{\uparrow\uparrow}\)) are shifted by the Rabi frequency of the strongly driven \(\ket{\downarrow\uparrow}\leftrightarrow\ket{\uparrow\uparrow}\) transition, resulting in the anticrossing between the resonance lines (AC3 in Figs. 3a, b). We use a two-spin qubit Hamiltonian to model our system and gain a quantitative understanding. The model considers the lowest orbital in each dot, including four states in the (1,1) charge regime, as well as the (0,2) and (2,0) singlet states. Spin-conserving and spin-flip tunneling between the quantum dots are also included, with a coupling strength of \(t\) for spin-conserving transitions and \(\Omega\) for spin-flip transitions. Despite neglecting additional electrical g-tensor modulations [2; 29], this minimal model successfully explains electrically driven spin transitions via ac modulation of the detuning voltage using both monochromatic and bichromatic resonance techniques.
Figure 2: **Bichromatic EDSR spectroscopy.** **a,** Control sequence including qubit initialisation, bichromatic manipulation, and readout. **b,** Single-shot read-out probability (\(1-P_{\downarrow\downarrow}\)) as a function of \(f_{\mathrm{P4}}\) and \(f_{\mathrm{P2}}\), taken at \(\epsilon_{12}=-20\) mV. Monochromatic qubit rotations are found at \(f_{\mathrm{Q1}}=1.514\) GHz and \(f_{\mathrm{Q2}}=2.649\) GHz. We include two light green, light blue and purple dotted lines to enclose the bichromatic resonances of \(\mathrm{Q2^{P2,P4}}\), \(\mathrm{Q1^{-P2,P4}}\) and \(\mathrm{Q2^{-P2,P4}}\) respectively. The broad vertical excitation at \(f_{\mathrm{P4}}\sim 1.8\) GHz is associated with a transmission resonance in the lines, and not with a spin transition. **c, d** Monochromatic (in dark red) and bichromatic (in orange) excitations in the 2D frequency plane, as predicted by theory. Three-photon bichromatic excitations are shown in pink. Relevant lines are labelled with the corresponding qubit and driven plunger gate(s). Red circles mark ACs that have been investigated in more detail. **e,** Energy diagram of a two-spin system with finite exchange and finite magnetic field. The green and blue arrows represent the applied microwave frequencies \(f_{\mathrm{P2}}\) and \(f_{\mathrm{P4}}\), whose sum or difference matches the energy difference between two states. Driven spin-flipping processes originate from higher-order processes via the S(2,0) state involving the spin-conserving tunneling term \(t\) and the spin-flip tunneling term \(\Omega\).
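A minimal rotating-frame sketch makes the dressing argument quantitative: for the resonantly driven pair \(\ket{\downarrow\uparrow}\), \(\ket{\uparrow\uparrow}\), diagonalizing the 2x2 rotating-frame Hamiltonian yields dressed states split by exactly the Rabi frequency.

```python
# Rotating-frame sketch of the Autler-Townes splitting for the driven
# two-level pair {|du>, |uu>}; at zero detuning the dressed states are
# (|du> +/- |uu>)/sqrt(2), split by exactly the Rabi frequency.
import numpy as np

def dressed_states(f_rabi: float, detuning: float = 0.0):
    H = 0.5 * np.array([[-detuning, f_rabi],
                        [ f_rabi,  detuning]])   # frequencies in MHz
    evals, evecs = np.linalg.eigh(H)
    return evals[1] - evals[0], evecs            # splitting and dressed states

splitting, vecs = dressed_states(f_rabi=1.0)     # ~1 MHz drive, as achieved here
assert np.isclose(splitting, 1.0)                # splitting equals Rabi frequency
```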
Here, spin dynamics occur through virtual transitions between the (1,1) spin states and the (0,2) and (2,0) singlet states, mediated by the spin-conserving and spin-flipping terms, as shown in Figure 2e. Our model provides an explanation for the observed resonance crossings and anticrossings in Figs. 3a, c. Using a Floquet theory analysis, we classify the intersections of the resonance lines in the frequency plane as crossings, weak anticrossings, and strong anticrossings, as shown in Figs. 2c, d. Strong anticrossings are characterized by a first-order dependence on the P2 driving amplitude and a second-order dependence on the tunneling amplitudes. Weak anticrossings are either controlled by P4 or involve higher-order processes. By analyzing the five strong anticrossings (AC1 to AC5), we estimate the spin-conserving and spin-flip tunneling energies to be on average \(t=(18.1\pm 1.9)\,\mu\mathrm{eV}\) and \(\Omega=(14.3\pm 2.4)\,\mu\mathrm{eV}\) [23]. To verify our theoretical description, we investigate the dependence of the \(\mathrm{Q1^{-P2,P4}}\) resonance anticrossing on the detuning voltage. Experimental data and the expected detuning dependence from the model are shown in Figure 4. In the modelling, we use the previously estimated tunneling amplitudes and only vary the detuning voltage. Moreover, we utilise an estimated detuning lever arm of \(\alpha=0.0917\) \(\mathrm{eV/V}\) and a quantum dot charging energy of \(U=2.56\) \(\mathrm{meV}\) [23]. Our theoretical model accurately captures the diminishing size of the anticrossing as the detuning approaches \(\epsilon_{12}\sim 0\). Both the bichromatic and monochromatic resonance lines fade, indicating a reduced efficiency as detuning approaches zero. The diminished efficiency of bichromatic operations near the charge-symmetry point supports the fundamental role of virtual interdot transitions as the underlying driving mechanism. However, in [23] we discuss the limitations of our model and suggest that additional mechanisms, such as EDSR induced by g-tensor modulation, may be necessary to fully interpret all experimental observations [30; 31; 32].
Figure 3: **Modelling the frequency anticrossings due to the Autler-Townes effect.** **a, c** Single-shot probabilities (\(1-P_{\downarrow\downarrow}\)) in a frequency range around the bichromatic \(\mathrm{Q2^{-P2,P4}}\) and \(\mathrm{Q2^{P2,P4}}\) resonance conditions, respectively. These scans are higher-resolution measurements along the color-coded diagonals enclosed by two dashed lines in Figure 2b. Here, vertical lines of Figure 2b appear horizontal, and horizontal lines appear slightly tilted (as can be seen with \(\mathrm{Q1^{P2}}\) and \(\mathrm{Q1^{P4}}\) in **d**). The values on the \(f_{\mathrm{P2}}\) axes are valid at \(\Delta f_{\mathrm{P2}}=0\). **b, d** Calculated spin transitions near the \(\mathrm{Q2^{-P2,P4}}\) and \(\mathrm{Q2^{P2,P4}}\) resonances. **e, f** Illustration of the driven transitions at the four large anticrossings. Strong driving via P2 induces a photon-dressed spin transition, indicated in green, which is blocked at resonance due to the Autler-Townes shift.
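To indicate how such a model can be assembled numerically, the sketch below builds a schematic six-level Hamiltonian from the quoted parameters. The coupling pattern and sign conventions are illustrative assumptions on our part; the full model, including the ac detuning drive and the Floquet analysis, is specified in [23].

```python
# Schematic six-level Hamiltonian in the basis {dd, du, ud, uu, S(0,2), S(2,0)},
# energies in ueV. The exact coupling pattern and signs are illustrative
# assumptions; the full model is given in [23].
import numpy as np

h = 4.1357                 # Planck constant, ueV per GHz
f1, f2 = 1.514, 2.649      # Larmor frequencies (GHz)
t, Om = 18.1, 14.3         # spin-conserving / spin-flip tunneling (ueV)
U = 2.56e3                 # quantum dot charging energy (ueV)
alpha = 91.7               # detuning lever arm, ueV per mV (0.0917 eV/V)

def hamiltonian(eps_mV: float) -> np.ndarray:
    ez1, ez2 = 0.5 * h * f1, 0.5 * h * f2
    H = np.diag([-ez1 - ez2, -ez1 + ez2, ez1 - ez2, ez1 + ez2,
                 U - alpha * eps_mV, U + alpha * eps_mV])
    for s in (4, 5):                             # couple (1,1) states to singlets
        H[1, s] = H[s, 1] = +t / np.sqrt(2)      # antiparallel spins: t
        H[2, s] = H[s, 2] = -t / np.sqrt(2)
        H[0, s] = H[s, 0] = Om / np.sqrt(2)      # parallel spins: Omega
        H[3, s] = H[s, 3] = Om / np.sqrt(2)
    return H

# Diagonalizing H(eps) as eps -> 0 shows both singlets receding in energy,
# suppressing the virtual interdot transitions that mediate the driving,
# consistent with the fading resonances near the charge-symmetry point.
```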
## III Conclusions With the continuous improvement of semiconductor qubits, it is vital to develop advanced control schemes based on electric dipole spin resonance to enable scalable and high-performance qubit operations. By establishing the bichromatic control approach, we have turned challenges in EDSR [14] into an opportunity for addressable qubit control in larger arrays. Moreover, we elucidated the relevance of interdot motion in obtaining bichromatic and monochromatic driving. Future experiments may focus on optimizing bichromatic driving, for example by tuning parameters such as the interdot coupling, aiming to achieve reliable and precise qubit operation. ###### Acknowledgements. We are grateful to Maximilian Rimbach-Russ, Brennan Undseth and all the members of the Veldhorst lab for fruitful discussions. We acknowledge support by the Dutch Research Council through an NWO ENW grant and the European Union for an ERC Starting Grant. F.B. acknowledges support from the Dutch Research Council (NWO) via the National Growth Fund programme Quantum Delta NL (Grant No. NGF.1582.22.001). This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), by the ÚNKP-22-1 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund, by NKFIH through the OTKA Grant FK 132146, by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the European Union through the Horizon Europe projects IGNITE and QLSI. ## IV Data Availability All data and analysis underlying this study are available in a 4TU.ResearchData repository at [https://doi.org/10.4121/bb43fe1d-f503-49e8-9f17-ce7d734f015d](https://doi.org/10.4121/bb43fe1d-f503-49e8-9f17-ce7d734f015d).
2309.00717
The high-speed X-ray camera on AXIS
AXIS is a Probe-class mission concept that will provide high-throughput, high-spatial-resolution X-ray spectral imaging, enabling transformative studies of high-energy astrophysical phenomena. To take advantage of the advanced optics and avoid photon pile-up, the AXIS focal plane requires detectors with readout rates at least 20 times faster than previous soft X-ray imaging spectrometers flying aboard missions such as Chandra and Suzaku, while retaining the low noise, excellent spectral performance, and low power requirements of those instruments. We present the design of the AXIS high-speed X-ray camera, which baselines large-format MIT Lincoln Laboratory CCDs employing low-noise pJFET output amplifiers and a single-layer polysilicon gate structure that allows fast, low-power clocking. These detectors are combined with an integrated high-speed, low-noise ASIC readout chip from Stanford University that provides better performance than conventional discrete solutions at a fraction of their power consumption and footprint. Our complementary front-end electronics concept employs state of the art digital video waveform capture and advanced signal processing to deliver low noise at high speed. We review the current performance of this technology, highlighting recent improvements on prototype devices that achieve excellent noise characteristics at the required readout rate. We present measurements of the CCD spectral response across the AXIS energy band, augmenting lab measurements with detector simulations that help us understand sources of charge loss and evaluate the quality of the CCD backside passivation technique. We show that our technology is on a path that will meet our requirements and enable AXIS to achieve world-class science.
Eric D. Miller, Marshall W. Bautz, Catherine E. Grant, Richard F. Foster, Beverly LaMarr, Andrew Malonis, Gregory Prigozhin, Benjamin Schneider, Christopher Leitz, Sven Herrmann, Steven W. Allen, Tanmoy Chattopadhyay, Peter Orel, R. Glenn Morris, Haley Stueber, Abraham D. Falcone, Andrew Ptak, Christopher Reynolds
2023-09-01T19:49:13Z
http://arxiv.org/abs/2309.00717v1
# The high-speed X-ray camera on AXIS ###### Abstract AXIS is a Probe-class mission concept that will provide high-throughput, high-spatial-resolution X-ray spectral imaging, enabling transformative studies of high-energy astrophysical phenomena. To take advantage of the advanced optics and avoid photon pile-up, the AXIS focal plane requires detectors with readout rates at least 20 times faster than previous soft X-ray imaging spectrometers flying aboard missions such as Chandra and Suzaku, while retaining the low noise, excellent spectral performance, and low power requirements of those instruments. We present the design of the AXIS high-speed X-ray camera, which baselines large-format MIT Lincoln Laboratory CCDs employing low-noise pJFET output amplifiers and a single-layer polysilicon gate structure that allows fast, low-power clocking. These detectors are combined with an integrated high-speed, low-noise ASIC readout chip from Stanford University that provides better performance than conventional discrete solutions at a fraction of their power consumption and footprint. Our complementary front-end electronics concept employs state-of-the-art digital video waveform capture and advanced signal processing to deliver low noise at high speed. We review the current performance of this technology, highlighting recent improvements on prototype devices that achieve excellent noise characteristics at the required readout rate. We present measurements of the CCD spectral response across the AXIS energy band, augmenting lab measurements with detector simulations that help us understand sources of charge loss and evaluate the quality of the CCD backside passivation technique. We show that our technology is on a path that will meet our requirements and enable AXIS to achieve world-class science. X-ray detectors, CCDs, APEX Probe missions, detector response Further author information: E. D. Miller: E-mail: [email protected] ## 1 Introduction The Advanced X-ray Imaging Satellite (AXIS) [1, 2] is a mission concept in response to the Astrophysics Probe Explorer (APEX) call, providing high-throughput, high-spatial-resolution X-ray imaging spectroscopy. The unique capabilities of AXIS will allow ground-breaking studies addressing key science priority areas identified by the National Academies' 2020 Decadal Survey on Astronomy and Astrophysics [3]. In particular, AXIS will explore "Cosmic Ecosystems" by studying the birth and evolution of super-massive black holes and the mechanisms of galactic feedback; it will help place "Worlds and Suns in Context" by studying the effects of stellar activity on the planets they harbor; and it will be a key player in "New Messengers and New Physics" thanks to a rapid on-board transient alert system and rapid response time to external transient triggers. AXIS will provide the crucial X-ray counterpart to the panchromatic suite of large observatories of the 2030s, including JWST, Rubin, Roman, LIGO/Virgo/Kagra, LISA, SKA, and Euclid. The high throughput and spatial resolution of AXIS create technical challenges for the detector system. While similar instruments have flown with great success on missions such as Chandra and Suzaku, to avoid photon pile-up, AXIS requires a camera operating at least 20 times faster than on those heritage missions. At the same time, to allow ground-breaking astrophysics studies, the AXIS detectors must retain or exceed the excellent spectral imaging performance of Chandra ACIS and Suzaku XIS.
As we have shown in previous work, in some cases these requirements are at odds with each other technically. For example, AXIS requires excellent quantum efficiency and spectral response across a wide 0.2-10 keV energy band. To accomplish this at the hard end requires a detector at least 100 \(\mu\)m thick, yet such a thick silicon detector with pixels small enough to sample the PSF can struggle to meet the soft X-ray performance requirement due to diffusion of charge from photons interacting far from the collection gates [4, 5]. The AXIS camera design follows careful consideration of the baseline mission performance, shown in Table 1 along with the derived camera requirements. In this contribution, we describe the design for the AXIS camera, called the Focal Plane Assembly (FPA). We also review recent developments of our focal plane performance, which continue a multi-year effort to develop fast, low-noise detectors for future strategic X-ray missions [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. We finally demonstrate that the advanced technology is on track to reach the required technical readiness level for the AXIS mission. \begin{table} \begin{tabular}{|l|l|} \hline \hline **AXIS mission parameters** & \\ \hline Spatial resolution at 1 keV (HPD) & 1.25\({}^{\prime\prime}\) (on-axis) \\ & 1.50\({}^{\prime\prime}\) (FoV-average) \\ \hline Effective area at 1 keV & 4200 cm\({}^{2}\) (on-axis) \\ & 3600 cm\({}^{2}\) (FoV-average) \\ \hline Effective area at 6 keV & 830 cm\({}^{2}\) (on-axis) \\ & 570 cm\({}^{2}\) (FoV-average) \\ \hline Field of view & 24\({}^{\prime}\) diameter \\ \hline Energy band & 0.2–10 keV \\ \hline Energy resolution (FWHM) & \(\leq\)70 eV (at 1 keV) \\ & \(\leq\)150 eV (at 6 keV) \\ \hline Orbit & circular low-Earth orbit \\ & \(i<8^{\circ}\), 610–680 km \\ \hline Prime mission lifetime & 5 years \\ \hline \hline **Focal Plane Assembly characteristics** & \\ \hline Frame rate & \(\geq\) 5 fps (goal 20 fps) \\ \hline Serial readout rate & \(\geq\) 2 MHz \\ \hline Pixel size & 24 \(\mu\)m (0.55\({}^{\prime\prime}\)) \\ \hline Readout noise & \(\leq\) 3 e- RMS \\ \hline Focal plane temperature & \(-90\pm 0.1^{\circ}\)C \\ \hline \hline \end{tabular} \end{table} Table 1: AXIS baseline parameters relevant to the camera. ## 2 AXIS Camera Design The high-speed AXIS X-ray camera incorporates a focal plane array of fast-readout charge-coupled devices (CCDs) designed and fabricated by MIT Lincoln Laboratory (MIT/LL) and building on a long line of successful space instruments spanning the last three decades. Each CCD is coupled with an application-specific integrated circuit (ASIC) specifically designed by Stanford University to provide low-noise and low-power amplification of the CCD analog signal. The front-end electronics incorporate modern digital processing to further reduce noise. The back-end electronics under development at Penn State implement a state-of-the-art FPGA-based Event Recognition Processor (ERP) to greatly reduce the telemetry stream, and they also include a Transient Alert Module (TAM) to detect changes in flux among sources in the field of view and rapidly disseminate transient alerts to the community. The detector system is housed in a reliable, high-heritage camera structure that builds on lessons learned from previous missions. ### Detector system At the heart of the camera are four MIT/LL CCD detectors arranged in a 2\(\times\)2 array to cover the 24\({}^{\prime}\) AXIS field of view.
Each frame-store CCD is back-illuminated for enhanced soft X-ray sensitivity, 100 \(\mu\)m thick for hard X-ray sensitivity, and has 24-\(\mu\)m (0.55\({}^{\prime\prime}\)) pixels to sample the sharp AXIS PSF. The AXIS CCID-100 detector builds on heritage devices such as the CCID-41 back-illuminated device that flew as XIS1 on Suzaku, sharing several design features including pixel size and charge injection implementation. Two key technical advancements allow faster operation without a loss of performance or increase in power consumption. First, the triple layer of polysilicon used for the clocking gate structures in the CCID-41 and similar devices has been replaced with a single polysilicon layer. This allows the gates to be located very near each other, in turn requiring much lower voltage swings during charge transfer and thus much lower power consumption per transfer. Second, the single-stage on-chip MOSFET output amplifier has been updated to a two-stage pJFET amplifier, producing similar noise levels at a \(\sim\) 10 times faster readout rate. Each CCD has eight outputs to increase the data rate from the 1440\(\times\)1440 pixel imaging area. The CCD backside is passivated with a molecular beam epitaxy (MBE) process that deposits a thin 5-10 nm layer of heavily doped silicon. Performance results obtained by our group at the MIT Kavli Institute (MKI) from prototype CCDs with these design features have been reported over the past several years [5, 6, 7, 8, 9, 10, 11, 12, 13], and updates are presented in Section 3.1. Traditional off-chip amplification done with discrete components is not suitable for our purposes due to the amount of power required to reduce the parasitic capacitance and the real estate these components would occupy. Our group at Stanford University has developed an ASIC called the Multi-Channel Readout Chip (MCRC) specifically for use with these MIT/LL CCDs. The MCRC is designed in a 350-nm technology node featuring 8 channels. Each channel has two selectable gain settings, an input-referred noise of 1.63 e\({}^{-}\) RMS, an input dynamic range of \(\pm\)320 mV, channel-to-channel crosstalk less than \(-\)75 dBc, a power consumption of roughly 31 mW/channel, and a bandwidth of approximately 50 MHz, translating to an effective rise time of around 5 ns. Such a response can comfortably support readout speeds for large CCD pixel matrices in excess of 5 Mpixel/s/channel. The ASIC also features 8 integrated current sources to bias the CCD outputs. The heart of the MCRC is a fully differential amplifier with a switched capacitive feedback to minimize added noise. Along with being optimized for low noise, it also provides two gain settings of 6 and 12 V/V. The amplifier is configured such that it translates the input single-ended CCD signal to a fully differential output. The signal is then buffered to the outside of the ASIC by a fully differential unity-gain amplifier designed to drive a transmission line to support digital waveform sampling with the commercial ADCs that we deploy in the Front End Electronics. The MCRC is currently deployed and functioning in test systems at MIT and Stanford, as described in Section 3.2. Each CCD and ASIC pair is mounted on the same detector package, shown in Figure 1, and operates as a unit. For eight CCD outputs and ASIC channels per detector running at 2 MHz, and a parallel transfer speed of 1 MHz, the current best estimate for the CCID-100 frame rate is 7 frames per second (fps), meeting the AXIS requirement of 5 fps. This configuration results in a 0.5% out-of-time fraction, and so the frame rate could be increased without requiring faster parallel transfer.
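This estimate can be checked with simple arithmetic, assuming frame-store readout in which each row is shifted in parallel and the row segment under each output is then read serially; real sequencing overheads reduce the result toward the quoted 7 fps.

```python
# Back-of-the-envelope check of the quoted frame rate, assuming each row is
# shifted in parallel at 1 MHz and then 1440/8 = 180 pixels per output are
# read serially at 2 MHz. Sequencing overheads are neglected.
rows, cols, outputs = 1440, 1440, 8
serial_rate, parallel_rate = 2e6, 1e6            # pixel/s and row/s

t_row = 1.0 / parallel_rate + (cols / outputs) / serial_rate
t_frame = rows * t_row                           # ~0.13 s to read the frame store
print(f"{1.0 / t_frame:.1f} fps")                # ~7.6 fps, consistent with ~7 fps
```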
Increasing the frame rate is possible through design changes, such as increasing the number of outputs, or through operational changes, such as increasing the output rate to 5 MHz. This latter change may come at the cost of increased noise and reduced soft X-ray response, and so could be implemented as a choice based on the science goal. Similarly, time resolution for point-like sources could be improved to the sub-ms regime by reading out only a small sub-array of the aimpoint detector or even employing a continuous-clocking mode that eliminates all spatial information along CCD columns. We are confident that we can meet the AXIS goal of 20 fps with some combination of design and operational considerations, while retaining good spectral performance to allow ground-breaking science. Because of this, all electronics downstream of the detector system have been designed to accommodate this faster pixel rate. ### Mechanical design The focal plane is housed in a vacuum enclosure that incorporates high-heritage components and lessons learned from previous missions. This housing, shown in Figure 2 with its various components, is kept under vacuum on the ground to reduce the risk of molecular contamination build-up. The detector array is thermally isolated from the housing with standoffs, and connected to a cold finger that cools the detectors and ASICs to \(-90^{\circ}\)C, eliminating dark current and mitigating the effects of particle-induced radiation damage. This temperature is maintained to \(\pm 0.1^{\circ}\)C using trim heaters. The vacuum is maintained with a commandable vent valve and a one-time-open door; in orbit the vent valve is permanently opened to the vacuum of space. A warm contamination blocking filter serves the dual purpose of blocking optical and UV light and, along with proper contamination control at all stages of assembly and integration, preventing the build-up of molecular contamination along the light path. Similar contamination compromised the soft X-ray performance of instruments aboard Chandra and Suzaku. A bonnet and baffle provide structural support and stray-light blocking, and radioactive \({}^{55}\)Fe sources mounted on the closed door and illuminating unused corners of the CCDs provide continuous, well-characterized reference photons of \(\sim\) 6 keV for detector monitoring and calibration. ### Front End Electronics The CCD+ASIC detectors are controlled and sampled by the Front End Electronics (FEE), which share heritage with several highly successful past missions, including ASCA, Chandra, Suzaku, TESS, and eROSITA. The interfaces between the detectors and the FEE, and further downstream between the FEE and Back End Electronics (BEE), are shown in Figure 3. The FEE contains five 6U printed circuit boards connected to a backplane within an electronics box enclosure that provides the connector panel interface. Two of these boards, the Power and Clocking Boards, provide fully differential bias and sequenced clock voltages to the CCDs as well as bias voltages to the readout ASICs. One board operates a pair of detectors independently from the other board. The ADC Boards receive the detector video signal from the ASIC outputs and amplify and digitize the pixel voltages, again with one board responsible for a pair of detectors.
These FEE boards are highly complementary to the CCD+ASIC detector system, employing Microchip PolarFire FPGAs and ST Microelectronics ADCs to perform state-of-the-art digital video waveform capture at 40 Msamples/s, delivering low noise at high speed. The fifth board, the Telemetry and Heater Control Board, controls all thermal and mechanical components in the camera assembly, including the trim heaters that maintain the focal plane temperature, the heaters keeping the contamination blocking filter warm, and single actuation of the camera vent valve and door. This board also receives commands and relays HK data along a dedicated LVDS line to the BEE. Digitized pixel data is transferred from the ADC Boards to the BEE along four dedicated Ethernet lines that utilize a Microchip gigabit Ethernet PHY layer. To minimize noise and impedance loss along the analog signal line, the FEE is located within 1 m of the camera.
Figure 1: AXIS detector array with components labeled. The array consists of four fast-readout frame-store CCDs arranged in a 2\(\times\)2 pattern, each with a dedicated ASIC readout chip co-mounted on the interposer. An aluminum cover shields the frame store regions from focused celestial X-rays during readout, and it provides the ASICs with additional particle damage protection. The entire array is maintained at \(-90^{\circ}\)C using a cold finger connection and active trim heaters.
Figure 2: The AXIS focal plane camera assembly, rendered as integrated (left) and in an exploded view (right). Various features are labeled and described further in the text.
Figure 3: Block diagram of the FPA and BEE showing components and interfaces between the sub-systems. Within the FEE, the (A) and (B) indicate boards dedicated to a pair of detectors. Within the BEE, the (A) and (B) Event Recognition Processor (ERP) boards are fully redundant thanks to cross-strapping; each ERP board will nominally read pixel data from a single FEE ADC board (a pair of detectors), but a single ERP board can receive and process data from the entire focal plane, even at the goal frame rate of 20 fps.
### Back End Electronics The BEE receives an image of digitized pixel values for each CCD frame from the FEE and processes this into a list of candidate X-ray events. This processing is performed by a pair of FPGA-based Event Recognition Processor (ERP) [21] boards, which first correct the images for pixel-to-pixel and time-dependent bias offsets and mask bad pixels using maps held in memory. Local maxima are identified as candidate events along with the neighboring 3\(\times\)3 pixels that exceed tunable noise thresholds. Information about the location, summed pulse height, and pixel pattern or "grade" is extracted for each candidate event; the candidates are then filtered based on a configurable set of rules and packaged for telemetry. Each ERP board carries half the load simultaneously; however, all CCD lines are strapped to both boards, thus enabling intrinsic redundancy, with a single ERP board capable of processing data from all detectors, even at the goal frame rate of 20 fps. A copy of the event stream is sent to the Transient Analysis Module (TAM) within the BEE to search for transient sources and enable alerts to be issued in real time. The TAM software is based on that currently flying on the Swift observatory and also on algorithms initially developed for the Athena Science Products Module [22].
It compares changes in detected sources with an on-board source catalog and with its own recent history, accounting for pointing uncertainties and effects such as dithering. Transient alerts meeting a configurable set of criteria are transferred to the ground and disseminated to the community within 10 minutes [2]. ## 3 Current Performance of the Detector System ### CCD performance #### 3.1.1 Summary of CCD types and MKI test facilities Our groups at MIT/LL and MKI have continued development of advanced, fast-readout CCD detectors for a strategic mission such as AXIS for the past several years. Here we provide an update on the performance of various CCD types, building on results presented in previous work [5, 6, 7, 8, 9, 10, 11, 12, 13]. These CCDs are shown in their test packages in Figure 4, and their features are summarized in Table 2 along with the current AXIS CCD design. While the CCID-93 and CCID-94 are useful devices for characterizing certain features of the AXIS CCID-100, the CCID-89 is identical in most ways to that expected future device, and can be considered the "prototype" AXIS detector. The test facilities at MKI include dedicated vacuum chambers for each of the CCD types, each equipped with a liquid-nitrogen cryostat for thermal control to temperatures below \(-100^{\circ}\)C. Each setup includes an Archon1 controller that provides CCD bias and clock voltages and performs digital sampling on the CCD analog video waveform, incorporating an interface board that connects to a custom-made vacuum feed-through board and detector board for each CCD. The lab detector package and boards for each CCD were designed by MIT/LL. Each chamber allows easy insertion of a radioactive \({}^{55}\)Fe source for reliable full-frame illumination with Mn K\(\alpha\) (5.9 keV) and K\(\beta\) (6.4 keV) X-rays. The CCID-89 and -94 setups can incorporate a \({}^{210}\)Po source with Teflon target that produces fluorescence lines of C K (0.28 keV) and F K (0.68 keV) across the full 50\(\times\)25 mm imaging area. For the testing presented here, the CCID-93 chamber is mounted on an In-Focus Monochromator (IFM) [23] that uses grazing incidence reflection gratings to produce clean monochromatic lines at energies below 2 keV, with typical spectral resolving power \(\lambda/\Delta\lambda=E/\Delta E\sim\) 60-80, far higher than that of the CCD itself. We are currently adapting the IFM setup to allow testing with the CCID-89 and -94 chambers. Footnote 1: [http://www.sta-inc.net/archon](http://www.sta-inc.net/archon) All test data is acquired as a set of full pixel frames that are processed in a similar way to flight data on Chandra and to the expected BEE algorithm (see Section 2.4). Briefly, each image is corrected for pixel-to-pixel and time-dependent bias levels. Persistent ("hot") pixels are masked, and local maxima above a defined event threshold (5-8 times the RMS noise) are identified. Pixel islands around these maxima are searched for pixels above a second "split" threshold (3-4 times the RMS noise), and all such pixels are summed to produce the event pulse height, a measurement of the incident photon energy. This summed pulse height is denoted "allaboveph" in several figures in this work. Pixel islands are typically 7\(\times\)7 for the CCID-93 with 8-\(\mu\)m pixels, and 3\(\times\)3 for the other devices with 24-\(\mu\)m pixels. Each event is assigned a "multiplicity", akin to the "grade" on ASCA, Chandra, and Suzaku, but here simply encoding the number of pixels "n#" that are above the split threshold.
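A minimal sketch of this event-finding step is shown below. The thresholds and the 3\(\times\)3 island (appropriate for the 24-\(\mu\)m devices) are illustrative, and the real pipeline additionally masks hot pixels and handles frame edges.

```python
# Minimal sketch of event finding in a bias-subtracted frame: local maxima
# above the event threshold, 3x3 islands summed above the split threshold.
import numpy as np

def find_events(frame: np.ndarray, rms: float,
                event_sigma: float = 6.0, split_sigma: float = 3.5):
    events = []
    ny, nx = frame.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            island = frame[y - 1:y + 2, x - 1:x + 2]   # 3x3 pixel island
            center = frame[y, x]
            if center < event_sigma * rms or center < island.max():
                continue                               # not a local maximum
            above = island > split_sigma * rms
            events.append({
                "x": x, "y": y,
                "ph": float(island[above].sum()),      # summed pulse height
                "multiplicity": int(above.sum()),      # pixels above split
            })
    return events
```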
Where appropriate, we include detector simulations in an effort to understand the detector physics and its effect on performance. These simulations use Poisson CCD [24], a software package that models the 3-D electric field in a specified silicon detector and performs a Monte Carlo simulation of the drift and diffusion of photoelectrons introduced into the detector volume. The simulation setup is similar to what we have presented in past work [10, 4, 5]. #### 3.1.2 CCD X-ray performance The ability of a detector to accurately report the energy of an incoming photon is crucial to X-ray imaging spectroscopy. This is one of the key challenges for AXIS due to its sensitivity and broad energy band, as we have explained in Section 1 and elsewhere [10, 5]. In addition to Fano noise, a variety of effects can degrade the detector response to photons of different energies, including signal lost below the noise threshold; signal lost to charge transfer inefficiency (CTI) or other charge traps [12]; signal lost to defects at the backside entrance window; and signal redistributed to other energies due to fluorescence ionization within the detector. Minimizing readout noise is a key challenge for the AXIS detector in order to achieve excellent spectral performance, since any pixel-to-pixel stochastic noise such as that introduced by the readout chain produces an irreducible broadening of the response profile. Our latest testing of a back-illuminated CCID-89 (identified as W10C6) shows that the pJFET outputs perform remarkably well at 2 MHz at a variety of temperatures (see Figure 5). This readout rate would provide a frame rate of 7 fps for the AXIS CCID-100, meeting our requirement. At the AXIS operating temperature of \(-90^{\circ}\)C, the noise is less than 2.5 e- RMS for six of the eight outputs, meeting our requirement. These measurements were taken with modest tuning of clock settings to achieve the best noise for a single output. With additional tuning, we anticipate meeting the noise requirement for all nodes. Further noise measurements from our test CCDs are presented in Section 3.1.3 and elsewhere in these proceedings [25]. The measured spectral response bears out the excellent noise results. In Figure 5, we also show a spectrum of \({}^{55}\)Fe from a single representative segment of the same CCID-89. Mn K\(\alpha\) and K\(\beta\) are seen near 6 keV, along with escape and fluorescence lines at lower energies. The Mn K\(\alpha\) line profile is Gaussian over more than two orders of magnitude, with a width of 137 eV FWHM including events of all pixel multiplicities. This performance exceeds the AXIS requirement of 150 eV FWHM. Similar results are seen for other segments on this device, and for other well-performing segments of our other test detectors.
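For reference, the expected line width can be estimated from Fano noise plus readout noise with the standard CCD expression; the values of \(w\) and the Fano factor below are textbook numbers for silicon, not fits to our data.

```python
# Standard estimate of CCD spectral resolution: Fano noise plus read noise.
# w = 3.65 eV per electron-hole pair in Si; Fano factor F ~ 0.115 (assumed).
# n_pix is the number of pixels summed, each contributing read noise.
import numpy as np

def fwhm_eV(E_eV: float, read_noise_e: float, n_pix: int = 1,
            w: float = 3.65, F: float = 0.115) -> float:
    sigma_e = np.sqrt(F * E_eV / w + n_pix * read_noise_e**2)
    return 2.355 * w * sigma_e

# e.g. fwhm_eV(5898, 2.5, n_pix=1) ~ 119 eV; summing the charge over several
# pixels pushes this toward the measured 137 eV at Mn K-alpha.
```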
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \hline
**Feature** & **CCID-93** & **CCID-94** & **CCID-89** & **CCID-100 (AXIS)** \\ \hline
Format & Frame-transfer, 512\(\times\)512 pixel imaging array & Frame-transfer, 2048\(\times\)1024 pixel imaging array & Frame-transfer, 2048\(\times\)1024 pixel imaging array & Frame-transfer, 1440\(\times\)1440 pixel imaging array \\ \hline
Image area pixel size & 8\(\times\)8 \(\mu\)m & 24\(\times\)24 \(\mu\)m & 24\(\times\)24 \(\mu\)m & 24\(\times\)24 \(\mu\)m \\ \hline
Output ports & 1 pJFET, 1 SiSeRO (independent) & 8 pJFET & 8 pJFET & 8–16 pJFET \\ \hline
Transfer gate design & Single layer polysilicon & Triple layer polysilicon & Single layer polysilicon & Single layer polysilicon \\ \hline
Additional features & Regions with 0.5, 1, 2 \(\mu\)m and no trough; charge injection & Trough, charge injection & Trough, charge injection & Trough, charge injection \\ \hline
BI detector thickness & 50 \(\mu\)m & 50 \(\mu\)m & 50 \(\mu\)m & 100 \(\mu\)m \\ \hline
Back surface & MIT/LL MBE 5–10 nm; JPL delta doping & MIT/LL MBE 10 nm & MIT/LL MBE 5–10 nm & MIT/LL MBE 5–10 nm \\ \hline
Typical serial rate & 2–5 MHz & 0.5 MHz & 2–5 MHz & \(\geq\)2 MHz \\ \hline
Typical parallel rate & 0.1 MHz & 0.1 MHz & 0.2 MHz & \(\geq\)0.5 MHz \\ \hline
Full frame read time & 0.15–0.56 s & 1 s & 0.15–0.56 s & \(\leq\)0.2 s \\ \hline \hline
\end{tabular}
\end{table} Table 2: Features of MIT/LL CCDs under testing for AXIS at MKI and Stanford, compared to the AXIS CCD design.

Figure 4: Photographs of packaged MIT/LL CCDs under testing for AXIS development and performance demonstration. Photos are not at the same scale; the schematic in the upper left panel shows the actual relative sizes and the number of output nodes as orange segments in the frame store region. While the CCID-94 and -89 are the same size, they are shown in different orientations. The CCID-89 shown is a front-illuminated device, so the metal layer delineating the frame store regions is visible; this layer cannot be seen on the back-illuminated CCID-94 shown here. The CCID-100 is the AXIS design CCD and has not been fabricated, so there is no photo.

The response below 1 keV is key for the study of high-redshift objects and low-temperature X-ray-emitting plasma. In a back-illuminated CCD, the soft response depends on the quality of the backside passivation, since soft X-rays interact close to the entrance window. In addition, since the interaction site is furthest from the collection gates, charge diffusion is maximized, and the signal (which is smaller to begin with) is more likely to spread across multiple pixels and be lost below the noise threshold. Understanding the importance of each effect is valuable. We have characterized the soft X-ray response in back-illuminated versions of all three CCD models. Due to its smaller 8-\(\mu\)m pixels, the CCID-93 is invaluable for studying the effects of charge diffusion and separating them from other sources of signal loss, such as entrance-window passivation quality. We illuminated this detector with O K 0.53 keV photons using the IFM, and the resulting spectra are shown in Figure 6 (left panel). The readout speed of 0.5 MHz was selected to achieve the lowest noise (1.7 e- RMS) in order to identify charge-loss effects unrelated to noise. Spectra from events with different pixel multiplicities are plotted separately; there is a clear energy shift between multiplicities, although the widths of the dominant multiplicities are all less than 70 eV FWHM (see Table 3).
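The interplay just described between charge diffusion, split events, and sub-threshold losses can be illustrated with a toy Monte Carlo: a Gaussian charge cloud is integrated over a pixel grid, the split threshold is applied, and the recovered pulse height is compared to the deposited charge. The cloud size, pixel pitch, and thresholds below are illustrative stand-ins, not fitted detector parameters (full simulations in this work use Poisson CCD).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def collect(energy_ev, sigma_um, pitch_um, noise_e, split_sigma=3.5, n=5):
    """Integrate a Gaussian charge cloud over an n x n pixel island and
    apply a split threshold; returns (summed electrons, multiplicity)."""
    q0 = energy_ev / 3.71                          # generated electrons
    x0, y0 = rng.uniform(0, pitch_um, size=2)      # impact point in a pixel
    edges = (np.arange(n + 1) - n // 2) * pitch_um # pixel boundaries
    fx = np.diff(norm.cdf(edges, loc=x0, scale=sigma_um))
    fy = np.diff(norm.cdf(edges, loc=y0, scale=sigma_um))
    pix = q0 * np.outer(fy, fx) + rng.normal(0, noise_e, (n, n))
    above = pix > split_sigma * noise_e            # split-threshold cut
    return pix[above].sum(), int(above.sum())

# 0.53 keV photons: a cloud that is wide relative to 8 um pixels spreads
# charge into many pixels, and pieces lost below threshold shift the line.
sums = [collect(530, sigma_um=4.0, pitch_um=8.0, noise_e=1.7)[0]
        for _ in range(2000)]
print(np.mean(sums) * 3.71, "eV recovered on average (vs 530 eV deposited)")
```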
In principle, a multiplicity-dependent gain correction[26] can significantly improve the overall FWHM. Non-Gaussian low-energy tails indicate some source of charge loss. Spectra from simulations of the same detector and lab setup are shown in the right panel of Figure 6, along with a selection of spectra from the left panel. The simulations include charge diffusion and the effects of enforcing a noise threshold, but do not reproduce the low-energy tail, indicating a different source of charge loss. The multiplicity distribution also differs between the lab data and the simulations. The source of the soft tail remains under investigation; however, it is clear that even with small pixels, the MBE backside processing allows us to achieve the soft spectral response required for AXIS. The CCID-94 is also useful for understanding the soft response. We illuminated our test device simultaneously with \({}^{210}\)Po with a Teflon target and \({}^{55}\)Fe to produce a broad-band spectrum from a single representative segment (see Figure 7). The primary fluorescence lines of F K, Mn K\(\alpha\), and Mn K\(\beta\) can be seen along with several other fluorescence and escape features. C K is not visible due to a low-energy noise peak that is unrelated to the detector or source. After fitting and subtracting a power-law model for this noise continuum, we measure the spectral resolution at F K (0.68 keV) to be 66 eV FWHM for all event multiplicities for this segment, with little variation from segment to segment. There is a non-Gaussian tail, which remains under investigation but could result from backside surface losses. This tail contains less than 10% of the counts for all segments, and its segment-to-segment uniformity indicates it can be accounted for using a standard redistribution matrix. The response at \(\sim 6\) keV is excellent for all nodes, averaging 135 eV FWHM for all event multiplicities. Results are summarized in Table 3.

Figure 5: (left) Readout noise as a function of temperature for all pJFET nodes on the back-illuminated CCID-89 (W10C6). At this readout rate of 2 MHz, the AXIS baseline CCID-100 will reach a 7 fps frame rate. Six of the eight nodes have lower noise than the required 3 e- RMS at the AXIS operating temperature of \(-90^{\circ}\)C. (right) Spectrum from a representative node of the same CCID-89 produced under illumination by \({}^{55}\)Fe. The bright lines of Mn K\(\alpha\) (5.9 keV) and K\(\beta\) (6.4 keV) are easily resolved and show that we meet the AXIS requirements for spectral resolution at 6 keV (\(\leq\)150 eV FWHM), even combining all events of different pixel multiplicities.

\begin{table}
\begin{tabular}{|l|l|c|c|c|} \hline \hline
**Typical result** & & **CCID-93** & **CCID-94** & **CCID-89** \\ \hline
Detector temperature & & \(-128^{\circ}\)C & \(-89^{\circ}\)C & \(-87^{\circ}\)C \\ \hline
Serial readout rate & & 0.5 MHz & 0.5 MHz & 2 MHz \\ \hline
Readout noise (RMS) & & 1.7 e- & 2.1–3.1 e- & 2.2–3.4 e- \\ \hline
\multicolumn{5}{|l|}{Spectral resolution (Gaussian FWHM)} \\ \hline
C K & all & \(\cdots\) & \(\cdots\) & 74 eV (100\%) \\
0.28 keV & n1 & \(\cdots\) & \(\cdots\) & 71 eV (48\%) \\
 & n2 & \(\cdots\) & \(\cdots\) & 75 eV (45\%) \\
 & n3 & \(\cdots\) & \(\cdots\) & 90 eV (6\%) \\ \hline
O K & all & 83 eV (100\%) & \(\cdots\) & \(\cdots\) \\
0.53 keV & n2 & 66 eV (4\%) & \(\cdots\) & \(\cdots\) \\
 & n3 & 63 eV (25\%) & \(\cdots\) & \(\cdots\) \\
 & n4 & 63 eV (53\%) & \(\cdots\) & \(\cdots\) \\
 & n5 & 65 eV (16\%) & \(\cdots\) & \(\cdots\) \\
 & n6 & 67 eV (2\%) & \(\cdots\) & \(\cdots\) \\ \hline
F K & all & \(\cdots\) & 66 eV (100\%) & 76 eV (100\%) \\
0.68 keV & n1 & \(\cdots\) & 63 eV (36\%) & 70 eV (21\%) \\
 & n2 & \(\cdots\) & 66 eV (49\%) & 75 eV (52\%) \\
 & n3 & \(\cdots\) & 80 eV (9\%) & 80 eV (19\%) \\
 & n4 & \(\cdots\) & 74 eV (6\%) & 81 eV (8\%) \\ \hline
Mn K & all & 142 eV (100\%) & 139 eV (100\%) & 137 eV (100\%) \\
5.9 keV & n1 & 123 eV (5\%) & 129 eV (18\%) & 129 eV (6\%) \\
 & n2 & \(\cdots\) & 136 eV (44\%) & \(\cdots\) \\
 & n3 & \(\cdots\) & 140 eV (18\%) & \(\cdots\) \\
 & n4 & \(\cdots\) & 143 eV (20\%) & \(\cdots\) \\ \hline \hline
\end{tabular}
\end{table} Table 3: Results from CCD testing at MKI.

Figure 6: (left) Spectrum of the O K fluorescence line from the CCID-93 illuminated by the IFM. Spectra from events with different pixel multiplicities are plotted separately. “allaboveph” on the X axis indicates the event energies were estimated by summing the pulse heights of all pixels above the noise threshold. Details of the processing and further interpretation can be found in the text. (right) Simulated spectra for the same detector and lab setup (dashed lines), plotted with a selection of spectra from the left panel (solid lines). The simulations include charge diffusion and the effects of enforcing a noise threshold, but do not reproduce the low-energy tail, indicating a different source of charge loss. The multiplicity distribution is also different.

Figure 7: (left panel) Spectrum of a single CCID-94 segment (node ‘C’) simultaneously illuminated with \({}^{210}\)Po with a Teflon target and \({}^{55}\)Fe. The primary fluorescence lines of F K, Mn K\(\alpha\), and Mn K\(\beta\) can be seen along with several other fluorescence and escape features. (right two panels) Zoom-in of the F K and Mn K peaks from the left panel, now also showing spectra from the other seven nodes of this chip as orange curves. The F K line has been corrected for the noise continuum by subtracting a best-fit power law, shown in dashed orange in the left panel. Gaussian fits (dashed blue lines) indicate that the spectral FWHM meets the AXIS requirement at both energies. While the F K peak shows a non-Gaussian tail, it contains fewer than 10% of the line counts.

The CCID-89 operating at 2 MHz is the closest approximation of the future AXIS CCD, featuring a similar structure and including all effects of charge collection and fast charge transfer that can alter the soft X-ray response. A spectrum from \({}^{210}\)Po with a Teflon target illuminating a representative segment of this chip is shown in Figure 8. We clearly detect both C K (0.28 keV) and F K (0.68 keV) from the Teflon and easily separate them from the noise peak. The spectral FWHM, shown in Table 3, approaches the AXIS requirement of 70 eV FWHM below 1 keV, although it is wider than simulations of charge diffusion and noise threshold effects would predict (Figure 8, right panel).

Figure 8: (left) Spectrum of the C K and F K fluorescence lines from the CCID-89 illuminated by a \({}^{210}\)Po source with a Teflon target. This spectrum is from a representative CCD segment, and spectra from events with different pixel multiplicities are plotted separately. (right) Spectral lines produced in a simulation of the same detector and lab setup show considerably narrower FWHM and somewhat different multiplicity distributions.
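Before comparing the three devices, note that the multiplicity-dependent gain correction mentioned above is a simple post-processing step: each multiplicity class is rescaled so that its line peak lands at the known line energy. The sketch below is an illustrative implementation of that idea, not the pipeline used in this work.

```python
import numpy as np

def multiplicity_gain_correction(energies, multiplicities, line_energy):
    """Rescale event energies so each multiplicity class peaks at the
    known calibration line energy (a per-class gain correction)."""
    energies = np.asarray(energies, dtype=float)
    multiplicities = np.asarray(multiplicities)
    corrected = energies.copy()
    for m in np.unique(multiplicities):
        sel = multiplicities == m
        corrected[sel] *= line_energy / np.median(energies[sel])
    return corrected

# Usage: align n1..n4 O K events on 530 eV before fitting the overall FWHM.
```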
All three CCD flavors exhibit low noise and excellent X-ray response across the AXIS band and, for the multi-output models, across multiple segments. While the soft response formally meets AXIS requirements, each detector shows some disagreement with simulations, generally producing broader FWHM with a tail toward low energies. This is indicative of a charge-loss mechanism not included in the simulations, such as CTI or (more likely) charge losses at the backside entrance window. We are planning to test the CCID-89 and CCID-94 with the IFM soon; this will allow illumination with cleaner monochromatic lines at additional soft energies, such as C K (0.28 keV), O K (0.53 keV), and Mg K (1.25 keV). It should also eliminate the noise source present in the current CCID-93 setup and allow us to probe to lower energies. These results and updated simulations including backside passivation effects will be presented in future work.

#### 3.1.3 CCID-93 sub-channel implant variant performance

Our group at Stanford University has tested one of the MIT/LL CCID-93 detector variants, in which the output stage follows the same two amplifier architectures but adds a sub-channel implant beneath the pJFET amplifier gate. In this setup, the detector was read out by a module consisting of two amplification stages -- a low-noise, low-input-capacitance, large-bandwidth amplifier (ADA4817) followed by a differential ADC driver (AD8138). The bandwidth of the readout is around 50 MHz with noise around 10\(-\)15 nV/\(\sqrt{\rm Hz}\) (the CCD output stage thermal noise is around 20 nV/\(\sqrt{\rm Hz}\)). We use an Archon controller to control and run the CCD clocks and bias voltages, and to digitize the CCD analog output. Figure 9 (left) shows the experimental setup (also known as the "Tiny Box" chamber) used to characterize the detectors at Stanford University. The CCD temperature was \(-\)25\({}^{\circ}\)C for this experiment. The detector has been characterized at readout speeds of 2, 3, 4, and 5 Mpix/s. Figure 9 (right panel) shows the CCD digital waveform obtained from the Archon controller for the lowest and highest speeds. As shown in Table 4, the readout noise ranges from 2.3 e- RMS at 2 MHz to 4.1 e- RMS at 5 MHz. For the measured output-stage gain of 35 \(\mu\)V per electron and a transistor transconductance (g\({}_{m}\)) of 20 \(\mu\)S, the measured read noise values are very close to the theoretical thermal noise limit of the transistors.
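The digitized waveform is reduced to a charge estimate by correlated double sampling (CDS), as indicated by the baseline and signal sample windows in Figure 9: the averaged baseline samples are subtracted from the averaged signal samples for each pixel. The sketch below shows this arithmetic; the sample-window positions and conversion factors are illustrative placeholders, not the actual Archon timing configuration.

```python
import numpy as np

def cds(waveform, baseline_slice, signal_slice, gain_uv_per_e=35.0,
        adu_to_uv=1.0):
    """Correlated double sampling on one pixel's digitized waveform.

    Averages the baseline and signal windows and converts their difference
    to electrons using the output-stage gain (the output drops when charge
    arrives, so baseline minus signal is positive).
    """
    baseline = np.mean(waveform[baseline_slice])
    signal = np.mean(waveform[signal_slice])
    return (baseline - signal) * adu_to_uv / gain_uv_per_e

# Example: 40 samples per pixel, baseline before the charge dump and signal
# after it (window positions are assumptions for illustration).
pixel = np.concatenate([np.full(20, 1000.0), np.full(20, 1000.0 - 35.0)])
print(cds(pixel, slice(2, 18), slice(22, 38)))   # -> ~1 electron
```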
Spectral performance of the device was evaluated using an \({}^{55}\)Fe radioisotope. An X-ray spectrum showing the Mn K\(\alpha\) (5.9 keV) and Mn K\(\beta\) (6.4 keV) lines from the radioactive source is shown in Figure 10. The 5.9 keV line width is 127 eV FWHM for 2 MHz and 142 eV for 5 MHz readout. The measured read noise and FWHM for 2–5 MHz readout speeds are summarized in Table 4.

Figure 9: (left) The Stanford University experimental setup with a CCID-93 device mounted inside the “Tiny Box” vacuum chamber. A beryllium window mounted on the top flange serves as the X-ray entrance window. (right) CCID-93 video waveform obtained from the Archon controller. The red and blue lines are obtained for 2 and 5 Mpix/s readout speeds, respectively. The baseline and signal samples enable correlated double sampling (CDS) for estimation of charge.

\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline
**Readout Speed** & **Read Noise (e- RMS)** & **FWHM (eV) at 5.9 keV** \\ \hline
2 MHz & 2.3 & 127 eV \\ \hline
3 MHz & 3.0 & 128 eV \\ \hline
4 MHz & 3.4 & 133 eV \\ \hline
5 MHz & 4.1 & 142 eV \\ \hline \hline
\end{tabular}
\end{table} Table 4: Summary of results from the sub-channel implant variant of the CCID-93 detector at \(-25^{\circ}\)C.

Figure 10: Spectrum showing the Mn K\(\alpha\) (5.9 keV) and K\(\beta\) (6.4 keV) lines from a \({}^{55}\)Fe radioactive source for single-pixel (n0) events at 2 MHz (left) and 5 MHz (right) readout speeds.

### 3.2 MCRC ASIC performance

The readout ASIC development and testing have been underway for several years at Stanford University, with the current MCRC v1.0 showing excellent performance [14, 15, 16, 17, 19], as summarized in Table 5. The integrated readout chip outperforms the established discrete electronics in all parameters, at a fraction of the power consumption. Recently, the group assembled the first combination of the MCRC with an MIT/LL prototype CCD package at MKI, and tested it for functionality and performance across all eight ASIC readout channels. The anticipated speed, power consumption, and baseline noise were all achieved, with the MCRC clearly outperforming the previous discrete electronics. In parallel, the group is building a second setup at Stanford University utilizing the smaller (and more economical) single-channel MIT/LL CCID-93 CCDs, to further characterize the combined ASIC+CCD performance and develop procedures for optimal operation. The setup will also be used to drive further development of the readout chip. Preliminary results at room temperature with this setup show nearly identical detector video waveforms for the discrete and ASIC readout modes. The two ASIC-enabled setups are shown in Figure 11. We have recently tested a set of MCRC chips at the Brookhaven National Laboratory radiation test facility, exposing them to gamma rays of approximately 1 MeV (from a \({}^{60}\)Co source) with a total ionizing dose of roughly 50 krad. All of the chips exhibited the anticipated deviations of their operating points above 25 krad, but no chip failed or showed any other measurable degradation in the monitored response. The MCRC has thus been proven to be a successful, robust all-in-one readout solution. We will soon be ready to pair a large, representative CCD (CCID-89) with the MCRC ASIC and verify the full X-ray performance.

## 4 Summary

The high throughput and excellent spatial resolution of AXIS will enable breakthrough science in the next decade. However, these exceptional performance characteristics also present technical challenges to design and fabricate a fully complementary X-ray camera.
We exploit recent technology advances from MIT/LL that allow us to use well-established CCD technology in combination with a newly developed ASIC-based readout chip. These advances provide the speed required while also achieving the spectral performance necessary to realize the full potential of AXIS. The detector electronics combine advanced digital processing techniques with proven flight hardware and firmware design. The advanced detectors are housed in a high-heritage camera that builds on decades of flight experience and the associated lessons learned. Performance testing of prototype CCDs and ASICs produces excellent results, with low noise and good spectral response at the speeds required for AXIS. We have recently mated ASICs to our CCD test packages with excellent preliminary results, and technical demonstration of our advanced detector system continues on schedule to meet the needs of AXIS. \begin{table} \begin{tabular}{|l|l|} \hline \hline Number of channels & 8 channels / chip \\ \hline Input capacitance & 1.5 pF \\ \hline Achievable pixel rate & 5 Mpix/s per channel \\ \hline Voltage gain & 6.2 (low gain mode) \\ & 12.1 (high gain mode) \\ \hline Input dynamic range & 320 mV (equivalent to 30 keV) \\ \hline Input noise density & 6.5 nV/\(\sqrt{\text{Hz}}\) \\ \hline Crosstalk & \(\leq\)\(-75\) dB in passband \\ \hline Power consumption & 35 mW per channel \\ \hline Radiation tolerance & \(\geq\) 25 krad \\ \hline \hline \end{tabular} \end{table} Table 5: Overview table of the MCRC V1.0 readout ASIC performance ###### Acknowledgements. We gratefully acknowledge support from NASA through the Strategic Astrophysics Technology (SAT) program, grants 80NSSC18K0138 and 80NSSC19K0401 to MIT, and from the Kavli Research Infrastructure Fund of the MIT Kavli Institute for Astrophysics and Space Research. Stanford team members acknowledge support from NASA through Astrophysics Research and Analysis (APRA) grants 80NSSC19K0499 and 80NSSC22K1921, and from the Kavli Institute for Particle Astrophysics and Cosmology.
2306.10381
Intermediate geodesic growth in virtually nilpotent groups
We give a criterion on pairs $(G,S)$ - where $G$ is a virtually $s$-step nilpotent group and $S$ is a finite generating set - saying whether the geodesic growth is exponential or strictly sub-exponential. Whenever $s=1,2$, this goes further and we prove the geodesic growth is either exponential or polynomial. For $s\ge 3$ however, intermediate growth is possible. We provide an example of virtually $3$-step nilpotent group for which $\gamma_{\mathrm{geod}}(n) \asymp \exp\!\big(n^{3/5}\cdot \log(n)\big)$. This is the first known example of group with intermediate geodesic growth. Along the way, we prove results on the geometry of virtually nilpotent groups, including asymptotics with error terms for their volume growth.
Corentin Bodart
2023-06-17T15:50:32Z
http://arxiv.org/abs/2306.10381v2
# Intermediate Geodesic Growth in Virtually Nilpotent Groups

###### Abstract

We give a criterion on pairs \((G,S)\) -- where \(G\) is a virtually \(s\)-step nilpotent group and \(S\) is a finite generating set -- saying whether the geodesic growth is exponential or strictly sub-exponential. Whenever \(s=1,2\), this goes further and we prove the geodesic growth is either exponential or polynomial. For \(s\geqslant 3\) however, intermediate growth is possible. We provide an example of a virtually \(3\)-step nilpotent group for which \(\gamma_{\mathrm{geod}}(n)\asymp\exp\bigl(n^{3/5}\cdot\log(n)\bigr)\). This is the first known example of a group with intermediate geodesic growth. Along the way, we prove results on the geometry of virtually nilpotent groups of independent interest, including asymptotics with error terms for their volume growth.

**Keywords:** Geodesic growth, virtually nilpotent groups, Engel group.

A common scheme in Geometric Group Theory is to consider groups as geometric spaces, define some notion of growth, and then classify groups with growth in a given regime by their algebraic properties. The first example is the volume growth: we consider a group \(G\), together with a finite (monoid) generating set \(S\). This defines a norm \[\left\|g\right\|_{S}=\min\{\ell(w)\mid w\in S^{*}\text{ and }\overline{w}=g\}\] on \(G\). The _volume growth function_ of the pair \((G,S)\) is \[\beta_{(G,S)}(n)=\#\bigl\{g\in G\bigm|\|g\|_{S}\leqslant n\bigr\}\,.\] Milnor famously asked two questions about the volume growth of groups:

* Characterize groups \(G\) with polynomial volume growth, that is, \(\beta_{(G,S)}(n)\preceq n^{d}\) for some constant \(d\geqslant 0\).
* Does there exist a group \(G\) with intermediate volume growth? More precisely, does there exist \(G\) such that \(\beta_{(G,S)}(n)\) is neither bounded above by a polynomial, nor bounded below by an exponential function?

On the one hand, Gromov proved that a group has polynomial volume growth if and only if it is virtually nilpotent, introducing a large machinery to make the link with nilpotent Lie groups [10]. On the other hand, Grigorchuk constructed a whole family of groups of intermediate growth, now a source of groups with many intriguing properties [1]. These results constitute an ideal to aim for. In our case, the story starts in the same way: a pair \((G,S)\), and the word metric \(\left\|\,\cdot\,\right\|_{S}\). However, instead of counting elements, we count geodesics. A word \(w\in S^{*}\) is a _geodesic_ if \(\ell(w)=\left\|\overline{w}\right\|_{S}\). The _geodesic growth function_ of \((G,S)\) is then defined as \[\gamma(n)=\gamma_{\mathrm{geod}}^{(G,S)}(n)=\#\big\{w\in S^{n}\;\big|\;w\text{ is geodesic}\big\}\,.\] We should mention that geodesic growth is sensitive to the choice of a generating set. Indeed, Bridson, Burillo, Elder and Sunic proved that all infinite groups \(G\) admit a generating set \(S\) such that \(\gamma_{\mathrm{geod}}^{(G,S)}(n)\) grows exponentially [1, Example 6].1 Therefore, BBES ask the following questions: Footnote 1: This holds if we allow generating _multisets_. Otherwise, the only virtually nilpotent counter-example is \(G=\mathbb{Z}\) (as a corollary of Theorem 1), and any other hypothetical counter-example would be of intermediate volume growth (as exponential volume growth implies exponential geodesic growth).
* Characterize groups \(G\) with polynomial geodesic growth, that is, with \(\gamma_{\mathrm{geod}}^{(G,S)}(n)\preceq n^{d}\) for some constant \(d\geqslant 0\) and for at least one generating set \(S\).
* Does there exist a pair \((G,S)\) with intermediate geodesic growth?

In the same paper, BBES proposed some partial answers. Their main theorem is a sufficient condition for polynomial geodesic growth:

**Theorem** ([1, Theorem 1]).: _Let \(G\) be a finitely generated group. If there exists an element \(a\in G\) such that \(H=\left\langle\!\left\langle a\right\rangle\!\right\rangle_{G}\) is a finite-index abelian subgroup, then there exists a symmetric generating set \(S\) such that \((G,S)\) has polynomial geodesic growth._

This includes all virtually cyclic groups, but also groups like \[\cancel{\mathscr{L}}=\mathbb{Z}^{2}\rtimes C_{2}=\left\langle a,t\mid t^{2}=e;\;[a,a^{t}]=e\right\rangle.\] Subsequently, Bishop and Elder [1] proved that the group \[\cancel{\mathscr{H}}=H_{3}(\mathbb{Z})\rtimes C_{2}=\left\langle a,t\mid t^{2}=e;\;[a,[a,a^{t}]]=[a^{t},[a,a^{t}]]=e\right\rangle\] has polynomial geodesic growth w.r.t. the generating set \(S=\{a^{\pm},t\}\). In the opposite direction, BBES showed that any group factoring onto \(\mathbb{Z}^{2}\), for instance every (non-virtually-cyclic) nilpotent group, has exponential geodesic growth w.r.t. every generating set. We generalize most of these results in the following criterion:

**Theorem 1**.: _Let \(G\) be a virtually \(s\)-step-nilpotent group, with \(S\) a finite generating set. Consider \(H\) a torsionfree, \(s\)-step nilpotent, finite-index, normal subgroup of \(G\). We get out of this data a map \(\pi\colon H\twoheadrightarrow H/[H,H]\), and an action of the finite group \(F=G/H\) on \(H/[H,H]\simeq\mathbb{Z}^{d}\) (by conjugation). We define the multiset_ \[A=A(S)=\left\{\frac{\pi(\bar{u})}{\ell(u)}\in\mathbb{Q}^{d}\;\middle|\;u\in S^{*}\text{ labels a simple cycle in }\mathcal{S}ch(H\backslash G,S)\right\}\] _and the polytope \(P(S)=\operatorname{ConvHull}(A^{F})\) (where \(\mathcal{S}ch(H\backslash G,S)\) denotes the Schreier graph, and \(A^{F}\) denotes the orbit of \(A\) under conjugation by \(F\)). If no two elements of \(A\) lie on a common facet of \(P(S)\), then_

* _If_ \(s\leqslant 2\)_, the geodesic growth is bounded above by a polynomial._
* _If_ \(s\geqslant 3\)_, the geodesic growth is bounded above by_ \[\gamma_{\mathrm{geod}}^{(G,S)}(n)\preceq\exp\bigl(n^{\alpha_{s}}\log(n)\bigr),\] _with_ \(0<\alpha_{s}<1\) _an explicit constant (e.g._ \(\alpha_{3}=3/5\)_)._

_Otherwise the geodesic growth is exponential._

**Remark**.: \(A(S)\) should be considered as a multiset: any point \(p\in\mathbb{Q}^{d}\) appears as many times in \(A(S)\) as there are simple cycles \(u\in S^{*}\) such that \(p=\pi(\bar{u})/\ell(u)\).

**Remark**.: Any virtually \(s\)-step nilpotent group \(G\) contains by definition a finite-index \(s\)-step nilpotent subgroup \(H^{\prime}\). By [10, Theorem 3.23], this group contains a finite-index torsionfree subgroup \(H^{\prime\prime}\). Finally, \(H^{\prime\prime}\) contains a finite-index subgroup \[H=\operatorname{core}(H^{\prime\prime})=\bigcap_{g\in G}gH^{\prime\prime}g^{-1}\trianglelefteq G.\] Therefore, the second sentence of Theorem 1 is not a restriction on \(G\).
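The definitions can be made concrete on a small example. The sketch below is a brute-force geodesic counter for \(\mathbb{Z}^{2}\) with its standard generators (an illustration chosen by us, not taken from the paper): it enumerates all words of length \(n\) and keeps those whose endpoint has \(\ell^{1}\)-norm \(n\), exhibiting the exponential geodesic growth that BBES establish for any group factoring onto \(\mathbb{Z}^{2}\).

```python
from itertools import product

# Standard generators of Z^2 and their images in Z^2.
GENS = {"x": (1, 0), "X": (-1, 0), "y": (0, 1), "Y": (0, -1)}

def geodesic_growth(n):
    """Count geodesic words of length n in (Z^2, {x, X, y, Y}) by brute force.

    In Z^2 the word norm of (a, b) is |a| + |b|, so a word is geodesic
    exactly when its endpoint has l^1-norm equal to its length."""
    count = 0
    for word in product(GENS, repeat=n):
        a = sum(GENS[s][0] for s in word)
        b = sum(GENS[s][1] for s in word)
        if abs(a) + abs(b) == n:
            count += 1
    return count

# gamma(n) = 4, 12, 28, ... grows at least like 2^n: all 2^n words over
# {x, y} alone are already geodesic.
print([geodesic_growth(n) for n in range(1, 7)])
```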
Hence, we reduce the first BBES question for virtually \(2\)-step nilpotent groups to the characterization of finite subgroups \(F\leqslant\operatorname{GL}_{d}(\mathbb{Z})\) with a certain property, as follows:

**Corollary 2**.: _Let \(G\) be a finitely generated, virtually \(2\)-step-nilpotent group. Consider \(H\trianglelefteq G\) a torsionfree, \(2\)-step nilpotent, finite-index, normal subgroup of \(G\). This defines an action of \(F=G/H\) on \(\mathbb{Z}^{d}\simeq H/[H,H]\). The following assertions are equivalent:_

1. _There exists a generating set_ \(S\) _such that_ \((G,S)\) _has polynomial geodesic growth._
2. _There exists a finite set_ \(A\subset\mathbb{Z}^{d}\) _such that_ \(P=\operatorname{ConvHull}(A^{F})\) _is a full-dimensional polytope, and no two elements of_ \(A\) _lie on the same facet of_ \(P\)_._

Proof.: We have to prove (ii) \(\Rightarrow\) (i). We construct a generating set from \(A=\{a_{1},\ldots,a_{m}\}\) as follows. Consider \(S_{0}\) a fixed generating set for \(G\), and define \[S_{n}=S_{0}\cup\{h_{1}^{n},\ldots,h_{m}^{n}\}\text{ for all }n\in\mathbb{Z}_{>0},\] where the \(h_{i}\) are elements of \(H\) satisfying \(\pi(h_{i})=a_{i}\). Observe that the new generators \(h_{i}^{n}\in H\) only add loops in the Cayley graph of \(G/H\), and therefore \[A(S_{n})=A(S_{0})\cup nA.\] For \(n\) large enough, we have \(P(S_{n})=\operatorname{ConvHull}(nA^{F})\) and \(A(S_{0})\cap\partial P(S_{n})=\emptyset\). At this point, the hypothesis on \(A\) implies that \((G,S_{n})\) has polynomial geodesic growth.

**Remark 3**.: The statement of this last result can be adapted if we only allow _symmetric_ generating sets \(S\). The only other modification needed is

(ii') There exists a _symmetric_ finite set \(A\subset\mathbb{Z}^{d}\) such that \(P=\operatorname{ConvHull}(A^{F})\) is a full-dimensional polytope, and no two elements of \(A\) lie on the same facet of \(P\).

Note that conditions (ii) and (ii') are not equivalent. An example is given by the group \(G_{2}=\mathbb{Z}^{2}\rtimes C_{2}\) where \(C_{2}=\langle r\rangle\) acts by \(180^{\circ}\) rotations (see also [1, Example 16]). If we only look at _symmetric_ sets \(A\), we always have \(A=A^{C_{2}}\), so that all vertices of any facet of \(P\) belong to \(A\). In contrast, \(G_{2}\) satisfies condition (ii), as illustrated in Figure 1. This means that the geodesic growth of \(G_{2}\) is polynomial w.r.t. \(S=\{x,y,(xy)^{-1},r\}\), and exponential w.r.t. any symmetric generating set (as shown in the BBES paper).

Regarding the second BBES question, we get the following affirmative answer:

**Theorem 4**.: _The geodesic growth of the group_ \[\cancel{\mathscr{E}}=\big\langle a,t\bigm|t^{2}=1;\;[a,[a,a^{t}]]=[a^{t},[a,a^{t}]]\text{ commutes with }a,a^{t}\big\rangle\] _with generating set \(S=\{a^{\pm 1},t\}\) satisfies \(\gamma_{\mathrm{geod}}(n)\asymp\exp\bigl(n^{3/5}\cdot\log(n)\bigr)\)._

Note that this group is virtually \(3\)-step nilpotent; more precisely, it admits an index-\(2\) subgroup isomorphic to the so-called "Engel group". In some sense, this is the next smallest candidate for intermediate geodesic growth, as virtually abelian groups have either polynomial or exponential geodesic growth (see [1]), and the same holds true in virtually \(2\)-step nilpotent groups (see Theorem 1). The example relies on the same trick as the examples \(\cancel{\mathscr{L}}\) and \(\cancel{\mathscr{H}}\) of BBES and Bishop-Elder; we re-use some of their ideas, combined with insights from nilpotent geometry.
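Condition (ii) can be checked mechanically. The sketch below (an illustration, not code from the paper) tests the set \(A=\{(1,0),(0,1),(-1,-1)\}\) of Figure 1 under the \(180^{\circ}\) rotation action of \(C_{2}\): it builds \(P=\operatorname{ConvHull}(A^{F})\) with scipy and counts how many points of \(A\) lie on each facet.

```python
import numpy as np
from scipy.spatial import ConvexHull

A = np.array([(1, 0), (0, 1), (-1, -1)])
orbit = np.vstack([A, -A])          # A^F under the 180-degree rotation

hull = ConvexHull(orbit)
ok = True
for eq in hull.equations:           # each facet is {x : n.x + c = 0}
    on_facet = [a for a in A if abs(eq[:2] @ a + eq[2]) < 1e-9]
    print("facet", np.round(eq, 3), "contains", len(on_facet), "point(s) of A")
    ok &= len(on_facet) <= 1

# Every facet of the hexagon contains exactly one point of A, so condition
# (ii) holds for G_2; with the symmetric set A^F in place of A, each facet
# would contain two points and the criterion fails, matching Remark 3.
print("condition (ii) holds:", ok)
```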
As a byproduct, we provide estimates on the volume growth \(\beta_{(G,S)}(n)\) of virtually nilpotent groups \(G\). A celebrated result due to Pansu [10] is that \[\beta_{(G,S)}(n)=c_{(G,S)}\cdot n^{d}+o(n^{d})\] whenever \(G\) is virtually nilpotent. Subsequently the error term was refined whenever \(G\) is \(2\)-step nilpotent [12] and more generally \(s\)-step nilpotent [1]. We extend these results to virtually \(s\)-step nilpotent groups:

Figure 1: \(A=\{(1,0),(0,1),(-1,-1)\}\) in purple and \(P(S)\) in green.

**Theorem 5** (Corollary 2.6).: _Let \(G\) be a virtually \(s\)-step nilpotent group, and \(S\) a finite symmetric generating set. The volume growth satisfies_ \[\beta_{(G,S)}(n)=c_{(G,S)}\cdot n^{d}+O(n^{d-\delta_{s}})\] _where \(\delta_{s}=1\) for \(s=1,2\) and \(\delta_{s}=\frac{1}{s}\) for \(s\geqslant 3\)._

(The error term coincides with Gianella's error term for nilpotent groups.) Finally, in Remark 4.3, we disprove a conjecture of Breuillard and Le Donne [BD13, Conjecture 6.5] stating that, in any torsionfree nilpotent group \(G\) with a finite symmetric generating set \(S\), we should have \[\left\|g\right\|_{S}-\left\|g\right\|_{\operatorname{Stoll},S}=O_{G,S}(1)\] (where \(\left\|\cdot\right\|_{\operatorname{Stoll},S}\) is the Stoll metric, defined in Section 1). This conjecture was made in an effort to improve the volume growth estimate to \(O(n^{d-1})\).

**Organization of the paper.** In Section 1, we recall some results mainly due to Stoll on what's now called the "Stoll metric". Section 2 gathers some general results on word metrics in virtually nilpotent groups. We compare word metrics in virtually nilpotent groups and the corresponding nilpotent subgroups, and give some local structure for geodesics. Section 3 is devoted to the proof of Theorem 1. In Section 4, we do a deeper dive around the Engel group. We introduce a model for this group, provide fine lower bounds on the word length of some elements, and prove the lower bound on geodesic growth needed for Theorem 4. Finally, Section 5 compiles some remarks and questions.

**Acknowledgement.** I'd like to thank Alex Bishop for helpful conversations, feedback, and suggesting a simplification for Proposition 4.2, and Tatiana Nagnibeda for her constant support and feedback on early versions. The author acknowledges support of the Swiss NSF grant 200020-200400.

## 1 Preliminaries

In this paper, we will consider metrics slightly more general than the usual word metrics on groups, allowing each generator to have a different weight:

**Definition 1.1**.: Consider \(G\) a group, with a finite generating set \(S\) and a weight function \(\sigma\colon S\to\mathbb{Z}_{>0}\). Every word \(w\in S^{*}\) can be written as \(w=s_{1}^{m_{1}}s_{2}^{m_{2}}\ldots s_{k}^{m_{k}}\) for some letters \(s_{i}\in S\) satisfying \(s_{i}\neq s_{i+1}\), and \(m_{i}\in\mathbb{Z}_{>0}\). We define

* the _coarse length_ \(k(w)=k\),
* the _length_ \(\ell_{\sigma}(w)=\sum_{i=1}^{k}m_{i}\cdot\sigma(s_{i})\).

Moreover, for each element \(g\in G\), we define \[\left\|g\right\|_{S,\sigma}=\min\big\{\ell_{\sigma}(v)\mid v\in S^{*}\text{ and }\bar{v}=g\big\}.\] Whenever \(\sigma\equiv 1\), we will drop the subscript \(\sigma\).

### The Stoll metric

Consider \(H\) a torsionfree nilpotent group and \(\bar{H}\) its Malcev completion. In the Malcev completion, exponentiation like \(h^{\mu}\) makes sense for \(h\in\bar{H}\) and \(\mu\in\mathbb{R}\).

**Definition 1.2** (\(\mathbb{R}\)-words).: Fix a finite Lie generating set \(S\) for \(\bar{H}\).
An _\(\mathbb{R}\)-word_ is an expression \(w=s_{1}^{\mu_{1}}\cdot s_{2}^{\mu_{2}}\cdot\ldots\cdot s_{k}^{\mu_{k}}\) with \(s_{i}\in S\) and \(\mu_{i}\in\mathbb{R}_{>0}\). We denote the set of \(\mathbb{R}\)-words over \(S\) by \(S_{\mathbb{R}}^{*}\). The notions of coarse length \(k(w)\) and length \(\ell_{\sigma}(w)\) extend easily to \(S_{\mathbb{R}}^{*}\). In [12], Stoll proves that any element \(g\in\bar{H}\) can be represented by an \(\mathbb{R}\)-word. We can therefore define the Stoll norm on \(\bar{H}\) as

**Definition 1.3** (Stoll metric).: Given \(\bar{H}\) a simply connected nilpotent Lie group, \(S\) a finite Lie generating set and \(\sigma\colon S\to\mathbb{Z}_{>0}\) a weight function, we define \[\left\|h\right\|_{\mathrm{Stoll},S,\sigma}=\inf\left\{\ell_{\sigma}(w)\ \middle|\ w\in S_{\mathbb{R}}^{*}\text{ and }\overline{w}=h\right\}.\]

### The Abelian case

We first have a look at the Abelian case and make some useful observations. See [13] for a more thorough treatment. We have \(H\simeq\mathbb{Z}^{d}\) and \(\bar{H}=V_{1}=\mathbb{R}^{d}\). In this case, the Stoll metric has a very geometrical interpretation:

**Lemma 1.4**.: _The Stoll norm coincides with the "Minkowski norm" associated to_ \[P=\mathrm{ConvHull}\left\{\frac{s}{\sigma(s)}\ \middle|\ s\in S\right\}\subset\mathbb{R}^{d}.\] _More precisely,_ \[\left\|v\right\|_{\mathrm{Stoll}}=\left\|v\right\|_{\mathrm{Mink},P}\stackrel{\mathrm{def}}{=}\min\left\{\lambda\geqslant 0\mid v\in\lambda P\right\}\quad\text{for all }\,v\in\mathbb{R}^{d}.\] _Moreover, an \(\mathbb{R}\)-word \(s_{1}^{\mu_{1}}\cdot\ldots\cdot s_{k}^{\mu_{k}}\) is geodesic iff all the \(\frac{s_{i}}{\sigma(s_{i})}\) lie on a common face of \(P\)._

Proof.: We can reduce ourselves to the case \(\sigma\equiv 1\) by setting \(\tilde{S}=\left\{\frac{s}{\sigma(s)}\mid s\in S\right\}\subset\bar{H}\). The case \(v=0\) is trivial. Suppose \(v\neq 0\) and let \(m=\left\|v\right\|_{\mathrm{Mink},P}>0\). We first construct an \(\mathbb{R}\)-word representing \(v\) of length \(m\). Consider \(F\) the minimal face of \(P\) containing \(\frac{1}{m}\cdot v\). By the Caratheodory theorem, there exist \(d\) vertices of \(F\), say \(s_{1},\ldots,s_{d}\in S\), such that \(\frac{1}{m}\cdot v\in\mathrm{ConvHull}(s_{1},\ldots,s_{d})\), i.e., \[\exists\nu_{1},\ldots,\nu_{d}\geqslant 0\quad\text{such that}\quad\nu_{1}+\ldots+\nu_{d}=1\quad\text{and}\quad\nu_{1}s_{1}+\ldots+\nu_{d}s_{d}=\frac{1}{m}\cdot v,\] and therefore \(v=s_{1}^{\nu_{1}m}\cdot\ldots\cdot s_{d}^{\nu_{d}m}\). Next we show that any \(\mathbb{R}\)-word \(s_{1}^{\mu_{1}}\cdot\ldots\cdot s_{k}^{\mu_{k}}\) whose letters \(s_{i}\) all lie on a common face \(F\) is geodesic. Consider another \(\mathbb{R}\)-word \(t_{1}^{\lambda_{1}}\cdot\ldots\cdot t_{\ell}^{\lambda_{\ell}}\) (\(t_{i}\in S\) and \(\lambda_{i}>0\)) representing the same element as \(s_{1}^{\mu_{1}}\cdot\ldots\cdot s_{k}^{\mu_{k}}\). As \(F\) is a face, there exists a linear form \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) such that \(f(x)\leqslant 1\) for all \(x\in P\), with equality if and only if \(x\in F\). It follows that \[\mu_{1}+\ldots+\mu_{k}=f(s_{1}^{\mu_{1}}\cdot\ldots\cdot s_{k}^{\mu_{k}})=f(t_{1}^{\lambda_{1}}\cdot\ldots\cdot t_{\ell}^{\lambda_{\ell}})=\lambda_{1}f(t_{1})+\ldots+\lambda_{\ell}f(t_{\ell})\leqslant\lambda_{1}+\ldots+\lambda_{\ell},\] which means \(s_{1}^{\mu_{1}}\cdot\ldots\cdot s_{k}^{\mu_{k}}\) has indeed minimal length.
In particular, this applies to the previously constructed \(\mathbb{R}\)-word (of length \(m\)) for \(v\): we have \(\left\|v\right\|_{\mathrm{Stoll}}=m\).

### The \(2\)-step nilpotent case

When \(H\) is \(2\)-step nilpotent, \(\bar{H}=V_{1}\oplus V_{2}\) and the operation is given by \[(u,z_{1})(v,z_{2})=\big(u+v,\;z_{1}+z_{2}+[u,v]\big)\] where \([\cdot,\cdot]\colon V_{1}\times V_{1}\to V_{2}\) is a surjective, anti-symmetric bilinear form. Given an element \(g\in\bar{H}\), we denote by \(\pi(g)\) its first component and by \(z(g)\) its second component (its "areas"). Note that the areas of a product are given by \[z(h_{1}h_{2}\ldots h_{k})=\sum_{i=1}^{k}z(h_{i})+\,\sum_{i<j}\big[\pi(h_{i}),\pi(h_{j})\big]. \tag{1.5}\] In the \(2\)-step-nilpotent case, Stoll proves much more:

**Proposition 1.6** (Rough isometry, [12, Proposition 4.3]).: _Let \(H\) be a torsionfree \(2\)-step nilpotent group, with a finite generating set \(S\) and \(\sigma\colon S\to\mathbb{Z}_{>0}\) a weight function. Then there exists a constant \(C=C(S,\sigma)\) such that_ \[\forall h\in H,\quad\left\|h\right\|_{\mathrm{Stoll},S,\sigma}\leqslant\left\|h\right\|_{S,\sigma}\leqslant\left\|h\right\|_{\mathrm{Stoll},S,\sigma}+C.\]

When considering words \(w\in S^{*}\), there is usually a trade-off between having small coarse length \(k(w)\) and having length \(\ell_{\sigma}(w)\) close to \(\left\|\overline{w}\right\|_{S,\sigma}\). However, in the \(2\)-step nilpotent case, this does not happen. Indeed, in order to prove \(\left\|h\right\|_{S}\leqslant\left\|h\right\|_{\mathrm{Stoll}}+C\), Stoll constructs words \(w\in S^{*}\) with length \(\ell(w)\leqslant\left\|h\right\|_{\mathrm{Stoll}}+C\). Crucially, those words do not switch between letters too many times. More precisely,

**Lemma 1.7** (The best of both worlds).: _There exists a constant \(K\) such that any element \(h\in H\) can be written \(h=s_{1}^{m_{1}}s_{2}^{m_{2}}\ldots s_{k}^{m_{k}}\) with \(s_{i}\in S\), \(m_{i}\in\mathbb{Z}_{>0}\), \(k\leqslant K\) and_ \[\ell_{\sigma}(s_{1}^{m_{1}}s_{2}^{m_{2}}\ldots s_{k}^{m_{k}})\leqslant\left\|h\right\|_{\mathrm{Stoll},S,\sigma}+C\leqslant\left\|h\right\|_{S,\sigma}+C.\]

**Remark**.: The Stoll paper only treats the case of symmetric generating sets, with weight function \(\sigma\equiv 1\). However the argument adapts to our more general setting.

## 2 Generalities on virtually nilpotent groups

Let us start with some general observations on virtually nilpotent groups. In this section, \(G\) is a group with a finite generating set \(S\). We suppose \(G\) contains a finite-index, torsionfree, \(s\)-step nilpotent subgroup \(H\). We do not require \(H\) to be normal.

### A generating set for \(H\)

Consider the Schreier graph \(\mathcal{S}ch(H\backslash G,S)\). We define a new generating set \[X(S)=\left\{\,\,\text{$tut^{-1}$}\,\left|\begin{array}{l}\text{$t\in S^{*}$ labels a simple path $H\;\to\;Ht$}\\ \text{$u\in S^{*}$ labels a simple cycle $Ht\to Ht$}\end{array}\right.\right\}\subset H\] with a cost function \(\sigma\colon X\to\mathbb{Z}_{>0}\) defined by \(\sigma(tut^{-1})=\ell(u)\). We have \(H=\langle X\rangle\). Indeed, as \(S\) generates \(G\), any element \(h\in H\) can be represented by a word \(w\in S^{*}\) labeling a cycle \(H\to H\) in \(\mathcal{S}ch(H\backslash G,S)\). This path can be decomposed as a product of conjugates \(t_{i}u_{i}t_{i}^{-1}\), using a loop-erasure algorithm (illustrated in Figure 2). We will call the resulting word \(\tilde{w}\in X^{*}\) the _decomposition_ of \(w\).
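The loop-erasure decomposition is easy to implement. Below is a minimal sketch (an illustration under stated assumptions, not code from the paper): the Schreier graph is encoded as a dictionary mapping (coset, letter) to coset, and the first simple cycle \(u\) of the walk is repeatedly extracted together with the simple path \(t\) leading to it, using the identity \(\overline{aub}=(\bar{a}\bar{u}\bar{a}^{-1})\cdot\overline{ab}\). The demo uses the index-2 subgroup \(\mathbb{Z}^{2}\leqslant\cancel{\mathscr{L}}\) from the introduction.

```python
def decompose(word, act, start=0):
    """Loop-erasure decomposition of a word labeling a cycle at `start`.

    act : dict (coset, letter) -> coset, encoding the Schreier graph.
    Returns pairs (t, u): the word evaluates to the ordered product of the
    elements t u t^-1, with t a simple path and u a simple cycle; the total
    weight sum of the cycles u equals the length of the input word.
    """
    letters, factors = list(word), []
    while letters:
        seen, state = {start: 0}, start
        for j, s in enumerate(letters):
            state = act[(state, s)]
            if state in seen:                  # first return: simple cycle
                i = seen[state]
                factors.append((letters[:i], letters[i:j + 1]))
                letters = letters[:i] + letters[j + 1:]  # keep the prefix t
                break
            seen[state] = j + 1
        else:
            raise ValueError("word does not evaluate into H")
    return factors

# Index-2 Schreier graph of Z^2 x| C_2: 'a' fixes the coset, 't' swaps it.
act = {(0, 'a'): 0, (1, 'a'): 1, (0, 't'): 1, (1, 't'): 0}
for t, u in decompose("tata", act):
    print("t =", "".join(t) or "e", " u =", "".join(u))
# -> (t, a), (e, tt), (e, a): tata = (t a t^-1)(t t)(a) in the group.
```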
### Sub-linear control between word metrics

By construction, we have \(\ell_{\sigma}(\tilde{w})=\ell(w)\), hence \(\left\|h\right\|_{X,\sigma}\leqslant\left\|h\right\|_{S}\) for all \(h\in H\). The goal of this paragraph is to prove an inequality in the other direction. In order to state our result, we first define a sequence \((\alpha_{s})_{s\geqslant 2}\subset[0,1)\). It starts with \(\alpha_{2}=0\) and then \[\forall s\geqslant 3,\quad\alpha_{s}=\frac{1-\frac{1}{s}\alpha_{s-1}}{2-\alpha_{s-1}-\frac{1}{s}}.\] An induction shows that \(0\leqslant\alpha_{s}\leqslant 1-\frac{1}{s}<1\). We can now state our main inequality:

**Proposition 2.1**.: _Let \(G\) be a virtually \(s\)-step nilpotent group, and \(H\) a finite-index, torsionfree, \(s\)-step nilpotent subgroup of \(G\). We consider \(S\) a generating set for \(G\) and \(X\) the associated generating set for \(H\), with the cost function \(\sigma\colon X\to\mathbb{Z}_{>0}\). Then_ \[\forall h\in H,\;\left\|h\right\|_{X,\sigma}\leqslant\left\|h\right\|_{S}\leqslant\left\|h\right\|_{X,\sigma}+O\big(\|h\|_{X,\sigma}^{\alpha_{s}}\big).\]

Figure 2: A path decomposed as a product of “freeze frames” of the loop-erasure algorithm.

We will need some intermediate results. First, recall the lower central series \[\gamma_{1}(H)=H\quad\text{and, for each }s\geqslant 2,\quad\gamma_{s}(H)=[\gamma_{s-1}(H),H].\] A classical lemma is the following; see for instance [1, §14.1.3]:

**Lemma 2.2** (Distortion).: _Let \(H\) be a finitely generated torsionfree \(s\)-step nilpotent group. Consider \(\|\cdot\|_{E}\) a Euclidean norm on \(\gamma_{s}(H)\simeq\mathbb{Z}^{c}\). Then_ \[\forall z\in\gamma_{s}(H),\quad\|z\|_{X,\sigma}=\Theta\big(\|z\|_{E}^{1/s}\big).\]

Next we need the following generalization of Lemma 1.7:

**Lemma 2.3** (\(k\) versus \(\ell_{\sigma}\)).: _Any element \(h\in H\) with \(\|h\|_{X,\sigma}=n\) can be represented by a word \(w\in X^{*}\) with coarse length \(k(w)=O(n^{\alpha_{s}})\) and length \(\ell_{\sigma}(w)=n+O(n^{\alpha_{s}})\)._

Proof.: We argue by induction on \(s\). The case \(s=2\) (for which \(\alpha_{s}=0\)) is Lemma 1.7. Suppose the induction hypothesis holds for \(s-1\geqslant 2\). Consider \(h\in H\) with \(\|h\|_{X,\sigma}=n\). Let \(v\in X^{*}\) be a geodesic word representing \(h\). We can decompose \(v\) as a product \(v=v_{1}v_{2}\dots v_{m}\) with \(m=n^{\beta}+O(1)\) pieces of length \(n^{1-\beta}+O(1)\). By the induction hypothesis, there exist words \(w_{i}\in X^{*}\) such that \[\overline{w}_{i}=\overline{v}_{i}\,\,\operatorname{mod}\gamma_{s}(H),\,\,\,k(w_{i})=O\big(n^{(1-\beta)\alpha_{s-1}}\big)\,\,\,\text{and}\,\,\,\ell_{\sigma}(w_{i})=n^{1-\beta}+O\big(n^{(1-\beta)\alpha_{s-1}}\big).\] Observe that the error \(z_{i}=\bar{v}_{i}\bar{w}_{i}^{-1}\in\gamma_{s}(H)\) has length \[\|z_{i}\|_{X,\sigma}\leqslant\ell_{\sigma}(v_{i})+\ell_{\sigma}(w_{i})=O(n^{1-\beta}),\] hence \(\|z_{i}\|_{E}=O(n^{(1-\beta)s})\) by Lemma 2.2. Therefore, the total error \(z=z_{1}z_{2}\dots z_{m}\) has size \(\|z\|_{E}=O(n^{\beta}\cdot n^{(1-\beta)s})\). The same lemma delivers \(w_{z}\in X^{*}\) such that \(\overline{w}_{z}=z\) and \(\ell_{\sigma}(w_{z})=O(n^{1-\frac{s-1}{s}\beta})\).
Finally, let \[w=w_{1}w_{2}\dots w_{m}w_{z}\in X^{*}.\] We have \(\overline{w}=h\), and \[k(w)\leqslant\sum_{i=1}^{m}k(w_{i})+m+k(w_{z})=n^{\beta}\cdot O(n^{(1-\beta)\alpha_{s-1}})+n^{\beta}+O(n^{1-\frac{s-1}{s}\beta}),\\ \ell_{\sigma}(w)=\sum_{i=1}^{m}\ell_{\sigma}(w_{i})+\ell_{\sigma}(w_{z})=n+n^{\beta}\cdot O(n^{(1-\beta)\alpha_{s-1}})+O(n^{1-\frac{s-1}{s}\beta}).\] To conclude, we fine-tune \(\beta=\frac{1-\alpha_{s-1}}{2-\alpha_{s-1}-\frac{1}{s}}\) so that \(\beta+(1-\beta)\alpha_{s-1}=1-\frac{s-1}{s}\beta=\alpha_{s}\).

**Remark 2.4**.: The first exponents \(\alpha_{2}=0\) and \(\alpha_{3}=\frac{3}{5}\) are optimal (by Remark 4.3). For larger \(s\), the exponents \(\alpha_{s}\) can probably be improved. For instance, Gianella has proved a result analogous to Lemma 2.3 with \(\alpha_{s}=\frac{s}{s+2}\) if we allow \(w\) to be an \(\mathbb{R}\)-word [1, Lemma 40].

Proof of Proposition 2.1.: Let us prove the right-hand inequality. Using the previous lemma, any element \(h\in H\) of length \(\left\lVert h\right\rVert_{X,\sigma}=n\) can be represented by a word \[w=(t_{1}u_{1}t_{1}^{-1})^{m_{1}}\ldots(t_{k}u_{k}t_{k}^{-1})^{m_{k}}\in X^{*}\] with \(k(w)=O(n^{\alpha_{s}})\) and \(\ell_{\sigma}(w)=n+O(n^{\alpha_{s}})\). We convert this word into \[w^{\prime}=t_{1}\,u_{1}^{m_{1}}\,v_{1}\ldots t_{k}\,u_{k}^{m_{k}}\,v_{k}\in S^{*}\] where \(v_{i}\in S^{*}\) is a geodesic representative for \(t_{i}^{-1}\). This word has length \[\ell(w^{\prime})=\ell_{\sigma}(w)+\sum_{i=1}^{k}\big(\ell(t_{i})+\ell(v_{i})\big)\leqslant\ell_{\sigma}(w)+k(w)\big([G:H]+\max_{t}\|t^{-1}\|_{S}\big)=n+O(n^{\alpha_{s}}),\] so that \(\left\lVert h\right\rVert_{S}\leqslant\left\lVert h\right\rVert_{X,\sigma}+O\big(\|h\|_{X,\sigma}^{\alpha_{s}}\big)\) as announced.

### A parte: Volume growth of virtually nilpotent groups

Let us first recall the state-of-the-art results for growth of f.g. nilpotent groups:

**Theorem 2.5** ([1, Theorem 1]).: _Let \(H\) be a finitely generated \(s\)-step nilpotent group, and \(X\) be a symmetric generating set. We have_ \[\beta_{(H,X)}(n)=c_{(H,X)}\cdot n^{d}+O(n^{d-\delta_{s}}),\] _where_

* \(d=d(H)=\sum_{i=1}^{s}i\cdot\operatorname{rank}_{\mathbb{Q}}\bigl(\gamma_{i}(H)\big/\gamma_{i+1}(H)\bigr)\in\mathbb{Z}_{\geqslant 0}\) _is the Bass-Guivarc'h exponent,_
* \(c_{(H,X)}\in\mathbb{R}_{>0}\) _is the volume of the unit ball in the asymptotic cone of_ \((H,X)\) _with its associated Carnot-Caratheodory metric,_
* \(\delta_{s}=1\) _for_ \(s=1,2\) _and_ \(\delta_{s}=\frac{1}{s}\) _for_ \(s\geqslant 3\)_._

(Note that the case of \(s=2\) was previously proven by Stoll [11].) The statement extends if we add a weight function \(\sigma\colon X\to\mathbb{R}_{>0}\) to the picture. Our modest contribution is to extend this result to _virtually_ nilpotent groups:

**Corollary 2.6**.: _Let \(G\) be a finitely generated, virtually \(s\)-step nilpotent group, and \(S\) be a symmetric generating set. Let \(H\) be a finite-index \(s\)-step nilpotent subgroup, with the corresponding generating set \(X\) and weight function \(\sigma\colon X\to\mathbb{Z}_{>0}\). We have_ \[\beta_{(G,S)}(n)=[G:H]\cdot c_{(H,X,\sigma)}\cdot n^{d}+O(n^{d-\delta_{s}}).\]

Proof.: Let \(j=[G:H]\), and decompose \(G=\bigsqcup_{i=1}^{j}t_{i}H\). Picking \(t_{i}\) as short as possible, we may assume that \(\left\lVert t_{i}\right\rVert_{S}<j\) for all \(i\).
We have \[\bigsqcup_{i=1}^{j}t_{i}\cdot\bigl(B_{G,S}(n-j)\cap H\bigr)\subseteq B_{G,S}(n)\subseteq\bigsqcup_{i=1}^{j}t_{i}\cdot\bigl(B_{G,S}(n+j)\cap H\bigr).\] Combining this with Proposition 2.1, we get the inclusions \[\bigsqcup_{i=1}^{j}t_{i}\cdot B_{H,X,\sigma}\big(n-j-O(n^{\alpha_{s}})\big)\subseteq B_{G,S}(n)\subseteq\bigsqcup_{i=1}^{j}t_{i}\cdot B_{H,X,\sigma}(n+j),\] hence, by Theorem 2.5, \[j\cdot c_{H,X,\sigma}\cdot\big(n-j-O(n^{\alpha_{s}})\big)^{d}+O(n^{d-\delta_{s}})\leqslant\beta_{(G,S)}(n)\leqslant j\cdot c_{H,X,\sigma}\cdot(n+j)^{d}+O(n^{d-\delta_{s}}),\] that is, \(\beta_{(G,S)}(n)=[G:H]\cdot c_{(H,X,\sigma)}\cdot n^{d}+O(n^{d-\delta_{s}})\), as \(\alpha_{s}\leqslant 1-\delta_{s}<1\) for all \(s\).

### Structure of almost-geodesics

We give local conditions on almost-geodesic words \(w\in X^{*}\). Essentially, most subwords of \(w\) should be geodesics in the abelianization \((\pi(\bar{H}),\|\cdot\|_{\text{\rm Mink}})\). More precisely:

**Proposition 2.7**.: _Let \(H\) be a torsionfree, \(s\)-step nilpotent group, with a finite generating set \(X\), and a weight function \(\sigma\colon X\to\mathbb{Z}_{>0}\). Consider \(\pi\colon H\twoheadrightarrow H/[H,H]\) and_ \[P=\operatorname{ConvHull}\left\{\frac{\pi(x)}{\sigma(x)}\biggm|x\in X\right\}.\] _For \(w\in X^{*}\), let \(N(w)\) count the occurrences of subwords \(v\in X\cup X^{2}\) of the form_

* \(v=x\) _with_ \(\frac{\pi(x)}{\sigma(x)}\) _not on the boundary of_ \(P\)_, or_
* \(v=xy\) _with_ \(\frac{\pi(x)}{\sigma(x)},\frac{\pi(y)}{\sigma(y)}\) _not on a common face of_ \(P\)_._

_There exists a constant \(\delta=\delta(H,X,\sigma)>0\) such that, for all \(w\in X^{*}\),_ \[\ell_{\sigma}(w)\geqslant\|\overline{w}\|_{X,\sigma}+\delta\cdot N(w)-O\big(\|\overline{w}\|_{X,\sigma}^{\alpha_{s}}\big).\]

In some sense, this result is a quantified, discrete analog of the "\((s-1)\)-iterated blowup" result of Hakavuori and Le Donne [10, Corollary 1.4].

Proof.: Let us say a word \(v\in X\cup X^{2}\) is _costly_ if it satisfies one of the conditions of the statement. By Lemma 1.4, for each costly word \(v\), there exists \(v^{\prime}\in X^{*}_{\mathbb{R}}\) such that \(\pi(\bar{v})=\pi(\bar{v}^{\prime})\) and \(\ell_{\sigma}(v)-\ell_{\sigma}(v^{\prime})>0\). We take \[0<\delta<\frac{1}{2}\min\big\{\ell_{\sigma}(v)-\ell_{\sigma}(v^{\prime})\mid v\text{ is costly}\,\big\}.\] We now argue by induction on \(s\). \(\blacktriangleright\) We first initialize for \(s=2\): Consider a word \(w\in X^{*}\) with \(N=N(w)\) occurrences of costly subwords. Say \(M\geqslant\frac{1}{2}N\) of these occurrences are disjoint; we denote them by \(v_{1},\dots,v_{M}\). We replace \(v_{i}\) in \(w\) by \(v^{\prime}_{i}\), thus defining a new \(\mathbb{R}\)-word \(w^{\prime}\). Observe \(w^{\prime}\) has the same abelianization \(\pi(\overline{w}^{\prime})=\pi(\overline{w})\). It only differs in areas: \[z(\overline{w}^{\prime})-z(\overline{w})=\sum_{i=1}^{M}\big(z(\overline{v}^{\prime}_{i})-z(\overline{v}_{i})\big)\in[\bar{H},\bar{H}]\simeq\mathbb{R}^{c}\] (using Formula 1.5). Recall \([\bar{H},\bar{H}]\) is quadratically distorted in \(\bar{H}\) (Lemma 2.2 but for Lie groups; see instead [12, Lemme II.1] or [1, Theorem 2.7]), so that \[\|z(\overline{w}^{\prime})-z(\overline{w})\|_{\mathrm{Stoll}}=O\big(M^{\frac{1}{2}}\big).\] Therefore, there exists an \(\mathbb{R}\)-word \(w_{z}\) with \(\ell_{\sigma}(w_{z})=O\big(M^{\frac{1}{2}}\big)\) and \(\overline{w}=\overline{w^{\prime}w_{z}}\).
It follows that \[\begin{split}\ell_{\sigma}(w)&=\ell_{\sigma}(w^{\prime}w_{z})+\sum_{i=1}^{M}\big(\ell_{\sigma}(v_{i})-\ell_{\sigma}(v^{\prime}_{i})\big)-\ell_{\sigma}(w_{z})\\ &\geqslant\|\overline{w}\|_{\mathrm{Stoll},X,\sigma}+M\cdot\min\big\{\ell_{\sigma}(v)-\ell_{\sigma}(v^{\prime})\mid v\text{ is costly}\big\}-O\big(M^{\frac{1}{2}}\big)\\ &\geqslant\|\overline{w}\|_{X,\sigma}-C+\tfrac{1}{2}N\cdot\min\big\{\ell_{\sigma}(v)-\ell_{\sigma}(v^{\prime})\mid v\text{ is costly}\big\}-O\big(N^{\frac{1}{2}}\big)\\ &\geqslant\|\overline{w}\|_{X,\sigma}+\delta\cdot N(w)-O(1),\end{split}\] where we have used Proposition 1.6 for the second inequality.

Suppose the induction hypothesis holds for \(s-1\geqslant 2\). Consider \(w\in X^{*}\) of length \(\ell_{\sigma}(w)=n\). We can decompose \(w\) as a product \(w=w_{1}w_{2}\ldots w_{m}\) with \(m=n^{\beta}+O(1)\) pieces of length \(n^{1-\beta}+O(1)\). Using the induction hypothesis, there exist \(v_{i}\in X^{*}\) s.t. \[\overline{v}_{i}=\overline{w}_{i}\,\,\operatorname{mod}\gamma_{s}(H)\quad\text{and}\quad\ell_{\sigma}(v_{i})\leqslant\ell_{\sigma}(w_{i})-\delta\cdot N(w_{i})+O\big(n^{(1-\beta)\alpha_{s-1}}\big).\] As previously, observe that the error \(z_{i}=\bar{w}_{i}\bar{v}_{i}^{-1}\in\gamma_{s}(H)\) has length \[\|z_{i}\|_{X,\sigma}\leqslant\ell_{\sigma}(v_{i})+\ell_{\sigma}(w_{i})=O(n^{1-\beta}).\] The same argument using distortion gives a word \(v_{z}\in X^{*}\) such that \(\overline{v}_{z}=z_{1}z_{2}\ldots z_{m}\) and \(\ell_{\sigma}(v_{z})=O(n^{1-\frac{s-1}{s}\beta})\). Finally, let \(v=v_{1}v_{2}\ldots v_{m}v_{z}\in X^{*}\). We have \(\overline{v}=\overline{w}\), and \[\begin{split}\|\overline{w}\|_{X,\sigma}\leqslant\ell_{\sigma}(v)&=\sum_{i=1}^{m}\ell_{\sigma}(v_{i})+\ell_{\sigma}(v_{z})\\ &\leqslant\sum_{i=1}^{m}\Big(\ell_{\sigma}(w_{i})-\delta N(w_{i})+O\big(n^{(1-\beta)\alpha_{s-1}}\big)\Big)+O(n^{1-\frac{s-1}{s}\beta})\\ &\leqslant\ell_{\sigma}(w)-\delta\big(N(w)-m\big)+O\big(n^{\beta+(1-\beta)\alpha_{s-1}}\big)+O(n^{1-\frac{s-1}{s}\beta}).\end{split}\] Fine-tuning \(\beta=\frac{1-\alpha_{s-1}}{2-\alpha_{s-1}-\frac{1}{s}}\) gives us the desired result.

## 3 Proof of Theorem 1

From now on, we suppose \(H\) is a torsionfree, \(s\)-step nilpotent, finite-index, _normal_ subgroup of \(G\). As in Section 2.1, we consider the labeled graph \(\mathcal{C}ay(H\backslash G,S)\) and \[X(S)=\left\{\,\,\text{$tut^{-1}$}\;\middle|\;\begin{array}{l}\text{$t\in S^{*}$ labels a simple path $H\;\to\,Ht$}\\ \text{$u\in S^{*}$ labels a simple cycle $Ht\to Ht$}\end{array}\right\}\subset H\] with a weight function \(\sigma\colon X\to\mathbb{Z}_{>0}\) defined by \(\sigma(tut^{-1})=\ell(u)\).

**Remark.** Observe that \(\mathcal{C}ay(H\backslash G,S)\) is transitive, hence a word \(u\in S^{*}\) labels a simple cycle \(Ht\to Ht\) if and only if it labels a simple cycle \(H\to H\). We can therefore say a word \(u\in S^{*}\) is a _simple cycle_ without ambiguity.

The proof naturally splits into two cases. \(\blacktriangleright\) First, we suppose that two elements of \(A(S)\) lie on the same facet of \(P(S)\), and we conclude \((G,S)\) has exponential geodesic growth. We exhibit exponentially many geodesics already in the virtually abelian quotient \(G/[H,H]\).
Observe that \[P(S)\stackrel{\rm def}{=}\operatorname{ConvHull}\left(\left\{\frac{\pi(u)}{\ell(u)}\;\middle|\;u\in S^{*}\mbox{ is a simple cycle}\right\}^{G/H}\right)=\operatorname{ConvHull}\left\{\frac{\pi(x)}{\sigma(x)}\;\middle|\;x\in X\right\}\] is the polytope that governs the Stoll/Minkowski norm in \(\hat{H}=H/[H,H]\simeq\mathbb{Z}^{d}\) with respect to the natural generating set \(\hat{X}=\pi(X)\), with the weight function \(\hat{\sigma}(\hat{x})=\sigma(x)\). Consider distinct simple cycles \(u,v\in S^{*}\) (in particular \(\bar{u},\bar{v}\in X\)) such that \[\frac{\pi(\overline{u})}{\ell(u)}=\frac{\pi(\overline{u})}{\sigma(\bar{u})}\quad\mbox{and}\quad\frac{\pi(\overline{v})}{\ell(v)}=\frac{\pi(\overline{v})}{\sigma(\bar{v})}\] lie on a common face of \(P(S)\). Then, for all \(w\in\{u,v\}^{*}\), we have \[\ell(w)=\ell_{\hat{\sigma}}(\hat{w})=\left\|\pi(\overline{w})\right\|_{\mbox{\scriptsize Mink},P}\leqslant\left\|\pi(\overline{w})\right\|_{\hat{X},\hat{\sigma}}\leqslant\left\|\overline{w}\right\|_{X,\sigma}\leqslant\left\|\overline{w}\right\|_{S}.\] (The second equality is justified by Lemma 1.4, and the last inequality by the easy part of Proposition 2.1.) It follows that all those words are geodesics, which concludes. \(\blacktriangleleft\)

From now on, we work towards an upper bound. First, we work specifically on geodesics \(w\in S^{*}\) evaluating in the subgroup \(H\). \(\blacktriangleright\) Under the hypothesis that no two elements of \(A(S)\) lie on the same facet of \(P(S)\), we prove that the coarse length of the decomposition \(\tilde{w}\in X^{*}\) is bounded by \[k(\tilde{w})\leqslant N(\tilde{w})\cdot[G:H]+1\] for any word \(w\in S^{*}\) evaluating in \(H\). Let us write \[\tilde{w}=x_{1}^{m_{1}}\,x_{2}^{m_{2}}\cdot\ldots\cdot x_{k}^{m_{k}}\quad\mbox{with $x_{i}\neq x_{i+1}$ for all $i$}.\] The decomposition process not only gives this sequence of generators \((x_{i})\in X\), but also two sequences of simple paths \((t_{i})\in S^{*}\) and simple cycles \((u_{i})\in S^{*}\) such that \(x_{i}=t_{i}u_{i}t_{i}^{-1}\). Observe that, for each \(i\), one of \(t_{i}\) and \(t_{i+1}\) is a prefix of the other (depending on where the walk re-intersects itself). Consider a time \(i\) such that \(t_{i+1}\) is a prefix of \(t_{i}\) (including the case \(t_{i+1}=t_{i}\)). In particular, we can rewrite \(x_{i+1}=t_{i}v_{i+1}t_{i}^{-1}\) for some simple cycle \(v_{i+1}\in S^{*}\). (\(v_{i+1}\) is a cyclic permutation of \(u_{i+1}\).) As \(x_{i}\neq x_{i+1}\), we have \(u_{i}\neq v_{i+1}\). Now our hypothesis on \(A(S)\) kicks in: both points \[\frac{\pi(\overline{u}_{i})}{\ell(u_{i})}\quad\text{and}\quad\frac{\pi(\overline{v}_{i+1})}{\ell(v_{i+1})}\] cannot lie on a common facet of \(P\). We get the same conclusion for \[\frac{\pi(x_{i})}{\sigma(x_{i})}\quad\text{and}\quad\frac{\pi(x_{i+1})}{\sigma(x_{i+1})}\] after conjugation by \(t_{i}\). We conclude that the subword \(x_{i}x_{i+1}\) is _costly_ in the sense of Proposition 2.7 (hence counted in \(N(\tilde{w})\)), as soon as \(t_{i+1}\) is a prefix of \(t_{i}\). In order to avoid this situation, the path \(t_{i}\) should be a proper prefix of \(t_{i+1}\). This means the \(t_{i}\)'s usually get longer and longer, but their lengths are bounded between \(0\) and \([G:H]-1\). Combining these two observations, we get the desired bound. \(\blacktriangleright\) We are now able to combine everything. Consider a word \(w\in S^{*}\) representing an element \(h\in H\).
Propositions 2.1 and 2.7 give us

\[\begin{split}\left\|h\right\|_{S}&\leqslant\left\|h\right\|_{X,\sigma}+O\big{(}\|h\|_{X,\sigma}^{\alpha_{s}}\big{)}\\ \ell(w)&=\ell_{\sigma}(\tilde{w})\geqslant\left\|h\right\|_{X,\sigma}+\delta\cdot N(\tilde{w})-O\big{(}\|h\|_{X,\sigma}^{\alpha_{s}}\big{)}\end{split}\]

for some constant \(\delta>0\). If \(w\) is geodesic, we have \(\ell(w)=\left\|h\right\|_{S}\) and therefore

\[\delta\cdot N(\tilde{w})=O\big{(}\|h\|_{X,\sigma}^{\alpha_{s}}\big{)}=O\big{(}\ell(\tilde{w})^{\alpha_{s}}\big{)}.\]

Finally, combining with the previous observation, we get

\[k(\tilde{w})\leqslant C\cdot\ell(\tilde{w})^{\alpha_{s}}\]

for some fixed constant \(C=C(G,S)>0\).

\(\blacktriangleright\) Finally, the upper bounds. The previous observation gives an injection

\[\left\{w\in S^{*}\;\big{|}\;w\text{ is geodesic and }\overline{w}\in H\right\}\hookrightarrow\left\{v\in X^{*}\;\big{|}\;k(v)\leqslant C\cdot\ell(v)^{\alpha_{s}}\right\}\]

sending \(w\mapsto\tilde{w}\). Observe that \(\ell(\tilde{w})\leqslant\ell_{\sigma}(\tilde{w})=\ell(w)\), hence

\[\#\big{\{}w\in S^{n}\;\big{|}\;w\text{ is geodesic and }\overline{w}\in H\big{\}}\leqslant\#\big{\{}v\in X^{\leqslant n}\;\big{|}\;k(v)\leqslant C\cdot n^{\alpha_{s}}\big{\}}\,.\]

Some combinatorics translates this into the crude upper bound of

\[\#\{w\in S^{n}\text{ geodesic with }\overline{w}\in H\}\leqslant\binom{n+1}{Cn^{\alpha_{s}}}\cdot\lvert X\rvert^{C\cdot n^{\alpha_{s}}}\leqslant\big{(}(n+1)\,\lvert X\rvert\,\big{)}^{Cn^{\alpha_{s}}}\asymp\exp\bigl{(}Cn^{\alpha_{s}}\log(n)\bigr{)}.\]

To conclude, each geodesic \(w\) in \((G,S)\) can be decomposed as a product

\[w=w_{1}s_{1}w_{2}s_{2}\dots s_{k-1}w_{k}\]

where each \(w_{i}\in S^{*}\) is a geodesic ending up in \(H\), \(s_{i}\in S\), and \(k\leqslant[G:H]\), which gives an upper bound on \(\gamma_{\text{geod}}(n)\) of the same type.

## 4 Geometry of the Engel group

In this section, we are looking at a 3-step nilpotent group, the Engel group \(\mathscr{E}\). We first define a nilpotent Lie group \(\bar{\mathscr{E}}\), in which \(\mathscr{E}\) is a cocompact lattice. Elements of \(\bar{\mathscr{E}}\) can be understood geometrically: they are equivalence classes of absolutely continuous paths in \(\mathbb{R}^{2}\) starting from \((0,0)\). For any path \(g\), we define the following parameters:

1. its second endpoint \(\hat{g}=(x_{g},y_{g})\in\mathbb{R}^{2}\);
2. a distribution of winding numbers: first, we get a closed path \(g_{c}\) by concatenating \(g\) with the segment back from \(\hat{g}\) to \((0,0)\); then the function \(w_{g}\colon\mathbb{R}^{2}\setminus\operatorname{Im}(g_{c})\to\mathbb{Z}\) is defined as \(w_{g}(x,y)=\) the winding number of \(g_{c}\) around \((x,y)\) (see Figure 4);
3. its total algebraic (or signed) area \[A(g)=\iint_{\mathbb{R}^{2}}w_{g}(x,y)\;\mathrm{d}x\,\mathrm{d}y;\]
4. the \(y\)-coordinate of its "barycenter" (or center of gravity) \[B_{y}(g)=\iint_{\mathbb{R}^{2}}y\cdot w_{g}(x,y)\;\mathrm{d}x\,\mathrm{d}y.\]

Two paths \(g,h\) are equivalent if they share the same endpoint \(\hat{g}=\hat{h}\), the same algebraic area \(A(g)=A(h)\) and the same "\(y\)-coordinate of barycenter" \(B_{y}(g)=B_{y}(h)\).

**Proposition 4.1**.: _Given two paths \(g,h\), their concatenation \(gh\) has parameters_

\[\begin{split}\widehat{gh}&=\widehat{g}+\widehat{h}\\ A(gh)&=A(g)+A(h)+\frac{1}{2}\det\bigl{(}\hat{g},\hat{h}\bigr{)}\\ B_{y}(gh)&=B_{y}(g)+B_{y}(h)+y_{g}\cdot A(h)+\frac{1}{3}(2y_{g}+y_{h})\cdot\frac{1}{2}\det\bigl{(}\hat{g},\hat{h}\bigr{)}\end{split}\]

_As a corollary, the operation "concatenation" passes to the quotient.
With the empty path as neutral element and reverse paths as inverses, this defines a group._

Proof.: The relation \(\widehat{gh}=\hat{g}+\hat{h}\) is obvious. The key observation is the decomposition

\[w_{gh}=w_{g}+w_{h}\circ\tau_{-\hat{g}}\pm\chi_{\triangle((0,0),\hat{g},\hat{g}+\hat{h})},\]

where

* \(\tau_{v}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) is the translation by \(v\),
* \(\triangle\bigl{(}(0,0),\hat{g},\hat{g}+\hat{h}\bigr{)}\) is the convex hull of those three points, \(\chi_{\triangle((0,0),\hat{g},\hat{g}+\hat{h})}\) denotes its characteristic function, and the sign \(\pm\) depends on the order of \((0,0)\), \(\hat{g}\) and \(\hat{g}+\hat{h}\) on the boundary of the triangle (\(-1\) if clockwise and \(+1\) if anti-clockwise).

Figure 4 gives a pictorial explanation. It only remains to compute \(A(gh)\) and \(B_{y}(gh)\) using the decomposition. Observe that \(\frac{1}{2}\det(\hat{g},\hat{h})\) is the signed area of the triangle \(\triangle\bigl{(}(0,0),\hat{g},\hat{g}+\hat{h}\bigr{)}\), and the \(y\)-coordinate of its center of gravity is given by \(\frac{1}{3}\bigl{(}0+y_{g}+(y_{g}+y_{h})\bigr{)}\).

As our main example, we will consider the lattice generated by the straight segments \(a\) and \(b\) from \((0,0)\) to \((1,1)\) and \((1,-1)\) respectively. This group is given by

\[\mathscr{E}=\bigl{\langle}a,b\bigm{|}[a,[a,b]]=[b^{-1},[a,b]]\text{ is central}\,\bigr{\rangle}\,.\]

Throughout, we fix \(X=\{a^{\pm},b^{\pm}\}\) as a generating set.

Figure 4: Pictures of \(g\), \(h\) and \(gh\) and some winding numbers.

### An observation of Stoll and lower bound on word length

In unpublished notes [10], Stoll shows that a key result of [10] ("Every element of \(\bar{H}\) admits an \(\mathbb{R}\)-word of minimal length", whenever \(\bar{H}\) is a simply connected 2-step nilpotent group) fails in the Engel group2 \(\bar{\mathscr{E}}\). Specifically:

Footnote 2: Stoll works instead with the Cartan group, i.e., the free 3-step nilpotent group of rank 2, but his argument works just as well for the Engel group, and I’d rather not introduce new notations.

**Proposition** ([10]).: _The element \(g_{1}\) represented by a segment from \((0,0)\) to \((1,0)\) has Stoll length \(\left\|g_{1}\right\|_{\operatorname{Stoll},X}=1\), but is not represented by any \(\mathbb{R}\)-word of length \(1\)._

By a compactness argument, this proves that Lemma 1.7 does not extend either:

**Corollary**.: _Consider \(g_{n}\in\overline{\mathscr{E}}\) the element represented by a segment to \((n,0)\). There does not exist a sequence of \(\mathbb{R}\)-words \(w_{n}\) representing \(g_{n}\) such that the coarse lengths \(k(w_{n})\) are uniformly bounded and \(\ell(w_{n})\leqslant\left\|g_{n}\right\|_{\operatorname{Stoll},X}+o(n)=n+o(n)\)._

Proof of the Corollary.: We argue by contradiction, and suppose such a sequence of words does exist. After rescaling by a factor of \(\frac{1}{n}\), we get a sequence of \(\mathbb{R}\)-words of length \(1+o(1)\) and coarse length \(\leqslant K\), all representing \(g_{1}\). However, \(\mathbb{R}\)-words with length \(\leqslant 2\) and coarse length \(\leqslant K\) form a compact set, so some subsequence converges to an \(\mathbb{R}\)-word representing \(g_{1}\) of length \(1\) and coarse length \(\leqslant K\), a contradiction.

In what follows, we quantify the dependence between \(k(w_{n})\) and \(\ell(w_{n})-\left\|g_{n}\right\|_{\operatorname{Stoll}}\). We use that the horizontal path is a (particularly bad) abnormal curve in \(\overline{\mathscr{E}}\).
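The parameters \(\hat{g}\), \(A(g)\), \(B_{y}(g)\) and the concatenation law of Proposition 4.1 are easy to experiment with numerically for polygonal paths, since Green's theorem turns the winding-weighted integrals into sums over segments. The following sketch (ours, in Python with numpy, not from the paper; all function names are our own) computes the three parameters and checks Proposition 4.1 on random paths:

```python
import numpy as np

def params(path):
    """Endpoint, signed area A and y-weighted area B_y of a polygonal path
    starting at (0,0), given as an (m, 2) array of vertices.  The path is
    closed by the chord back to (0,0); Green's theorem then evaluates the
    winding-number-weighted integrals, self-intersections included."""
    p = np.asarray(path, dtype=float)
    q = np.vstack([p, p[:1]])                     # close the curve
    x0, y0, x1, y1 = q[:-1, 0], q[:-1, 1], q[1:, 0], q[1:, 1]
    A = 0.5 * np.sum(x0 * y1 - x1 * y0)           # (1/2) oint (x dy - y dx)
    By = np.sum(-(y0**2 + y0 * y1 + y1**2) * (x1 - x0) / 6)  # oint -(y^2/2) dx
    return p[-1], A, By

def concat(g, h):
    """Concatenation of polygonal paths: translate h to the endpoint of g."""
    g, h = np.asarray(g, float), np.asarray(h, float)
    return np.vstack([g, g[-1] + h[1:]])

rng = np.random.default_rng(0)
for _ in range(100):
    g = np.vstack([np.zeros((1, 2)), np.cumsum(rng.normal(size=(6, 2)), 0)])
    h = np.vstack([np.zeros((1, 2)), np.cumsum(rng.normal(size=(6, 2)), 0)])
    (eg, Ag, Bg), (eh, Ah, Bh) = params(g), params(h)
    e, A, B = params(concat(g, h))
    tri = 0.5 * (eg[0] * eh[1] - eg[1] * eh[0])   # (1/2) det(g^, h^)
    assert np.allclose(e, eg + eh)                # endpoint law
    assert np.isclose(A, Ag + Ah + tri)           # area law of Prop. 4.1
    assert np.isclose(B, Bg + Bh + eg[1] * Ah + (2 * eg[1] + eh[1]) / 3 * tri)
```

For instance, the two-segment path \((0,0)\to(2,0)\to(1,1)\) returns endpoint \((1,1)\), \(A=1\) and \(B_{y}=1/3\), the signed area and \(y\)-moment of the enclosed triangle.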
**Proposition 4.2** (\(B_{y}\) back from the depths).: _There exists \(C>0\) such that, for any \(w\in X_{\mathbb{R}}^{*}\) with endpoint \(\hat{w}=(n,0)\) and length \(\ell(w)=n+\Delta\), we have_

\[-B_{y}(\bar{w})\geqslant\frac{1}{24}\frac{n^{3}}{\left(k(w)-1\right)^{2}}-C\cdot\Delta^{2}\cdot\max\left\{\Delta;\,\frac{n}{k(w)-1}\right\}.\]

Proof.: We decompose the path \(w\) into a main path (in purple), and some loops and boundary mess (in orange), and estimate the contribution of each part to \(-B_{y}(\bar{w})\) (see Figure 6).

We first estimate the contribution of the purple main path. Note that this path is simple (and we carved out the boundary mess), so the winding number of any point is \(\pm 1\) if the point lies between the \(x\)-axis and the curve - more precisely \(+1\) if it lies below the \(x\)-axis and \(-1\) if it lies above - and \(0\) otherwise. In particular, the contribution is non-positive, and bounded by the contribution of the green area of Figure 7.

The green area is composed of \(k^{\prime}\) trapezoids/triangles, with \(k^{\prime}\leqslant 2k(w)-2\) (each segment delimits at most 2 trapezoids if it crosses the \(x\)-axis, at most 1 otherwise, and the first and last segments cannot cross the axis). In turn, we bound the contribution of each slice by that of a triangle (with basis and height \(a_{i}\)) included inside it:

\[-B_{y}(\text{purple curve})\geqslant-B_{y}(\text{green zone})\geqslant-B_{y}(\text{blue triangles})=\sum_{i=1}^{k^{\prime}}\frac{a_{i}^{2}}{2}\cdot\frac{a_{i}}{3}.\]

Finally, since \(\sum_{i=1}^{k^{\prime}}a_{i}=n\), the generalized mean inequality gives

\[-B_{y}(\text{purple curve})\geqslant\frac{1}{6}\frac{n^{3}}{k^{\prime 2}}\geqslant\frac{1}{24}\frac{n^{3}}{(k(w)-1)^{2}}.\]

Figure 6: The decomposition of a word \(w\). The purple curve is obtained from the interval between the last crossing of the line \(x=0\) and the first crossing of \(x=n\), after loop-erasure.

Figure 7: The green area decomposed into \(k^{\prime}\) trapezoids, and the blue triangles included inside.

When adding back the orange curves, the variation in \(B_{y}\) is given by the total variation in winding numbers, weighted by \(y\)-coordinates. Observe that the orange curves have total length at most \(\Delta\), as the purple curve goes from \(x=0\) to \(x=n\). It follows that the total variation in winding numbers is at most \(I\cdot(2\Delta)^{2}\), where \(I\) is the isoperimetric constant of the plane with norm \(\|\cdot\|_{\text{Mink},X}\). (In this case \(I=\frac{1}{2}\). The factor 2 comes from some orange curves not being closed.) To control \(y\)-coordinates, let us just define \(y_{\text{max}}\) as the largest distance from points of the path \(w\) to the \(x\)-axis. If \(y_{\text{max}}\) is reasonably small, say \(y_{\text{max}}\leqslant L\max\big{\{}\Delta;\,\frac{n}{k(w)-1}\big{\}}\) for some constant \(L=L(X)\) which will be made precise later, then

\[\begin{split}-B_{y}(\bar{w})&\geqslant-B_{y}(\text{green zone})-y_{\text{max}}\cdot 4I\Delta^{2}\\ &\geqslant\frac{1}{24}\frac{n^{3}}{(k(w)-1)^{2}}-4LI\cdot\Delta^{2}\cdot\max\left\{\Delta;\,\frac{n}{k(w)-1}\right\}\end{split}\]

as promised. The only remaining case is when \(y_{\text{max}}\) is unreasonably large: \(y_{\text{max}}\geqslant L\max\big{\{}\Delta;\,\frac{n}{k(w)-1}\big{\}}\). In particular, \(2y_{\text{max}}-\Delta\) is larger than \(y_{\text{max}}\), \(L\Delta\) and \(L\frac{n}{k(w)-1}\).
We improve our bound on \(-B_{y}(\text{green zone})\) using that the curve \(w\) goes through some point \(p=(x_{p},\pm y_{\text{max}})\) far away from the \(x\)-axis: There exists a large triangle \(T\) inside which the curve \(w\) cannot enter, hence which must be included inside the green region. (Indeed, for any point \(q\in T\),

\[\begin{split}d_{\text{Mink}}\big{(}(0,0),q\big{)}+d_{\text{Mink}}(q,p)&>d_{\text{Mink}}\big{(}(0,0),p\big{)}+\Delta\\ d_{\text{Mink}}(p,q)+d_{\text{Mink}}\big{(}q,(n,0)\big{)}&>d_{\text{Mink}}\big{(}p,(n,0)\big{)}+\Delta\end{split}\]

so any path going through all 4 points would have length \(>n+\Delta\).) Moreover, as previously, we have a lower bound on \(-B_{y}\) for the green regions on both sides of \(T\), composed of at most \(2k(w)-2\) trapezoids. Combining everything together (abbreviating \(k=k(w)\) and \(y=y_{\max}\) from the third line on),

\[\begin{split}-B_{y}(\bar{w})&\geqslant-B_{y}(\text{green area})-y_{\max}\cdot 4I\Delta^{2}\\ &\geqslant\frac{1}{24}\left(\frac{(n-2y_{\max}+\Delta)^{3}}{(k(w)-1)^{2}}+(2y_{\max}-\Delta)^{3}\right)-y_{\max}\cdot 4I\Delta^{2}\\ &\geqslant\frac{1}{24}\left(\frac{n^{3}}{(k-1)^{2}}-3(2y-\Delta)\frac{n^{2}}{(k-1)^{2}}+(2y-\Delta)^{3}\right)-4I\cdot y\Delta^{2}\\ &=\frac{1}{24}\frac{n^{3}}{(k-1)^{2}}+\frac{1}{24}(2y-\Delta)^{3}-4I\cdot y\Delta^{2}-\frac{n^{2}(2y-\Delta)}{8(k-1)^{2}}\\ &\geqslant\frac{1}{24}\frac{n^{3}}{(k-1)^{2}}+(2y-\Delta)^{3}\left(\frac{1}{24}-\frac{4I}{L^{2}}-\frac{1}{8L^{2}}\right)\end{split}\]

which is \(\geqslant\frac{1}{24}\frac{n^{3}}{(k(w)-1)^{2}}\) for \(L=10\cdot\max\{\sqrt{I},1\}\).

**Remark 4.3**.: In particular, if \(\overline{w}=g_{n}\) (for which \(\|g_{n}\|_{\mathrm{Stoll}}=n\)), then

\[\ell(w)-\|g_{n}\|_{\mathrm{Stoll}}=\Delta\geqslant\frac{1}{24C}\cdot k(w)^{-\frac{2}{3}}\cdot n=C^{\prime}\cdot k(w)^{-\frac{2}{3}}\cdot\|g_{n}\|_{\mathrm{Stoll}}\]

(as \(B_{y}(g_{n})=0\)). This matches the best known upper bound:

**Lemma** ([18, Lemma 40]).: _Let \(\bar{H}\) be a simply connected \(s\)-step nilpotent Lie group, and \(X\) a finite Lie generating set. For every \(K\) large enough and every \(g\in\bar{H}\), there exists an \(\mathbb{R}\)-word over \(X\) representing \(g\) such that \(k(w)\leqslant K\) and_

\[\ell(w)-\|g\|_{\mathrm{Stoll}}\leqslant C^{\prime\prime}\cdot K^{-\frac{2}{s}}\cdot\|g\|_{\mathrm{Stoll}}+C^{\prime\prime}.\]

Moreover, as \(k(w)\leqslant\ell(w)\) for genuine words \(w\in X^{*}\), we deduce that

\[\|g_{n}\|_{X}-\|g_{n}\|_{\mathrm{Stoll},X}=\Omega\big{(}n^{1/3}\big{)}\]

disproving Conjecture 6.5 of Breuillard and Le Donne [1].

### Matching lower bound in a virtually Engel group

In this paragraph, we finally consider the semi-direct product

\[\nu\mathscr{E}=\mathscr{E}\rtimes C_{2}=\big{\langle}a,t\bigm{|}t^{2}=1;\ [a,[a,a^{t}]]=[a^{t},[a,a^{t}]]\ \text{commutes with}\ a,a^{t}\big{\rangle}\]

(so \(C_{2}=\langle t\rangle\) acts by symmetry along the \(y\)-axis, and in particular \(tat=b^{-1}\)), with the generating set \(S=\{a^{\pm},t\}\). First, we may compute \(A(S)\) and \(P(S)\), so that Theorem 1 gives the upper bound \(\gamma_{\mathrm{geod}}(n)\preceq\exp\bigl{(}n^{3/5}\cdot\log(n)\bigr{)}\). It remains to prove a matching lower bound, i.e., to construct a lot of geodesics. We fix \(\kappa>0\) a small constant (to be determined) and \(0<\varepsilon<\frac{1}{10}\). For any even integer \(n\), let \(K\approx\kappa n^{3/5}\) be another even integer, and \(m=\frac{n}{2K}\).
We show that words of the form

\[w=a^{m_{1}}ta^{-(m_{1}+m_{2})}ta^{m_{2}+m_{3}}t\ldots ta^{-(m_{K-1}+m_{K})}ta^{m_{K}}\in S^{*}\]

with \(2\sum_{i}m_{i}=n\) and \(|m_{i}-m|\leqslant n^{\varepsilon}\), are all geodesics for \(n\) large enough.

\(\blacktriangleright\) First, we compute \(B_{y}\) for the corresponding element \(\overline{w}\). Note that \(\overline{w}\in\mathscr{E}\) (as \(K\) is even), so this makes sense. Letting \(\delta_{i}=m_{i}-m\) (so that \(\sum_{i}\delta_{i}=0\)), we have

\[-B_{y}(\overline{w})=\sum_{i=1}^{K}\frac{m_{i}^{3}}{3}=\frac{1}{3}\sum_{i=1}^{K}(m+\delta_{i})^{3}=\frac{Km^{3}}{3}+\sum_{i=1}^{K}\left(m\delta_{i}^{2}+\frac{1}{3}\delta_{i}^{3}\right)=\frac{1}{24}\frac{n^{3}}{K^{2}}+O(n^{1+2\varepsilon}).\]

\(\blacktriangleright\) Next, we take a shorter word \(v\in\{a^{\pm},t\}^{*}\) ending up in the same coset as \(w\) (that is the \(\mathscr{E}\) coset), and with same endpoint \(\hat{v}=\hat{w}=(n,0)\), and prove that \(B_{y}(\bar{v})<B_{y}(\bar{w})\). Consider the decomposition \(\tilde{v}\in S^{*}\). Its coarse length and length are

\[\begin{split}k(\tilde{v})-1&\leqslant\text{``number of $t$ in $v$''}=K-d\quad\text{(for some even $d$)}\\ \ell(\tilde{v})&=\text{``number of $a^{\pm 1}$ in $v$''}<n+d.\end{split}\]

Note that, as you need at least \(n\) letters "\(a^{\pm 1}\)" to reach \(\hat{v}=(n,0)\), the shortening relative to \(w\) has to come from the number of "\(t\)" in \(v\), so that \(2\leqslant d\leqslant K-2\). It follows from Proposition 4.2 that

\[B_{y}(\bar{w})-B_{y}(\bar{v})\geqslant\frac{n^{3}}{24}\left(\frac{1}{(K-d)^{2}}-\frac{1}{K^{2}}\right)-C\cdot d^{2}\cdot\max\left\{d;\frac{n}{K-d}\right\}-O(n^{1+2\varepsilon}).\]

Now we split into two cases:

* If \(d\geqslant K-\frac{n}{K}\), then the term \(\frac{n^{3}}{24(K-d)^{2}}\) dominates and \(B_{y}(\bar{w})>B_{y}(\bar{v})\).
* If \(d\leqslant K-\frac{n}{K}\), we have \(\frac{1}{(K-d)^{2}}-\frac{1}{K^{2}}\geqslant\frac{2d}{K^{3}}\) by the mean value theorem, hence \[\begin{split}B_{y}(\bar{w})-B_{y}(\bar{v})&\geqslant\frac{n^{3}}{12}\cdot\frac{d}{K^{3}}-C\cdot d\cdot K^{2}-O(n^{1+2\varepsilon})\\ &=d\left(\frac{1}{12\kappa^{3}}-C\kappa^{2}\right)\cdot n^{6/5}-o(n^{6/5})\\ &>0\end{split}\] as long as \(\kappa<\sqrt[5]{\frac{1}{12C}}\) and \(n\) is large enough.

\(\blacktriangleright\) We conclude that all those words \(w\) are minimal length representatives. This construction gives quite a lot of geodesics. Indeed, we may choose the values of partial sums \(\left(\sum_{i=1}^{r}m_{i}\right)_{r=1,\ldots,K-1}\) independently in \(B\big{(}rm,\frac{1}{2}n^{\varepsilon}\big{)}\). As \(\ell(w)=n+K\), we get

\[\gamma_{\text{geod}}\big{(}n+K\big{)}\geqslant(n^{\varepsilon})^{K-1}\asymp\exp\big{(}\varepsilon\kappa\cdot n^{3/5}\cdot\log(n)\big{)}.\]

Varying \(n\) and \(K\approx\kappa n^{3/5}\), both even, gives the bound for even entries. Removing the last letter gives geodesics of odd length (i.e., sub-multiplicativity closes the deal). \(\square\)

## 5 Further questions

Lots of questions remain open. Perhaps the one we would most like to see solved is the following:

**Conjecture A.** If the geodesic growth of \((G,S)\) is polynomial, with \(S\) a _symmetrical_ generating set, then \(G\) is virtually 2-step nilpotent.

Among the possible counter-examples (all virtually nilpotent), treating the virtually 3-step nilpotent cases would be sufficient, as \(G\) factors onto \(G/\gamma_{4}(H)\). We emphasize "symmetrical" as we have the following intriguing example:

**Question B.** Is the geodesic growth of \(\nu\mathscr{E}\) w.r.t.
\(S=\{a,b,(ab)^{-1},t\}\) polynomial?

In contrast, as the \(x\)-axis is fixed by automorphisms of \(\mathscr{E}\), in a finite-by-\(\mathscr{E}\) group, any _symmetrical_ generating set \(S\) such that \(P(S)\) has vertices on the \(x\)-axis will yield exponential geodesic growth, and any generating set \(S\) such that \(P(S)\) has no vertex on the \(x\)-axis should yield super-polynomial geodesic growth.

More generally, could we construct virtually \(s\)-step nilpotent examples with polynomial geodesic growth on top of any filiform nilpotent group of type I

\[\mathscr{F}_{s}=\left\langle y,z_{1},z_{2},\ldots,z_{s}\ \big{|}\ [y,z_{i}]=z_{i+1},\ [y,z_{s}]=[z_{i},z_{j}]=1\right\rangle\,?\]

If Conjecture A holds, then the characterization of groups of polynomial geodesic growth (for symmetric generating set) is reduced to the following question:

**Problem C.** Characterize finite subgroups \(F\leqslant\operatorname{GL}_{d}(\mathbb{Z})\) for which

(ii') there exists a finite _symmetric_ set \(A\subset\mathbb{Z}^{d}\) such that \(P=\operatorname{ConvHull}(A^{F})\) is full-dimensional and each facet of \(P\) contains at most one point of \(A\).

Ideally, the characterization should be algorithmic. Given \(F\) as a finite set of integer-valued matrices, one should be able to decide whether or not the condition is satisfied. Moreover, one should be able to answer the following question:

**Question D.** Does the condition (ii') imply the BBES's condition? (That is, if such an \(A\) exists, can we ask that \(A\) consists only of a pair of antipodal points?)

Note that the asymmetrical condition (ii) does not imply BBES's condition, as shown by the group \(G_{2}\) treated in the introduction.

Finally, we would like to reiterate

**Question E.** (See [1, Problem 13]) Does there exist a pair \((G,S)\) of intermediate geodesic growth and intermediate volume growth?

Exponential geodesic growth has been established in several examples of intermediate volume growth by [1], for instance the first Grigorchuk group with standard generating set. The simplest open example is the Fabrikowski-Gupta group. We isolate this question as an answer in either direction would require new insight on these groups.
2305.09380
Unimodular Proca Theory: Breaking the U(1) gauge symmetry of unimodular gravity via a mass term
We study the Hamiltonian structure of unimodular-like theories, where the cosmological constant (or other supposed constants of nature) are demoted from fixed parameters to classical constants of motion. No new local degrees of freedom are present as a result of a $U(1)$ gauge invariance of the theory. Hamiltonian analysis of the action reveals that the only possible gauge fixing that can be enforced is setting the spatial components of the four-volume time vector ${\cal T}^{i}\approx0$. As a consequence of this, the gauge-fixed unimodular path integral is equivalent to the minisuperspace unimodular path integral. However, should we break the $U(1)$ gauge invariance, two things happen: a massless propagating degree of freedom appears, and the (gauge-invariant) zero-mode receives modified dynamics. The implications are investigated, with the phenomenology depending crucially on the target ``constant''.
Raymond Isichei, João Magueijo
2023-05-16T12:05:26Z
http://arxiv.org/abs/2305.09380v2
# Unimodular Proca Theory: Breaking the \(U(1)\) gauge symmetry of unimodular gravity via a mass term

###### Abstract

We study the Hamiltonian structure of unimodular-like theories, where the cosmological constant (or other supposed constants of nature) are demoted from fixed parameters to classical constants of motion. No new local degrees of freedom are present as a result of a \(U(1)\) gauge invariance of the theory. Hamiltonian analysis of the action reveals that the only possible gauge fixing that can be enforced is setting the spatial components of the four-volume time vector \(\mathcal{T}^{i}\approx 0\). As a consequence of this, the gauge-fixed unimodular path integral is equivalent to the minisuperspace unimodular path integral. However, should we break the \(U(1)\) gauge invariance, two things happen: a massless propagating degree of freedom appears, and the (gauge-invariant) zero-mode receives modified dynamics. The implications are investigated, with the phenomenology depending crucially on the target "constant".

## I Introduction

Ever since General Relativity was proposed, a question has been hanging over the theory, as posed by Einstein himself [1]: should diffeomorphisms be restricted to volume preserving ones? If so, this assumption leads to the so-called "unimodular" theory of gravity, and this was one of the first attempts at grand-unification. Later it was found that unimodular gravity demotes the cosmological constant from a fixed pre-given parameter to a constant of motion [2; 3; 4; 5; 6; 7; 8], leading to considerable ambiguity [2; 3; 4; 11] over whether this resolves, ameliorates or simply does nothing to solve the cosmological constant problem [12; 13].

Possibly the cleanest formulation of unimodular gravity is due to Henneaux and Teitelboim (HT) [3], who eschew the restriction of fixed volume for the diffeomorphisms in favour of a Lagrange multiplier density \(\mathcal{T}^{\mu}\) enforcing the on-shell constancy of \(\Lambda\) (preserving full diffeomorphism invariance). This density can be used to build a physical definition of time, which on-shell becomes "4-volume time" [7; 8; 15; 16]. The HT procedure can be used as a blueprint for turning other parameters into constants of motion [17; 18], such as the Planck mass or the gravitational coupling, with alternative conjugate time variables, such as the Ricci or the matter times, respectively.

Unimodular gravity does not introduce new local degrees of freedom, even though it does introduce a global one. The reason for this is a \(U(1)\) gauge-invariance of the theory (first noted by [3], and with several reformulations which we review in Section III). As the Hamiltonian analysis in Section IV will show, this induces constraints which always work to subtract from the phase space any new local degrees of freedom, whatever formulation one chooses. This is also evident in the path integral formulation (Section V). But if we break the \(U(1)\) gauge-invariance, for example, via a Proca mass term (as done for electromagnetism in Maxwell-Proca theory), this releases a new local particle, as we show in this paper.

We first do this in Section VI by explicitly evaluating the equations of motion and their solutions. These can be split into a revised zero-mode equation (producing a time-varying Lambda, instead of the usual on-shell constant, and a corrected conjugate relational time) together with a propagating massless mode.
The associated phenomenology is not ideal, at least for the simplest models (as we discuss in Section IX), but the purpose of this paper is to illustrate the concept, rather than to fine-tune the phenomenology. Also the observational implications do depend on the target constant. In this paper we shall use signature \(-+++\) (important for defining the sign of the Proca term) and use units such that \(\hbar=c=1\) (relevant for evaluating the mass dimensions of the Proca coupling). ## II The HT formulation of unimodular theory In the HT formulation of unimodular theory full diffeomorphism invariance is preserved, but one adds to the base action \(S_{0}\) an additional term: \[S_{0}\to S=S_{0}+S_{U}\equiv S_{0}-\int d^{4}x\,\Lambda(\partial_{\mu} \mathcal{T}^{\mu}). \tag{1}\] Here \(\mathcal{T}^{\mu}\) is a density, so that the added term is diffeomorphism invariant without using the metric or the connection. Since these do not appear in the new term, the Einstein equations (and other field equations) are left unchanged. In standard HT, given that \(S_{0}\) does not depend on \(\mathcal{T}^{\mu}\), one gets on-shell constancy of \(\Lambda\). Note the gauge symmetry [3]: \[\mathcal{T}^{\mu}\rightarrow\mathcal{T}^{\mu}+\epsilon^{\mu}\quad\text{with} \quad\partial_{\mu}\epsilon^{\mu}=0, \tag{2}\] rendering local degrees of freedom in the theory pure gauge modes. However, the zero-mode of \({\cal T}^{0}\): \[T(\Sigma)\equiv\int_{\Sigma}d^{3}x\,{\cal T}^{0} \tag{3}\] is gauge-invariant. It provides a physical definition of time, canonically dual to \(\Lambda\). On-shell \({\cal T}\) is proportional to the 4-volume to the past, or unimodular time (indeed it is equal to it if we replace \(\Lambda\) by \(\rho_{\Lambda}\) in the above). More generally, we may select a set of \(D\) constants \(\alpha\) and take: \[S_{0}\to S=S_{0}-\int d^{4}x\,\mathbf{\alpha}\cdot\partial_{\mu}{ \cal T}^{\mu}_{\mathbf{\alpha}} \tag{4}\] where the dot denotes the Euclidean inner product in \(D\) dimensional space. As with unimodular theory, the zero-modes of the zero components of the density \({\cal T}^{\mu}_{\mathbf{\alpha}}\) provide definitions of time \({\cal T}_{\mathbf{\alpha}}\), dual to the on-shell constants \(\alpha\). We will also occasionally integrate the unimodular action by parts and use: \[S_{0}\to S=S_{0}+\int d^{4}x\,(\partial_{\mu}\mathbf{\alpha})\cdot{ \cal T}^{\mu}_{\mathbf{\alpha}} \tag{5}\] (but see the implications this has for the path integral in [25]). For definiteness we will make our arguments with \(\Lambda\), but at the end of the paper will contemplate more general situations. Where required we will also choose: \[S_{0}=\frac{M_{P}^{2}}{2}\int d^{4}x\,\sqrt{-g}(R-2\Lambda) \tag{6}\] for definiteness but any alternative theory of gravity could be inserted into our calculations. Here \(M_{P}^{2}=1/(8\pi G)\) is the reduced Planck mass. ## III The underlying symmetry The underlying symmetry of unimodular-like theories is a non-standard representation of the \(U(1)\) symmetry, which has appeared in the literature in various contexts. In spite of the superficial differences, its various guises are closely related. By dualizing \({\cal T}^{\mu}\) we obtain a 3-form density, \({\bf T}\), in terms of which the HT action can be written as: \[S_{U}=\int\Lambda d{\bf T}=\int\Lambda{\bf F}. 
\tag{7}\]

The gauge transformation (2) has the dual formulation \({\bf T}\rightarrow{\bf T}+{\bf E}\) where \(d{\bf E}=0\) (so that\({}^{1}\) \({\bf E}=d{\bf\Omega}_{2}\) for a 2-form \({\bf\Omega}_{2}\)), resulting in:

Footnote 1: Recall that any p-form can be written as the sum of a closed p-form and the dual of a closed \(D-p\)-form.

\[{\bf T}\rightarrow{\bf T}+d{\bf\Omega}_{2}.\tag{8}\]

This is just the usual \(U(1)\) gauge transformation represented by a 3-form gauge-field, with the usual scalar function replaced by a 2-form \({\bf\Omega}_{2}\). It is a 3 dimensional representation because we can shift \({\bf\Omega}_{2}\rightarrow{\bf\Omega}_{2}+d{\bf\Omega}_{1}\), where in turn we can shift \({\bf\Omega}_{1}\rightarrow{\bf\Omega}_{1}+d{\bf\Omega}_{0}\). So of the 4 degrees of freedom of \({\bf\Omega}_{1}\) only 3 are relevant, and so of the 6 degrees of freedom of \({\bf\Omega}_{2}\) only 3 are relevant.

Relation with Hawking's electromagnetic 3-form [19] (and related supergravity constructions [20]) is evident. Ditto with p-form Electromagnetism [22], which amounts to an equivalent construction but with membranes as charged sources. But superficially the unimodular action seems different. The two differences are that Hawking's action contains the metric, whereas the unimodular one does not; and that Hawking's action is quadratic in F (it is an electromagnetic action; cf. \(p\)-form EM as in [22]), whereas the unimodular action is linear in \(F\).

Regarding the first difference we note that Ref. [21] has also elected to define unimodular theory with the metric included:

\[S_{U}=\int d^{4}x\,\sqrt{-g}T^{\mu}\partial_{\mu}\Lambda\tag{9}\]

where \(T^{\mu}\) is a proper vector (or a density with weight zero). This is equivalent to defining \({\cal T}^{\mu}=\sqrt{-g}\,T^{\mu}\) and does not have an effect on the classical theory without Proca symmetry breaking terms. Indeed:

\[S_{U}=-\int d^{4}x\Lambda\partial_{\mu}{\cal T}^{\mu}=-\int d^{4}x\sqrt{-g}\Lambda\nabla_{\mu}T^{\mu}\tag{10}\]

with the equivalent equations of motion

\[\nabla_{\mu}T^{\mu}=-M_{P}^{2}\tag{11}\]
\[\partial_{\mu}\Lambda=0.\tag{12}\]

The contribution to the energy-momentum tensor of the new term in the action is proportional to \(\partial_{\mu}\Lambda\) and so vanishes on-shell. So although in this formulation the unimodular action contains the metric, the new term does not contribute to the Einstein equations.

Regarding the second difference, we could regard unimodular and p-form electromagnetism as "\(F(R)\)" (in analogy with the Einstein-Hilbert action) versions of each other, since one is quadratic in \(F\), the other linear. This is not very useful, since \(F\) is essentially a scalar (it is dual to a scalar), so at least in the most basic case all these theories are just field redefinitions of each other. This is not true in more complicated models, or if one breaks the gauge symmetry.

## IV The Hamiltonian structure of HT unimodular theory

As was traditional at the time, the HT action was reconstructed from the Hamiltonian and the action's presumed 3+1 split, but we can work in the opposite direction (as was done before in [4]). In doing so, many different treatments are possible, all leading to the same result: that no local degrees of freedom arise. The multitude of possible treatments is due to the freedom to integrate the unimodular extension by parts. The resulting boundary terms are constants which do not affect the Hamiltonian analysis.
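Each of the three treatments below ends in the same standard Dirac count of local degrees of freedom, \(D=(P-2F-S)/2\), with \(P\) phase-space fields, \(F\) first-class and \(S\) second-class constraints per space point. A throwaway helper (ours, not from the paper; Python assumed) makes the bookkeeping explicit:

```python
def local_dof(phase_space_fields, first_class, second_class=0):
    """Standard Dirac counting: D = (P - 2F - S) / 2 local degrees of freedom."""
    D = (phase_space_fields - 2 * first_class - second_class) / 2
    assert D == int(D) and D >= 0, "inconsistent constraint structure"
    return int(D)

assert local_dof(2, first_class=1) == 0                  # Method 1 below
assert local_dof(8, first_class=4) == 0                  # Method 2 below
assert local_dof(8, first_class=1, second_class=6) == 0  # Method 3 below
# Breaking the gauge symmetry turns constraints second class (Sec. VII):
assert local_dof(8, first_class=0, second_class=6) == 1
```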
### Method 1

For example, if the derivatives are on \(\Lambda\) then we may consider the integral

\[S_{U}=\int d^{4}x\,(\partial_{\mu}\Lambda)\mathcal{T}^{\mu}=\int dt\,d^{3}x[\dot{\Lambda}\mathcal{T}^{0}+(\partial_{i}\Lambda)\mathcal{T}^{i}]\tag{13}\]

and immediately state that we have 2 new phase space variables (\(\Lambda\) and \(\mathcal{T}^{0}\)), and look at \(\mathcal{T}^{i}\) as Lagrange multipliers enforcing one new primary constraint

\[\partial_{i}\Lambda\approx 0.\tag{14}\]

This does not generate secondary constraints, and it is first class because it commutes with everything else. Hence the counting of local degrees of freedom goes as:

\[D=\frac{2-2\times 1}{2}=0.\tag{15}\]

However, there are more complicated alternatives which, whilst exactly equivalent to this simple argument for straight unimodular theory, will be helpful to clarify matters once we break its gauge invariance.

### Method 2

Although a momentum conjugate to \(\mathcal{T}^{i}\) does not appear in the 3+1 split unimodular action, this can be inserted in the split action with Lagrange multipliers \(\lambda^{i}\) forcing it to vanish on-shell. The action is then given by

\[S_{U}=\int dt\ d^{3}x\,\left[\dot{\Lambda}\mathcal{T}^{0}+\dot{\pi}_{i}\mathcal{T}^{i}+\lambda^{i}\pi_{i}+\mathcal{T}^{i}(\partial_{i}\Lambda)\right]\tag{16}\]

with the Lagrange multiplier density \(\lambda^{i}\) enforcing 3 primary constraints:

\[\pi_{i}\approx 0.\tag{17}\]

Since:

\[\dot{\pi}_{i}=\{\pi_{i},H\}=\partial_{i}\Lambda\tag{18}\]

we now get (14) as a secondary constraint. No further constraints exist, and they all commute, so they are first class. In this rendition we have 8 new phase space variables and 4 first class constraints, so again no new local d.o.f.s:

\[D=\frac{8-2\times 4}{2}=0.\tag{19}\]

It is these first class constraints that generate the gauge transformations under which the theory is invariant.

### Method 3

We could also gauge fix \(\mathcal{T}^{i}=0\) at the expense of introducing second class constraints, writing the action as:

\[S_{U}=\int d^{4}x\,[-\Lambda\dot{\mathcal{T}}^{0}-\pi_{i}\dot{\mathcal{T}}^{i}-\Lambda(\partial_{i}\mathcal{T}^{i})+\lambda^{i}\pi_{i}+\mu_{i}\mathcal{T}^{i}].\tag{20}\]

The last 6 constraints are then second class because \(\{\pi_{i},\mathcal{T}^{j}\}=\delta_{i}^{j}\). The number of degrees of freedom associated with this action is then counted as:

\[D=\frac{8-2\times 1-3-3}{2}=0.\tag{21}\]

This is the form of the action most suitable to a path integral treatment, as we explain below.

Hence, no new local degrees of freedom emerge, a matter due to the structure of constraints, which points to the underlying gauge symmetry of the theory. Note that the conclusion is different for _global degrees of freedom_, which indeed are increased by 1, as evident if we impose homogeneity and isotropy. Whereas in standard General Relativity subjected to this reduction, de Sitter space time has no degrees of freedom, in unimodular gravity there is one degree of freedom: the freedom to specify the (constant, global) value of \(\Lambda\).

## V The Path Integral

Path integrals containing dynamical variables which obey gauge symmetries require the gauge fixing of these variables to avoid contributions from an infinite gauge volume when evaluated. Consequently, the Method 3 action is the most relevant action for path integral calculations, and so we have to deal with second class constraints.
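The second-class structure of Method 3 is as benign as possible: the constraint matrix worked out in the next paragraph is the canonical symplectic block form, whose determinant is exactly 1, so the extra measure factor in the path integral is trivial. A quick numerical confirmation (our own check in Python with numpy, not the paper's):

```python
import numpy as np

I3, Z3 = np.eye(3), np.zeros((3, 3))
# Poisson brackets of the six primary second-class constraints (T^i, pi_j):
# {T^i, pi_j} = delta^i_j, all other brackets vanish.
xi = np.block([[Z3, I3], [-I3, Z3]])
assert np.isclose(np.linalg.det(xi), 1.0)
```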
In the unimodular case, the action given by Method 3 has three primary second class constraints \(\mu_{i}\) which enforce \(\mathcal{T}^{i}\approx 0\) and three primary second class constraints \(\lambda^{i}\) which enforce \(\pi_{i}\approx 0\). The path integral starts off as:

\[Z=\int\mathcal{D}\Lambda\mathcal{D}T^{0}\mathcal{D}\lambda^{i}\mathcal{D}\mu_{i}\prod_{t}|\mathrm{det}\{\xi_{\mathrm{a}},\xi_{\mathrm{b}}\}|^{\frac{1}{2}}\exp\Biggl{[}i\int d^{4}x(-\Lambda\dot{T}^{0}-\pi_{i}\dot{T}^{i}-\Lambda(\partial_{i}T^{i})+\lambda^{i}\pi_{i}+\mu_{i}T^{i})\Biggr{]},\]

where \(\mathrm{det}\{\xi_{\mathrm{a}},\xi_{\mathrm{b}}\}\) is the determinant of the antisymmetric matrix whose entries are the Poisson brackets of second class constraints \(\xi_{a}\) and \(\xi_{b}\). It was previously shown in [24] that this is the path integral equivalent of Dirac bracket canonical quantisation for theories containing second class constraints. In this case, as the constraints are all primary second class, the antisymmetric matrix takes the form:

\[\{\xi_{a},\xi_{b}\}=\left|\begin{matrix}\{T^{i},T^{j}\}&\{T^{i},\pi_{j}\}\\ \{\pi_{i},T^{j}\}&\{\pi_{i},\pi_{j}\}\end{matrix}\right|=\left|\begin{matrix}0_{(3\times 3)}&\delta_{j(3\times 3)}^{i}\\ -\delta_{j(3\times 3)}^{i}&0_{(3\times 3)}\end{matrix}\right|\]

The determinant of this antisymmetric matrix is therefore 1. As the constraints enforce the vanishing of the spatial components \({\cal T}^{i}\) and of their conjugate momenta on-shell, the path integral for the unimodular extension reduces to

\[Z=\int{\cal D}\Lambda{\cal D}T^{0}\,\exp\Biggl{[}i\left(\int d^{4}x\ \dot{\Lambda}{\cal T}^{0}-\left[\Lambda{\cal T}^{0}\right]^{f}_{i}\right)\Biggr{]}.\tag{22}\]

The boundary term resulting from integration of the first term by parts has been included since it is non vanishing in general. This is exactly the minisuperspace path integral used to derive the Hartle-Hawking and Vilenkin metric representation and Chern-Simons connection representation wavefunctions of the universe [25]. This is unsurprising since \({\cal T}^{i}\approx 0\) is also a consequence of enforcing homogeneity and isotropy.

We stress that the results of this section are independent of the representation chosen for the base. These results also hold if we choose \(\alpha=M_{P}^{2}\) as this constant has mass dimensions \([M_{P}^{2}]=M^{2}\). Choosing the base action as (6), the total action is then given by:

\[S=S_{0}-\int d^{4}x\ M_{P}^{2}\cdot\partial_{\mu}T_{R}^{\mu},\tag{23}\]

where \(T_{R}\) is the Ricci time dual to the Planck mass squared. The Ricci time variable will also have mass dimension \([T_{R}]=M^{1}\). In this case, the path integral for the unimodular extension takes the exact same form as (22), with \(\Lambda\) replaced by \(M_{P}^{2}\). In the case where both \(\Lambda\) and \(M_{P}^{2}\) are promoted to variables, the full path integral is given by:

\[\begin{split}{\cal Z}=&\int{\cal D}g_{\mu\nu}\int{\cal D}\Lambda\int{\cal D}T^{0}\int{\cal D}M_{P}^{2}\int{\cal D}T_{R}^{0}\\ &\exp\biggl{[}i\Bigl{(}S_{0}+\int d^{4}x\ \dot{\Lambda}{\cal T}^{0}+\int d^{4}x\ \dot{M}_{P}^{2}T_{R}^{0}\\ &-\left[\Lambda T^{0}\right]^{f}_{i}-\left[M_{P}^{2}T_{R}^{0}\right]^{f}_{i}\Bigr{)}\biggr{]}.\end{split}\tag{24}\]

Both \(\Lambda\) and \(M_{P}^{2}\) are functions of coordinate time only. Evaluating the \(T^{0}\) and \(T_{R}^{0}\) path integrals leads to the delta functions \(\delta(\dot{\Lambda})\) and \(\delta(\dot{M}_{P}^{2})\).
In general, for a function \(f(t)\), the functional and normal integration measures are related by \(df(t)={\cal D}f\delta(\dot{f})\). Applying this relation to the above path integral leads to the reduced path integral:

\[{\cal Z}=\int{\cal D}g_{\mu\nu}\int d\Lambda\int dM_{P}^{2}\,\exp\biggl{[}i\Bigl{(}S_{0}-\left[\Lambda T^{0}\right]^{f}_{i}-\left[M_{P}^{2}T_{R}^{0}\right]^{f}_{i}\Bigr{)}\biggr{]}.\tag{25}\]

If the initial and final values of \(\Lambda\), \(M_{P}^{2}\) and their dual times are chosen to be equal, then the boundary terms in (25) vanish. Physically, this may correspond to an expansion of the universe from these initial values until it reaches a maximum size, then re-collapse back to the same initial state. This scenario is an important aspect of phenomenology in the Sequester model. Therefore, it is no surprise that the above path integral corresponds to the sequester path integral conjectured in [13].

## VI Proca term in unimodular-like theories

Given the \(U(1)\) symmetry of unimodular-like theories one may expect a parallel with electromagnetism, the EM potential \(A^{\mu}\) paralleled by \({\cal T}^{\mu}\). This is explored in [23], where a free kinetic term is added on to unimodular theory. Here we investigate symmetry breaking in straight unimodular theory, using Proca's theory as a parallel. Recall that the Proca action in electromagnetism is defined by the addition to the usual EM action of a mass term quadratic in \(A_{\mu}\):

\[S_{PROCA}=\int d^{4}x\bigg{(}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{M^{2}}{2}A_{\mu}A^{\mu}\bigg{)}\tag{26}\]

(using the common assumption of giving positive energy to the spatial modes in \((A^{i})^{2}\)). The action is then no longer invariant under the local \(U(1)\) gauge symmetry defined by the transformation \(A_{\mu}\to A_{\mu}+\partial_{\mu}\xi\).

Similarly, a Proca mass term for \({\cal T}^{\mu}\) can be defined. However, it must be emphasized that \({\cal T}^{\mu}\) is a vector density, so the most literal Proca term would not be a scalar (this could lead to an interesting model for local Lorentz symmetry breaking). As explained after (9), to make \({\cal T}^{\mu}\) a proper vector we must appeal to the metric, defining:

\[\begin{split}T^{\mu}&={\cal T}^{\mu}/\sqrt{-g}\\ T_{\mu}&=g_{\mu\nu}T^{\nu}.\end{split}\tag{27}\]

A suitable scalar Proca term can then be added to unimodular theory:

\[S_{U}=\int d^{4}x\ \bigg{(}-\Lambda(\partial_{\mu}{\cal T}^{\mu})+\frac{M^{2}}{2}\sqrt{-g}T_{\mu}T^{\mu}\bigg{)}=\int d^{4}x\ \bigg{(}-\Lambda(\partial_{\mu}{\cal T}^{\mu})+\frac{M^{2}}{2}\frac{g_{\mu\nu}}{\sqrt{-g}}{\cal T}^{\mu}{\cal T}^{\nu}\bigg{)}\tag{28}\]

where we use the opposite convention for the sign of the mass term as that used in Proca theory, for reasons that will be obvious presently (we want to give positive energy to the scalar mode). The added mass term respects diffeomorphism invariance while breaking the \(U(1)\) gauge symmetry associated with the unimodular extension. We can also write the action completely in terms of \(T^{\mu}\) as:

\[S_{U}=\int d^{4}x\sqrt{-g}\ \bigg{(}-\Lambda\nabla_{\mu}T^{\mu}+\frac{M^{2}}{2}T_{\mu}T^{\mu}\bigg{)}=\int d^{4}x\sqrt{-g}\ \bigg{(}(\partial_{\mu}\Lambda)T^{\mu}+\frac{M^{2}}{2}T_{\mu}T^{\mu}\bigg{)}\tag{29}\]

(where we have ignored a boundary term going between the two). Due to notational and algebraic simplicity for further calculations, the above action will be used as the unimodular Proca action.
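The mass-dimension bookkeeping for such couplings, spelled out for an arbitrary target "constant" in the next paragraph, is mechanical enough to automate. A small sketch (our own helper names, using the paper's units \(\hbar=c=1\)):

```python
def density_dim(n):
    """[T^mu] = M^(3-n) when [alpha] = M^n: the HT term  d^4x alpha d_mu T^mu
    must be dimensionless ([d^4x] = M^-4, [d_mu] = M)."""
    return 3 - n

def proca_coupling_dim(n):
    """[M^2] = M^(2n-2): require  d^4x sqrt(-g) (M^2/2) T_mu T^mu
    to be dimensionless, with [T^mu] = M^(3-n)."""
    return 4 - 2 * density_dim(n)

assert proca_coupling_dim(2) == 2   # alpha = Lambda: a genuine mass term
assert proca_coupling_dim(4) == 6   # alpha = rho_Lambda: coupling of dimension M^6
```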
We stress that all subsequent results can be obtained from either rendition of the Proca action. This procedure could be applied to any other unimodular-like theory based on different \(\mathbf{\alpha}\) (cf. Eq. 4), but the dimensions of the Proca coupling would have to be adjusted. For a target "constant" \(\alpha\) with mass dimensions:

\[[\alpha]=M^{n}\tag{30}\]

we have (recalling that we set \(\hbar=c=1\)):

\[[T^{\mu}]=[\mathcal{T}^{\mu}]=M^{3-n}\tag{31}\]

and so a (gauge-invariant) zero-mode time \([T_{\alpha}]=[V][\mathcal{T}^{0}]=M^{-n}\) (cf. Eq. 3). Hence the Proca coupling has dimensions \(M^{2n-2}\). In the case we have illustrated, \([\Lambda]=M^{2}\), so the Proca term is indeed a mass term and the \(M\) used above is indeed a "mass". But if we target the Planck mass squared, \(\alpha=M_{P}^{2}\), so that its dual is the Ricci time used in sequestration [26; 27], then the Proca coupling is dimensionless. Also it matters which function of the constant or constants we took for \(\alpha\) (that is, canonical transformations matter). For example, if we took \(\rho_{\Lambda}\) instead of \(\Lambda\) (as is done in [26; 27]), then indeed the conjugate is four-volume time (\([T]=M^{-4}=L^{4}\)), unlike the time conjugate to \(\Lambda\) (which would be mixed with Ricci time in sequester scenarios). But then the coupling would have dimensions \(M^{6}\).

Finally, in the context of unimodular-like theories with several \(\alpha_{I}\) and \(T_{I}^{\mu}\) (where \(I\) indexes the different \(\mathbf{\alpha}\) in (4)), we could define a general mass matrix:

\[S_{U}=\int d^{4}x\sqrt{-g}\,\left(\,-\alpha_{I}\nabla_{\mu}T_{I}^{\mu}+\frac{M_{IJ}^{2}}{2}T_{I\mu}T_{J}^{\mu}\right)\tag{32}\]

(Einstein notation implied for the indices \(I,J\)). The fact that this matrix does not need to be diagonal, but can be diagonalized, then points to the existence of a rotation between a "flavour" space of constants and their canonically conjugated times, and the mass eigenmodes. In a future publication we will explore how this can be applied to the sequester mechanism to shed light on the value of the observed cosmological constant.

## VII Solutions and the Propagating Modes

Breaking the symmetry releases a new propagating degree of freedom, but whereas in Proca's case this is in addition to 2 existing ones (a longitudinal polarization mode in addition to the 2 transverse ones), here the original theory has no local degrees of freedom. This can be illustrated by evaluating the equations of motion of (28) (but similar results would be obtained starting from (29)), to find:

\[\nabla_{\mu}T^{\mu}=-M_{P}^{2}\tag{33}\]
\[\partial_{\mu}\Lambda=-M^{2}T_{\mu}\tag{34}\]

that is, the equation of motion for \(T^{\mu}\) remains the same (cf. 11) but the equation for \(\Lambda\) is modified (cf. 12), with local variations in \(\Lambda\) permitted. These equations can be combined into:

\[\Box\Lambda=M^{2}M_{P}^{2}\tag{35}\]

which is a sourced wave equation. Since this equation is linear its general solutions can be written as:

\[\Lambda=\Lambda_{0}+\chi\tag{36}\]

with:

\[\Box\chi=0\tag{37}\]
\[\Box\Lambda_{0}=M^{2}M_{P}^{2}\tag{38}\]

We can choose \(\Lambda_{0}\) to be homogeneous on the surface \(\Sigma\) defining the gauge-invariant zero mode that provides the time \(T\). Hence \(\Lambda_{0}\) is the (time-varying) zero mode in \(\Lambda\) in this model (we will evaluate it in Section IX). The mode \(\chi\) is a massless scalar field, a Lambdon, as it were, upon quantization.
This is the new local degree of freedom that has been released by breaking gauge invariance. From the point of view of the Hamiltonian analysis performed in Section IV it is not surprising that we acquire a local degree of freedom. By breaking the gauge symmetry we lose constraints (or change their nature), increasing the number of degrees of freedom. Using Section IV.2, for example, we get the secondary constraint:

\[S_{i}=\dot{\pi}_{i}=\{\pi_{i},H\}=\partial_{i}\Lambda+M^{2}T_{i}\approx 0\tag{39}\]

where before we had the spatial constancy of \(\Lambda\). These 3 secondary constraints do not commute with the primary constraints (since \(\{\pi_{i},S_{j}\}=M^{2}\delta_{ij}\)) so they are second class. This results in one local degree of freedom (\((8-6)/2=1\)).

Note that unlike in the formulation of [21] the use of the metric and of the vector \(T^{\mu}\) does result in a contribution to the Stress-Energy-Momentum tensor in Unimodular Proca theory:

\[\begin{split}T_{\mu\nu}&=\frac{-2}{\sqrt{-g}}\frac{\delta\mathcal{S}_{M}}{\delta g^{\mu\nu}}\\ &=-[(\partial_{\mu}\Lambda)T_{\nu}+(\partial_{\nu}\Lambda)T_{\mu}]+g_{\mu\nu}(\partial_{\alpha}\Lambda)T^{\alpha}-M^{2}\left(T_{\mu}T_{\nu}-\frac{1}{2}g_{\mu\nu}T_{\alpha}T^{\alpha}\right).\end{split}\tag{40}\]

For plain unimodular theory (\(M^{2}=0\)) this is zero on-shell (since \(\partial_{\mu}\Lambda=0\) is an equation of motion). This is not the case if \(M^{2}\neq 0\). Eliminating the \(T^{\mu}\) by means of (34) we get:

\[T_{\mu\nu}=\frac{1}{M^{2}}\bigg{(}(\partial_{\mu}\Lambda)(\partial_{\nu}\Lambda)-\frac{1}{2}g_{\mu\nu}(\partial_{\alpha}\Lambda)(\partial^{\alpha}\Lambda)\bigg{)}\tag{41}\]

This is just the stress energy tensor of a massless scalar field \(\Lambda/M\). Notice that it was important to choose the sign of \(M^{2}\) for Unimodular-Proca theory as we did, that is the opposite of the sign chosen for Electromagnetic-Proca theory. This is because our new mode is a scalar instead of a longitudinal polarization. Therefore we want to give positive energy to the scalar mode \(T^{0}\), rather than the spatial modes \(T^{i}\).

## VIII BRST quantisation of the original unimodular theory

In Sec. III we discussed how the gauge symmetry of unimodular-like theories is a non-standard 3-dimensional representation of the \(U(1)\) gauge group. This differs from regular electromagnetism which is based on a 1-dimensional representation of the \(U(1)\) gauge symmetry. However, this shared \(U(1)\) gauge symmetry implies that the Becchi-Rouet-Stora-Tyutin (BRST) quantisation of these theories should be similar. To gauge-fix the Maxwell action, one deals with terms of the form \(B(\partial_{\mu}A^{\mu})\) or \(B(\partial_{i}A^{i})\), where \(B\) is a Nakanishi-Lautrup auxiliary field. This auxiliary field leads to a Dirac-delta function enforcing the Lorenz gauge \(\partial_{\mu}A^{\mu}\approx 0\), or the Coulomb gauge \(\partial_{i}A^{i}\approx 0\), on-shell. One can equivalently consider off-shell formulations of these gauge fixing conditions. In these formulations, a gauge-parameter is introduced such that the gauge fixing action is quadratic in this new parameter and the particular gauge to be enforced. Substituting the equations of motion for the gauge parameter back into the action yields the original gauge fixing term.

In Sec. V, we showed that the minisuperspace path integral is the result of enforcing the gauge fixing condition \(T^{i}\approx 0\).
Taking inspiration from the Maxwell case, in this section we will develop an off-shell formulation of this gauge fixing condition. This off-shell gauge fixing action contains a spatial Proca term which is perfectly consistent with previous results for unimodular theories. Additionally, we develop a consistent BRST quantisation of the original and off-shell gauge fixing actions. This procedure closely mirrors that of the Maxwell case.

### The Off-Shell gauge fixing action & Spatial Proca

From the Hamiltonian analysis we see that the on-shell vanishing of the \(T^{i}\) components and the survival of the \(T^{0}\) component allow for the notion of a 4-volume time on-shell. The addition of a quadratic term \((T^{0})^{2}\) ruins the 4-volume time relation and leads to a contribution to the energy-momentum tensor. However, a suitable Proca term for the spatial components can be defined which respects the original unimodular relations. The off-shell formulation of the \(T^{i}\) gauge fixing condition is defined as:

\[S_{\rm GF}^{\rm OFF-SHELL}=-\int d^{4}x\sqrt{-g}\ \Big{(}\frac{1}{2\xi}T_{i}T^{i}+\frac{1}{2}\gamma_{i}\gamma^{i}\xi\Big{)},\tag{42}\]

where \(\xi\) is a gauge parameter and \(\gamma_{i}\) is an auxiliary 3-vector. Varying \(S_{\rm GF}\) with respect to \(\xi\) yields

\[\sqrt{\frac{T_{i}T^{i}}{\gamma_{i}\gamma^{i}}}=\xi.\tag{43}\]

As the action can be considered as quadratic in \(\xi\), the above equation of motion for \(\xi\) can be substituted into the action to yield:

\[S_{\rm U}+S_{\rm GF}=\int d^{4}x\ \Lambda\dot{T}^{0}+\Lambda(\partial_{i}T^{i})-\gamma^{i}T_{i},\tag{44}\]

which is the original gauge fixing condition. While not central to this analysis, a similar off-shell gauge fixing action for the conjugate momenta \(\pi_{i}\) can be defined.

### The BRST Action & BRST Symmetry

As the path integral has been gauge fixed, the next step in BRST quantisation is to define a Fadeev-Popov ghost action. The quantisation procedure is then considered complete when the original unimodular, gauge-fixing and ghost actions all obey a defined nilpotent BRST symmetry. The BRST symmetry is determined by the Slavnov derivative \(\delta_{B}\) acting on each term in the BRST action. For the action containing the regular gauge-fixing term, this BRST symmetry is defined as

\[\delta_{B}\Lambda=0,\tag{45}\]
\[\delta_{B}T^{0}=0,\tag{46}\]
\[\delta_{B}T^{i}=\nabla^{i}c,\tag{47}\]
\[\delta_{B}c=0,\tag{48}\]
\[\delta_{B}^{\alpha}\nabla^{i}\bar{c}=(\gamma^{i}+\nabla^{i}\Lambda),\tag{49}\]
\[\delta_{B}\gamma^{i}=0.\tag{50}\]

This BRST symmetry is internally consistent as \(\delta_{B}^{2}\) vanishes for all terms. The BRST action associated with the regular gauge fixing condition is given by:

\[S_{\rm BRST}\equiv S_{\rm U}+S_{\rm GF}+S_{\rm GH}=\int d^{4}x\sqrt{-g}(\Lambda\nabla_{\mu}T^{\mu}-\gamma_{i}T^{i}+\nabla^{i}c\nabla_{i}\bar{c}),\]

such that \(\delta_{B}(S_{\rm BRST})=0\).

There exist a number of interesting similarities and differences between the gauge fixing and ghost actions of unimodular gravity and those of electromagnetism. When gauge fixing the Maxwell action, one considers terms such as \(B(\partial_{\mu}A^{\mu})\) or \(B(\partial_{i}A^{i})\), as we said. These are gauge fixing conditions on the derivatives of the components of the gauge field \(A^{\mu}\). In the unimodular case, there are three constraints \(\gamma_{i}\) which directly enforce \(T^{i}\approx 0\).
As the Slavnov derivative acts non-trivially on the spatial components only, the corresponding ghost kinetic term is purely spatial. This differs from usual electromagnetism in which the ghost kinetic term consists of the full four derivative. Additionally, to have a consistent BRST symmetry, the Slavnov derivative acting on \(\bar{c}\) must be accompanied with a gradient term \(\nabla^{i}\). The definition (49) can be viewed in two ways. The first is that the Slavnov derivative \(\delta_{B}\) is acting on \(\nabla^{i}\bar{c}\). The second viewpoint is that \(\delta_{B}\nabla^{i}\) is a unique Slavnov derivative acting on \(\bar{c}\).

A consequence of the nilpotency of the Slavnov derivative is that we can always consider a gauge transformation on the original action defined by \(S_{BRST}\to S_{BRST}+\delta_{B}G_{1}\) where the Slavnov derivative \(\delta_{B}\), and the integral \(G_{1}\) are Grassmann odd such that \(\delta_{B}G_{1}\) is Grassmann even. The overall Grassmann odd quantity \(\delta_{B}(S_{BRST}+\delta_{B}G_{1})\) then vanishes. This property is most evident if we take the BRST action which contains the off-shell gauge fixing condition shown in (42). In this case, the BRST action containing the spatial Proca term is given by:

\[S_{BRST}^{\rm OFF-SHELL}=\int d^{4}x\sqrt{-g}\;\bigg{(}\Lambda\partial_{\mu}T^{\mu}-\frac{1}{2\xi}T^{i}T_{i}+\nabla_{i}\bar{c}\nabla^{i}c\bigg{)}.\]

The previously defined BRST symmetry (cf. Eqns. (45)-(50)) is mostly applicable, with the only difference being in (49), where we should define

\[\delta_{B}^{\beta}\nabla^{i}\bar{c}=\bigg{(}\frac{1}{\xi}T^{i}+\nabla^{i}\Lambda\bigg{)},\tag{51}\]

so that \(\delta_{B}S_{BRST}=0\) for the off-shell action. It must be noted that if we take the perspective that the \(\delta_{B}\) acts on \(\nabla^{i}\bar{c}\), then the BRST symmetry is not nilpotent since \((\delta_{B}^{\beta})^{2}\nabla^{i}c\neq 0\). If we adopt the second perspective that \(\delta_{B}\nabla^{i}\) is an operator acting on \(\bar{c}\) then we see that:

\[\begin{split}(\delta_{B}^{\beta}\nabla)^{2}\bar{c}&=\delta_{B}^{\beta}\nabla_{i}(\delta_{B}^{\beta}\nabla^{i}\bar{c}),\\ &=\delta_{B}^{\beta}\nabla_{i}\bigg{(}\frac{1}{\xi}T^{i}+\nabla^{i}\Lambda\bigg{)},\\ &=\nabla_{i}\bigg{(}\frac{1}{\xi}\delta_{B}^{\beta}(T^{i})+\nabla^{i}(\delta_{B}^{\beta}(\Lambda))\bigg{)},\\ &=\frac{\nabla_{i}\nabla^{i}c}{\xi}\approx 0.\end{split}\tag{52}\]

Therefore the operator \(\delta_{B}^{\beta}\nabla_{i}\) is only nilpotent on-shell. This is in exact analogy with the EM case. Once again, the only difference is due to the BRST symmetry only being defined for \(T^{i}\). In the BRST quantisation of the Maxwell action, the quantity \(\Box c/\xi\) vanishes on-shell.

As the off-shell BRST action has been defined using the first term in the off-shell gauge fixing action (42), we must now define a suitable integral \(G_{1}\) such that \(\delta_{B}G_{1}\) yields the second term in (42). In this case, we will consider two integrals \(G_{1}\) and \(G_{2}\) defined as:

\[\begin{split}G_{1}&=\int d^{4}x\sqrt{-g}\;\frac{1}{2}(-\nabla_{i}\bar{c})\gamma^{i}\xi\\ G_{2}&=\int d^{4}x\sqrt{-g}\;\frac{1}{2}(\nabla_{i}\Lambda)\gamma^{i}\xi\end{split}\tag{53}\]

From these definitions we see that \(G_{1}\) is Grassmann odd while \(G_{2}\) is Grassmann even. To retrieve the second term in (42), we take the Slavnov derivative of \(G_{1}\) while \(G_{2}\) serves purely as a counter-term to remove an unwanted part of \(\delta_{B}G_{1}\).
This yields:

\[\begin{split}\delta_{B}G_{1}+G_{2}&=\int d^{4}x\sqrt{-g}\;\frac{1}{2}(-\delta_{B}^{\alpha}\nabla_{i}\bar{c}+\nabla_{i}\Lambda)\gamma^{i}\xi,\\ &=\int d^{4}x\sqrt{-g}\;\frac{1}{2}(-\gamma_{i}-\nabla_{i}\Lambda+\nabla_{i}\Lambda)\gamma^{i}\xi,\\ &=\int d^{4}x\sqrt{-g}\;\frac{1}{2}(-\gamma_{i}\gamma^{i}\xi),\end{split}\tag{54}\]

which is the second term in the off-shell gauge fixing action (42). The definition of a Grassmann even counter-term, while unorthodox, is consistent with the overall idea of gauge transformations. Initially, we said that we can consider a gauge transformation \(S_{BRST}\to S_{BRST}+\delta_{B}G_{1}\) such that the Grassmann odd quantity \(\delta_{B}(S_{BRST}+\delta_{B}G_{1})=0\). The only other term that we could possibly add to the gauge transformation is a Grassmann even counter-term. In this case, the result is still a Grassmann even quantity \(S_{BRST}+\delta_{B}G_{1}+G_{2}\). The Slavnov derivative acting on this yields the Grassmann odd quantity \(\delta_{B}(S_{BRST}+\delta_{B}G_{1}+G_{2})\) which vanishes. The vanishing of \(\delta_{B}G_{2}\) is ensured by (50).

Another difference between the BRST quantisation of the unimodular and Maxwell actions is the BRST charge of both theories. In the unimodular case, the defined BRST symmetries of both the on-shell and off-shell actions do not contain Slavnov derivatives of \(T^{0}\) or \(\Lambda\). Therefore any BRST current derived from either action must only contain spatial components \(J_{i}^{BRST}\). If the definition of the BRST charge is given by

\[Q_{BRST}=\int d^{3}x\sqrt{-h}\;J_{0}^{BRST},\tag{55}\]

then \(Q_{BRST}\) is trivially zero in this case.

## IX Discussion

We close with some discussion on the phenomenological implications of these theories. This might seem premature with such blatantly "toy" models, but nonetheless we will try, bearing in mind that complicating the theory (for example replacing our mass term by more general potentials) could significantly affect our statements. The two central predictions of Unimodular Proca theory are:

* A time-variation in the zero-mode of Lambda.
* A massless propagating mode, or a "Lambda" particle.

Equivalent statements apply if the procedure targets any other "constant", such as the Planck mass or the gravitational coupling, but obviously the phenomenology is different. Regarding the first implication, and taking \(\Lambda\) as an example, identifying \(\Sigma\) with the cosmological frame, equation (38) becomes:

\[\frac{1}{a^{3}}\frac{d}{dt}\big{(}a^{3}\dot{\Lambda}_{0}\big{)}=-M^{2}M_{P}^{2}\]

where \(a\) is the expansion factor and dots are derivatives with respect to proper cosmological time. Assuming the stress energy tensor of \(\Lambda\) (including its Unimodular-Proca contribution (41)) is subdominant with respect to a dominant fluid with constant equation of state \(w\), we then find:

\[\Lambda_{0}=\bar{\Lambda}_{0}-\frac{M^{2}M_{P}^{2}}{2}\frac{1+w}{3+w}t^{2}\tag{56}\]

where \(\bar{\Lambda}_{0}\) is a constant. Hence the zero-mode of \(\Lambda\) subjected to a Proca term will _decrease_ in time, if we assume that its propagating mode is not a ghost (see discussion after (41); the sign of \(M^{2}\) has this implication). Similar results apply to any other "constant" \(\alpha\) subject to (4). The various constraints on time variations of the constants therefore translate into upper bounds on their respective \(M^{2}\).
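Solution (56) can be checked directly against the cosmological form of (38) displayed above; a sympy sketch, assuming the standard flat-FRW background \(a\propto t^{2/(3(1+w))}\) for the dominant fluid:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
w, M, Mp, Lbar = sp.symbols('w M M_P Lambdabar', positive=True)

a = t**(sp.Rational(2, 3) / (1 + w))        # scale factor, constant w fluid
Lam = Lbar - sp.Rational(1, 2) * M**2 * Mp**2 * (1 + w) / (3 + w) * t**2  # Eq. (56)

# (1/a^3) d/dt (a^3 dLam/dt) should equal -M^2 M_P^2, the zero-mode equation
lhs = sp.simplify(sp.diff(a**3 * sp.diff(Lam, t), t) / a**3)
assert sp.simplify(lhs + M**2 * Mp**2) == 0
```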
For example, constraints from Big Bang Nucleosynthesis will translate into constraints on the mass term of a theory targeting \(\alpha=M_{P}^{2}\) (since they would imply a time variation in the gravitational "constant"). In the specific case of \(\Lambda\) we note that its equation of state is modified, since we can read off from (41): \[\rho_{\Lambda} =M_{P}^{2}\Lambda+\frac{\dot{\Lambda}^{2}}{2M^{2}} \tag{57}\] \[p_{\Lambda} =-M_{P}^{2}\Lambda+\frac{\dot{\Lambda}^{2}}{2M^{2}}, \tag{58}\] (in the case of \(\Lambda\), but not in general, one must add to the stress energy tensor due to the Proca term the one due to \(S_{0}\)). The constraints on the equation of state of dark energy can then also be translated into constraints on \(M\), since it forces \(w_{\Lambda}>-1\). The new terms could even lead to kination instead of inflation. For \(\Lambda\) there is also the possibility that it could become the dominant contribution to the cosmological stress energy tensor, in which case the solution (56) would have to be modified. Regarding the second implication itemised above, we have to contend with the scalar particles predicted by these theories. The fact that they have not been directly seen does not mean that they have not already been ruled out by their implications. This may be the case if \(\alpha\) is the gravitational coupling (or the Planck mass, if the two are identified), for the same reason that gravitational scalars can be ruled out by the millisecond pulsar, for example. However, such arguments depend crucially on the choice of \(\alpha\): if this is \(\Lambda\) then the Lambdon is sufficiently elusive to bypass most indirect observational constraints. In closing we note that there are significant differences between Electromagnetic Proca theory and Unimodular Proca theory. For example, the representation of the gauge group is very different (it has a different dimension), so one cannot lift the Stueckelberg procedure used in the first case to understand the second one. Also, before the symmetry breaking terms are included, unimodular-like theories have no propagating degrees of freedom. The new propagating degree of freedom is not a longitudinal mode, to be added on to the usual two transverse modes; instead it is a new scalar mode. We are in different territory, and so the phenomenology of standard Proca theory is not applicable here. ## X Acknowledgments We thank Claudia de Rham, Arkady Tseytlin and Toby Wiseman for discussions related to this paper. This work was supported by the STFC Consolidated Grant ST/T000791/1 (J.M.).
2302.02272
Divide and Compose with Score Based Generative Models
While score based generative models, or diffusion models, have found success in image synthesis, they are often coupled with text data or image labels to be able to manipulate and conditionally generate images. Even though manipulation of images by changing the text prompt is possible, our understanding of the text embedding and our ability to modify it to edit images is quite limited. Towards the direction of having more control over image manipulation and conditional generation, we propose to learn image components in an unsupervised manner so that we can compose those components to generate and manipulate images in an informed manner. Taking inspiration from energy based models, we interpret different score components as the gradients of different energy functions. We show how score based learning allows us to learn interesting components and we can visualize them through generation. We also show how this novel decomposition allows us to compose, generate and modify images in interesting ways akin to dreaming. We make our code available at https://github.com/sandeshgh/Score-based-disentanglement
Sandesh Ghimire, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy
2023-02-05T00:53:33Z
http://arxiv.org/abs/2302.02272v1
# Divide and Compose with Score Based Generative Models ###### Abstract While score based generative models, or diffusion models, have found success in image synthesis, they are often coupled with text data or image labels to be able to manipulate and conditionally generate images. Even though manipulation of images by changing the text prompt is possible, our understanding of the text embedding and our ability to modify it to edit images is quite limited. Towards the direction of having more control over image manipulation and conditional generation, we propose to learn image components in an unsupervised manner so that we can compose those components to generate and manipulate images in an informed manner. Taking inspiration from energy based models, we interpret different score components as the gradients of different energy functions. We show how score based learning allows us to learn interesting components and we can visualize them through generation. We also show how this novel decomposition allows us to compose, generate and modify images in interesting ways akin to dreaming. We make our code available at [https://github.com/sandeshgh/Score-based-disentanglement](https://github.com/sandeshgh/Score-based-disentanglement) ## 1 Introduction Diffusion based [11] or score based [28] generative models are a new class of generative models based on the idea of reversing the image corruption process to generate realistic images from noise. These approaches have recently become quite successful, not only in synthesizing realistic and diverse images [8], but also in obtaining better data likelihoods [17]. Numerous works have applied score based generative models in text-based image generation [23, 24], inpainting [24], editing [20], etc. Recently developed models like DALL-E [23] and Latent Diffusion [24], built on diffusion models, have been reported to generate realistic and diverse images with a remarkable capacity for imagination. Most of the works that conditionally generate images using diffusion/score models train them in a supervised manner, conditioned either on actual class labels or on embeddings of paired text [23]. Supervised conditional generation can either be guided using the gradient of a pretrained classifier [8] obtained from supervised learning, or it can be classifier free [12]. Building upon the works of text based conditional generation, some works have also tried to edit or manipulate images [1]. While these methods show that, with labels, learning a conditional score model is quite effective, there is one fundamental problem with present conditional generation: we do not have control over what the model generates. Suppose we generate an image based on text. The image looks okay, but it's not quite what we want. How can we change it to match our expectations? Do we have control? Do we have an interpretable understanding of the conditioning? No! This topic is not unexplored in the context of traditional generative models, like VAEs and GANs. In fact, such questions have been extensively explored as disentanglement [4, 19] in the autoencoding setup and as GAN inversion [3, 6] in GANs. Several works in GAN inversion [31] try to find latent representations corresponding to an image and then manipulate the representation in the latent space to edit and manipulate the image. In the case of score based models, however, there is little understanding of the latent factors underlying the generation of images.
Models that use text based conditional generation are opaque, and our only way to manipulate the image is through the generation of another text prompt. To bridge this gap, we are interested in learning interpretable factors in a score based generative model, which could later be used to manipulate and edit images, as in the case of GAN inversion. The first plausible step to learn such factors is an autoencoding type approach, where we first learn different image components and then recompose the image out of those components. Unfortunately, the theoretical formulation of such a diffusion autoencoder is currently unclear. Our first contribution is to cement the theoretical foundation of the diffusion autoencoder through a likelihood based formulation (see section 4.1). From the implementation perspective, we did find the diffusion autoencoder (DiffAE) implementation due to Preechakul et al. [22] to work well and have built upon their implementation. While we agree with DiffAE [22] on the autoencoding setup, we take a very different approach to autoencoding by decomposing an image into different score components. We believe this is arguably better suited for score models, since score functions are their main building blocks. Therefore, we would like to decompose an image into different score components and try to understand their contribution to image generation. We take inspiration from energy based models [9, 10, 32]. Imagine that the probability density of an image is given by a product of exponential distributions of the form: \[p_{0}(x)=\prod p_{0}^{(i)}(x)=\frac{e^{-\sum_{i}E^{(i)}(x)}}{Z} \tag{1}\] where the \(E^{(i)}(x)\) represent energy functions and \(Z\) is the normalization constant, also known as the partition function. Taking the logarithm of eq.(1), we can conclude that modeling the score as a summation of several components can be interpreted as learning different energy components, _i.e._\(s=\nabla_{x}\log p_{0}(x)=\sum_{i}\nabla_{x}(-E^{(i)}(x))\). Based on this intuition, we imagine that we should be able to decompose the score function into different components and train the score based generative model. We ask, could we decompose an image into several interpretable components in an unsupervised way and recombine them to generate new images, akin to dreaming? In this paper, we follow this abstract idea to generate interesting images by dividing an image and recomposing its components. We perform several experiments to illustrate the score components learned in an unsupervised manner and what we can achieve through their composition and manipulation. To interpret the factors captured by score components, we generate images from individual components. Some components capture human interpretable attributes like shape and color, while others do not, as they capture complex textures/features in images. We also modify images by manipulating individual components and interpolating them with the unconditional score, which results in interesting manipulations of images and diverse generation. We discuss how our experiments elicit a new perspective on the interpretability and disentanglement of images. ## 2 Related Works Diffusion generative models based on the denoising idea were first proposed by Ho et al. [11] and Sohl-Dickstein et al. [25]. From a different perspective, Song et al. [27] showed that we can generate images by estimating the score, i.e. the gradient of the data log-likelihood. These two perspectives were later reconciled by Song et al. [28].
They showed that the forward diffusion and reverse generative models are both stochastic processes in continuous time guided by stochastic differential equations. This work unifies the diffusion perspective with the score based perspective. The score based generative model utilizes the denoising and implicit score matching ideas [15, 29] to develop a computationally cheap way to estimate the score function at different time instants. Score based generative model research has seen several new directions. Some works have tried to decrease the generation time with fast differential equation solvers [18], while others have tried to analytically estimate the reverse time variance to improve image quality [2]. Others have tried to improve the log likelihood of score based generative models [17]. Some theoretical works have derived the loss function from a likelihood optimization perspective [5, 13, 26]. Other theoretical works have solved the stochastic differential equation by solving the Schrödinger bridge problem [7]. There are several applications of score based models, like text based image generation [23, 24], image editing [20] and adversarial purification [21, 33]. ## 3 Background ### Denoising Diffusion Probabilistic Model Sohl-Dickstein et al. [25] and Ho et al. [11] proposed to design a generative model, called the denoising diffusion probabilistic model (DDPM), from a Bayesian perspective. Imagine we sample an image from the data distribution, \(x_{0}\sim p_{0}\). Consider the data corruption sequence where we incrementally add Gaussian noise to the image until it turns into complete noise. This forms a Markov chain, and the joint distribution of the forward series is given by \(p_{0}(x_{0})\prod_{t=1}^{T}p_{t-1,t}(x_{t}|x_{t-1})\). Then, a reverse Markov chain is considered, where \(p_{\theta}(x_{t-1}|x_{t})\) is conditionally Gaussian, such that the reverse process joint distribution is given by \(p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})\). The DDPM algorithm optimizes the evidence lower bound of the data likelihood such that, when optimization is complete, the reverse joint distribution coincides with the forward joint distribution. Since the reverse conditional distributions are parameterized by the error function \(\epsilon_{\theta}\), the algorithm essentially boils down to optimizing \(\epsilon_{\theta}\). One key trick proposed in DDPM is that the complex loss obtained from the ELBO can be neatly expressed as an extremely simple loss function as follows: \[\mathcal{L}_{\theta}^{DDPM}=\mathbb{E}_{t,x_{0},\epsilon}\big{\{}\lambda_{t}||\epsilon_{\theta}(x_{t},t)-\epsilon||^{2}\big{\}} \tag{2}\] where \(x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\epsilon\) is a sample from the distribution \(p_{t}(x|x_{0})=\mathcal{N}(x;\sqrt{\alpha_{t}}x_{0},1-\alpha_{t})\), the marginal distribution at time \(t\), and \(\epsilon\) is a random vector from an isotropic Gaussian distribution. Note that as we incrementally add noise to \(x_{0}\), the marginal distribution \(p_{t}\) at time \(t\) can be expressed as a Gaussian distribution conditioned on \(x_{0}\). The mean at time \(t\) has been diluted by a factor of \(\sqrt{\alpha_{t}}\) and the variance is \(1-\alpha_{t}\). Also, the \(\lambda_{t}\) in eq.(2) is a function of the \(\alpha_{t}\)'s and the noise added at each time instant.
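To make eq.(2) concrete, here is a minimal sketch of one training step with the common simplification \(\lambda_{t}=1\); the network `eps_model` and the tensor of cumulative \(\alpha_{t}\) values are placeholders, not the exact setup used in this paper.

```python
import torch

def ddpm_loss(eps_model, x0, alphas_bar):
    """One stochastic estimate of eq. (2) with lambda_t = 1.

    eps_model  -- network predicting the added noise, called as eps_model(x_t, t)
    x0         -- batch of clean images, shape (B, C, H, W)
    alphas_bar -- 1-D tensor with the cumulative alpha_t for t = 0, ..., T-1
    """
    B, T = x0.shape[0], alphas_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)   # a random timestep per sample
    a = alphas_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)                        # the noise to be predicted
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps      # sample from p_t(x | x0)
    return ((eps_model(x_t, t) - eps) ** 2).mean()    # ||eps_theta(x_t, t) - eps||^2
```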
Once we have the optimized network, \(\epsilon_{\theta}\), the image generation process is nothing but following the reverse conditional distribution \(p_{\theta}(x_{t-1}|x_{t})\) starting from the isotropic Gaussian noise \(p_{\theta}(x_{T})\). ### Score Based Generative Model Song et al. [27] proposed a different generative model by estimating the gradient of the data distribution, called the score function, whose sampling looked similar to that of DDPM. Even though the score based generative model seemed similar to the diffusion model, DDPM, the connection was unclear until another paper by Song et al. [28], which showed that there is a deeper connection between the two networks: the error network \(\epsilon_{\theta}\) in DDPM is the same as the score function \(s_{\theta}\) in the score based generative model. They establish that DDPM essentially performs score matching [15, 29]. They further develop this line of argument, showing that the continuous version of the score based generative model can be obtained by reversing a stochastic differential equation (SDE). Specifically, they generalize the forward process of adding noise to a continuous setting where the forward process takes the form of a stochastic differential equation (SDE). Similarly, the reverse process of starting from the isotropic Gaussian and incrementally removing noise can also be shown to be a stochastic differential equation continuous in time: \[\text{FOR}:\;dx=f(x,t)dt+g_{t}dw\] (3) \[\text{REV}:\;dx=[f(x,t)-g_{t}^{2}\nabla_{x}\log p_{t}(x)]dt+g_{t}d\bar{w}\] (4) Luckily, the reverse stochastic differential equation only needs the score function at each timestamp \(t\), in addition to the other functions \(f,g\) from the forward equation. Using the same score matching idea, the score function can first be trained with the following loss function \[\mathcal{L}_{\theta}=\mathbb{E}_{t}\big{\{}\lambda_{t}\mathbb{E}_{x_{0}}\mathbb{E}_{x_{t}|x_{0}}\big{[}||s_{\theta}(x_{t},t)-\nabla_{x_{t}}\log p_{0,t}(x_{t}|x_{0})||^{2}\big{]}\big{\}} \tag{5}\] Note that this loss is the same as eq.(2) once we use the fact that \(p_{t}\) is conditionally Gaussian. Once the score function is learnt, the generative model is given by replacing \(s=\nabla_{x}\log p\) in the reverse time stochastic differential equation: \[dx=[f(x,t)-g_{t}^{2}s_{\theta}(x,t)]dt+g_{t}d\bar{w} \tag{6}\] In practice, we need to discretize eq. (6) to obtain a generative model. We achieve this through Euler-Maruyama discretization as suggested in [28]: \[x(t-\delta t)=x(t)+[f(x,t)-g_{t}^{2}s_{\theta}(x,t)](-\delta t)+g_{t}\sqrt{\delta t}\,z \tag{7}\] where \(z\sim\mathcal{N}(0,I)\) is a random sample from the standard Gaussian distribution. ### Conditional Score based Models Following the unconditional generative models, a few conditional generative models have been developed. Most of these models, however, require some form of supervision regarding the group on which to condition the generation. For example, Dhariwal et al. [8] developed conditional generative models based on class labels by modifying score functions with an extra term representing the gradient of the log classifier likelihood. These methods are known as classifier guided conditional generative models. Later, classifier-free models [12] were also proposed, which eschewed the idea of training a classifier altogether. Nevertheless, they also need class labels to train. Towards the direction of unsupervised learning, the diffusion autoencoder (DiffAE) [22] learns the conditional distribution based on a latent vector obtained from the encoder, which is in line with our work.
Our work differs from theirs in the fundamental notion of what represents the component. In our model, we encode different score components from an image, where each component sort of captures one concept (energy function). Later we generate images by recombining these components. ## 4 Method Existing score based generative models train a common unconditional score function \(s_{\theta}(x)\) such that the reverse SDE yields samples from the whole distribution \(p_{0}(x)\). We are interested in learning different energy components in each image. Before we can decompose different energy components in each image, we need to be able to design a conditional score function which can reverse the forward diffusion process and converge to a single image rather than the whole distribution. More precisely, we want to modulate the score function such that the reverse SDE yields the Dirac delta distribution concentrated on a single image, say \(x_{\zeta}\). It is unclear how to do that from the existing works. Note that the loss function in eq.(5) is an expectation across all images \(x_{0}\) sampled from the data distribution \(p_{0}\), and thus \(s_{\theta}\) is common to all data. To design a data-specific score function such that the reverse SDE converges to a Dirac delta distribution around an image, we adopt the log likelihood formulation of score based models [5, 13, 26]. ### Likelihood Based Formulation We can derive a likelihood formulation to train the score based generative model based on the Feynman-Kac theorem [16], as shown in [5, 13, 26]. We start from the likelihood formulation of training the score based generative model. \[\log p(x_{0})\geq\mathcal{L}_{VLB}(x_{0},\theta)=E_{p_{T}}[\log p_{\theta}(x_{T})]-\int_{0}^{T}\mathbb{E}_{p_{t}}\bigg{[}\frac{1}{2}g_{t}^{2}||s_{\theta}(x_{t},t)||^{2}+\nabla_{x}(g_{t}^{2}s_{\theta}(x_{t},t)-f)\bigg{]}dt \tag{8}\] Taking the expectation, we arrive at \[\mathcal{L}_{EVLB}(\theta)=\mathbb{E}_{x_{0}\sim p_{0}}[\mathcal{L}_{VLB}(x_{0},\theta)] \tag{9}\] From eq.(9), we can derive the same loss function as in eq.(5) by using a rough approximation of the integral with a discrete summation and the equivalence between different score matching objectives [15, 29] (as shown in [13]). Therefore, unconditional score estimation can be thought of as a crude approximation of the expected likelihood maximization. Nevertheless, expressing the likelihood as the integral in eq.(8) is much more illuminating and powerful. Observe that eq.(9) is obtained by taking the expectation of the likelihood of individual data points in eq.(8). It is this expectation which leads to learning a common unconditional score function. We can forego the expectation to train score functions for each data point. For any datum \(x_{\zeta}\), we can directly optimize its likelihood by training a score function \(s_{\theta,\zeta}\) to optimize the lower bound on the right hand side of eq.(8). Specifically, we train an encoder Enc and a score function \(s\) as follows: \[\zeta=\texttt{Enc}_{\theta}(x_{\zeta}) \tag{10}\] \[\log p(x_{\zeta})\geq E_{p_{T}}[\log p_{\theta}(x_{T})|x_{0}=x_{\zeta}]-\int_{0}^{T}\mathbb{E}_{p_{t}}\bigg{[}\frac{1}{2}g_{t}^{2}||s_{\theta,\zeta}||^{2}+\nabla_{x}(g_{t}^{2}s_{\theta,\zeta}-f)\,\Big{|}\,x_{0}=x_{\zeta}\bigg{]}dt \tag{11}\] Eq.(10) and eq.(11) complete our autoencoder model, which maximizes the lower bound on the log likelihood of an individual image \(x_{\zeta}\).
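In practice this per-image objective is optimized through denoising score matching, as in eq.(13) below. As an illustration, one training step can be sketched as follows; the encoder, the score network, and the variance-exploding style corruption with scale \(\sigma(t)\) are placeholders, not our exact implementation choices.

```python
import torch

def autoencoder_step(enc, score_net, x, sigma):
    """One denoising score matching step, conditioned on the image itself.

    enc       -- encoder producing the latent code zeta = Enc(x), cf. eq. (10)
    score_net -- conditional score network, called as score_net(x_t, t, zeta)
    sigma     -- function t -> noise scale of the forward process at time t
    """
    zeta = enc(x)                                 # latent code for this image
    t = torch.rand(x.shape[0], device=x.device)   # t ~ U(0, 1)
    s = sigma(t).view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = x + s * eps                             # VE-style corruption (an assumption)
    target = -eps / s                             # = grad_{x_t} log p_{0,t}(x_t | x)
    pred = score_net(x_t, t, zeta)
    return ((s * (pred - target)) ** 2).mean()    # lambda_t = sigma_t^2 weighting
```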
### Score Decomposition As motivated in the introduction (eq.(1)), we want to decompose each score function into multiple components, which intuitively represent different energy components (negative gradients of energies, to be precise). We decompose each score function into \(K\) components such that \(s_{\theta,\zeta}=(s_{\theta,\zeta}^{(1)}+s_{\theta,\zeta}^{(2)}+...+s_{\theta,\zeta}^{(K)})/K\). This decomposition requires \(K\) different score functions. A computationally efficient way would be to share the weights of the score functions but use different latent vectors for different components, which also gives structure to the latent space. Therefore, we decompose the score function as: \[s_{\theta,\zeta}=(s_{\theta,\zeta^{(1)}}+s_{\theta,\zeta^{(2)}}+...+s_{\theta,\zeta^{(K)}})/K \tag{12}\] Note that now the burden of learning different energy components has been shifted to the latent vectors, together with a shared conditional score function \(s_{\theta,\zeta^{(k)}}\). ### Model and Training The encoder encodes each image \(x_{\zeta}\) into \(K\) latent vectors \(\zeta^{(1)},\ldots,\zeta^{(K)}\). The summed score function given by eq.(12) is used to maximize the lower bound on the log likelihood of each \(x_{\zeta}\). We also approximate this integral with a discrete summation and invoke the equivalence of implicit score matching [15] and denoising score matching [28, 29] to obtain an approximation [13] as the following loss function: \[\mathcal{L}_{\theta,\zeta}=\mathbb{E}_{t}\big{\{}\lambda_{t}\mathbb{E}_{x_{t}|x_{\zeta}}[||s_{\theta,\zeta}(x_{t},t)-\nabla_{x_{t}}\log p_{0,t}(x_{t}|x_{\zeta})||^{2}]\big{\}} \tag{13}\] We jointly train the \(K\) encoders and the score function with the loss in eq.(13), where the score is given by eq.(12). To design a score function as a function of the latent vector \(\zeta\), we use the adaptive Group Norm (AdaGN) strategy as described in [8]. Once the model is trained, we generate images by first sampling noise \(z\) from the standard Gaussian distribution and iteratively applying the Euler-Maruyama discretization of the SDE as described by eq.(7). We also experiment with generating samples by combining score components with an unconditional score function, similar to [12]. For this, we can separately train conditional and unconditional score functions. However, to make it computationally cheap, we pass a vector of ones **1** to realize the unconditional score function \(s_{u}=s_{\theta,\textbf{1}}\), similar to the trick used in [12]. We combine conditional and unconditional score functions with a linear combination of coefficients, as described in the experiments.
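A minimal sketch of the composed score of eq.(12) and of the linear mixing with the unconditional score used in the experiments (eqs.(15)-(16) below); function and variable names here are illustrative, not the ones from our code release.

```python
def composed_score(score_net, x_t, t, zetas):
    """Average of the per-component conditional scores, eq. (12)."""
    return sum(score_net(x_t, t, z) for z in zetas) / len(zetas)

def interpolated_score(score_net, x_t, t, zetas, ones, alpha, idx=0):
    """Scale component `idx` by alpha and add the missing (1 - alpha) fraction
    of the unconditional score s_u = s(x_t, t, 1), in the spirit of eq. (15)."""
    K = len(zetas)
    parts = [score_net(x_t, t, z) for z in zetas]
    parts[idx] = alpha * parts[idx]
    s_u = score_net(x_t, t, ones)    # unconditional score via the all-ones latent
    return sum(parts) / K + (1.0 - alpha) * s_u / K
```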
## 5 Results and Discussion ### Experimental Details The score network is a modified version of the UNet architecture as described in Dhariwal et al. [8]. To condition on the latent vector obtained from the encoder, we use AdaGN as described in [8], which is inspired by adaptive instance normalization (AdaIN) [14], but uses group normalization [30] instead of instance norm. Our architecture is similar to DiffAE's adaptation of AdaGN, _i.e._ \[\texttt{AdaGN}(h,t,\zeta)=\zeta_{s}(t_{s}\cdot\texttt{GroupNorm}(h)+t_{b}) \tag{14}\] where \(h\) is the normalized feature map at different layers, \(t_{s},t_{b}\) are obtained from the time embedding by applying an MLP, and \(\zeta_{s}\) is an affine transformation of the latent vector \(\zeta\). We borrow the encoder from DiffAE [22], i.e. the first part of the UNet architecture used in the score function. We experiment with two values of K: \(K=\{3,5\}\). The encoder encodes \(K\) vectors of length \(128\), which condition the score functions. Due to computing constraints, we train for 150K iterations using the Adam optimizer and a batch size of 32. We experiment on four datasets in computer vision: 1) Celeb-A, 2) LSUN-outdoor_church, 3) Cifar-10 and 4) SVHN. We use an image size of \(32\times 32\) for Cifar-10 and SVHN and \(48\times 48\) for LSUN and Celeb-A, again due to resource constraints. ### Natural variation in reconstruction In Fig.(1), we show autoencoding of samples from four datasets. We input a batch of 16 images, all of which are the same. The figure shows that the reconstruction is very close to the ground truth, but, at the same time, there are natural variations.

Figure 1: Using all score components results in faithful reconstruction, but with interesting variations

### Visualizing different components through generation Visualization of different score components can be insightful about what each component is capturing from the image. It is unclear what the best way is to visualize score components, which are essentially represented as matrices. We demonstrate that generating image samples using each component can be the best visualization strategy for each component. To be precise, if \(s_{\theta,\zeta}=(s_{\theta,\zeta^{(1)}}+s_{\theta,\zeta^{(2)}}+s_{\theta,\zeta^{(3)}})/3\), then we take each component, say \(s_{\theta,\zeta^{(k)}}\), and generate images using the Euler-Maruyama discretization in eq. (7). The samples generated using three components are plotted in Fig.2, where each row is a component. The input images for these components are the same images as in Fig.1. Note that even though the input image and the latent vectors are the same, there are natural variations in samples from each component. Several observations are in order. First, this method can result in human interpretable components in some cases, but not always. For example, the third component in the svhn image is clearly capturing digits; however, the first and second components capture some abstract texture or lighting related information. Similarly, the second component of the bird is clearly focusing on the color, but the first one captures some abstract shape information. This score decomposition forces us to rethink the definition of components of an image. In classical works, disentanglement has been seen as some kind of statistical property of the distribution, and methods were proposed to enforce such statistical patterns, for example independence. Here, however, we see that components can capture interesting patterns that may or may not be independent or interpretable. We can certainly say that the three components are different, but at the same time capture something about the input image. Therefore, they are different disentangled components. Yet, they may or may not possess the statistical independence property or complete human interpretability. ### Manipulation of components We manipulate images using these learned components, with the help of an unconditional score function. In Fig.3, we keep two components untouched while we linearly interpolate one component with the unconditional score function to generate images. That is, if \(s^{(1)},s^{(2)},s^{(3)}\) are three components, we generate images with the following score function: \[s^{interp}_{\alpha}=(\alpha*s^{(1)}+s^{(2)}+s^{(3)})/3+(1-\alpha)s_{u}/3 \tag{15}\] In Fig.3, we observe that the first component is associated with the background information and less with the church building.
When this component is interpolated, we see that information like the yellow color (related to the second component) and the building architecture (related to the third component) is preserved. As we go from \(\alpha=1\) to \(\alpha=0.1\), we see that the image changes while preserving the yellow color of the church and with different building architectures, but the tree and the background change, making the image more diverse. At \(\alpha=0.1\), we see that a lot of the tree has been removed and replaced with something else to create a diverse background setting. In Fig.4, we observe a similar effect. The first component is changed in the second row while retaining the second and third components. From the first row, it is clear that the first component is associated with the smile. Hence, as we go from \(\alpha=1\) to \(\alpha=0.1\), the sample images are more diverse in terms of smile. At \(\alpha=0.1\), we see a few instances where the mouth is even closed. Similarly, in Fig.5, we change the second and third components while retaining the first one. Here also, the first component is associated with the smile, while the second and third are associated with shape features and hair. We fix the first component and interpolate the second and third components as follows: \[s_{\alpha}^{interp}=\frac{s^{(1)}+\alpha(s^{(2)}+s^{(3)})}{3}+\frac{2(1-\alpha)s_{u}}{3} \tag{16}\] In doing so, when we go from \(\alpha=1\) to \(\alpha=0.1\), the smile is preserved while other features change. Hence, we see images with a lot of diversity in terms of shape, color and hair, while preserving the smile, as we go towards \(\alpha=0.1\). ### Tuning the weight of components In Fig.6, we tune the weights of two score components \(s^{(1)}\) and \(s^{(2)}\). When \(s^{(2)}\) is weighted heavily, the greenish hue and the corresponding shape of the church are dominating. As we weighted more heavily towards \(s^{(1)}\), we started seeing more of a mix between \(s^{(1)}\) and \(s^{(3)}\). ### Varying the number of components Fig.7 compares five components against three components learned on the Cifar-10 dataset. Components 4, 5 and 1 of the five-component model are similar to the three in the first row. However, component 3 in the second row is different and seems to capture a colored pattern with a shape in the middle. ## 6 Conclusion Based on the insights of energy based models, we proposed to learn multiple score components while training a score based generative model. These components are interpretable and provide us with more control to manipulate and edit images. In experiments, we discuss the interpretability and demonstrate our ability to edit and generate images through component manipulation. These score components also provide new interpretability tools for score based generative models. There is some room for improvement. Our ability to manipulate images is limited by the number of score components. Scaling this method to a higher number of components without computational overhead could be an important future direction. Guiding components towards interpretations more amenable to humans could be another promising direction of future research.

Figure 2: Visualizing different score components through generation as a way to interpret components. Images are generated by solving the reverse SDE using a single score component. Each row is a component and each column is a different dataset.
2304.00933
Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting
Continual learning research has shown that neural networks suffer from catastrophic forgetting "at the output level", but it is debated whether this is also the case at the level of learned representations. Multiple recent studies ascribe representations a certain level of innate robustness against forgetting -- that they only forget minimally in comparison with forgetting at the output level. We revisit and expand upon the experiments that revealed this difference in forgetting and illustrate the coexistence of two phenomena that affect the quality of continually learned representations: knowledge accumulation and feature forgetting. Taking both aspects into account, we show that, even though forgetting in the representation (i.e. feature forgetting) can be small in absolute terms, when measuring relative to how much was learned during a task, forgetting in the representation tends to be just as catastrophic as forgetting at the output level. Next we show that this feature forgetting is problematic as it substantially slows down the incremental learning of good general representations (i.e. knowledge accumulation). Finally, we study how feature forgetting and knowledge accumulation are affected by different types of continual learning methods.
Timm Hess, Eli Verwimp, Gido M. van de Ven, Tinne Tuytelaars
2023-04-03T12:45:52Z
http://arxiv.org/abs/2304.00933v4
# Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting ###### Abstract By default, neural networks learn on all training data at once. When such a model is trained on sequential chunks of new data, it tends to catastrophically forget how to handle old data. In this work we investigate how continual learners learn and forget representations. We observe two phenomena: _knowledge accumulation_, i.e. the improvement of a representation over time, and _feature forgetting_, i.e. the loss of task-specific representations. To better understand both phenomena, we introduce a new analysis technique called _task exclusion comparison_. If a model has seen a task and it has not forgotten all the task-specific features, then its representation for that task should be better than that of a model that was trained on similar tasks, but not that exact one. Our image classification experiments show that most task-specific features are quickly forgotten, in contrast to what has been suggested in the past. Further, we demonstrate how some continual learning methods, like replay, and ideas from representation learning affect a continually learned representation. We conclude by observing that representation quality is tightly correlated with continual learning performance. ## 1 Introduction Machine learning models typically learn from static datasets, and once they are trained and deployed, they are usually not updated anymore. Sometimes models make mistakes. Sometimes they do not work in a domain they were not trained on. Sometimes they do not recognize certain classes or corner cases. Whatever the cause, it is often necessary to retrain a model from the beginning with new data to overcome such malfunctions. But retraining a full model is costly and time-consuming, especially in deep learning. The goal of continual learning is to enable models to train continually, to learn from new data when it becomes available. This has proven to be a hard challenge [14, 50], as deep learning models that are continually trained exhibit catastrophic forgetting [35]. Without precautionary measures, new data are learned at the expense of forgetting earlier acquired knowledge. The data to train machine learning models rarely come in a format that is adapted to the problem we intend to solve. Taking the example of visual data, it is near impossible to infer higher-level properties directly from images' raw pixel values. Hence, the first step is usually to transform them into a representation that makes solving the problem at hand an easier job. Deep neural networks are known to excel at this [2]. They learn semantically meaningful representations indirectly while optimizing their parameters to learn an input-output mapping. Sometimes the representation itself is the goal, yet often it is a final layer, or head, that uses the learned representation to assign an output (_e.g._ a class label) to an input. Even though they are commonly trained in unison, it can be useful to think of the representation and the head as two separate entities, working together. With this perspective, it has been noticed that forgetting at the output and forgetting in the representation of a neural network are different [13, 62, 8]. When the output of a continually trained network for an old class gets disrupted by learning continually on new data, retraining only the head on a small subset of all past data can recover much of the initial performance [55, 13], but not all.
The forgetting that persists is caused by features becoming altered, or overwritten, in the representation itself [62]. Most continual learning methods do not distinguish between the representation and the head, and aim to prevent forgetting in the neural network as a single entity. However, there are works focused explicitly on the representation [45; 32; 41], or that demonstrate how it is easier to continually train a head than to train the representation continually [55; 19]. In this work, we show that both forget catastrophically, while highlighting some important differences between them. In this work we also emphasize the difference between preventing forgetting _after_ a task is trained, as continual learning is commonly approached, and _before_ it is trained. It is indeed compelling to assume that when performance decreases, some task-specific knowledge is forgotten and continual learning should prevent this. Complementarily, forgetting can be prevented _before_ a new task is trained by starting from a stronger, more general representation [43; 5]. Models with better representations need to change less to adapt to new data, reducing the risk of forgetting old data. Representations optimized for the data of one task are partially useful for other tasks, but not in their entirety. In Section 5.1, we use image classification tasks to show that models learn specific features to solve the current task, but those are quickly forgotten. Our results indicate that remembering these more specific features is a challenge, even with contemporary continual learning methods. The part of the representation that is useful for other tasks we refer to as general, or transferable, features. Those are not as much at risk of being forgotten, because they do not need to be changed. In Section 5.2, our experiments teach us that continually learning a better general representation is possible. In the end, it is the combination of general and specific features that forms the best representations. Together with a well adapted head, see Section 5.4, this leads us to identify three challenges for successful continual learning:

* **C1**: Learn a strong, general representation.
* **C2**: Remember features learned for individual tasks.
* **C3**: Adapt the head to changes in the representation.

See Figure 1 for a visualization of these challenges.

Figure 1: Illustration of the central ideas and contributions of this paper. (1) We study how continual learning (CL) models accumulate knowledge in their representation. (2) We test whether representations forget task-specific features by comparing two continually trained models, one that is trained on a task and one that is not. (3) To investigate representations we use linear probes, which unveils insights not obvious when only using continually trained heads.

In this work we focus on challenges one and two, with some attention to the third. We show that, in contrast to what has been suggested before [13; 62], specific features learned for individual tasks are catastrophically forgotten. We highlight the benefits of representation learning techniques and how they help prevent catastrophic forgetting before a new task is trained. Finally, we interpret a few popular continual learning methods in light of these challenges. We start with an overview of current research in light of these challenges in Section 2, followed by the problem formulation in Section 3, our experimental setup in Section 4, and our results in Section 5.
We conclude with practical guidelines for future continual learning research. ## 2 Continual representation learning **Representation learning** Data rarely come in a format that is adapted to the task we want to perform [2]. Except for very simple problems, it is near impossible to directly classify images in their raw pixel representation. For example, many changes in the pixels (_e.g._ translation, rotation, illumination) do not alter the semantics of the image, so they should not change the representation. For a long time, researchers have been searching for a representation of images that makes it convenient to solve semantic tasks. Handcrafting features was the standard, _e.g._[12], but this requires engineering and may not result in optimal features. Since the rise of deep learning, features are more commonly learned by neural networks, directly from the raw data. Both Bengio _et al_. [2] and Goodfellow _et al_. [18] define _good_ representations as ones that make it easier to solve tasks of interest, a definition we adopt. They see deep neural networks as inevitable representation learners, even when this is not explicitly the goal. Neural networks trained to predict image-label pairs indirectly learn a representation where semantically different images are linearly separable in the output of the penultimate layer. Yet representations can also be learned directly, which can improve robustness, boost generalization, or reduce the need for labeled data [23]. **Generality vs. specificity** A remarkable success of deep learning is the generalization capability of neural networks [28]. Instead of merely remembering specific samples, they readily generalize to unseen examples from the same distribution. More so, training on one distribution can improve results on another, or reduce the amount of data required to learn a new task [54]. Using models pre-trained on ImageNet [15] has been a common practice in computer vision for a while now [46, 34, 25]. This indicates that neural networks tend to learn a fairly general representation of visual data, that is useful beyond the training domain. However, to reach the best possible results, generality is not sufficient. Neural networks are typically finetuned on target domain data, adapting the network to the specifics of that target domain. Results in transfer learning for natural images have shown that early layers (close to the input) learn general, Gabor filter-like, features [27] and are easily transferable, while the deeper layers usually require finetuning to adapt to a new task [57]. **Head vs. representation** The head of a neural network is typically the part that is most specific. Its parameters are usually divided into subsets, belonging exclusively to one output, and they define which region of the representation space represents which output. This specificity implies that they can quickly become disconnected from the representation when the representation changes but the head is static [4]. Luckily, their independence makes them easier to recover [55]. The paper proposing iCaRL [45] is famously one of the first works to explicitly disentangle the representation and head. Heads can be relatively well learned with small subsets of data, as a linear layer or with non-parametric approaches like k-nearest neighbors [52, 49]. Continual methods have shown successes by updating the last layer only on small memories with balanced data, overcoming much of the observed forgetting [8].
**Learning representations continually** While traditionally most continual learning methods have focused on preventing forgetting, recently some methods explicitly try to improve the representation a model learns. Taking inspiration from advances in representation learning [23], recent methods tried to apply contrastive losses [7, 32] and self supervised learning [33, 21, 16, 44] to improve continual learning performance. Other works took ideas from meta-learning [22, 6] to learn representations that can easily adapt to new tasks. Lastly, DualNet takes inspiration from neuroscience and combines fast and slow learners in one system [41]. Learning better representations is one part, correctly evaluating them is another. Zhang _et al_. show, using linear probes on a downstream task, that continually finetuned representations can improve over time, albeit slower than a model trained jointly on all tasks [62]. They further report that neither replay nor EWC seem to improve the quality of continually learned representations, something also found in [8]. Using self-supervised learning, this is contradicted by [21], where both MAS [1] and replay improve the downstream task accuracy. **Representation forgetting** Measuring forgetting at the output of a neural network does not tell us everything about the internal state of a network. Intermediate representations can still be useful for past tasks. Retraining the last layer [56] or a set of deeper layers [39] with the earlier layers frozen hints that representations of lower layers are still useful for seemingly forgotten tasks. Rather than these layers remembering something task-specific, other works interpret this as better generalizability of the lower layers [42, 57, 60]. Early layers may not forget as much, because their representations are so general that they are almost fully reusable for future tasks, while the specificity of deeper layers is overwritten by information of new tasks [42]. Davari _et al_. [13] and Cha _et al_. [8] use linear probes to measure forgetting of the representation in the penultimate layer; both conclude that it is less catastrophic. The former uses task-incremental learning, and shows a performance decrease that is much more moderate than typically observed on the output heads. They report even less forgetting when using replay or SupCon [24] losses, but not with LwF [30]. The latter works in a class-incremental setting, and uses all seen classes to linearly probe the representations. This increases the difficulty of the probed task, which makes the comparison between results ambiguous. As [13] do, during a single run we will always use the same set of classes to evaluate the representation. ## 3 Problem statement and evaluation We follow the common definition of a continual learning setting by assuming a stream \(\mathcal{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{T}\}\) of \(T\) disjoint tasks \(\mathcal{T}_{i}\). Each task consists of training data \(X_{i}\) and targets \(Y_{i}\), as well as respective test data \(\hat{X}_{i}\), \(\hat{Y}_{i}\). During training on each task the model has free access to the training data of that task, but not to the data of other tasks. The only exceptions are replay memories, which can store small subsets of data from past tasks. On this stream of tasks we continually train a model \(f_{\theta}\), with the goal to learn a model that is good for all tasks.
We split the model into a shared backbone with parameters \(\theta_{B}\), and task-specific heads with parameters \(\theta_{H}=\{\theta_{h_{1}},...,\theta_{h_{T}}\}\). For the task-specific heads we assume availability of task identity at all times, formally rendering this a task-incremental scenario [50], but the shared backbone does not use this task information. As our focus is on classification tasks, to measure continual learning performance in the standard way, we define \(\text{ACC}_{t,i}\) as the test accuracy (the percentage of correctly classified test samples) on task \(\mathcal{T}_{t}\) obtained by the model after training on task \(\mathcal{T}_{i}\). Additionally, and central to this work, we explicitly evaluate the quality of the continually learned representations. Inspired by the representation learning literature [2; 9; 60], we define the metric _linear probe accuracy_, denoted \(\text{LP}_{t,i}\). After finishing training on task \(\mathcal{T}_{i}\), a new set of parameters \(\theta_{h_{t}}\) for the head of task \(\mathcal{T}_{t}\) is first trained with all training data in \(\mathcal{T}_{t}\) while the backbone parameters \(\theta_{B}\) are frozen, and \(\text{LP}_{t,i}\) is then the test accuracy of the resulting model on task \(\mathcal{T}_{t}\). The metric \(\text{LP}_{t,i}\) thus measures the suitability of the model's representation for task \(\mathcal{T}_{t}\) after training up to task \(\mathcal{T}_{i}\). We further define the metric _task-exclusion difference_, denoted \(\text{EXC}_{t,i}\), as the difference between \(\text{LP}_{t,i}\) of model \(f\) and model \(f^{-t}\), _i.e_. \(\text{LP}-\text{LP}^{-t}\) (omitting subscripts \(i\) and \(t\)). Importantly, model \(f^{-t}\) is trained on stream \(\mathcal{T}\setminus\{\mathcal{T}_{t}\}\); hence it has never seen data from task \(\mathcal{T}_{t}\) and there is no task-specific information in its backbone parameters \(\theta_{B}\). This metric is used to measure how much 'task-specific' knowledge model \(f\) has acquired, compared to model \(f^{-t}\).
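In code, with the probe accuracies stored as matrices, this metric can be computed as follows (a sketch; the array layout is illustrative):

```python
import numpy as np

def exclusion_difference(lp, lp_excl, t):
    """Task-exclusion difference EXC_{t, i} over all training stages i.

    lp      -- (T, T) array, lp[t, i] = linear probe accuracy LP_{t, i} of the
               full model f on task t after training up to task i
    lp_excl -- the same matrix for the exclusion model f^{-t}, which never
               saw task t during training
    t       -- index of the excluded task
    """
    return np.asarray(lp)[t] - np.asarray(lp_excl)[t]   # one curve over stages i
```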
## 4 Experimentation details Here we list benchmark settings and hyper-parameters to be used throughout the experiments in the next section. **Data** We consider two benchmarks common to continual learning. The first is Split MiniImageNet, which uses the MiniImageNet [51] dataset consisting of \(50,000\) train and \(10,000\) test RGB-images of resolution \(84\times 84\), equally divided over \(100\) classes. We split this dataset into \(20\) disjoint tasks such that each task contains five classes. The second benchmark is Split CIFAR-100, which is based on the CIFAR-100 dataset [26] with the same amount of RGB-images and classes as MiniImageNet, but with a reduced resolution of \(32\times 32\). We split this dataset into ten disjoint tasks with ten classes each. All experiments are run with five different seeds that also shuffle the class splits over the tasks.

Figure 2: Our proposed ‘task exclusion comparison’ analysis technique, illustrated on Split-MiniImageNet. _Left_: Two examples of the two linear probe (LP) accuracy curves used in the task exclusion comparison. While there is a strong peak after training on the task itself (solid lines), in the end a model that is not trained on that task has an equally good representation (dashed lines). _Right_: The task exclusion difference metric: the result of subtracting the two linear probe accuracy curves illustrated on the left, for several tasks and different methods. \(T_{i}\) means the \(i\)-th task is excluded (see row indices). Results are averaged over 5 seeds, standard errors are smaller than the line width. See Supplemental for exact results.

**Architecture and optimization** Throughout this work ResNet-18 [20] is the base architecture for all models. Unless explicitly stated differently, we adopted its default implementation from the torchvision [40] library. When investigating the effects of the width, this is implemented and indicated by \(\times 0.5\) (halving) or \(\times 2\) (doubling) the number of filters per layer. All networks are trained from scratch, and pre-trained networks are considered future work. The optimization schedules are adjusted with respect to the training criterion. For supervised training with the cross-entropy loss we mainly use an AdamW [31] optimizer with a static learning rate of \(0.001\), weight decay \(0.0005\), and beta-values \(0.9\) and \(0.999\). Each task is trained for \(50\) epochs with mini-batches of size \(128\). Due to its popularity in recent works, the Stable SGD [38] optimizer is also evaluated for the supervised setting. See Supplemental for details. For the SupCon [24] and BarlowTwins [59] optimization criteria, we stuck to optimization schedules proposed in the literature for their application to continual learning. In line with observations in Co2L [7], its training regime is based on an SGD optimizer with momentum \(0.9\). The learning rate is scheduled in the same way for every task, warming up from \(0.0005\) to \(0.1\) in the first ten epochs, then annealing by a cosine schedule back to its starting value. The first task is trained for \(500\) epochs, all subsequent tasks for \(100\) epochs, with a batch size of \(256\). The projection network necessary for this objective consists of an MLP with a hidden dimension of \(512\), projecting to a \(128\)-dimensional space. BarlowTwins optimization is aligned with [33, 16]. We use an Adam optimizer with learning rate \(0.0001\) and weight decay \(0.0005\). We train \(500\) epochs for each task with a batch size of \(256\). Again, the projection head is an MLP, but with increased hidden and final projection dimensions of \(1024\). Finally, SupCon and BarlowTwins rely on (heavy) data augmentations, but we also found them beneficial when using cross-entropy losses; see Supplemental for details. **Probe optimization** To quantify the quality of the representation we apply probes based on linear and k-nearest neighbor (\(k\)NN) classifiers. Linear classifiers consist of a single linear layer. In the optimal probing case, reported mostly throughout the work, the probe is optimized with access to all training data. Linear probes are optimized for ten epochs using AdamW with hyper-parameters analogous to the supervised training detailed above. We train five of these probes in parallel and report the mean to account for statistical deviation. To test continual learning performance, we optimize them with only \(20\) samples per class in Section 5.4. For results with \(k\)NN classifiers, see Supplemental. **Continual learning mechanisms** Next to naive finetuning, we consider two widely applied continual learning mechanisms, MAS [1] and replay, to evaluate their benefit for continually learned representations. MAS uses a value of \(\lambda=1.0\), as advocated by its original authors. Replay uses a random selection of \(20\) exemplars per class. The weight of the loss on replayed samples is increased proportionally to the number of previously observed tasks, to prevent favoring the current task in the optimization.
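The linear probe described above can be sketched as follows: features from the frozen backbone, one linear layer, ten epochs of AdamW with the stated hyper-parameters. The data loaders and the feature-extracting backbone are placeholders; as described above, five such probes are trained and their mean is reported.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def extract_features(backbone, loader, device):
    # backbone is assumed to return penultimate-layer features
    backbone.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(backbone(x.to(device)).flatten(1).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

def linear_probe_accuracy(backbone, train_loader, test_loader,
                          num_classes, device="cuda", epochs=10):
    f_tr, y_tr = extract_features(backbone, train_loader, device)
    f_te, y_te = extract_features(backbone, test_loader, device)
    probe = nn.Linear(f_tr.shape[1], num_classes).to(device)
    opt = torch.optim.AdamW(probe.parameters(), lr=1e-3, weight_decay=5e-4)
    for _ in range(epochs):
        perm = torch.randperm(len(f_tr))          # reshuffle every epoch
        for i in range(0, len(perm), 128):
            idx = perm[i:i + 128]
            loss = nn.functional.cross_entropy(
                probe(f_tr[idx].to(device)), y_tr[idx].to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    pred = probe(f_te.to(device)).argmax(dim=1).cpu()
    return (pred == y_te).float().mean().item()
```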
## 5 Results In this section we study improving and forgetting representations for continual learning. Using image classification experiments, we show that representations do forget catastrophically and that continually improving representations is harder than it might have appeared to be. Before we continue, we want to make a disclaimer about the optimizers we used. In Table 1, we show the linear probe accuracy of a downstream task for Split CIFAR-100 and Split MiniImageNet, with Stable SGD as defined in [38], Stable SGD without dropout (Stable SGD\({}^{*}\)) and AdamW. While we found that Stable SGD\({}^{*}\)_can_ outperform AdamW in some cases, more often it was the other way around, and Stable SGD was, in fact, unstable in certain settings. By default, we therefore show results with AdamW, unless the findings with Stable SGD are substantially different. In the main text we further only report results on Split MiniImageNet. All results with Stable SGD and/or CIFAR-100 are in the Supplemental. \begin{table} \begin{tabular}{c c c c} \hline \hline & Stable SGD & Stable SGD\({}^{*}\) & AdamW \\ \hline Split CIFAR-100 & \(35.2\pm 6.2\) & \(42.5\pm 6.5\) & \(62.3\pm 6.3\) \\ Split MiniImageNet & \(67.0\pm 2.0\) & \(74.2\pm 2.6\) & \(68.2\pm 2.7\) \\ \hline \hline \end{tabular} \end{table} Table 1: Linear probe accuracy on a held-out task, after finishing training on the last task. Stable SGD works well on Split MiniImageNet, but not on Split CIFAR-100. Generally we found AdamW to be more stable than Stable SGD. ### Representation forgetting
The advantage of actually training the entire model on a task diminishes, especially for replay, as attested by the lower peaks for later tasks. This questions the benefit of continuing to train a model after its representation has largely converged, as the benefit of new task-specific information may not outweigh the forgetting of older tasks. Perhaps closest to our experiments are those from Davari _et al._[13]. They also evaluated the representation of a model after it was trained, and concluded that _"representation forgetting under naive finetuning [...] is not as catastrophic as other metrics suggest"_. We show that this is not a complete explanation. A model that was not trained on a certain task can get almost equally good linear probe accuracy, indicating that most of the task-specific information in the representation _is_ forgotten. In the results of [13], the hidden effect of an improving general representation was not accounted for; once it is, the conclusion is that representations do forget. This highlights the work that can still be done in the second challenge (C2) for continual learning we identified in the introduction: preventing feature forgetting.

Figure 3: Linear probe (LP) accuracies on a downstream task for models continually trained on Split MiniImageNet. The evaluated task is a held-out task of the same dataset, on which the model is never trained. Results are averaged over 5 runs on different task orders. \(T_{0}\) indicates results on randomly initialized networks.

### Improving representations

When a model's representation is already well suited for a new task, it will have to change less to get good results. Less change reduces the probability of forgetting something that was crucial for remembering past tasks. While most continual learning works are concerned with methods to reduce forgetting, some have observed that certain architectures and training regimes can forget less too [38, 37]. In this section we look at both approaches and find that especially the latter can improve the general representation substantially. We evaluate the quality of a continually learned representation by linear probing with a held-out task, and we report its performance after each trained task. For our experiments we chose the held-out task to always be the last task in \(\mathcal{T}\).

**Optimization.** Recently, Mirzadeh _et al_. showed that training regimes are an important aspect of continual learning [38]. They combine training techniques that improve the final output accuracy, without using continual learning mechanisms that explicitly attempt to prevent forgetting. We revisit similar ideas, but from the representation perspective. Our results are in Figure 3(a). First, inspired by the strong results in [3], we test training each task with and without augmentations. They can reduce the number of shortcut solutions in a neural network [47] and improve the representation significantly, which is confirmed by our experiment. We therefore use augmentations by default in all other experiments. With an SGD optimizer, weight decay can harm performance [14, 38], but we find that with AdamW it does not have an effect on the representation, so we do not investigate weight decay further. Interestingly, dropout is found to be useful for continual learning in [38, 14], but in our experiments the continually learned representations generalize considerably better without dropout. Dropout acts as a regularizer and can prevent overfitting [53], resulting in lower test errors.
The earlier works suggesting dropout as a useful practice in continual learning do not consider augmentations during training. Yet when augmentations are combined with dropout, we show that its advantage on the current task largely disappears (see Figure 4(a)). Crucially, we also find that the benefit of dropout is different in nature from that of using augmentations. Dropout improves the current task performance, but not the general representation. This is also found in [25], where ImageNet models trained without dropout transfer better. Considering that dropout also results in faster forgetting (see Figure 4(a)), we conclude that optimizing with augmentations, but without dropout, is the preferable choice.

**Methods.** Continual learning methods can prevent forgetting to some extent, but only after a task is learned. Here, we revisit MAS [1] and replay to test how they influence the general representation, see Figure 3(b). While MAS remembers early task representations (Section 5.1), we find that its final representation quality is of the same level as that of finetuning. This is in line with what is reported in both [13, 62]. In contrast, Hu _et al_. report an advantage for MAS [21]. This is possibly an effect of their trained tasks (ImageNet) overlapping more with their downstream tasks (_e.g_. classifying planes, cars and pets), as we found that MAS can indeed remember some overlapping information. Also shown in this plot are two baselines, single-task and multi-task training, to put these results in perspective. Single task is a model trained on only one task, for as many iterations as the sequential models have (_e.g_. at \(\mathcal{T}_{5}\) it is trained for \(250\) epochs on the fifth task). Its results show that representations of sequential models (_e.g_. finetuning) benefit slightly from having more diverse data, even if those data are not seen together. The multi task baseline is a model that is jointly trained on all data seen up to that point (_e.g_. at \(\mathcal{T}_{5}\), it is trained on tasks one until five together for \(50\) epochs). This provides a realistic upper target for the best representation that can be learned in the current setting.

**Replay and Stable SGD\({}^{*}\).** Replay has been one of the most reliable and effective continual learning strategies, possibly in part because it learns stronger representations. Yet in [62], it is claimed that replay does not offer benefits in continual representation learning. With AdamW, we show that replay does have an advantage (see Figure 3(b)). In Figure 3(c), we combine replay with both Stable SGD\({}^{*}\) and AdamW. When using Stable SGD\({}^{*}\) with replay, the results improve faster than without replay, but by the end of training the representations for downstream tasks are very similar. We note again that we were not able to reproduce this strong result with Stable SGD\({}^{*}\) (or the original version of Stable SGD) on Split CIFAR-100. One of the important aspects of Stable SGD is its learning rate, which decreases after every task. We hypothesize that replay with AdamW might have a similar effect. After each task, the replay loss is weighted more than the loss of the new samples, which might implicitly reduce the learning rate of the new data. Figuring out exactly how replay and training regimes interact is beyond the scope of this work and is left for future work.

**Architectures.** The architecture of a model is an important aspect of deep learning.
Both deepening and widening networks have led to increasingly better models [48, 58]. In continual learning, mostly wider networks have shown benefits and are claimed to forget less [36, 13]. In Figure 3(d), we show that using wider networks can also result in increased knowledge accumulation, which is in line with earlier work stating that wide networks generalize better [29]. Mirzadeh _et al._[36] argue that forgetting less is an important part of the story. We agree, but we believe forgetting and generalization are tightly related. Wide networks start from a better representation before a task is trained, and therefore drop less after most of the task-specific knowledge is forgotten (which they do, see Section 5.1). We illustrate this in Figure 4(b). Again, this highlights the importance of having strong representations before a task starts, as it will reduce the amount of change needed while learning new tasks.

### Training objectives

When good representations are a goal, contrastive and self-supervised methodologies are an evident tool. To test these, we compare cross-entropy with a standard supervised contrastive objective (SupCon) and a self-supervised method (Barlow Twins). Intrigued by recent approaches in continual learning featuring these, our hypothesis is that they form richer representations, which could result in less forgetting and better downstream task performance. The results in Figure 5 show that both Barlow Twins and SupCon perform similarly to cross-entropy finetuning with AdamW. For Barlow Twins this is perhaps already a solid result. It does not use any labels, yet it still learns a representation that is on par with an approach that uses all the labels. Using the newest self-supervised techniques [10, 11] could possibly increase this performance even further. SupCon does use all the labels, yet its representation is not better than that of the baseline. Especially for this objective we found, during preliminary experimentation, that hyper-parameter selection can be delicate and can lead to near-random results. A potential explanation, namely collapse of the representation, is given in [17]. Overall, we want to stay conservative with drawing conclusions. It is possible that supervised cross-entropy training under the same data augmentations yields representations that are equally rich. At the same time, we point out that training regimes for self-supervised and contrastive objectives in continual learning are rarely examined with continual learning in mind, and are still strongly oriented towards joint training setups.

Figure 4: Details of the linear probe (LP) accuracy on \(\mathcal{T}_{10}\). Panel (a): Dropout increases peak performance, but forgets the representation more. Using augmentations gives a stronger initial representation, resulting in lower forgetting after the task. Panel (b): Wider ResNets start from a stronger representation, so when they forget their performance is still higher.

Figure 5: Linear probe (LP) accuracies on a MiniImageNet downstream task for continually trained models using different training objectives: cross entropy (CE), a supervised contrastive loss (SupCon) and self-supervised learning (Barlow Twins). Details are the same as in Figure 3.

### Continual learning performance

As a final experiment, we look at the relation of the representation quality for a downstream task and the continual learning performance on all seen tasks. As a first approach to obtain a measure for continual learning performance, we
re-train the heads of the network with \(20\) samples per class, which is as many as used in our replay implementation. As this requires updating the head, we additionally report _k_-NN results with \(20\) samples per class, which does not require any further training. The representation quality is assessed by linear probing a downstream task, _i.e_. the final value in Figure 3. The results are in Figure 6. Both measures of continual learning performance on past tasks correlate strongly with representation quality, with \(r^{2}\) values of \(0.93\) (linear head) and \(0.98\) (_k_-NN). This again highlights the importance of good representations in continual learning.

## 6 Final remarks

Before we conclude, we want to make some remarks on the results presented in this work. Our experiments are based on a task-incremental setup, but that does not mean they could not be relevant for domain- or class-incremental setups [50]. During training, the task identifier is _only_ used to select the correct head. As the shared backbone does not use task information, it could be argued that the problem of continual representation learning is in fact a form of domain-incremental learning. We also note that the training procedure we use is identical to the often-used 'label trick' in class-incremental learning [61]. Furthermore, when testing the representation, we either use held-out classes or a set of classes that only one of the models was trained on, which implies that our experimental results would be the same in a class-incremental learning scenario. While task-specific representations obviously boost the performance of their task, it is not unthinkable that they could also improve the general representation. The combination of general and many different task-specific features possibly explains the difference in knowledge accumulation between jointly optimized models and continually trained ones. So while it is important to remember specific features to not forget past tasks, remembering them might also help to improve the general representation itself. We shed light on how some continual learning methods and representation learning practices influence the accumulation of knowledge in continually learned representations for natural images, but there remain multiple aspects that require further investigation. First, the interaction of replay with Stable SGD is interesting: why does replay not improve the representation with this optimizer, while it significantly does so with AdamW? Secondly, we were surprised by SupCon not improving upon the finetuned baseline, despite it being specifically designed to learn more diverse representations. How to optimally use SupCon and self-supervised losses in continual learning thus remains an open question.

## 7 Conclusion

In this work we studied how deep neural networks learn and forget representations when continually trained on a sequence of image classification tasks. We showed that even though the representations of these networks continually accumulate knowledge, they also consistently, and often quickly and catastrophically, forget task-specific features. Learning better general representations and preventing feature forgetting offer two ways forward to improve continual learning performance. We found that the use of augmentations, replay and wide networks can consistently increase the quality of continually learned representations, but these tools do not seem to satisfactorily address feature forgetting.
Finally, we hope that with the work we present here, future continual learning solutions will be evaluated not only on their output performance, but also on their representation quality and protection against forgetting specific features.

Figure 6: Relation of the general representation quality, as measured by an optimal linear probe (LP) on a downstream task, versus the average continual learning (CL) performance on all seen tasks, with either 20 available samples per class to probe the task-specific heads, or to use in \(k\)-NN.

## 8 Acknowledgements

This paper is part of a project that has received funding from the European Union under the Horizon 2020 research and innovation program (ERC project KeepOnLearning, grant agreement No. 101021347) and under Horizon Europe (Marie Sklodowska-Curie fellowship, grant agreement No. 101067759).
2306.16928
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world. Many existing methods solve this problem by optimizing a neural radiance field under the guidance of 2D diffusion models but suffer from lengthy optimization time, 3D-inconsistent results, and poor geometry. In this work, we propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass. Given a single image, we first use a view-conditioned 2D diffusion model, Zero123, to generate multi-view images for the input view, and then aim to lift them up to 3D space. Since traditional reconstruction methods struggle with inconsistent multi-view predictions, we build our 3D reconstruction module upon an SDF-based generalizable neural surface reconstruction method and propose several critical training strategies to enable the reconstruction of 360-degree meshes. Without costly optimizations, our method reconstructs 3D shapes in significantly less time than existing methods. Moreover, our method favors better geometry, generates more 3D-consistent results, and adheres more closely to the input image. We evaluate our approach on both synthetic data and in-the-wild images and demonstrate its superiority in terms of both mesh quality and runtime. In addition, our approach can seamlessly support the text-to-3D task by integrating with off-the-shelf text-to-image diffusion models.
Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, Hao Su
2023-06-29T13:28:16Z
http://arxiv.org/abs/2306.16928v1
# One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

###### Abstract

Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world. Many existing methods solve this problem by optimizing a neural radiance field under the guidance of 2D diffusion models but suffer from lengthy optimization time, 3D-inconsistent results, and poor geometry. In this work, we propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass. Given a single image, we first use a view-conditioned 2D diffusion model, Zero123, to generate multi-view images for the input view, and then aim to lift them up to 3D space. Since traditional reconstruction methods struggle with inconsistent multi-view predictions, we build our 3D reconstruction module upon an SDF-based generalizable neural surface reconstruction method and propose several critical training strategies to enable the reconstruction of 360-degree meshes. Without costly optimizations, our method reconstructs 3D shapes in significantly less time than existing methods. Moreover, our method favors better geometry, generates more 3D-consistent results, and adheres more closely to the input image. We evaluate our approach on both synthetic data and in-the-wild images and demonstrate its superiority in terms of both mesh quality and runtime. In addition, our approach can seamlessly support the text-to-3D task by integrating with off-the-shelf text-to-image diffusion models.

## 1 Introduction

Single image 3D reconstruction, the task of reconstructing a 3D model of an object from a single 2D image, is a long-standing problem in the computer vision community and is crucial for a wide range of applications, such as robotic object manipulation and navigation, 3D content creation, as well as AR/VR [42; 9; 86]. The problem is challenging as it requires not only the reconstruction of visible parts but also the hallucination of invisible regions. Consequently, this problem is often ill-posed and corresponds to multiple plausible solutions because of insufficient evidence from a single image. On the other hand, humans can adeptly infer unseen 3D content based on our extensive knowledge of the 3D world. To endow intelligent agents with this ability, many existing methods [30; 18; 24; 10; 81; 85; 15; 77] exploit class-specific priors by training 3D generative networks on 3D shape datasets [4]. However, these methods often fail to generalize to unseen categories, and their reconstruction quality is constrained by the limited size of public 3D datasets. In this work, we pursue a generic solution to turn an image of any object, regardless of its category, into a high-quality 3D textured mesh. To achieve this, we propose a novel approach that can effectively utilize the strong priors learned by 2D diffusion models for 3D reconstruction. Compared to 3D data, 2D images are more readily available and scalable. Recent 2D generative models (_e.g._, DALL-E [59; 58], Imagen [65], and Stable Diffusion [64]) and visual-language models (_e.g._, CLIP [56]) have made significant strides by pre-training on Internet-scale image datasets. Since they learn a wide range of visual concepts and possess strong priors about our 3D world, it is natural to marry 3D tasks with them.
Consequently, an emerging body of research [25, 22, 47, 55, 35], as exemplified by DreamField [25], DreamFusion [55], and Magic3D [35], employs 2D diffusion models or vision-language models to assist 3D generative tasks. Their common paradigm is to perform per-shape optimization with differentiable rendering and the guidance of the CLIP model or 2D diffusion models. While many other 3D representations have been explored, neural fields are the most commonly used representation during optimization.

Although these optimization-based methods have achieved impressive results on both text-to-3D [55, 25, 35] and image-to-3D tasks [43, 68], they face some common dilemmas: (a) **time-consuming**. Per-shape optimization typically involves tens of thousands of iterations of full-image volume rendering and prior model inferences, resulting in typically tens of minutes per shape. (b) **memory intensive**. Since the full image is required for the 2D prior model, the volume rendering can be memory-intensive when the image resolution goes up. (c) **3D inconsistent**. Since the 2D prior model only sees a single view at each iteration and tries to make every view look like the input, these methods often generate 3D-inconsistent shapes (_e.g._, with two faces, or the Janus problem [43, 55]). (d) **poor geometry**. Many methods utilize the density field as the representation in volume rendering. It is common that they produce good RGB renderings, but extracting a high-quality mesh tends to be difficult.

In this paper, instead of following the common optimization-based paradigm, we propose a novel approach to utilize 2D prior models for 3D modeling. At the heart of our approach is the combination of a 2D diffusion model with a cost-volume-based 3D reconstruction technique, enabling the reconstruction of a high-quality 360\({}^{\circ}\) textured mesh from a single image in a feed-forward pass without per-scene optimization. Specifically, we leverage a recent 2D diffusion model, Zero123 [36], which is fine-tuned on Stable Diffusion [64] to predict novel views of the input image given the camera transformation. We utilize it to generate multi-view predictions of the input single image so that we can leverage multi-view 3D reconstruction techniques to obtain a 3D mesh. There are two challenges associated with reconstruction from synthesized multi-view predictions: (a) the inherent lack of perfect consistency within the multi-view predictions, which can lead to severe failures in optimization-based methods such as NeRF methods [48; 5]; and (b) the camera pose of the input image is required but unknown. To tackle them, we build our reconstruction module upon a cost volume-based neural surface reconstruction approach, SparseNeuS [40], which is a variant of MVSNeRF [6]. Additionally, we introduce a series of essential training strategies that enable the reconstruction of 360-degree meshes from inherently inconsistent multi-view predictions. We also propose an elevation estimation module that estimates the elevation of the input shape in Zero123's canonical coordinate system, which is used to compute the camera poses required by the reconstruction module.

Figure 1: One-2-3-45 reconstructs a full \(360^{\circ}\) mesh of any object in 45 seconds given a single image of it. In each example, we showcase the input image in the left column, alongside the generated textured and textureless meshes from three different views.
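To make the multi-view synthesis step concrete, the following minimal NumPy sketch shows one way to enumerate the camera poses: a handful of base poses spread over the sphere, each expanded with small relative transformations. The sampling scheme and the directions of the nearby views are illustrative assumptions; only the concrete numbers (8 base views, 4 nearby views \(10^{\circ}\) apart) are taken from the implementation details in Section 4.1.

```python
import numpy as np

def uniform_sphere_poses(n=8, seed=0):
    """Sample n camera positions roughly uniformly on the unit sphere,
    returned as (elevation, azimuth) pairs in degrees."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(-1.0, 1.0, n)          # uniform cos(polar) -> uniform on sphere
    azimuth = rng.uniform(0.0, 360.0, n)
    elevation = np.degrees(np.arcsin(z))   # in [-90, 90]
    return list(zip(elevation, azimuth))

def nearby_deltas(step_deg=10.0):
    """Four small relative (d_elevation, d_azimuth) transformations around a
    base view; the exact directions are an assumption, only the 10-degree
    spacing is stated in the paper."""
    return [(step_deg, 0.0), (-step_deg, 0.0), (0.0, step_deg), (0.0, -step_deg)]

# 8 base poses, each expanded with 4 nearby views -> 32 source views in total
base_poses = uniform_sphere_poses()
source_poses = [(e + de, (a + da) % 360.0)
                for (e, a) in base_poses for (de, da) in nearby_deltas()]
```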
By integrating the three modules of multi-view synthesis, elevation estimation, and 3D reconstruction, our method can reconstruct 3D meshes of any object from a single image in a feed-forward manner. Without costly optimizations, our method reconstructs 3D shapes in significantly less time, _e.g._, in just 45 seconds. Our method favors better geometry due to the use of SDF representations, and generates more consistent 3D meshes, thanks to the camera-conditioned multi-view predictions. Moreover, our reconstruction adheres more closely to the input image compared to existing methods. See Figure 1 for some of our example results. We evaluate our method on both synthetic data and real images and demonstrate that our method outperforms existing methods in terms of both quality and efficiency.

## 2 Related Work

### 3D Generation Guided by 2D Prior Models

Recently, 2D generative models (_e.g._, DALL-E [59; 58], Imagen [65], and Stable Diffusion [64]) and vision-language models (_e.g._, CLIP [56]) have learned a wide range of visual concepts by pre-training on Internet-scale image datasets. They possess powerful priors about our 3D world and have inspired a growing body of research to employ 2D prior models for assisting 3D generative tasks. Exemplified by DreamField [25], DreamFusion [55], and Magic3D [35], a line of works follows the paradigm of per-shape optimization. They typically optimize a 3D representation (_e.g._, NeRF, mesh, SMPL human model) and utilize differentiable rendering to generate 2D images from various views. The images are then fed to the CLIP model [22; 25; 47; 34; 3; 31; 2; 27; 83] or 2D diffusion model [55; 35; 68; 43; 12; 72; 82; 46; 93; 57] for calculating the loss functions, which are used to guide the 3D shape optimization. In addition to optimization-based 3D shape generation, some works train a 3D generative model but leverage the embedding space of CLIP [8; 39; 67], and some works focus on generating textures or materials for input meshes using 2D models' priors [47; 76; 7; 46; 63].

### Single Image to 3D

Before the emergence of CLIP and large-scale 2D diffusion models, people often learned 3D priors from synthetic 3D data [4] or real scans [60]. Unlike 2D images, 3D data can be represented in various formats, and numerous representation-specific 3D generative models have been proposed. By combining a 2D image encoder with 3D generators, these methods generate 3D data in various representations, including 3D voxels [18; 79; 10; 81; 80; 85], point clouds [15; 88; 19; 1; 44; 90], polygon meshes [30; 73; 51; 31], and parametric models [54; 94; 95]. Recently, there has been an increasing amount of work on learning to generate a 3D implicit field from a single image [84; 45; 66; 24; 53; 17; 20; 26; 50; 78; 49]. As previously mentioned, several recent works leverage 2D diffusion models to perform per-shape optimization, allowing for the text-to-3D task [55; 35; 25] given that diffusion models are typically conditioned on text. To enable the generation of 3D models from a single image, some works [43; 12; 46] utilize textual inversion [16] to find the best-matching text embedding for the input image, which is then fed into a diffusion model. NeuralLift-360 [23] adds a CLIP loss to enforce similarity between the rendered image and the input image. 3DFuse [68] finetunes the Stable Diffusion model with LoRA layers [23] and a sparse depth injector to ensure greater 3D consistency.
A recent work Zero123 [36] finetunes the Stable Diffusion model [65] to generate a novel view of the input image based on relative camera pose. In addition to these methods, OpenAI trains a 3D native diffusion model Point-E [52], which uses several million internal 3D models to generate point clouds. Very recently, they published another model, Shap-E [29], which is trained to generate parameters of implicit functions that can be used for producing textured meshes or neural radiance fields.

### Generalizable Neural Reconstruction

Traditional NeRF-like methods [48; 74] use a neural network to represent a single scene and require per-scene optimization. However, some approaches aim to learn priors across scenes and generalize to novel scenes. These methods typically take a few source views as input and leverage 2D networks for extracting 2D features. The pixel features are then unprojected into 3D space, and a NeRF-based rendering pipeline is applied on top of them. In this way, they can generate a 3D implicit field given a few source views in a single feed-forward pass. Among the methods, some [75; 61; 21; 89; 87; 37; 33; 70; 71] directly aggregate 2D features with MLPs or transformers, while others explicitly construct the 3D feature/cost volume [6; 28; 92; 40] and utilize the voxel feature for decoding density and color. In addition to the density field representation, some methods such as SparseNeuS [40] and VolRecon [62] utilize SDF representations for geometry reconstruction.

## 3 Method

Our overall pipeline is illustrated in Figure 2. In Section 3.1, we introduce a view-conditioned 2D diffusion model, Zero123 [36], which is used to generate multi-view images. In Section 3.2, we show that traditional NeRF-based and SDF-based methods fail to reconstruct high-quality meshes from inconsistent multi-view predictions even given ground truth camera poses. Therefore, in Section 3.3, we propose a cost volume-based neural surface reconstruction module that can be trained to handle inconsistent multi-view predictions and reconstruct a 3D mesh in a single feed-forward pass. Specifically, we build upon SparseNeuS [40] and introduce several critical training strategies to support \(360^{\circ}\) mesh reconstruction. Additionally, in Section 3.4, we demonstrate the necessity of estimating the pose of the input view in Zero123's canonical space for 3D reconstruction. While the azimuth and radius can be arbitrarily specified, we propose a novel module that utilizes four nearby views generated by Zero123 to estimate the elevation of the input view.

### Zero123: View-Conditioned 2D Diffusion

Recent 2D diffusion models [59; 65; 64] have demonstrated the ability to learn a wide range of visual concepts and strong priors by training on internet-scale data. While the original diffusion models mainly focused on the task of text-to-image, recent work [91; 23] has shown that fine-tuning pretrained models allows us to add various conditional controls to the diffusion models and generate images based on specific conditions. Several conditions, such as Canny edges, user scribbles, depth, and normal maps, have already proven effective [91]. The recent work Zero123 [36] shares a similar spirit and aims to add viewpoint condition control for the Stable Diffusion model [64]. Specifically, given a single RGB image of an object and a relative camera transformation, Zero123 aims to control the diffusion model to synthesize a new image under this transformed camera view.
To achieve this, Zero123 fine-tunes Stable Diffusion on paired images with their relative camera transformations, synthesized from a large-scale 3D dataset [11]. During the creation of the fine-tuning dataset, Zero123 assumes that the object is centered at the origin of the coordinate system and uses a spherical camera, _i.e._, the camera is placed on the sphere's surface and always looks at the origin. For two camera poses \((\theta_{1},\phi_{1},r_{1})\) and \((\theta_{2},\phi_{2},r_{2})\), where \(\theta_{i}\), \(\phi_{i}\), and \(r_{i}\) denote the polar angle, azimuth angle, and radius, their relative camera transformation is parameterized as \((\theta_{2}-\theta_{1},\phi_{2}-\phi_{1},r_{2}-r_{1})\). They aim to learn a model \(f\), such that \(f(x_{1},\theta_{2}-\theta_{1},\phi_{2}-\phi_{1},r_{2}-r_{1})\) is perceptually similar to \(x_{2}\), where \(x_{1}\) and \(x_{2}\) are two images of an object captured from different views. Zero123 finds that such fine-tuning enables the Stable Diffusion model to learn a generic mechanism for controlling the camera viewpoints, which extrapolates outside of the objects seen in the fine-tuning dataset.

Figure 2: Our method consists of three primary components: (a) **Multi-view synthesis**: we use a view-conditioned 2D diffusion model, Zero123 [36], to generate multi-view images in a two-stage manner. The input of Zero123 includes a single image and a relative camera transformation, which is parameterized by the relative spherical coordinates \((\Delta\theta,\Delta\phi,\Delta r)\). (b) **Pose estimation**: we estimate the elevation angle \(\theta\) of the input image based on four nearby views generated by Zero123. We then obtain the poses of the multi-view images by combining the specified relative poses with the estimated pose of the input view. (c) **3D reconstruction**: We feed the multi-view posed images to an SDF-based generalizable neural surface reconstruction module for \(360^{\circ}\) mesh reconstruction.

### Can NeRF Optimization Lift Multi-View Predictions to 3D?

Given a single image of an object, we can utilize Zero123 [36] to generate multi-view images, but can we use traditional NeRF-based or SDF-based methods [5, 74] to reconstruct high-quality 3D meshes from these predictions? We conduct a small experiment to test this hypothesis. Given a single image, we first generate 32 multi-view images using Zero123, with camera poses uniformly sampled from the sphere surface. We then feed the predictions to a NeRF-based method (TensoRF [48]) and an SDF-based method (NeuS [74]), which optimize density and SDF fields, respectively. However, as shown in Figure 3, both methods fail to produce satisfactory results, generating numerous distortions and floaters. This is primarily due to the inconsistency of Zero123's predictions. In Figure 4, we compare Zero123's predictions with ground-truth renderings. We can see that the overall PSNR is not very high, particularly when the input relative pose is large or the target pose is at unusual locations (_e.g._, from the bottom or the top). However, the mask IoU (most regions are greater than 0.95) and CLIP similarity are relatively good. This suggests that Zero123 tends to generate predictions that are perceptually similar to the ground truth and have similar contours or boundaries, but the pixel-level appearance may not be exactly the same. Nevertheless, such inconsistencies between the source views are already fatal to traditional optimization-based methods.
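Since everything downstream depends on this spherical parameterization, a small sketch may help. The following NumPy code builds a world-to-camera extrinsic for a camera on the sphere looking at the origin, plus the relative conditioning used by Zero123; the axis conventions here are illustrative and may differ from Zero123's actual implementation.

```python
import numpy as np

def look_at_extrinsic(theta_deg, phi_deg, radius):
    """World-to-camera [R|t] for a camera on a sphere of the given radius,
    looking at the origin; theta is the polar angle, phi the azimuth.
    (Assumes the camera is not exactly at a pole.)"""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    eye = radius * np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
    forward = -eye / np.linalg.norm(eye)       # camera looks at the origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])       # rows: camera axes in world coords
    return np.hstack([R, (-R @ eye)[:, None]])  # 3x4 extrinsic

def relative_condition(pose_1, pose_2):
    """Zero123's relative-pose conditioning (theta2-theta1, phi2-phi1, r2-r1)."""
    (t1, p1, r1), (t2, p2, r2) = pose_1, pose_2
    return (t2 - t1, p2 - p1, r2 - r1)
```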
Although the original Zero123 paper proposes another method for lifting its multi-view predictions, we will demonstrate in experiments that it also fails to yield perfect results and entails time-consuming optimization.

### Neural Surface Reconstruction from Imperfect Multi-View Predictions

Instead of using optimization-based approaches, we base our reconstruction module on a generalizable SDF reconstruction method, SparseNeuS [40], which is essentially a variant of the MVSNeRF [6] pipeline that combines multi-view stereo, neural scene representation, and volume rendering. As illustrated in Figure 2, our reconstruction module takes multiple source images with corresponding camera poses as input and generates a textured mesh in a single feed-forward pass. In this section, we will first briefly describe the network pipeline of the module and then explain how we train the module, select the source images, and generate textured meshes. Additionally, in Section 3.4, we will discuss how we generate the camera poses for the source images. As shown in Figure 2, our reconstruction module takes \(m\) posed source images as input. The module begins by extracting \(m\) 2D feature maps using a 2D feature network. Next, the module builds a 3D cost volume whose contents are computed by first projecting each 3D voxel to \(m\) 2D feature planes and then fetching the variance of the features across the \(m\) projected 2D locations. The cost volume is then processed using a sparse 3D CNN to obtain a geometry volume that encodes the underlying geometry of the input shape. To predict the SDF at an arbitrary 3D point, an MLP network takes the 3D coordinate and its corresponding interpolated features from the geometry encoding volume as input. To predict the color of a 3D point, another MLP network takes as input the 2D features at the projected locations, interpolated features from the geometry volume, and the viewing direction of the query ray relative to the viewing direction of the source images. The network predicts the blending weights for each source view, and the color of the 3D point is predicted as the weighted sum of its projected colors. Finally, an SDF-based rendering technique is applied on top of the two MLP networks for RGB and depth rendering [74].

Figure 3: NeRF-based method [48] and SDF-based method [74] fail to reconstruct high-quality meshes given multi-view images predicted by Zero123. See Figure 1 for our reconstruction results.

Figure 4: We analyze the prediction quality of Zero123 by comparing its predictions to ground truth renderings across various view transformations. For each view transformation, we report the average PSNR, mask IoU, and CLIP similarity of 100 shapes from the Objaverse [11] dataset. The prediction mask is calculated by considering foreground objects (_i.e._, non-white regions). Zero123 provides more accurate predictions when the view transformation is small.

**2-Stage Source View Selection and Groundtruth-Prediction Mixed Training.** Although the original SparseNeuS [40] paper only demonstrated frontal view reconstruction, we have extended it to reconstruct 360-degree meshes in a single feed-forward pass by selecting source views in a particular way and adding depth supervision during training. Specifically, our reconstruction model is trained on a 3D object dataset while freezing Zero123. We follow Zero123 to normalize the training shapes and use a spherical camera model.
For each shape, we first render \(n\) ground-truth RGB and depth images from \(n\) camera poses uniformly placed on the sphere. For each of the \(n\) views, we use Zero123 to predict four nearby views. During training, we feed all \(4\times n\) predictions with ground-truth poses into the reconstruction module and randomly choose one of the \(n\) ground-truth RGB views as the target view. We call this view selection strategy _2-stage source view selection_. We supervise the training with both the ground-truth RGB and depth values. In this way, the module can learn to handle the inconsistent predictions from Zero123 and reconstruct a consistent \(360^{\circ}\) mesh. We argue that our two-stage source view selection strategy is critical since uniformly choosing \(n\times 4\) source views from the sphere surface would result in larger distances between the camera poses. However, cost volume-based methods [40; 28; 6] typically rely on very close source views to find local correspondences. Furthermore, as shown in Figure 4, when the relative pose is small (_e.g._, 10 degrees apart), Zero123 can provide very accurate and consistent predictions and thus can be used to find local correspondences and infer the geometry. During training, we use \(n\) ground-truth renderings in the first stage to enable depth loss for better supervision. However, during inference, we can replace the \(n\) ground-truth renderings with Zero123 predictions, as shown in Figure 2, and no depth input is needed. We will show in the experiments that this groundtruth-prediction mixed training strategy is also important. To export the textured mesh, we use marching cubes [41] to extract the mesh from the predicted SDF field and query the color of the mesh vertices as described in [74]. Although our reconstruction module is trained on a 3D dataset, we find that it mainly relies on local correspondences and can generalize to unseen shapes very well.

### Camera Pose Estimation

Our reconstruction module requires camera poses for the \(4\times n\) source view images. Note that we adopt Zero123 for image synthesis, which parameterizes cameras in a canonical spherical coordinate frame, \((\theta,\phi,r)\), where \(\theta\), \(\phi\) and \(r\) represent the elevation, azimuth, and radius. While we can arbitrarily adjust the azimuth angle \(\phi\) and the radius \(r\) of all source view images simultaneously, resulting in the rotation and scaling of the reconstructed object accordingly, this parameterization requires knowing the absolute elevation angle \(\theta\) of one camera to determine the relative poses of all cameras in a standard XYZ frame. More specifically, the relative poses between camera \((\theta_{0},\phi_{0},r_{0})\) and camera \((\theta_{0}+\Delta\theta,\phi_{0}+\Delta\phi,r_{0})\) vary for different \(\theta_{0}\) even when \(\Delta\theta\) and \(\Delta\phi\) are the same. Because of this, changing the elevation angles of all source images together (_e.g._, by 30 degrees up or 30 degrees down) will lead to the distortion of the reconstructed shape (see Figure 10 for examples). Therefore, we propose an elevation estimation module to infer the elevation angle of the input image. First, we use Zero123 to predict four nearby views of the input image. Then we enumerate all possible elevation angles in a coarse-to-fine manner.
For each elevation candidate angle, we compute the corresponding camera poses for the four images and calculate a reprojection error for this set of camera poses to measure the consistency between the images and the camera poses. The elevation angle with the smallest reprojection error is used to generate the camera poses for all \(4\times n\) source views by combining the pose of the input view and the relative poses. Please refer to the supplementary for details on how we calculate the reprojection error for a set of posed images.

## 4 Experiments

### Implementation Details

For each input image, we generate \(n=8\) images by choosing camera poses uniformly placed on the sphere surface and then generate 4 local images (\(10^{\circ}\) apart) for each of the 8 views, resulting in 32 source-view images for reconstruction. During training, we freeze the Zero123 [36] model and train our reconstruction module on the Objaverse-LVIS [11] dataset, which contains 46k 3D models in 1,156 categories. We use BlenderProc [13] to render ground-truth RGB and depth images. For images with background, we utilize an off-the-shelf segmentation network, SAM [32], with bounding-box prompts for background removal. Please refer to the supplementary for more details.

### Single Image to 3D Mesh

We present qualitative examples of our method in Figures 1 and 5, illustrating its effectiveness in handling both synthetic images and real images. We also compare One-2-3-45 with existing zero-shot single image 3D reconstruction approaches, including Point-E [52], Shap-E [29], Zero123 (Stable Dreamfusion version) [36], 3DFuse [68], and RealFusion [43]. Among them, Point-E and Shap-E are two 3D native diffusion models released by OpenAI, which are trained on several million internal 3D models, while others are optimization-based approaches leveraging priors from Stable Diffusion [64].

Figure 5: Qualitative examples of One-2-3-45 for both synthetic and real images. Each triplet showcases an input image, a textured mesh, and a textureless mesh.

Figure 6: We compare One-2-3-45 with Point-E [52], Shap-E [29], Zero123 (Stable Dreamfusion version) [36], 3DFuse [68], and RealFusion [43]. In each example, we present both the textured and textureless meshes. As 3DFuse [68] and RealFusion [43] do not natively support the export of textured meshes, we showcase the results of volume rendering instead.

As shown in Figure 6, the two 3D native diffusion models produce lots of failure cases (see the backpack without shoulder straps, distorted shoe, and stool with three legs). In contrast, our approach leverages a powerful 2D diffusion model to directly produce high-quality multi-view images, rather than relying on 3D space hallucination. This strategy provides better adherence to the input views, alleviates the burden of the 3D reconstruction module, and yields results that are more finely attuned to the input. Furthermore, many approaches encounter challenges in achieving consistent 3D results (also known as the Janus problem [43, 55]), as highlighted in the right figure (two-handle mug, multi-face Mario, and two-face backpack). We also quantitatively compare the approaches on Objaverse [11] and GoogleScannedObjects (GSO) [14] datasets. For each dataset, we randomly choose 20 shapes and render a single image per shape for evaluation.
To align the predictions with the ground-truth mesh, we linearly search the scaling factor and the rotation angle, apply Iterative Closest Point (ICP) for sampled point clouds, and select the one with the largest number of inliers. We follow RealFusion [43] to report F-score (with a threshold of 0.05) and CLIP similarity, and the runtime on an A100 GPU. As shown in Table 1, our method outperforms all baseline approaches in terms of F-Score. As for CLIP similarity, we surpass all methods except the concurrent work Shap-E [29]. We find that CLIP similarity is very sensitive to the color distribution and less discriminative in local geometry variations (_e.g._, the number of legs of a stool, the number of handles of a mug). Regarding running time, our method demonstrates a notable advantage over optimization-based approaches and performs on par with 3D native diffusion models, such as Point-E [52] and Shap-E [29]. Specifically, our 3D reconstruction module reconstructs a 3D mesh in approximately 5 seconds, with the remaining time primarily spent on Zero123 predictions, which take roughly 1 second per image on an A100 GPU.

\begin{table} \begin{tabular}{c|c|c c c|c c c|c} \hline \hline & Prior & \multicolumn{3}{c|}{F-Score} & \multicolumn{3}{c|}{CLIP Similarity} & \multirow{2}{*}{Time} \\ & Source & GSO & Obj. & avg. & GSO & Obj. & avg. & \\ \hline Point-E [52] & internal & 81.0 & 81.0 & 81.0 & 74.3 & 78.5 & 76.4 & 7s \\ Shap-E [29] & 3D data & 83.4 & 81.2 & 82.3 & **79.6** & **82.1** & **80.9** & 27s \\ \hline Zero123+SD [36] & \multirow{4}{*}{2D diffusion} & 75.1 & 69.9 & 72.5 & 71.0 & 72.7 & 71.9 & \(\sim\)15min \\ RealFusion [43] & & 66.7 & 59.3 & 63.0 & 69.3 & – & 69.5 & \(\sim\)90min \\ 3DFuse [68] & & 60.7 & 60.2 & 64.1 & 71.4 & 70.7 & 72.7 & \(\sim\)30min \\ Ours & & **84.0** & **83.1** & **83.5** & 76.4 & 79.7 & 78.1 & 45s \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative Comparison on GSO [14] and Objaverse [11] datasets.

Figure 7: Error distribution of predicted elevations. The median and average are 5.4 and 9.7 degrees.

Figure 8: Ablations on training strategies of the reconstruction module and the number of views (settings, from left to right: one stage (32 views), one stage (8 views), no depth, all GT, all pred).

### Ablation Study

**Training strategies.** We ablate our training strategies in Figure 8. We found that without our 2-stage source view selection strategy, a network trained to consume 32 uniformly posed Zero123 predictions (first column) suffers from severe inconsistency among source views, causing the reconstruction module to fail completely. If we feed only 8 source views (second column) without the four nearby views, the reconstruction fails to capture local correspondence and cannot reconstruct fine-grained geometry. Similarly, when we do not apply the depth loss during training (third column), the network fails to learn how to reconstruct fine-grained geometries. During training, we first render \(n\) ground-truth renderings and then use Zero123 to predict four nearby views for each of them. If we train directly on \(8\times 4\) ground-truth renderings without Zero123 prediction during training (fourth column), it fails to generalize well to Zero123 predictions during inference, with many missing regions.
Instead, if we replace the \(n\) ground-truth renderings with \(n\) Zero123 predictions during training (fifth column), the network also breaks due to the incorrect depth supervision.

**Elevation estimation.** Our reconstruction module relies on accurate elevation angles of the input view. In Figure 10, we demonstrate the impact of providing incorrect elevation angles (_e.g._, altering the elevation angles of source views by \(\pm 30^{\circ}\)), which results in distorted reconstructions. Instead, utilizing our predicted elevation angles produces results that match those obtained with ground-truth elevations. We also quantitatively test our elevation estimation module by rendering 1,700 images from random camera poses. As shown in Figure 7, our elevation estimation module predicts accurate elevations.

**Number of source views.** In Figure 8, we also investigate the impact of varying the number of source views on 3D reconstruction. We observe that our method is not very sensitive to the number of views as long as the reconstruction module is retrained with the corresponding setting.

\(360^{\circ}\) **reconstruction vs. multi-view fusion.** While our method reconstructs a \(360^{\circ}\) mesh in a single pass, most existing generalizable neural reconstruction approaches [40, 28, 6] primarily focus on frontal view reconstruction. An alternative approach is to independently infer the geometry for each view and subsequently fuse them together. However, we have observed that this strategy often struggles with multi-view fusion due to inconsistent Zero123 predictions, as illustrated in Figure 9.

### Text to 3D Mesh

As shown in Figure 11, by integrating with off-the-shelf text-to-image 2D diffusion models [64, 58], our method can be naturally extended to support text-to-3D tasks and generate high-quality textured meshes in a short time. See supplementary for more examples.

## 5 Conclusion

In this paper, we present a novel method for reconstructing a high-quality \(360^{\circ}\) mesh of any object from a single image of it. In comparison to existing zero-shot approaches, our results exhibit superior geometry, enhanced 3D consistency, and a remarkable adherence to the input image. Notably, our approach reconstructs meshes in a single forward pass without the need for time-consuming optimization, resulting in significantly reduced processing time. Furthermore, our method can be effortlessly extended to support the text-to-3D task.

## 6 Appendix

We first show more qualitative comparisons in Section 6.1, which is followed by a demonstration of additional examples on real-world images and the text-to-3D task in Sections 6.2 and 6.3 respectively. Furthermore, we present the details of our elevation estimation module in Section 6.4, and training and evaluation details in Section 6.5. We finally show the failure cases and discuss the limitations in Section 6.6.

### More Qualitative Comparison

In Figure 12, we demonstrate more qualitative comparisons on the Objaverse [11] and GoogleScannedObjects (GSO) [14] datasets. Note that none of the test shapes were seen during the training of our 3D reconstruction module.

### More Examples on Real-World Images

In Figure 13, we showcase more examples on real-world images and compare our method with the concurrent method Shap-E [29]. The input images are from unsplash.com or captured by ourselves. Note that our results exhibit a closer adherence to the input image.
Figure 12: We compare One-2-3-45 with Point-E [52], Shap-E [29], Zero123 (Stable Dreamfusion version) [36], 3DFuse [68], and RealFusion [43]. In each example, we present both the textured and textureless meshes. As 3DFuse [68] and RealFusion [43] do not natively support the export of textured meshes, we showcase the results of volume rendering instead.

Figure 13: We compare One-2-3-45 with Shap-E [29] on real-world images. In each example, we present the input image, generated textured and textureless meshes.

### More Examples on Text-to-3D

In Figure 14, we present additional examples for the text-to-3D task. It is evident that existing approaches struggle to capture fine-grained details, such as a tree hollow, or achieve compositionality, as seen in examples like an orange stool with green legs, a pineapple-shaped Havana hat, or a rocking horse chair. In contrast, our method produces superior results that adhere more closely to the input text. We hypothesize that controlling such fine-grained attributes in the 3D space using existing optimization strategies is inherently challenging. However, by leveraging established 2D text-to-image diffusion models, our method becomes more effective in lifting a single 2D image to a corresponding 3D textured mesh.

Figure 14: Text-to-3D: We compare our method against two native text-to-3D approaches, Stable DreamFusion [55] and 3DFuse [68]. To enable text-to-3D, our method first uses a pretrained text-to-image model DALL-E 2 [58] to generate an image from input text (prompted with "3d model, long shot"), and then lifts the image to a 3D textured mesh.

### Details of Elevation Estimation

To estimate the elevation angle \(\theta\) of the input image, we first utilize Zero123 [36] to predict four nearby views (10 degrees apart) of the input view. With these predicted views, we proceed to enumerate all possible elevation angles and compute the re-projection error for each candidate angle. The re-projection error assesses the consistency between camera poses and image observations, akin to the bundle adjustment module employed in the Structure-from-Motion (SfM) pipeline. Specifically, we enumerate all candidate elevation angles in a coarse-to-fine manner. In the coarse stage, we enumerate elevation angles with a 10-degree interval. Once we have determined the elevation angle \(e^{*}\) associated with the smallest re-projection error, we proceed to the fine stage. In this stage, we enumerate elevation angle candidates ranging from \(e^{*}-10^{\circ}\) to \(e^{*}+10^{\circ}\) with a 1-degree interval. This coarse-to-fine design facilitates rapid estimation, completing the elevation estimation module in under 1 second for each shape. Given a set of four predicted nearby views, we perform feature matching to identify corresponding keypoints across each pair of images (a total of six pairs) using an off-the-shelf module LoFTR [69]. For each elevation angle candidate, we calculate the camera pose for the input image by employing the spherical coordinate system with a radius of 1.2 and an azimuth angle of 0. Note that the azimuth angle \(\phi\) and the radius \(r\) can be arbitrarily adjusted, resulting in the rotation and scaling of the reconstructed object accordingly. Subsequently, we obtain the camera poses for the four predicted views by incorporating the specified delta poses. Once we have the four posed images, we compute the re-projection error by enumerating image triplets.
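Before turning to how the re-projection error itself is computed (next paragraph), the outer coarse-to-fine search is easy to sketch. In the snippet below, `reprojection_error` is a placeholder callback standing in for the triplet-based computation described next; only the interval sizes are taken from the text above.

```python
import numpy as np

def estimate_elevation(reprojection_error, lo=-90.0, hi=90.0):
    """Coarse-to-fine search over elevation candidates.

    `reprojection_error(elev)` should pose the four nearby views for the
    candidate elevation and return the mean re-projection error of the
    matched keypoints (triplet-based, as described in the following text)."""
    # coarse stage: 10-degree steps over the whole range
    coarse = np.arange(lo, hi + 1e-9, 10.0)
    e_star = min(coarse, key=reprojection_error)
    # fine stage: 1-degree steps in [e* - 10, e* + 10]
    fine = np.arange(e_star - 10.0, e_star + 10.0 + 1e-9, 1.0)
    return min(fine, key=reprojection_error)
```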
For each triplet of images (\(a\), \(b\), \(c\)) sharing a set of keypoints \(P\), we consider each point \(p\in P\). Utilizing images \(a\) and \(b\), we perform triangulation to determine the 3D location of \(p\). We then project the 3D point onto the third image \(c\) and calculate the reprojection error, which is defined as the \(l1\) distance between the reprojected 2D pixel and the estimated keypoint in image \(c\). By enumerating all image triplets and their corresponding shared keypoints, we obtain the mean reprojection error for each elevation angle candidate.

### Details of Training and Evaluation

**Training.** We train the reconstruction module using the following loss function: \[\mathcal{L}=\mathcal{L}_{rgb}+\lambda_{0}\mathcal{L}_{depth}+\lambda_{1} \mathcal{L}_{eikonal}+\lambda_{2}\mathcal{L}_{sparsity} \tag{1}\] where \(\mathcal{L}_{rgb}\) represents the \(l1\) loss between the rendered and ground truth color, weighted by the sum of accumulated weights; \(\mathcal{L}_{depth}\) corresponds to the \(l1\) loss between the rendered and ground truth depth; \(\mathcal{L}_{eikonal}\) and \(\mathcal{L}_{sparsity}\) are the Eikonal and sparsity terms, respectively, following SparseNeuS [40]. We empirically set the weights as \(\lambda_{0}=1\), \(\lambda_{1}=0.1\), and \(\lambda_{2}=0.02\). For \(\lambda_{2}\), we adopt a linear warm-up strategy following SparseNeuS [40]. To train our reconstruction module, we utilize the LVIS subset of the Objaverse [11] dataset, which consists of 46k 3D models across 1,156 categories. The reconstruction module is trained for 300k iterations using two A10 GPUs, with the training process lasting approximately 6 days. It is important to note that our reconstruction module does not heavily rely on large-scale training data, as it primarily leverages local correspondence to infer the geometry, which is relatively easy to learn and generalize.

**Evaluation.** We evaluate all baseline approaches using their official codebases. Since the approaches take only a single image as input, the predicted mesh may not have the same scale and transformation as the ground-truth mesh. To ensure a fair comparison, we employ the following process to align the predicted mesh with the ground-truth mesh. First, we align the up direction for the results generated by each approach. Next, for each generated mesh, we perform a linear search over scales and rotation angles along the up direction. After applying each pair of scale and z-rotation, we utilize the Iterative Closest Point (ICP) algorithm to align the transformed mesh to the ground-truth mesh. Finally, we select the mesh with the largest number of inliers as the final alignment. This alignment process helps us establish a consistent reference frame for evaluating the predicted meshes across different approaches.

### Failure Cases and Limitations

Our method relies on Zero123 for generating multi-view images, which introduces challenges due to its occasional production of inconsistent results. In Figure 15, we present two typical cases that exemplify such inconsistencies. The first case involves an input view that lacks sufficient information, such as the back view of a fox. In this scenario, Zero123 struggles to generate consistent predictions for the invisible regions, such as the face of the fox. As a consequence, our method may encounter difficulties in accurately inferring the geometry for those regions. The second case involves an input view with ambiguous or complex structures, such as the pulp and peel of a banana.
In such situations, Zero123's ability to accurately infer the underlying geometry becomes limited. As a result, our method may be affected by the inconsistent predictions generated by Zero123. It is important to acknowledge that these limitations arise only in occasional scenarios, and they can impact the performance of our method in certain cases. Addressing these challenges and improving the reliability of Zero123's predictions remain areas for further investigation and improvement. We have also noticed slight artifacts on the back side of our generated results. As one of the first works combining view-conditioned 2D diffusion models with generalizable multi-view reconstruction, we believe that there is still ample room for exploring more advanced reconstruction techniques and incorporating additional regularizations. By doing so, we expect to significantly mitigate the minor artifacts and further enhance results in the future.

### Acknowledgements

We would like to thank the following Sketchfab users for the models used for the demo images in this paper: dimaponomar2019 (backpack), danielpeng (bag), pmlzbt233 (wooden barrel), felixyadomi (cactus), avianinda (burger), shedmon (robocat), ie-niels (stool), phucn (armchair), techCIR (mug), sabriny (fox). All models are CC-BY licensed.
2306.09808
Motivic homotopy theory of the classifying stack of finite groups of Lie type
Let $G$ be a reductive group over $\mathbb{F}_{p}$ with associated finite group of Lie type $G^{F}$. Let $T$ be a maximal torus contained inside a Borel $B$ of $G$. We relate the (rational) Tate motives of $\text{B}G^{F}$ with the $T$-equivariant Tate motives of the flag variety $G/B$. On the way, we show that for a reductive group $G$ over a field $k$, with maximal torus $T$ and absolute Weyl group $W$, acting on a smooth finite type $k$-scheme $X$, we have an isomorphism $A^{n}_{G}(X,m)_{\mathbb{Q}}\cong A^{n}_{T}(X,m)_{\mathbb{Q}}^{W}$ extending the classical result of Edidin-Graham to higher equivariant Chow groups in the non-split case. We also extend our main result to reductive group schemes over a regular base that admit maximal tori. Further, we apply our methods to more general quotient stacks. In this way, we are able to compute the motive of the stack of $G$-zips introduced by Pink-Wedhorn-Ziegler for reductive groups over fields of positive characteristic.
Can Yaylali
2023-06-16T12:46:44Z
http://arxiv.org/abs/2306.09808v5
# Motivic homotopy theory of the classifying stack of finite groups of Lie type

###### Abstract

Let \(G\) be a reductive group over \(\mathbb{F}_{p}\) with associated finite group of Lie type \(G^{F}\). Let \(T\) be a maximal torus contained inside a Borel \(B\) of \(G\). We relate the (rational) Tate motives of \(\mathrm{B}G^{F}\) with the \(T\)-equivariant Tate motives of the flag variety \(G/B\). On the way, we show that for a reductive group \(G\) over a field \(k\), with maximal torus \(T\) and absolute Weyl group \(W\), acting on a smooth \(k\)-scheme \(X\), we have an isomorphism \(A_{G}^{n}(X,m)_{\mathbb{Q}}\cong A_{T}^{n}(X,m)_{\mathbb{Q}}^{W}\) extending the classical result of Edidin-Graham to higher equivariant Chow groups.

###### Contents

* 1 Introduction
* 2 Rational equivariant motivic homotopy theory
* 3 Equivariant motives under split reductive groups
* 3.1 Torsors under finite groups
* 3.2 The relation between the motives of \([X/T]\) and \([X/G]\)
* 4 Torsors under split maximal tori
* 4.1 The motive of \(T\)-torsors
* 4.2 Motivic cohomology of \(T\)-torsors
* 5 Motivic cohomology of quotients up to isogeny
* 5.1 The case of split reductive group schemes
* 5.2 Arbitrary reductive groups over fields
* 5.3 Generalizations

## 1 Introduction

Let \(G\) be a reductive group over \(\mathbb{F}_{q}\), a finite field of characteristic \(p>0\), and \(\varphi\colon G\to G\) the \(q\)-Frobenius. Then \(G\) acts on itself via \(\varphi\)-conjugation, i.e. \((g,h)\mapsto gh\varphi(g)^{-1}\). The stabilizer of the neutral element is denoted by \(G^{F}\). If \(\overline{\mathbb{F}}_{q}\) denotes an algebraic closure of \(\mathbb{F}_{q}\), then \(G^{F}(\overline{\mathbb{F}}_{q})=G(\mathbb{F}_{q})\) is a finite group of Lie type. The representation theory of finite groups of Lie type over fields of characteristic \(0\) was studied by Deligne and Lusztig (cf. [16]). In their article they construct representations of \(G(\mathbb{F}_{q})\) by an action on the \(\ell\)-adic cohomology of certain varieties, for \(\ell\neq p\). Roughly, the varieties in question are constructed by intersection of Bruhat strata and the graph of Frobenius. Let us fix a Borel \(B\) of \(G\).1 The Bruhat strata of \([G/B]\) are induced by the Bruhat decomposition of \(G\) via pullback along \(G/B\to G/B\times^{G}G/B\cong[B\backslash G/B]\). In this article, we want to analyze the cohomological connection between \(\mathrm{B}G^{F}\) and \([B\backslash G/B]\), i.e. study how their motivic categories are related.

Footnote 1: Any reductive group over a finite field is quasi-split and thus admits a Borel.

The derived category of \(\ell\)-adic sheaves \(D(\mathrm{B}G^{F},\mathbb{Q}_{\ell})\), for \(\ell\neq p\), encodes information about the action of \(G^{F}\) on \(\ell\)-adic cohomology. One can show that \(\mathrm{B}G^{F}\cong[G/_{\varphi}G]\), where \(G\) acts on itself via \(\varphi\)-conjugation. The restriction of the \(\varphi\)-conjugation to \(B\) yields an adjunction; on the other hand, there is an adjunction which is induced via the graph of \(\varphi\). Thus, it seems natural that the study of these two adjunctions should lead to information about the geometric representation theory of \(G^{F}\) and its connection to the classical theory of Deligne-Lusztig. Instead of rewriting the theory of Deligne-Lusztig in the derived setting, we want to understand the adjunctions above in the motivic setting with rational coefficients.
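As a brief aside (a standard fact recalled here, not spelled out in the text below), the identification \(\mathrm{B}G^{F}\cong[G/_{\varphi}G]\) is a consequence of Lang's theorem for the connected group \(G\). The orbit map of the neutral element under \(\varphi\)-conjugation is

\[G\longrightarrow G,\qquad g\longmapsto g\varphi(g)^{-1},\]

which is surjective by Lang's theorem. Hence the \(\varphi\)-conjugation action of \(G\) on itself is transitive, and since the stabilizer of the neutral element is \(G^{F}\), one obtains \([G/_{\varphi}G]\simeq[\operatorname{Spec}(\mathbb{F}_{q})/G^{F}]=\mathrm{B}G^{F}\).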
The idea is that, after \(\ell\)-adic realization, we recover the classical situation; moreover, since we naturally work with rational coefficients, this could lead to information about the \(\mathbb{Q}\)-representations of \(G^{F}\).

### Motives and the connection to representation theory

Motives were famously envisioned by Grothendieck to capture similar behavior of cohomology theories in an abelian category. The construction of such a category is not an easy task and has been studied for many years. The main approach is to define a derived category of motives with the hope to find a \(t\)-structure on it, so that the heart of this \(t\)-structure defines the abelian category of motives. To capture functorial behavior on cohomology theories one demands a full six functor formalism for the derived category of motives. There are several versions of the derived category of motives which agree under certain assumptions. One version was constructed by Cisinski and Deglise in the case of rational coefficients, which we denote by DM (cf. [1]). They show that the assignment \(X\mapsto\mathrm{DM}(X)\) from smooth \(k\)-schemes indeed admits a six functor formalism (\(\otimes\dashv\underline{\mathrm{Hom}},f^{*}\dashv f_{*},f_{!}\dashv f^{!}\)) and agrees with the classical construction of Morel. In particular, they show that motivic cohomology, i.e. \(\mathrm{Hom}_{\mathrm{DM}(X)}(1_{X},1_{X}(n)[m])\), agrees with Bloch's higher Chow groups \(A^{n}(X,2n-m)_{\mathbb{Q}}\). With the help of the \(6\)-functor formalism, we can define the motive of a \(k\)-scheme \(\pi\colon X\to\operatorname{Spec}(k)\) resp. the global sections via

\[M_{k}(X)\coloneqq\pi_{!}\pi^{!}1_{k}\quad\text{resp.}\quad R\Gamma_{k}(X,\mathbb{Q})\coloneqq\pi_{*}\pi^{*}1_{k}\]

computing motivic cohomology resp. homology. The existence of a \(t\)-structure is a more delicate problem and in general not known. Levine shows that for a particular class of schemes \(X\), e.g. finite fields or affine spaces over finite fields, a \(t\)-structure exists on the full triangulated subcategory of Tate-motives \(\operatorname{DTM}(X)\subseteq\operatorname{DM}(X)\) generated by the \(1_{X}(n)\) for \(n\in\mathbb{Z}\) (cf. [11]). Further, using weight structures one can see that \(\operatorname{DTM}(\mathbb{F}_{p})\) is equivalent to the bounded derived category of \(\mathbb{Q}\)-vector spaces. We also have realization functors \(\operatorname{real}_{\ell}\colon\operatorname{DTM}(X)\to D_{\operatorname{\acute{e}t}}(X,\mathbb{Q}_{\ell})\) for \(\ell\neq p\) that are conservative and \(t\)-exact for the perverse \(t\)-structure on \(D_{\operatorname{\acute{e}t}}(X,\mathbb{Q}_{\ell})\). Let us remark that if for a morphism \(f\colon X\to Y\) we have \(f_{*}1_{X}\in\operatorname{DTM}(Y)\), then this automatically induces an adjunction of \(\operatorname{DTM}(X)\) and \(\operatorname{DTM}(Y)\). In particular, the adjunction \(f^{*}\dashv f_{*}\) restricts to Tate motives. We call morphisms with such a property _Tate_. Let us now explain the relation between Tate-motives and geometric representation theory. For simplicity, let us stay in our setting above, i.e. \(G/\mathbb{F}_{p}\) is a split reductive group. Let \(T\) be a split maximal torus inside a Borel \(B\) of \(G\). In [10], Soergel and Wendt show that for schemes stratified by affine spaces, such as the flag variety \(G/B\), one can define a subcategory of DM called _stratified Tate-motives_ that admits a \(t\)-structure.
This \(t\)-structure is glued from the \(t\)-structures on the strata. For \(G/B\) with the Bruhat stratification, we will denote the category of stratified Tate-motives by \(\operatorname{DTM}_{(B)}(G/B)\). Soergel and Wendt show that \(\operatorname{DTM}_{(B)}(G/B)\) is equivalent to the bounded derived category \(D^{b}(\mathcal{O}^{\mathbb{Z},ev})\) of graded \(\mathcal{O}\coloneqq H^{*}(G/B)\)-modules concentrated in even degrees. To connect this to representations, we have to go further and endow motivic cohomology with group actions. For this, we need to define equivariant motives. To make sense of the following construction, we need to work in the setting of \(\infty\)-categories. The idea is to define \(\operatorname{DM}(\mathcal{X})\) for an Artin-stack \(\mathcal{X}\) via gluing along the atlas. As we essentially have to glue the derived category, this only makes sense in the \(\infty\)-categorical framework. This gluing can be defined via right Kan-extension from schemes to Artin-stacks (cf. [10]). As one would expect, the motivic cohomology of a quotient stack \([X/G]\), where \(X\) is a smooth \(k\)-scheme and \(G\) is a linear algebraic group, yields the equivariant Chow-groups \(A^{n}_{G}(X,2n-m)\) of Edidin and Graham (cf. [10]). In this way, one can also extend the stratified Tate-motives to Artin-stacks. In the case of the flag variety, Soergel, Virk and Wendt show that \(\operatorname{DTM}([B\backslash G/B])\) is equivalent to the bounded derived category of Soergel bimodules (cf. [10]). Further, they show that applying \(K_{0}\) yields an isomorphism to the Iwahori-Hecke algebra and that Verdier duality yields the Kazhdan-Lusztig involution. In particular, the stratified Tate-motives of \([B\backslash G/B]\) with the weight structure and \(6\)-functor formalism carry information about the \(\ell\)-adic geometric representations of \(G\).

### Connection between finite groups of Lie type and their associated flag variety

As we have seen above, the geometric representation theory of the flag variety is linked to stratified Tate motives. This particular connection uses that the flag variety is stratified by affine spaces and that Tate-motives behave nicely under this stratification. We expect that the geometric representation theory of \(G^{F}\) is linked to Tate-motives on \(\mathrm{B}G^{F}\), the classifying space of \(G^{F}\). In this case, we cannot apply the theory of [11] as \(G^{F}\) is not split reductive. But we can still link Tate motives of \(\mathrm{B}G^{F}\) to Tate motives on \([B\backslash G/B]\) in the following way. The stack \(\mathrm{B}G^{F}\) is equivalent to \([G/_{\varphi}G]\), where \(G\) acts on itself via \(\varphi\)-conjugation. Let us fix a maximal torus \(T\subseteq B\). Let \(T\) act on \(G\) also via \(\varphi\)-conjugation. We can embed \(T\) into \(T\times T\) via \(t\mapsto(t,\varphi(t))\). If we now let \(T\times T\) act on \(G\) via \((t,t^{\prime},g)\mapsto tgt^{\prime-1}\), we get a zigzag of Artin-stacks

\[\mathrm{B}G^{F}\simeq[G/_{\varphi}G]\longleftarrow[G/_{\varphi}T]\longrightarrow[G/T\times T].\tag{1.0.1}\]

It is a fact that \(\mathrm{DM}([G/T\times T])\simeq\mathrm{DM}([B\backslash G/B])\). Thus, on the level of motivic categories, this zigzag yields adjunctions between \(\mathrm{DM}(\mathrm{B}G^{F})\) and \(\mathrm{DM}([B\backslash G/B])\). Now we can formulate the leading question of this article:

(\(*\)) _Do these adjunctions preserve Tate-motives?_

The answer to this question is positive and yields a first point to access the motivic representation theory of \(G^{F}\) via the motivic geometric representation theory of \(G\).
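Let us unwind the fact \(\mathrm{DM}([G/T\times T])\simeq\mathrm{DM}([B\backslash G/B])\) used above; the following is only a sketch, whose precise input is Lemma 3.9 below on split unipotent groups. Since \(B\cong R_{u}(B)\rtimes T\), applying that lemma to the split exact sequence

\[1\to R_{u}(B)\times R_{u}(B)\to B\times B\to T\times T\to 1\]

and the action of \(B\times B\) on \(G\) given by \((b,b^{\prime},g)\mapsto bgb^{\prime-1}\) yields

\[\operatorname{DM}([G/T\times T])\simeq\operatorname{DM}([G/B\times B])=\operatorname{DM}([B\backslash G/B]).\]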
### Equivariant motivic homotopy theory of split reductive groups

From now on let \(k\) be a field and \(S\) a regular \(k\)-scheme of finite type. We will first work with split reductive \(S\)-group schemes in this generality. Later on, we are going to focus on the case where \(S=\mathrm{Spec}(k)\), and as any reductive group over a field becomes split after a finite Galois extension, we deduce the answer to our leading question (\(*\)) from the split case. Thus, for now let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) contained in a Borel \(B\) of \(G\). Our main question is about the behavior of Tate motives under the induced maps on DM corresponding to the zigzag (1.0.1). We will work in a more general setting and look at \(a\) and \(b\) separately.

### Equivariant motives and passage to tori

The morphism \(a\) is the motivic version of a more classical problem on Chow groups. Let \(X\) be an \(S\)-scheme with \(G\)-action; what is the relation between \(A^{\bullet}_{G}(X)\) and \(A^{\bullet}_{T}(X)\)? In [1] Edidin and Graham answer this question for rational Chow groups in the case \(S=\mathrm{Spec}(k)\), i.e. \(A^{\bullet}_{G}(X)_{\mathbb{Q}}\cong A^{\bullet}_{T}(X)^{W}_{\mathbb{Q}}\), where \(W\) denotes the Weyl group of \(T\) in \(G\). This isomorphism is just a shadow of an equivalence that can be seen motivically.

**Theorem 1** (3.11).: _Let \(S\) be smooth over \(k\). Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) and Weyl group \(W\). Assume \(G\) acts on an \(S\)-scheme \(X\) locally of finite type. Then the natural map_ \[[X/T]\to[X/G]\] _is Tate._

_Further, we have_ \[R\Gamma_{S}([X/G]\,,\mathbb{Q})\simeq R\Gamma_{S}([X/T]\,,\mathbb{Q})^{W}.\]

In particular, applying this result to motivic cohomology in the case where \(S=\operatorname{Spec}(k)\), we can extend the classical result to _higher_ Chow groups even in the non-split case.

**Corollary 2** (3.12).: _Let \(G\) be a reductive \(k\)-group scheme with maximal torus \(T\) and absolute Weyl group \(W\). Assume \(G\) acts on a smooth \(k\)-scheme \(X\). Then for all \(n,m\in\mathbb{Z}\), we have_ \[A_{G}^{n}(X,m)_{\mathbb{Q}}\cong A_{T}^{n}(X,m)_{\mathbb{Q}}^{W}.\]

The idea of the proof of Theorem 1 is to factorize \([X/T]\to[X/G]\) into \[[X/T]\to[X/N_{G}(T)]\to[X/G]\,.\] Then the first map of the factorization is naturally a \(W\)-torsor and the second map a \(G/N_{G}(T)\)-bundle. For torsors under finite groups, etale descent relates the motives via \(W\)-invariants. For \(G/N_{G}(T)\)-bundles it suffices to see that \(R\Gamma_{S}(G/N_{G}(T),\mathbb{Q})\) is trivial. We will prove this by reducing the triviality to the statement that the map \(K_{0}(S)\to K_{T}(G)^{W}\) is an isomorphism, which is known by classical results in equivariant \(K\)-theory.

#### Motives of \(T\)-torsors

Let us return to our main setting and consider the embedding \(T\hookrightarrow T\times T\) given by the graph of \(\varphi\). The quotient \(T\times T/T\) under this embedding is isomorphic to \(T\). In particular, this isomorphism gives the map \[b\colon\,[G/_{\varphi}T]\to[G/T\times T]\] the structure of a \(T\)-torsor. Next, we want to understand motives of \(T\)-torsors. So let \(X\to Y\) be a morphism of Artin-stacks that is a \(T\)-torsor. Classically, Chow groups in this setting can be computed rather easily. For each character \(\chi\colon T\to\mathbb{G}_{\mathrm{m}}\) we get a \(1\)-dimensional representation \(\kappa(\chi)\) of \(T\).
This yields a line bundle \(L_{\chi}\coloneqq X\times^{T}\kappa(\chi)\) on \(Y\). Multiplication with the first Chern class of \(L_{\chi}\) yields an action of the character group \(\hat{T}\) of \(T\) on \(A^{\bullet}(Y)\). In the case of quotient stacks, like the morphism \(b\), we get \[A_{T}^{\bullet}(G)\cong A_{T\times T}^{\bullet}(G)/\hat{T}A_{T\times T}^{\bullet}(G)\] (cf. [14]). Again, this is just a shadow of computations for oriented cohomology theories. The idea is the following. As \(T\) is split, we have \(T\cong\mathbb{G}_{\mathrm{m}}^{r}\) for some \(r\in\mathbb{N}\). By applying successive \(\mathbb{G}_{\mathrm{m}}\)-quotients, we can write \[X\to X_{1}\to X_{2}\to\dots\to X_{r}\cong Y,\] where \(X_{i}\coloneqq\left[X/\mathbb{G}_{\mathrm{m}}^{i}\right]\). Each of the maps \(X_{i-1}\to X_{i}\) is a \(\mathbb{G}_{\mathrm{m}}\)-torsor. So we may reduce to the case where \(T=\mathbb{G}_{\mathrm{m}}\). In this case, we can follow [11] and assign the line bundle \(\mathcal{L}\coloneqq X\times^{\mathbb{G}_{\mathrm{m}}}\mathbb{A}^{1}\) over \(Y\). Multiplication with the first Chern class of \(\mathcal{L}\) yields a fiber sequence \[M_{k}(X)\to M_{k}(Y)\to M_{k}(Y)(1)[2].\] Applying this result to arbitrary \(T\)-torsors using the sequence above yields the following.

**Proposition 3** (4.3).: _Let \(f\colon X\to Y\) be a \(T\)-torsor of smooth Artin stacks over \(S\). Then \(f\) is a Tate map._

Applying this again to motivic cohomology yields the corresponding result on Chow groups of Artin stacks.

**Corollary 4** (4.6).: _Assume that \(S=\mathrm{Spec}(k)\) and let \(X\to Y\) be a \(T\)-torsor of smooth Artin stacks over \(k\). Then_ \[A^{\bullet}(X)_{\mathbb{Q}}\cong A^{\bullet}(Y)_{\mathbb{Q}}/\hat{T}A^{\bullet}(Y)_{\mathbb{Q}}.\]

If \(X\) and \(Y\) are represented by quotients of qcqs schemes by diagonalizable group schemes, for example for \(b\) as above, we can actually replace the Chow ring with equivariant \(K_{0}\). More generally, the analogous statement holds for any oriented cohomology theory that is \(m\)-connective (cf. Remark 4.7).

#### Applications to quotients up to conjugation by isogeny

We have seen above that the maps relating \(\mathrm{B}G^{F}\) and \(\left[G/T\times T\right]\) are Tate. But we have not really used that the conjugation is by the Frobenius; the arguments work for conjugation up to an arbitrary _isogeny_. Thus, we can work in a more general setting, which we describe in the following. Let \(S\) be a quasi-compact smooth \(k\)-scheme. Let \(G\) be a split reductive \(S\)-group scheme, \(P\) resp. \(Q\) be parabolics inside \(G\) with Levi-components \(L\) resp. \(M\). Let \(\varphi\colon L\to M\) be an isogeny. Then \(L\) acts on \(G\) via \((l,g)\mapsto lg\varphi(l)^{-1}\). Let \(T\) be a split maximal torus of \(G\) contained in \(L\). Fix a \(g_{0}\in G(S)\) such that \(g_{0}\varphi(T)g_{0}^{-1}=T\) and denote by \(\widetilde{\varphi}\) the composition of \(\varphi\) and \(g_{0}\)-conjugation. We can embed \(T\) into \(T\times T\) via \(t\mapsto(t,\widetilde{\varphi}(t))\).
**Theorem 5** (5.1).: _In the setting above, we have the following zigzag of Tate maps_ \[[G/_{\varphi}L]\longleftarrow[G/_{\widetilde{\varphi}}T]\longrightarrow[G/T\times T].\] _Further, if \(S=\mathrm{Spec}(k)\) we can compute the motivic cohomology of \(\left[G/_{\varphi}L\right]\) as_ \[A^{n}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong\left(A^{n}_{T}(G/B)_{\mathbb{Q}}/\hat{T}A^{n}_{T}(G/B)_{\mathbb{Q}}\right)^{W_{L}}\]

Using results about equivariant \(K\)-theory by Uma and Krishna and our results on the cohomology theory of \(T\)-torsors, we can extend the situation above to equivariant \(K\)-theory and generalize a result by Brokemper.

**Corollary 6** (4.9).: _In the setting above, assume that \(S=\operatorname{Spec}(k)\). Then we have_ \[K_{0}([G/_{\varphi}L])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}^{W_{L}}/(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}}),\] _where \(W_{G}\) denotes the Weyl group of \(T\) in \(G\)._

**Example 7**.: _Let us give two interesting examples where Theorem 5 can be used._

1. _Let \(k=\mathbb{F}_{q}\) be a finite field with \(q\) elements and assume \(S=\operatorname{Spec}(k)\). If we set \(L=G\) and \(\varphi\) the \(q\)-Frobenius, we recover precisely the situation from the beginning. In particular, we see that there is an adjunction between \(\operatorname{DTM}(\operatorname{B}G^{F})\) and \(\operatorname{DTM}([B\backslash G/B])\)._
2. _Let \(k\) be a finite field of characteristic \(p>0\) and assume \(S=\operatorname{Spec}(k)\). Another interesting example is the stack of \(G\)-zips of Pink-Wedhorn-Ziegler (cf. Example 5.5). In particular, we can partially recover the computations of Brokemper [10] for Chow groups and we can generalize these further to the Grothendieck ring of the stack of \(G\)-zips._

### The case of arbitrary reductive groups over fields

As before let \(k\) be a field. Further, let \(G\) be a reductive group over \(k\) with maximal torus \(T\). As before, let \(\varphi\colon L\to M\) be an isogeny between Levi components of parabolics. As we have seen, \(\varphi\)-conjugation yields an action of \(L\) on \(G\) and we get a zigzag \[[G/_{\varphi}L]\longleftarrow[G/_{\widetilde{\varphi}}T]\longrightarrow[G/T\times T],\] where \(b\) is induced via the graph of \(\varphi\) (up to conjugation by an element of \(G\)). We are ready to answer the leading question of this article (\(\ast\)) by reducing to the split case. To prove Theorem 1, we needed to understand torsors under finite groups; this also becomes helpful in this situation. Any reductive group over a field \(k\) becomes split after passing to a finite Galois extension \(K/k\). In particular, \(\operatorname{Spec}(K)\to\operatorname{Spec}(k)\) is a \(\operatorname{Gal}(K/k)\)-torsor, and therefore we can deduce from Theorem 5 the analogous statement for arbitrary reductive groups.

**Corollary 8** (5.6).: _Let \(G\) be a reductive group scheme over a field \(k\) and \(T\) a maximal torus. Further, let \(\varphi\colon L\to M\) be an isogeny between Levi components of parabolics. As before, the maps in the induced zigzag are Tate._

#### Structure of this article

We start this article by recalling properties of the \(\infty\)-category of motives and how to extend this to arbitrary Artin stacks. After defining the necessary notions for this article, we quickly recollect some computational aspects. Afterwards, we start to focus on motives on schemes with group action. First, we explain how to achieve a group action on motives and how torsors under finite groups of Artin-stacks have a particular behavior. Then we concentrate on the case \(T\subseteq G\), a split maximal torus inside a reductive group.
Namely, we show that the relation between \(T\)-equivariant Chow groups and \(G\)-equivariant Chow groups extends to the motivic case. Next, we show that \(T\)-torsors of Artin-stacks are Tate and explicitly compute their motivic cohomology, recovering the classical case of Chow groups. In the end, we focus on split reductive groups with conjugation up to isogeny. We use our results from before to get the desired adjunction of Tate motives with the \(T\)-equivariant flag variety. Further, we show how to extend all of these results to the case of quasi-split reductive groups. We end the paper with ideas for generalization that we want to address in the future.

#### Setup

Throughout, we fix a noetherian excellent scheme \(\tilde{B}\) of dimension at most \(2\) and a regular scheme \(S\) of finite type over \(\tilde{B}\). An Artin-stack is an algebraic stack in the sense of [14]. Every Artin stack will be of finite type over \(S\) and any morphism of Artin stacks will be an \(S\)-morphism. Throughout, we will work in the setting of \(\infty\)-categories and freely use their language. Throughout, DM denotes the Beilinson motives with coefficients in \(\mathbb{Q}\). Let \(X\) be an \(S\)-scheme. Note that \(\operatorname{DM}(X)\simeq D_{\mathbb{A}^{1},\operatorname{et}}(X,\mathbb{Q})\) by [13, Thm. 16.2.18]. Since we work over an excellent base, the functor \(X\mapsto\operatorname{DM}(X)\) satisfies \(h\)-descent (cf. [13, Thm. 14.3.4]).

### Acknowledgement

I would like to thank Paul Ziegler, who communicated this project and shared his thoughts with me. Further, I would like to thank Torsten Wedhorn for multiple discussions and comments. Finally, I would like to thank Arnaud Eteve, Marc Hoyois, Adeel Khan, Jakob Scholbach, Fabio Tanania, Timo Richarz, Thibaud van den Hove for fruitful discussions and feedback. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 _Geometry and Arithmetic of Uniformized Structures_, project number 444845124 and by the LOEWE grant 'Uniformized Structures in Algebra and Geometry'.

## 2 Rational equivariant motivic homotopy theory

In this section, we want to recall some properties of the category of (rational) motives and how to extend this to Artin stacks. We expect that most readers are familiar with the notion of motives and the \(6\)-functor formalism and refer to [14, Syn. 2.1.1] for an overview of the properties of the \(6\)-functor formalism. Nevertheless, to prevent confusion, let us quickly recall some notation of _loc.cit._.

**Remark 2.1**.: In the following any scheme and any morphism will be considered in the category of finite type \(S\)-schemes, \(\operatorname{Sch}_{S}^{\operatorname{ft}}\). 1. For any \(S\)-scheme \(X\), \(\operatorname{DM}(X)\) is a stable, presentable, closed symmetric monoidal \(\infty\)-category. The \(\otimes\)-unit will be denoted by \(1_{X}\). It has all limits and colimits. 2. The assignment \(X\mapsto\operatorname{DM}(X)\) can be upgraded to a presheaf of symmetric monoidal \(\infty\)-categories \[\operatorname{DM}^{*}\colon\operatorname{Sch}_{S}^{\operatorname{ft}}\to\operatorname{Cat}_{\infty}^{\otimes},\ X\mapsto\operatorname{DM}(X),\ f\mapsto f^{*}.\] For any morphism of schemes \(f\colon X\to Y\), there is an adjunction \[f^{*}\colon\operatorname{DM}(Y)\rightleftarrows\operatorname{DM}(X)\colon f_{*}.\] 3. If \(f\) is smooth, then \(f^{*}\) has a left adjoint, denoted \(f_{\sharp}\). 4.
The assignment \(X\mapsto\operatorname{DM}(X)\) can be upgraded to a presheaf of \(\infty\)-categories \[\operatorname{DM}:(\operatorname{Sch}_{S}^{\operatorname{ft}})^{\operatorname{op}}\to\operatorname{Cat}_{\infty},\ X\mapsto\operatorname{DM}(X),\ f\mapsto f^{!}.\] For each \(f\), there is an adjunction \[f_{!}:\operatorname{DM}(X)\rightleftarrows\operatorname{DM}(Y):f^{!}.\] For any factorization \(f=p\circ j\) with \(j\) an open immersion and \(p\) a proper map, there is a natural equivalence \(f_{!}\cong p_{*}j_{\sharp}\). 5. For the projection \(p:\mathbb{G}_{\operatorname{m},S}\times_{S}X\to X\), and any \(M\in\operatorname{DM}(X)\), the map \(p_{\sharp}p^{*}M[-1]\to M[-1]\) in \(\operatorname{DM}(X)\) is a split epimorphism. The complementary summand is denoted by \(M(1)\). The functor \(M\mapsto M(1)\) is an equivalence with inverse denoted by \(M\mapsto M(-1)\). For any integer \(n\) the \(n\)-fold composition is denoted by \(M\mapsto M(n)\) and in the future, we will abbreviate \(\langle n\rangle\coloneqq(n)[2n]\).

Let \(\underline{X}\) be a prestack, i.e. a presheaf of anima on the category of rings. There are several approaches to the \(\infty\)-category \(\operatorname{DM}(\underline{X})\). If \(\underline{X}\) is an Artin-stack over a field \(k\), there are constructions of its motive similar to equivariant Chow groups. One resolves \(\underline{X}\) by open substacks \((\underline{X}_{i})\) such that on each \(\underline{X}_{i}\) there is a vector bundle \(V_{i}\) together with an open \(U_{i}\) with a free \(G\)-action such that the codimension of \(V_{i}\setminus U_{i}\) tends towards infinity (cf. [10]). This construction was already used for the motive of classifying stacks \(\operatorname{B}G\) by Morel-Voevodsky (cf. [32, §4]). Totaro then gave an explicit computation of the motive of \(\operatorname{B}\mathbb{G}_{\operatorname{m}}\) over a field (cf. [19] and Example 2.8). Alternatively, Richarz-Scholbach give a construction via certain left and right Kan extensions (cf. [18]). Their approach is based on gluing the motivic structure on Beilinson motives to arbitrary prestacks. Indeed, \(\operatorname{DM}(-)\) satisfies \(h\)-descent, so it is rather formal to extend the six functor formalism to Artin stacks; we will use this approach. One should note that this was also discussed in [17] to extend the \(6\)-functor formalism to higher Artin stacks. For computations of the underlying motives it seems to be better to work with the definition of Hoskins-Lehalleur resp. Morel-Voevodsky. Let \(\mathfrak{X}\) be an Artin stack with structure morphism \(f\colon\mathfrak{X}\to\operatorname{Spec}(k)\) and let \(M(\mathfrak{X})\) denote the \(k\)-linear motive of \(\mathfrak{X}\) defined in [16]. This defines an object in \(\operatorname{DM}^{*}(\operatorname{Spec}(k))\). Let \(1_{k}\) denote the unit in \(\operatorname{DM}^{*}(\operatorname{Spec}(k))\). We will see in Corollary 2.7 that if \(f\) is smooth, we have \(M(\mathfrak{X})\simeq f_{\sharp}f^{*}1_{k}\). In particular, if we use the approach of [18] and define a motive of a prestack as the \(\sharp\)-push/\(*\)-pull of the unit, we see that our notion of motives on Artin stacks agrees with the classical ones.

**Definition 2.2** ([18]).: Let \(y\colon\operatorname{Aff}^{\operatorname{ft}}_{S}\hookrightarrow P(\operatorname{Aff}^{\operatorname{ft}}_{S})\) be the Yoneda embedding, where \(\operatorname{Aff}^{\operatorname{ft}}_{S}\) denotes the (nerve of the) category of affine schemes of finite type over \(S\).
We denote the right Kan extension of \(\operatorname{DM}^{!}_{S}\colon(\operatorname{Aff}^{\operatorname{ft}}_{S})^{\operatorname{op}}\to\operatorname{DGCat}_{\operatorname{cont}}\) along \(y\), where the transition functors are given via \(!\)-pullback, by \(\operatorname{DM}_{S}\); here \(\operatorname{DGCat}_{\operatorname{cont}}\) denotes the \(\infty\)-category of presentable stable \(\mathbb{Q}\)-linear dg-\(\infty\)-categories with colimit preserving functors. For a prestack \(\underline{X}\), we define the \(\infty\)-category of \(S\)_-linear motives (with rational coefficients) of \(\underline{X}\)_ as \(\operatorname{DM}_{S}(\underline{X})\). Note that in [18], Richarz and Scholbach give a definition of \(\operatorname{DM}\) for presheaves on all rings to anima. But as we only work with Artin-stacks that are of finite type over \(S\), our definition suffices. Khan showed in [17, Thm. A.5] that this method of extending the theory of motives to (derived) Artin stacks does not lose the \(6\)-functor formalism. One way to see this is that we can use the DESCENT program in [10], since the Beilinson motives satisfy etale descent in our context. As mentioned in [17], this is equivalent to the construction of [18].

**Theorem 2.3**.: _Let \(\operatorname{\widetilde{\operatorname{DM}}}\) be the restriction of \(\operatorname{DM}\) to Artin-stacks of finite type over \(S\). Then \(\operatorname{\widetilde{\operatorname{DM}}}\) is compatible with the \(6\)-functor formalism in the sense of [18, Syn. 2.1.1]._

Proof.: The proof is the same as [17, Thm A.5]2.

Footnote 2: As mentioned in _op.cit._, the method of extending the \(6\)-functor formalism works with any motivic category that satisfies étale descent.

**Definition 2.4**.: Let \(\mathfrak{X}\) be an Artin \(S\)-stack with structure morphism \(f\colon\mathfrak{X}\to S\). Then we define the _(rational) \(S\)-linear motive of \(\mathfrak{X}\)_ as \(M_{S}(\mathfrak{X})\coloneqq f_{\sharp}f^{!}1_{S}\). If \(S=\operatorname{Spec}(A)\) is affine, we write \(M_{A}(\mathfrak{X})\). We further define the _global sections of \(\mathfrak{X}\) over \(S\)_ to be \(R\Gamma_{S}(\mathfrak{X},\mathbb{Q})\coloneqq f_{*}f^{*}1_{S}\).

**Remark 2.5**.: Let \(f\colon\mathfrak{X}\to S\) be an Artin-stack over \(S\). If \(f\) is smooth, then relative purity implies that \(f_{\sharp}f^{*}\simeq f_{\sharp}f^{!}\) and in particular, we see with \[\operatorname{Hom}_{\operatorname{DM}(S)}(M_{S}(\mathfrak{X}),\mathbb{Q}(n)[m])\simeq\operatorname{Hom}_{\operatorname{DM}(S)}(\mathbb{Q}(-n)[-m],R\Gamma_{S}(\mathfrak{X},\mathbb{Q}))\] that \(M_{S}(\mathfrak{X})\) computes motivic cohomology and \(R\Gamma_{S}(\mathfrak{X},\mathbb{Q})\) motivic homology.

**Notation 2.6**.: Let \(G\) be an \(S\)-group scheme acting on an \(S\)-scheme \(X\) via a morphism \(a\). For the quotient stack \([X/G]\), we can define a simplicial object in finite type \(S\)-schemes via its _Bar-resolution_, with \(n\)-simplices \(\operatorname{Bar}^{n}(X,G)=G^{\times_{S}n}\times_{S}X\); we denote the corresponding simplicial object by \(\operatorname{Bar}^{\bullet}(X,G)\). There is also an alternative way to define motives of algebraic stacks via the Bar-resolution. For each \(n\geq 0\) let \(\mathbb{Q}(\operatorname{Bar}^{n}(X,G))\) be the free etale sheaf with coefficients in \(\mathbb{Q}\) associated to \(\operatorname{Bar}^{n}(X,G)\). This yields a simplicial object in etale sheaves of finite type \(S\)-schemes with rational coefficients.
The complex associated to this simplicial object induces a motive \(M_{S}(\operatorname{Bar}^{\bullet}(X,G))\) in \(\operatorname{DM}(S)\). For \(S=\operatorname{Spec}(k)\), the spectrum of a field, Hoskins-Lehalleur explain in [10] that this definition is equivalent to their definition of a motive of an Artin stack. The naturally arising question is if \(M_{k}(\operatorname{Bar}^{\bullet}(X,G))\) is equivalent to \(M_{k}([X/G])\) as defined in Definition 2.4. If \([X/G]\) is representable by a smooth scheme, then the answer is positive and follows by cohomological descent for DM with respect to the \(h\)-topology (cf. [1, Thm. 14.3.4, Prop. 5.2.10]). Thus, the answer stays positive for smooth Artin stacks, as DM satisfies \(h\)-descent by gluing (cf. [13, Thm. 2.2.16]).

**Corollary 2.7**.: _Let \(k\) be a field. Let \(X\) be a smooth \(k\)-scheme of finite type and \(G\) be a smooth \(k\)-group scheme acting on \(X\) with structure map \(f\colon[X/G]\to\operatorname{Spec}(k)\). Then \(M_{k}([X/G])\) is equivalent to \(M_{k}(\operatorname{Bar}^{\bullet}(X,G))\)._

Proof.: This follows from [10, Prop. A.7] and the discussion above.

We can use Corollary 2.7 to compute the motive of \(B\operatorname{\mathbb{G}_{m}}\) as in [10].

**Example 2.8**.: Let \(k\) be a field. Further, let \(\operatorname{\mathbb{G}_{m,k}}\) act trivially on \(\operatorname{Spec}(k)\). Then \[M_{k}(B\operatorname{\mathbb{G}_{m,k}})\simeq\operatorname{colim}_{i\in\mathbb{N}}M_{k}(\operatorname{\mathbb{P}}_{k}^{i})\simeq\bigoplus_{i\geq 0}1_{k}\langle i\rangle.\]

**Remark 2.9**.: In the following we want to understand the Gysin sequence for algebraic stacks. Let us quickly recall it in the scheme case. Let \(i\colon Z\hookrightarrow X\) be a closed immersion of \(S\)-schemes of pure codimension \(n\) with open complement \(U\). Let us assume that \(Z\) and \(X\) are smooth over \(S\). In particular, we see that \(i\) is equivalently a regular closed immersion of codimension \(n\). Then there exists a fiber sequence of the form \[M_{S}(U)\to M_{S}(X)\to M_{S}(Z)\langle n\rangle\] (cf. [11, 11.3.4]). We are going to replace \(S\) by a smooth Artin stack \(\mathfrak{Y}\) over \(S\) and \(X\) by a smooth Artin stack over \(\mathfrak{Y}\). For this let us also recall the notion of a (regular) closed immersion of a certain codimension for Artin stacks. Let \(\iota\colon\mathfrak{Z}\hookrightarrow\mathfrak{X}\) be a closed immersion of locally noetherian Artin stacks. Let \(X\to\mathfrak{X}\) be a smooth atlas. Then \(\iota\) is representable and we define the codimension of \(\mathfrak{Z}\) as the codimension of \(\mathfrak{Z}\times_{\mathfrak{X}}X\) in \(X\) (cf. [11, §6]). We can also define the notion of a regular immersion in that way (cf. [10, 06FM]) and the notion of its codimension. In particular, a closed immersion of \(S\)-smooth Artin stacks \(\mathfrak{Z}\hookrightarrow\mathfrak{X}\) is automatically regularly immersed and the codimension of the regular immersion agrees with the codimension as a closed substack.

**Lemma 2.10** (The Gysin sequence).: _Let \(f\colon\mathfrak{X}\to\mathfrak{Y}\) be a smooth schematic morphism of smooth Artin-stacks. Further let \(i\colon\mathfrak{Z}\hookrightarrow\mathfrak{X}\) be a closed immersion of (pure) codimension \(n\) such that \(\mathfrak{Z}\) is smooth over \(\mathfrak{Y}\) with open complement \(j\colon\mathfrak{U}\to\mathfrak{X}\). Further, let us denote \(f_{0}\coloneqq f\circ j\) and \(\bar{f}\coloneqq f\circ i\).
Then there exists the following fiber sequence_ \[f_{0!}f_{0}^{!}1_{\mathfrak{Y}}\to f_{!}f^{!}1_{\mathfrak{Y}}\to\bar{f}_{!}\bar{f}^{!}1_{\mathfrak{Y}}\langle n\rangle.\]

Proof.: Let \(Y\to\mathfrak{Y}\) be a smooth atlas. Let us define \(X\coloneqq Y\times_{\mathfrak{Y}}\mathfrak{X}\) and let \(\check{C}(Y)_{\bullet}\) resp. \(\check{C}(X)_{\bullet}\) denote the corresponding Čech nerves. By construction \(\check{C}(X)_{\bullet}\) is obtained by \(\check{C}(Y)_{\bullet}\times_{Y}X\). So, by functoriality we get maps \[f_{\bullet!}\colon\operatorname{DM}(\check{C}(X)_{\bullet})\rightleftarrows\operatorname{DM}(\check{C}(Y)_{\bullet})\colon f_{\bullet}^{!}\] that induce the maps \(f_{!}\) and \(f^{!}\) after passing to the limit. By construction we have a pullback diagram of the Čech nerves. In particular, by smoothness of the atlas and the exchange equivalence, we have \[j_{Y,\bullet}^{*}f_{!}f^{!}1_{Y}\simeq f_{\bullet!}f_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}.\] Thus, by smoothness we can use that \(\operatorname{DM}^{*}(\mathfrak{Y})\simeq\operatorname{DM}^{!}(\mathfrak{Y})\) and descent to see that \[\lim_{\Delta}f_{\bullet!}f_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq f_{!}f^{!}1_{Y}.\] Analogously, we can write \[\lim_{\Delta}f_{0\bullet!}f_{0\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq f_{0!}f_{0}^{!}1_{Y},\quad\lim_{\Delta}\bar{f}_{\bullet!}\bar{f}_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq\bar{f}_{!}\bar{f}^{!}1_{Y}.\] Therefore, we may assume that \(\mathfrak{Y}\) is representable by a scheme and by representability of \(f\) also \(\mathfrak{X},\mathfrak{U}\) and \(\mathfrak{Z}\) are representable by schemes. Hence, the result now follows from the classical Gysin sequence (cf. [10, 11.3.4]).

Lastly, let us define Tate-motives. As we mentioned in the introduction, the existence of a motivic \(t\)-structure is still an open problem. For a field \(k\) Levine proved that under certain vanishing assumptions on motivic cohomology in \(\operatorname{DM}(k)\), the so called Beilinson-Soule vanishing conjecture, such a \(t\)-structure exists on the full stable subcategory generated by Tate-twists \(1_{k}(n)\) (cf. [13]). The Beilinson-Soule vanishing conjecture holds for example for finite fields.

**Definition 2.11**.: Let \(X\) be an Artin stack. We define the category of Tate-motives \(\operatorname{DTM}(X)\) to be the full stable subcategory of \(\operatorname{DM}(X)\) generated by \(1_{X}(n)\), for \(n\in\mathbb{Z}\). An element \(M\in\operatorname{DM}(X)\) is _Tate_, if \(M\in\operatorname{DTM}(X)\). A map \(f\colon X\to Y\) of Artin stacks over \(S\) is called _Tate_ if \(f_{*}1_{X}\) is Tate.

Levine further shows that the existence of a weight structure on Tate motives for a field implies that the heart of \(\operatorname{DTM}(\mathbb{F}_{p})\) under the motivic \(t\)-structure is equivalent to the category of graded finite dimensional \(\mathbb{Q}\)-vector spaces \((\mathbb{Q}\text{-}\mathrm{VS}^{\mathbb{Z}})\). In particular, using a classical result of Wildeshaus, we see for example that \(\operatorname{DTM}(\mathbb{F}_{p})\simeq\mathcal{D}^{b}(\mathbb{Q}\text{-}\mathrm{VS}^{\mathbb{Z}})\), where \(\mathcal{D}^{b}\) denotes the bounded derived category. Let us give a particularly interesting example of a Tate map that will be used later on.

**Example 2.12**.: Let \(G\) be a split reductive \(S\)-group scheme and \(B\subseteq G\) a Borel. Then we claim that the structure map of the flag variety \(G/B\to S\) is Tate.
Indeed, the Bruhat decomposition of \(G/B\) yields a stratification by affine spaces indexed by the Weyl group. The length of each Weyl element yields a partial order on the associated Schubert varieties. Using this order, one can show by standard arguments that \(R\Gamma_{S}(G/B,\mathbb{Q})\simeq\bigoplus_{w\in W}1_{S}\langle l(w)-n\rangle\), where \(n\) denotes the relative dimension of \(G/B\) over \(S\) (cf. [23] or [1] for more details on analogous problems).

**Remark 2.13**.: If \(f\colon X\to Y\) is a smooth morphism of Artin stacks and \(X\) is smooth over \(S\), then \(D_{Y}(f_{*}1_{X})\simeq f_{!}D_{X}(1_{X})\simeq f_{!}1_{X}\langle\Omega_{X/S}\rangle\) and \(D_{Y}(f_{!}1_{X})\simeq f_{*}1_{X}\langle\Omega_{X/S}\rangle\). Thus, \(f_{*}1_{X}\) is Tate if and only if \(f_{!}1_{X}\) is Tate.

In the literature one also considers the stable _cocomplete_ \(\infty\)-category generated by Tate-twists. This is usually also referred to as "Tate-motives" but we will differentiate these from our definition. An example of such motives is given by the motive of \(\operatorname{B}\mathbb{G}_{\mathrm{m}}\to\operatorname{Spec}(k)\), the classifying stack of \(\mathbb{G}_{\mathrm{m}}\) over a field \(k\). Indeed, \(M_{k}(\operatorname{B}(\mathbb{G}_{\mathrm{m}}))\simeq\bigoplus_{n\geq 0}1_{k}\langle n\rangle\) and thus this lies in the ind-completion of the category of Tate-motives.

**Definition 2.14**.: Let \(X\) be an Artin-stack. We will call an \(M\in\operatorname{DM}(X)\) _completed Tate_ if it is already in the full stable cocomplete subcategory generated by \(1_{X}(n)\).

## 3 Equivariant motives under split reductive groups

Let \(G\) be a split reductive \(S\)-group scheme and \(T\) a split maximal torus in \(G\) with Weyl group \(W\), where \(W\) denotes the \(S\)-points of the Weyl group scheme (cf. [1, 20] for more on Weyl groups of split reductive group schemes). Let \(X\) be a scheme with \(G\)-action. In this section, we want to show that the natural map \(f\colon\,[X/T]\to[X/G]\) is Tate, i.e. \(f_{*}1_{[X/T]}\) is a Tate-motive in \(\operatorname{DM}([X/G])\). The key idea is to use the factorization \([X/T]\to[X/N]\to[X/G]\), where \(N\) is the normalizer of \(T\) in \(G\). We note that per definition the map \(g\colon\,[X/T]\to[X/N]\) is a \(W\)-torsor. As the constant group scheme associated to \(W\) is finite etale, we see that after passage to an etale cover, \([X/T]\) is isomorphic to the disjoint union of \([X/N]\) indexed by \(W\). Thus, automatically \(g\) is Tate. The map \([X/N]\to[X/G]\) is a \(G/N\)-torsor. Up to taking \(W\)-invariants, we can identify \(G/N\) with \(G/B\). On \(G/B\), we have a stratification by Schubert cells, which are affine spaces. In this way, we can decompose \(p_{*}1_{G/B}\) as a direct sum of twists and shifts indexed by \(W\), where \(p\colon G/B\to S\) is the structure map. This will enable us to reduce the question to ordinary equivariant \(K\)-theory. More precisely, it will be enough to show that \(K_{0}(S)\to K_{T}(G)^{W}\) is an isomorphism, which is classically known. Before coming to our main result of this section, we will first introduce group actions on motives. In the end, we will see that our computations show that, analogously to the case of Chow-groups, we have \(R\Gamma_{S}([X/G]\,,\mathbb{Q})\cong R\Gamma_{S}([X/T]\,,\mathbb{Q})^{W}\).

**Remark 3.1**.: A key argument in this section is that torsors under finite etale group schemes have related motives by taking invariants of the action.
This is a major obstruction for the generalization to integral coefficients, as we expect that this is only satisfied if we have etale descent (cf. [1, Thm. 3.3.32]). Nevertheless, using the theory of etale motives it should be possible to obtain analogous results after inverting only the residue characteristics of our base scheme.

### Torsors under finite groups

To warm up, we first show that torsors under finite groups are Tate. In particular, it will be clear that the canonical map \([X/T]\to[X/N]\), considered above, is Tate.

**Lemma 3.2**.: _Let \(f\colon X\to Y\) be a \(G\)-torsor of Artin stacks under a finite etale \(S\)-group scheme \(G\). Then \(f\) is Tate, i.e. \(f_{*}1_{X}\) is a Tate-motive in \(\operatorname{DM}(Y)\)._

Proof.: Let \(n\) denote the degree of \(G\) over \(S\). Then we claim that the natural map \(\coprod_{i=1}^{n}1_{Y}\to f_{*}1_{X}\) induced via the unit of \(f_{*}f^{*}\) is an equivalence. Indeed, by \(h\)-descent we may assume that \(f\) is given by the trivial \(G\)-torsor \(G\times_{S}Y\to Y\) and \(Y\) is represented by a scheme. In particular, \(f\) is a finite etale cover of degree \(n\). After passage to an etale cover, we may assume that \(G\times_{S}Y\cong\coprod_{i=1}^{n}Y\), implying the claim.

Next, let us analyze the structure of the motives with respect to the base. For this, we need to understand group actions on motives and taking fixed points under these actions. An action of a group \(G\) on a motive \(M\) in \(\operatorname{DM}(X)\) is a map \(G\to\operatorname{Aut}_{\operatorname{DM}(S)}(M)\) that is a group homomorphism on \(\pi_{0}\). Or equivalently, it is a map \(\Sigma_{G}\to\operatorname{DM}(X)\) (here \(\Sigma_{G}\) denotes the deloop of the group \(G\) seen as a discrete category; usually this is denoted by \(\operatorname{B}G\), but to avoid confusion we changed the notation).

**Definition 3.3**.: Let \(M\) be a motive in \(\operatorname{DM}(X)\) with an action by a finite group \(G\). Then we define the _homotopy \(G\)-fixed points of \(M\)_, denoted by \(M^{hG}\), as the limit of the action map \(\Sigma_{G}\to\operatorname{DM}(X)\).

**Definition and Remark 3.4**.: Let \(X\) be an Artin stack with an action by a finite group \(G\). For each \(g\), we have an action map \(a_{g}\colon X\to X\). By construction of \(\operatorname{DM}(X)\) this defines a map \(g\colon 1_{X}\to 1_{X}\) by lax-monoidality of the \(*\)-pushforward. This endows any \(M\in\mathrm{DM}(X)\) with an action via \(g\). We define the _\(G\)-fixed points of \(M\)_, denoted by \(M^{G}\), as the image3 of the map

Footnote 3: Note that \(\mathrm{DM}(X)\) is pseudo-abelian and hence for any idempotent operator we can define its image.

\[p=\frac{1}{\#G}\sum_{g\in G}g.\]

The canonical map \(M^{G}\to M\) defines an equivalence \(M^{G}\xrightarrow{\sim}M^{hG}\) (cf. [10, 3.3.21]). Let \(X\to Y\) be a \(G\)-torsor of Artin-stacks over \(S\) (here we see \(G\) as a constant group scheme on \(S\)). Then the group of \(G\)-torsor \(Y\)-automorphisms of \(X\) is isomorphic to \(G\), and thus we get a \(G\)-action on \(f_{*}f^{*}E\) for any \(E\in\mathrm{DM}(Y)\) (via \(*\)-pushforward of a \(G\)-torsor \(Y\)-automorphism).
Note that \(f_{*}f^{*}E\) can be used to compute the motivic cohomology of \(X\) with coefficients in \(E\) for smooth \(f\), as \[\mathrm{Hom}_{\mathrm{DM}(Y)}(1_{Y},f_{*}f^{*}E)=\mathrm{Hom}_{\mathrm{DM}(X)}(f^{*}1_{Y},f^{*}E)=\mathrm{Hom}_{\mathrm{DM}(Y)}(M_{Y}(X),E).\] In particular, writing the fixed points as a limit, we see that \[\mathrm{Hom}_{\mathrm{DM}(Y)}(1_{Y},(f_{*}f^{*}E)^{G})=\mathrm{Hom}_{\mathrm{DM}(Y)}(M_{Y}(X),E)^{G}.\] If \(G\) is finite, then the \(G\)-torsor \(f\) is etale and proper, hence \(f_{*}f^{*}\simeq f_{!}f^{!}\).

Let \(k\) be a field and \(X\) a scheme over \(\mathrm{Spec}(k)\). Further, let \(K/k\) be a finite Galois extension and let us denote the base change of \(X\) to \(\mathrm{Spec}(K)\) by \(X_{K}\). The rational Chow groups of \(X\) and \(X_{K}\) are related by the fixed points under the Galois group, i.e. \(A^{n}(X)=A^{n}(X_{K})^{\mathrm{Gal}(K/k)}\). As one expects, this also holds motivically. This is due to Ayoub and Cisinski-Deglise. As noted in Remark 3.1, we do not expect this to hold when we do not impose etale descent. Thus, we do not expect the next lemma to hold with integral coefficients.

**Lemma 3.5**.: _Let \(f\colon X\to Y\) be a \(G\)-torsor of Artin stacks under a finite group \(G\). Then the unit factors as \(\mathrm{id}\to(f_{*}f^{*})^{G}\to f_{*}f^{*}\) and the map \(\mathrm{id}\to(f_{*}f^{*})^{G}\) is an equivalence._

Proof.: The factorization of the unit follows from the description of \((f_{*}f^{*})^{G}\) as a limit. We claim that \(\mathrm{id}\to(f_{*}f^{*})^{G}\) is an equivalence. It suffices to check this after base change to a smooth atlas of \(Y\). In particular, we may assume that \(Y\) is a scheme. Since \(\mathrm{DM}_{\mathbb{Q}}\) satisfies \(h\)-descent, we may assume4 that \(f\) is a trivial \(G\)-torsor, where it follows from [1, Prop. 2.1.166].

Footnote 4: Using [10, Prop. 3.3.31], we see that \(M_{Y}(X)^{G}\simeq\varphi_{*}\varphi^{*}1_{Y}\), where \(\varphi\colon(\mathscr{X},G)\to Y\) is the induced morphism of the diagram \(G\to\mathrm{Sch}_{S}\) that maps the single point of the category \(G=(*,\mathrm{End}_{G}(*)=G)\) to \(X\) and the morphisms to the actions. Then we can base change using [10, Prop. 3.1.17].

**Example 3.6**.: Let us consider the \(W\)-torsor \(f\colon[X/T]\to[X/N]\). As \(W\) is finite etale, Lemma 3.5 implies that \((f_{*}f^{*}1_{[X/N]})^{W}\simeq(f_{!}f^{!}1_{[X/N]})^{W}\simeq 1_{[X/N]}\). After \(\sharp\)- resp. \(*\)-pushforward to the base \(S\) along the structure map \([X/N]\to S\), we see that \[M_{S}([X/N])\simeq M_{S}([X/T])^{W}\text{ resp. }R\Gamma_{S}([X/N],\mathbb{Q})\simeq R\Gamma_{S}([X/T],\mathbb{Q})^{W}.\] Now assume that \(S=\operatorname{Spec}(k)\) is the spectrum of a field. Applying the latter equivalence to motivic cohomology yields \[A^{n}_{N}(X,m)\cong A^{n}_{T}(X,m)^{W}\] for the equivariant intersection theory of \(X\).

### The relation between the motives of \([X/T]\) and \([X/G]\)

We have seen that the map \([X/T]\to[X/N]\) is Tate and how to use the action of the Weyl group to compute the motive of \([X/N]\) with respect to the motive of \([X/T]\). Now our goal is to show that the map \([X/N]\to[X/G]\) is Tate and how to compute the \(!\)-push/pull of this map. This will be achieved by analyzing the motive of \(G/N\). Using that \(G/T\to G/N\) is again a \(W\)-torsor, we will reduce to the case of the flag variety \(G/B\). For this, we will need that equivariant motives do not see the action of split unipotent subgroups. Now let us recall the definition of a split unipotent subgroup.
These are iterated extensions of vector bundles; e.g. a Borel \(B\) containing a maximal split torus \(T\) is an extension of \(T\) by a split unipotent subgroup.

**Definition 3.7**.: An algebraic \(S\)-group scheme \(U\) is called _split-unipotent_ if there exists a normal series, i.e. a filtration \(U=U_{n}\supseteq U_{n-1}\supseteq\dots\supseteq U_{0}=0\) such that \(U_{i}\) is normal in \(U_{i+1}\), with each successive quotient isomorphic to a vector bundle \(\mathbb{V}(\mathcal{E})\), where \(\mathcal{E}\) is a finite locally free \(\mathcal{O}_{S}\)-module.

**Example 3.8**.: The \(S\)-subgroup scheme of unipotent matrices \(\mathbb{U}_{n,S}\) in \(\operatorname{GL}_{n,S}\) is split unipotent. More generally, let \(G\) be a reductive \(S\)-group scheme and \(P\) be a parabolic in \(G\); then the unipotent radical \(R_{u}(P)\) of \(P\) is split unipotent (cf. [1, Exp. XXVI Prop. 2.1]).

**Lemma 3.9**.: _Let \(F\) be a linear algebraic \(S\)-group scheme. Consider a split exact sequence of \(S\)-group schemes_ \[1\to U\to F\to H\to 1\] _where \(U\) is split unipotent. Choose a splitting \(\pi\colon H\hookrightarrow F\). Let \(X\) be an \(S\)-scheme of finite type with an \(F\)-action. Then the \(!\)-pullback induces an equivalence_ \[\pi^{!}\colon\operatorname{DM}([X/F])\xrightarrow{\sim}\operatorname{DM}([X/H]).\]

Proof.: This is analogous to the proof of [13, Prop. 2.2.11] but for completion, we give a proof. The morphism \(\pi^{!}\) induces a morphism \(\pi^{!}_{n}\colon\operatorname{DM}(\operatorname{Bar}^{n}(X,F))\to\operatorname{DM}(\operatorname{Bar}^{n}(X,H))\). Using [1, Lem. B.6], it suffices to show that \(\pi^{!}\colon\operatorname{DM}(F)\to\operatorname{DM}(H)\) is fully faithful. By assumption \(F\to H\) is a \(U\)-torsor and using etale descent, we may assume that it is the trivial \(U\)-torsor, i.e. the projection \(U\times H\to H\). Replacing \(H\) by \(S\), it suffices to show that \(\pi^{!}\colon\operatorname{DM}(S)\to\operatorname{DM}(U)\) is fully faithful. As \(U\) is split unipotent, it has a filtration by subgroups \(U_{i}\) with successive quotients isomorphic to a vector bundle. Using the same argumentation as above, we may assume that \(U\) is a vector bundle, in which case the assertion is clear by homotopy invariance.

Let \(B\) be a Borel containing \(T\) inside \(G\) and let \(X\) be an \(S\)-scheme with \(B\)-action. The above lemma shows that \(\operatorname{DM}([X/T])\simeq\operatorname{DM}([X/B])\). Our next idea is to use that \(G/T\to G/N\) is a \(W\)-torsor. The results on torsors under finite groups combined with the above lemma will yield that \(M_{S}(G/N)\simeq M_{S}(G/B)^{W}\). As the flag variety \(G/B\) is stratified by Schubert cells, which are affine spaces, one can calculate \(M_{S}(G/B)\) explicitly (cf. [11]). We will use the computations of _op.cit._ to see that the map \(p\colon\,[X/N]\to[X/G]\) is Tate.

**Proposition 3.10**.: _Let us assume that \(\tilde{B}=\operatorname{Spec}(k)\) is the spectrum of a field and \(S\) is geometrically regular and of finite type over \(\tilde{B}\). Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) and Weyl group \(W\). Let \(N\) denote the normalizer of \(T\) in \(G\). Further, let \(X\) be an \(S\)-scheme with \(G\)-action and \(f\colon\,[X/N]\to[X/G]\) the canonical map. Then the unit \(1_{[X/G]}\to f_{*}f^{*}1_{[X/G]}\) is an equivalence.
In particular, \(f_{*}1_{[X/N]}\) is a Tate-motive in \(\operatorname{DM}([X/G])\)._

Let us summarize the idea of the proof. We will show that the unit \(1_{[X/G]}\to f_{*}f^{*}1_{[X/G]}\) is an equivalence. To see this we will use a pullback diagram (cf. [10, 04Y4]): in particular, after etale descent, we may assume that \(f\) is given by the projection \(G/N\to S\). Using our calculations about torsors under finite groups, it is enough to show that the induced map \(1_{S}\to R\Gamma_{S}(G/T,\mathbb{Q})^{W}\) is an equivalence. But as \(R\Gamma_{S}(G/T,\mathbb{Q})^{W}\simeq R\Gamma_{S}(G/B,\mathbb{Q})^{W}\), we see that it is Tate. Thus, we will reduce this question to a classical question about Chow rings, at least when \(S\) is a field, namely whether the pullback map \[A^{\bullet}(S)=A^{\bullet}_{G}(G)\to A^{\bullet}_{T}(G)^{W}\] is an isomorphism. But this is known by Edidin-Graham (cf. [1]). If \(S\) is not a field, we have to work with \(K\)-theory and then this follows from [13].

Proof of Proposition.: Throughout this proof, we will denote for readability the structure map of an Artin-stack \(\mathcal{X}\) to \(S\) by \(p_{\mathcal{X}}\). Let \(n\) denote the relative dimension of \(G/B\). We will first show that the unit \[1_{[X/G]}\to f_{*}f^{*}1_{[X/G]}\] is an equivalence, proving that the map \(f\) is indeed Tate. By etale descent, we may assume that \(Y\coloneqq[X/G]\) is represented by a scheme and the map \(f\) is given by the structure map \(G/N\to S\). Then it is enough to show that the unit \(1_{S}\to R\Gamma_{S}(G/N,\mathbb{Q})\) is an equivalence, since the pullback of this equivalence along \(p_{X}\) yields the desired equivalence. Note that the natural map \(g\colon G/T\to G/N\) is naturally a \(W\)-torsor. Hence, by Lemma 3.5, we see that \(1_{G/N}\simeq(g_{*}1_{G/T})^{W}\) and thus \(R\Gamma_{S}(G/N,\mathbb{Q})\simeq p_{G/N*}(g_{*}1_{G/T})^{W}\). Since \(p_{G/N*}\) is a right adjoint it commutes with limits. Thus, we have \[p_{G/N*}(g_{*}1_{G/T})^{W}\simeq(p_{G/N*}g_{*}1_{G/T})^{W}\simeq(p_{G/T*}1_{G/T})^{W}\simeq R\Gamma_{S}(G/T,\mathbb{Q})^{W}\] By Lemma 3.9, we see that \(R\Gamma_{S}(G/T,\mathbb{Q})\simeq R\Gamma_{S}(G/B,\mathbb{Q})\). By Example 2.12, we have that \[R\Gamma_{S}(G/B,\mathbb{Q})\simeq\bigoplus_{w\in W}1_{S}\langle l(w)-n\rangle,\] where \(n\) is the relative dimension of the flag variety \(G/B\). In particular, \(R\Gamma_{S}(G/B,\mathbb{Q})\) and thus \(R\Gamma_{S}(G/T,\mathbb{Q})\) is Tate. As the \(W\)-invariants are defined as an image of a map, we see that \(R\Gamma_{S}(G/T,\mathbb{Q})^{W}\) is also Tate. The \(\infty\)-category of Tate-motives over \(S\) is the stable subcategory of \(\operatorname{DM}(S)\) generated by \(1(r)\), for \(r\in\mathbb{Z}\). Therefore, the natural map \(1_{S}\to R\Gamma_{S}(G/T,\mathbb{Q})^{W}\) is an equivalence if and only if the induced map \[\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S}(r)[m],1_{S})\to\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S}(r)[m],R\Gamma_{S}(G/T,\mathbb{Q})^{W}) \tag{3.10.1}\] is an equivalence for all \(r\in\mathbb{Z}\) and \(m\leq 0\)5. If \(r>0\), then the right hand side is isomorphic to \(K_{-2r+m}(S)^{(-r)}\) and thus vanishes, as the negative Adams eigenspaces vanish per definition (cf. [11]).

Footnote 5: By commutativity of Hom with suspensions (a colimit).

By Remark 3.4 and the computation of \(p_{G/B*}1_{G/B}\), we see that
\[\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S}(r)[m],R\Gamma_{S}(G/T, \mathbb{Q})^{W})\cong(\bigoplus_{w\in W}K_{-2r+m}(S)^{(l(w)-n-r)})^{W},\] which also vanishes for \(r>0\) as \(l(w)-n\leq 0\). Therefore, we may assume from now on that \(r\leq 0\). Further, we may assume that \(m\geq 2r\), as otherwise \(-2r+m<0\) and thus also the \(K\)-groups above vanish (since \(S\) is regular noetherian). Writing \(\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S}(r)[m],1_{S})\simeq \operatorname{Hom}_{\operatorname{DM}(S)}(1_{S}(r)[2r+(m-2r)],1_{S})\) and using that \(-2r+m\geq 0\), we may assume without loss of generality that \(m=2r\) (again use the commutativity of the Hom-functor with limits). Since \(S\) is finite dimensional (cf. [11, Prop. 14.109]), it is a fact that \(K_{0}(S)^{(i)}\) vanishes for all but finitely many \(i\in\mathbb{Z}\) (cf. [11, SS2]). Therefore, the morphism (3.10.1) is an isomorphism if and only if \[\bigoplus_{r\in\mathbb{Z}}\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S} \langle r\rangle,1_{S})\to\bigoplus_{r\in\mathbb{Z}}\operatorname{Hom}_{ \operatorname{DM}(S)}(1_{S}\langle r\rangle,R\Gamma_{S}(G/T,\mathbb{Q})^{W})\] is an isomorphism. Equivalently, we can write this morphism as \[\bigoplus_{r\in\mathbb{Z}}\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S},1_{ S}\langle r\rangle)\to\bigoplus_{r\in\mathbb{Z}}\operatorname{Hom}_{ \operatorname{DM}(S)}(M_{S}(G/T),1_{S}\langle r\rangle)^{W}\] The motive \(M_{S}(G/T)\) is a direct sum of shifts and twists of the unit of \(S\) and therefore compact (cf. [10, Thm. 11.1.13]). By compactness of \(1_{S}\) and \(M_{S}(G/T)\), the above morphism is an equivalence if and only if the morphism \[\operatorname{Hom}_{\operatorname{DM}(S)}(1_{S},\bigoplus_{r\in\mathbb{Z}}1_{ S}\langle r\rangle)\to\operatorname{Hom}_{\operatorname{DM}(S)}(M_{S}(G/T), \bigoplus_{r\in\mathbb{Z}}1_{S}\langle r\rangle)^{W}\] is an isomorphism. By construction of the rational K-theory spectrum in \(\mathrm{DM}(S)\), we see that \(\bigoplus_{n\in\mathbb{Z}}1_{S}\langle n\rangle\simeq\mathrm{KGL}_{S,\mathbb{Q}}\) (cf. [10, Lem. 14.1.4]). Thus, as \(G/T\) is representable by a scheme (cf. [1, Exp. IX Thm. 5.1]), the right hand side is isomorphic to \(K_{0}(G/T)\cong K_{T}(G)\). Further, the properties of the \(K\)-theory spectrum yield that the induced morphism \(K_{0}(S)\to K_{T}(G)\) is given by the pullback map (cf. [1, SS13.1]). Taking \(W\)-invariants yields a map \[K_{0}(S)\to K_{T}(G)^{W}.\] As \(K_{0}(S)=K_{G}(G)\) and the map above is induced via pullback, we see that it is in fact an isomorphism (cf. [14, Lem. 9.2] - here we need that \(\tilde{B}\) is a field). **Theorem 3.11**.: _Let \(\tilde{B}=\mathrm{Spec}(k)\) be the spectrum of a field and assume that \(S\) is geometrically regular of finite type over \(\tilde{B}\). Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\) and Weyl group \(W\). Assume \(G\) acts on an \(S\)-scheme \(X\) locally of finite type. Then the natural map_ \[[X/T]\to[X/G]\] _is Tate._ _Further, we have_ \[R\Gamma_{S}([X/G]\,,\mathbb{Q})\simeq R\Gamma_{S}([X/T]\,,\mathbb{Q})^{W}.\] Proof.: Let \(N\) be the normalizer of \(T\) in \(G\). We can factor the map in the theorem as \([X/T]\xrightarrow{f}[X/N]\xrightarrow{g}[X/G]\). The first part of the theorem follows immediately from Lemma 3.2 and Proposition 3.10.
For the proof of the second claim, we apply Lemma 3.5 and Proposition 3.10 and get \[(p_{[X/T]\ast}1_{[X/T]})^{W}\simeq(p_{[X/G]\ast}g_{\ast}f_{\ast}1 _{[X/T]})^{W} \simeq p_{[X/G]\ast}g_{\ast}(f_{\ast}1_{[X/T]})^{W}\] \[\simeq p_{[X/G]\ast}g_{\ast}1_{[X/N]}\simeq p_{[X/G]\ast}1_{[X/G]}.\] **Corollary 3.12**.: _Assume that \(S=\mathrm{Spec}(k)\) is the spectrum of a field. Let \(G\) be a reductive \(S\)-group scheme with maximal torus \(T\). Let \(W\) denote the absolute Weyl group6 of \(T\) in \(G\). Assume \(G\) acts on a smooth \(S\)-scheme \(X\). Then we have_ Footnote 6: Let \(\bar{k}\) be an algebraic closure of \(k\); then \(\mathcal{W}\coloneqq N_{G}(T)/T\) is a finite étale group scheme over \(S\) and we set the absolute Weyl group to be \(W\coloneqq\mathcal{W}(\bar{k})\) \[R\Gamma_{S}([X/G]\,,\mathbb{Q})\simeq R\Gamma_{S}([X/T]\,,\mathbb{Q})^{W}.\] _In particular, applying this result to motivic cohomology yields for all \(n,m\in\mathbb{Z}\)_ \[A_{G}^{n}(X,m)_{\mathbb{Q}}\cong A_{T}^{n}(X,m)_{\mathbb{Q}}^{W}.\] Proof.: Any reductive group over \(k\) becomes split after passing to a finite Galois extension \(K/k\) (cf. [12, Exp. XXII Cor. 2.4]). Thus, let \(K\) be such an extension, so that \(T_{K}\) is a split maximal torus. Then we have the following diagram with pullback squares. As \(K/k\) is a finite Galois extension, the morphisms \(p_{K}\) and thus also \(f\) are \(H\coloneqq\operatorname{Gal}(K/k)\)-torsors. Then Lemma 3.5 yields the equivalence \[p_{G*}1_{[X/G]}\simeq(p_{K*}p_{K}^{*}p_{G*}1_{[X/G]})^{H}.\] As the diagram above has cartesian squares, we can use smooth base change to see that \(p_{K}^{*}p_{G*}\simeq p_{G_{K}*}g^{*}\). But by Theorem 3.11, we have \[p_{G_{K}*}g^{*}1_{[X/G]}\simeq R\Gamma_{K}([X_{K}/G_{K}]\,,\mathbb{Q})\simeq R \Gamma_{K}([X_{K}/T_{K}]\,,\mathbb{Q})^{W}.\] Commutativity of the above diagram yields \(p_{K*}R\Gamma_{K}([X_{K}/T_{K}]\,,\mathbb{Q})\simeq p_{T*}f_{*}1_{[X_{K}/T_{K}]}\). Thus, we have \[p_{G*}1_{[X/G]}\simeq((p_{T*}f_{*}1_{[X_{K}/T_{K}]})^{W})^{H}.\] As limits commute with limits, we may write the right hand side as \((p_{T*}(f_{*}f^{*}1_{[X/T]})^{H})^{W}\) and again by Lemma 3.5, we have \((f_{*}f^{*}1_{[X/T]})^{H}\simeq 1_{[X/T]}\), concluding the proof. The result about motivic cohomology follows from Remark 3.4 by using [10, Thm. 2.2.10], which proves that motivic cohomology for smooth quotient stacks is computed by the higher equivariant Chow groups of Edidin-Graham.7 Footnote 7: In _loc.cit._ they assume some properties on the groups and on \(X\), as Edidin-Graham need these assumptions to compare higher Chow theory of stacks and equivariant higher Chow theory [11, Prop. 13 (b)]. The assumptions in _loc.cit._ are needed as Bloch only shows the existence of a long exact sequence for higher Chow groups in the case where \(X\) is _quasi-projective_. This result was extended by Levine to all _separated_ schemes (cf. [10]). Thus, the comparisons of Edidin-Graham and hence also of Richarz-Scholbach go through in the case of the corollary. ## 4 Torsors under split maximal tori Let us fix a split reductive \(S\)-group scheme \(G\) and \(T\) a split maximal torus of rank \(r\) inside \(G\). In this section, we want to understand the motivic homotopy theory of torsors under split maximal tori. More precisely, let us consider the following situation. Let \(X\to Y\) be a \(T\)-torsor of Artin stacks. We want to understand the relation between \(M_{S}(X)\) and \(M_{S}(Y)\). Let us recall the classical case of Chow theory and \(K\)-theory.
For this paragraph, let us assume that \(X\) and \(Y\) are smooth Artin stacks over \(\operatorname{Spec}(k)\), where \(k\) is a field. Then the group \(\hat{T}\) of characters of \(T\) acts on \(A^{\bullet}(Y)\) in the following way. Let \(\chi\in\hat{T}\) be a character and consider its associated \(1\)-dimensional representation \(\kappa(\chi)\). The quotient \(L_{\chi}\coloneqq X\times^{T}\kappa(\chi)\) is representable by a line bundle over \(Y\). Multiplication with the first Chern class of \(L_{\chi}\) yields an action of \(\hat{T}\) on \(A^{\bullet}(Y)\). Then for Chow rings it is known that \[A^{\bullet}(X)\cong A^{\bullet}(Y)/\hat{T}A^{\bullet}(Y).\] Our goal is to extend this result to motivic homotopy theory, so that it generalizes the result about Chow theory and yields a similar statement for \(K\)-theory. Even though we work with Beilinson motives, we will remark in the end how to extend this to integral \(K\)-theory under some assumptions on \(X\) and \(Y\). ### The motive of \(T\)-torsors Let us denote the character group of \(T\) with \(\hat{T}\). Let \(X\to Y\) be a \(T\)-torsor of Artin stacks over \(S\) and \(\chi\in\hat{T}\) a character. Let \(\mathbb{G}_{\mathrm{m},S}\) act via left multiplication on \(\mathbb{A}^{1}_{S}\). Then \(T\) acts via \(\chi\) on \(\mathbb{A}^{1}_{S}\) and thus \(X\times^{T}\mathbb{A}^{1}_{S}\to X/T\cong Y\) yields a line bundle over \(Y\). The action of the first Chern class of \(X\times^{T}\mathbb{A}^{1}_{S}\) on the motivic cohomology will be described by a Gysin sequence (cf. Proposition 4.2). **Notation 4.1**.: In the following we want to split up the \(T\)-torsor \(f\colon X\to Y\) into a sequence of \(\mathbb{G}_{\mathrm{m},S}\)-torsors. Note that by splitness \(T\cong\mathbb{G}_{\mathrm{m},S}^{\ \ r}\). Fixing a numbering of the \(\mathbb{G}_{\mathrm{m},S}\)-components of \(T\), we can embed for any \(1\leq k\leq r\) the product \(\mathbb{G}_{\mathrm{m},S}^{\ \ k}\) into \(T\) by \(\mathrm{id}_{\mathbb{G}_{\mathrm{m},S}^{\ \ k}}\times 1^{r-k}\). Then \(\mathbb{G}_{\mathrm{m},S}^{\ \ k}\) acts on \(X\) via this embedding. We get a sequence \[X\to X/\,\mathbb{G}_{\mathrm{m},S}\to X/\,\mathbb{G}_{\mathrm{m},S}^{\ \ 2}\to\cdots\to X/T\cong Y\] of \(\mathbb{G}_{\mathrm{m},S}\)-torsors. We denote the induced maps \(X/\,\mathbb{G}_{\mathrm{m},S}^{\ \ i}\to Y\) with \(f_{i}\). **Proposition 4.2**.: _Let \(X\to Y\) be a \(T\)-torsor of smooth Artin stacks over \(S\). Then, there exists a filtration_ \[M_{Y}(X)=M_{0}\to M_{1}\to\cdots\to M_{r}=1_{Y}\] _in \(\mathrm{DM}(S)\), where \(M_{i}\coloneqq M_{Y}(X_{i})\) with \(X_{i}\coloneqq X/\,\mathbb{G}_{\mathrm{m},S}^{\ \ i}\), such that the cofiber of \(M_{Y}(X_{i-1})\to M_{Y}(X_{i})\) is given by \(M_{Y}(X_{i})\langle 1\rangle\) and the map \(M_{Y}(X_{i})\to M_{Y}(X_{i})\langle 1\rangle\) is induced by multiplication with \(c_{1}(X_{i-1}\times^{\mathbb{G}_{\mathrm{m}}}\mathbb{A}^{1}_{S})\)._ Proof.: This follows by successively using [11, Prop. 2.32]. But for completeness let us give a proof by recalling the argument. The morphism \(X_{i-1}\to X_{i}\) is a \(\mathbb{G}_{\mathrm{m}}\)-torsor. In particular, the scheme \(\mathcal{L}_{i}\coloneqq X_{i-1}\times^{\mathbb{G}_{\mathrm{m},S}}_{S}\mathbb{ A}^{1}_{S}\) is a line bundle over \(X_{i}\). Let \(s\colon X_{i}\hookrightarrow\mathcal{L}_{i}\) denote the zero section. Certainly, the complement of the closed immersion \(s\) is isomorphic to \(X_{i-1}\).
Then the Gysin sequence of Lemma 2.10 yields a fiber sequence \[M_{Y}(X_{i-1})\to M_{Y}(\mathcal{L}_{i})\simeq M_{Y}(X_ {i})\xrightarrow{\varphi}M_{Y}(X_{i})\langle 1\rangle.\] By construction of the Gysin sequence, \(\varphi\) is given by multiplication with \(c_{1}(\mathcal{L}_{i})\). This concludes the proof. **Corollary 4.3**.: _Let \(f\colon X\to Y\) be a \(T\)-torsor of smooth Artin stacks over \(S\). Then \(f\) is a Tate map._ Proof.: Proposition 4.2 implies that \(M_{Y}(X)\) is a successive extension of Tate twists of \(1_{Y}\). Thus, the result follows from Remark 2.13. ### Motivic cohomology of \(T\)-torsors For any Artin stack \(X\) over \(S\), we will denote its motivic cohomology with \[H^{p}(X,\mathbb{Q}(n))\coloneqq\operatorname{Hom}_{\operatorname{DM}(S)}(M_{S} (X),\mathbb{Q}(n)[p]).\] If \(X\) is representable by a smooth \(S\)-scheme, then we have \(H^{p}(X,\mathbb{Q}(n))\cong K_{2n-p}(X)^{(n)}\). If \(S=\operatorname{Spec}(k)\) is the spectrum of a field and \(X\) is smooth over \(S\), we have \(H^{p}(X,\mathbb{Q}(n))\cong A^{n}(X,2n-p)\). Note that for a smooth Artin stack \(X\) over \(S\), the motivic cohomology vanishes automatically in certain degrees by descent and the vanishing of negative \(K\)-theory for regular schemes, i.e. we have \(H^{p}(X,\mathbb{Q}(n))\cong 0\) for \(p>2n\). **Notation 4.4**.: Consider \(\chi\in\hat{T}\). The associated line bundle \(L_{\chi}\coloneqq X\times_{S}^{T}\mathbb{V}(\mathcal{E}_{\chi})\) yields a map \(H^{p+2}(Y,\mathbb{Q}(n+1))\to H^{p}(Y,\mathbb{Q}(n))\), by multiplication with the Chern class of \(L_{\chi}\). We denote the image of this map with \(c_{1}(L_{\chi})H^{p}(Y,\mathbb{Q}(n))\). **Proposition 4.5**.: _In the setting of Proposition 4.2, let us further fix an \(n\in\mathbb{Z}\). Then, we have_ \[H^{2n}(X,\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/\hat{T}H^{2n}(Y,\mathbb{Q }(n)).\] Proof.: First let us note that it is enough8 to show that Footnote 8: Any character is generated by primitive characters and the corresponding \(1\)-dimensional representation is given by the associated tensor product. \[H^{2n}(X,\mathbb{Q}(n))=H^{2n}(Y,\mathbb{Q}(n))/\langle c_{1}(X\times_{S}^{T} \mathbb{V}(\mathcal{E}_{\chi_{i}}))H^{2n}(Y,\mathbb{Q}(n))\rangle,\] where \(\chi_{i}\) is a primitive character in \(\hat{T}\). Let \(X=X_{0}\to X_{1}\to\dots\to X_{r}=Y\) be the sequence of Proposition 4.2. For each \(0\leq i\leq r\) this yields a long exact sequence on motivic cohomology \[\dots\to H^{2n+2}(X_{i},\mathbb{Q}(n+1)) \to H^{2n}(X_{i},\mathbb{Q}(n))\] \[\to H^{2n}(X_{i-1},\mathbb{Q}(n))\to H^{2n+3}(X_{i},\mathbb{Q}(n+1 ))\to\dots.\] We have \(H^{2n+3}(X_{i},\mathbb{Q}(n+1))=0\) and thus get an exact sequence of the form \[H^{2n+2}(X_{i},\mathbb{Q}(n+1))\xrightarrow{a}H^{2n}(X_{i},\mathbb{Q}(n)) \xrightarrow{b}H^{2n}(X_{i-1},\mathbb{Q}(n))\to 0.\] The map \(b\) is the usual pullback on motivic cohomology. The map \(a\) is induced by multiplication with the Chern class of the line bundle \(\mathcal{L}_{i}=X_{i-1}\times_{S}^{\mathbb{G}_{m}}\mathbb{V}(\mathcal{E}_{\chi_{i}})\). As \(X_{r}=Y\), we have \(H^{2n}(X_{r-1},\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/c_{1}(\mathcal{L}_ {r})H^{2n}(Y,\mathbb{Q}(n))\). Hence, inductively we see that \[H^{2n}(X,\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/\langle c_{1}(\mathcal{ L}_{i})H^{2n}(Y,\mathbb{Q}(n))\rangle_{1\leq i\leq r}.\] We are left to show that \(c_{1}(\mathcal{L}_{i})H^{2n}(Y,\mathbb{Q}(n))=c_{1}(X\times_{S}^{T}\mathbb{V}(\mathcal{E}_{\chi_{i}}))H^{2n}(Y,\mathbb{Q}(n))\). For this let us start with \(i=r\).
Then by construction \(X\times_{S}^{T}\mathbb{V}(\mathcal{E}_{\chi_{r}})\cong X_{r-1}\times_{S}^{\mathbb{G}_{m }}\mathbb{V}(\mathcal{E}_{\chi_{r}})\). Inductively, we may replace \(Y\) by \(X_{i}\), where the claim again follows by construction. **Corollary 4.6**.: _Let \(S=\operatorname{Spec}(k)\) be the spectrum of a field and let \(X\to Y\) be a \(T\)-torsor of smooth Artin stacks over \(S\). Then_ \[A^{\bullet}(X)_{\mathbb{Q}}\cong A^{\bullet}(Y)_{\mathbb{Q}}/\hat{T}A^{\bullet }(Y)_{\mathbb{Q}}.\] Proof.: This follows immediately from Proposition 4.5. **Remark 4.7**.: Proposition 4.2 and Proposition 4.5 can be extended to other cohomology theories in the following way. Let us fix a \(T\)-torsor \(X\to Y\) of smooth Artin stacks. 1. (Rational etale localized cohomology theories) Let \(M\in\operatorname{SH}(S)_{\mathbb{Q},\mathrm{\acute{e}t}}\) be an oriented \(E_{\infty}\)-ring spectrum and let us denote its pullback to any smooth Artin stack \(Z\) with \(M_{Z}\). The orientation of \(M\) yields a Chern class map \[c_{1}\colon\operatorname{Pic}(Z)\to H^{2}(Z,M(1))\coloneqq\operatorname{Hom}_{ \operatorname{SH}(Z)}(1_{Z},M_{Z}(1)[2]).\] In the same fashion as before, for any character \(\chi\in\hat{T}\), we can define \(c_{1}(L_{\chi})H^{n}(Y,M)\). Assume there exists a \(p\in\mathbb{Z}\) such that \(M\) satisfies \[H^{m}(Z,M)\coloneqq\operatorname{Hom}_{\operatorname{SH}(Z)}(1_{Z},M_{Z}[m])=0\] for all \(m>p\). Then \[H^{p}(X,M)\cong H^{p}(Y,M)/\langle c_{1}(X\times_{S}^{T}\mathbb{V}(\mathcal{E} _{\chi_{i}}))H^{p}(Y,M)\rangle.\] If we take for example \(M=\operatorname{KGL}_{\mathbb{Q},S}\), the rational \(K\)-theory spectrum in \(\operatorname{SH}(S)\), then we get \[K_{0}(X)^{\mathrm{\acute{e}t}}_{\mathbb{Q}}\cong K_{0}(Y)^{\mathrm{\acute{e}t}}_{\mathbb{Q}}/\langle c_{1}(X\times_{S}^{T}\mathbb{V }(\mathcal{E}_{\chi_{i}}))K_{0}(Y)^{\mathrm{\acute{e}t}}_{ \mathbb{Q}}\rangle,\] where \(K_{0}(-)^{\mathrm{\acute{e}t}}_{\mathbb{Q}}\) denotes the etale localized rational \(K\)-theory. 2. (Integral \(K\)-theory) The extension to integral \(K\)-theory is more subtle, as we have to restrict ourselves to certain algebraic stacks9 to make sense of the stable homotopy category. For simplicity, we may assume that \(X=[X^{\prime}/H]\) and \(Y=[Y^{\prime}/F]\) are represented by quotients of quasi-projective schemes by diagonalizable group schemes. Then there is a well defined notion of a stable homotopy category \(\operatorname{SH}(X)\) resp. \(\operatorname{SH}(Y)\) together with a functorial \(E_{\infty}\)-ring object KGL that represents equivariant \(K\)-theory (cf. [10]). Further, Bott-periodicity yields an orientation on KGL. In particular, using that \(\operatorname{KH}(X)\) and \(\operatorname{KH}(Y)\) are connective10 (cf. [10, Thm. 5.7]), we see that Corollary 4.6 holds for integral \(K_{0}\) in this case, i.e. \[K_{0}(X)\cong K_{0}(Y)/\hat{T}K_{0}(Y).\] Footnote 10: This holds Nisnevich locally by _loc.cit._ and by descent for \(\operatorname{KH}\). This result can be glued to the class of so-called _scalloped stacks_ (cf. [11] for the notion of scalloped stacks and the construction of \(\operatorname{SH}\)). **Remark 4.8**.: It is not hard to see that both Proposition 4.5 and Corollary 4.6 can be upgraded to integral coefficients and genuine \(K_{0}\), i.e. not completed, if \(X\) and \(Y\) in the assumptions are assumed to be scalloped stacks.
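As a sanity check of Corollary 4.6, here is a small worked example of our own (not taken from the references): the standard \(\mathbb{G}_{\mathrm{m}}\)-torsor \(X=\mathbb{A}^{n+1}\setminus\{0\}\to Y=\mathbb{P}^{n}\) over a field \(k\). Here \(\hat{T}\cong\mathbb{Z}\) and a generator acts through multiplication by \(h\coloneqq c_{1}(\mathcal{O}(1))\) (up to sign, depending on the convention for \(L_{\chi}\)), so the corollary predicts
\[A^{\bullet}(\mathbb{A}^{n+1}\setminus\{0\})_{\mathbb{Q}}\;\cong\;\mathbb{Q}[h]/(h^{n+1})\,\big/\,(h)\;\cong\;\mathbb{Q},\]
which agrees with the classical computation: by the localization sequence for \(\{0\}\subset\mathbb{A}^{n+1}\) and homotopy invariance, the rational Chow ring of \(\mathbb{A}^{n+1}\setminus\{0\}\) is \(\mathbb{Q}\) concentrated in degree \(0\).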
Brokemper has a result on Chow groups of split reductive groups modulo Frobenius conjugation by a Levi. This can be used to understand the Chow groups of the classifying stacks of finite groups of Lie-type. Remark 4.7 enables us to get an analogous result for rational \(K_{0}\). **Proposition 4.9**.: _Let \(S=\operatorname{Spec}(k)\) be the spectrum of a field. Let \(G\) be a split reductive \(S\)-group scheme with split maximal torus \(T\). Let \(\varphi\colon L\to M\) be an isogeny, where \(L\) resp. \(M\) are Levi-components of parabolic subgroups \(P\) resp. \(Q\) of \(G\). Assume that \(T\subseteq L\). Let \(g_{0}\in G(S)\) be such that \(\varphi(T)={}^{g_{0}}T\). Let \(\tilde{\varphi}\colon T\to T\) denote the isogeny \(\varphi\) followed by \(g_{0}^{-1}\)-conjugation. Further, let us denote the Weyl group of \(T\) in \(L\) with \(W_{L}=W(T,L)\) and of \(T\) in \(G\) with \(W_{G}\). Then we have_ \[K_{0}([G/_{\varphi}L])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}^{W_{L}}/(f-\tilde{ \varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}}).\] Proof.: For \(K_{0}\), we could not produce an analogue of Corollary 3.12 using Theorem 3.11 and thus have to use a result of Krishna on equivariant \(G\)-theory11 (cf. [11, Lem. 9.2]). For completeness we recall the main argument. Footnote 11: As we work with smooth stacks, \(G\)-theory and \(K\)-theory agree (cf. [11, Prop. A.1]). First, we may replace \(Q\) and \(M\) by \({}^{g_{0}}Q\) and \({}^{g_{0}}M\) and assume that \(\varphi(T)=T\). In particular, \(\tilde{\varphi}\)-conjugation of \(G\) by \(T\) is just \(\varphi\)-conjugation. Now we embed \(T\) into \(T\times_{S}T\) by \(t\mapsto(t,\varphi(t))\). Let \(T\times_{S}T\) act on \(G\) by \((t,t^{\prime}).g\coloneqq tgt^{\prime-1}\). This yields a morphism \([G/_{\varphi}T]\to[G/T\times_{S}T]\), which is a \(T\cong(T\times_{S}T)/T\)-torsor. Thus, by Remark 4.7, we have \[K_{0}([G/_{\varphi}T])_{\mathbb{Q}}\cong K_{0}([G/T\times_{S}T])_{\mathbb{Q}}/ \hat{T}K_{0}([G/T\times_{S}T])_{\mathbb{Q}}.\] By homotopy invariance, we have \(K_{0}([G/T\times_{S}T])_{\mathbb{Q}}\cong K_{0}^{T}(G/B)_{\mathbb{Q}}\). Therefore, we are reduced to classical statements about \(T\)-equivariant \(K\)-theory of flag varieties (cf. [10]) and get \[K_{0}([G/_{\varphi}T])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}/(f-\tilde{\varphi} f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}}).\] It follows from [11, Lem. 9.2] that \[K_{0}([G/_{\varphi}L])_{\mathbb{Q}}\cong K_{0}([G/_{\varphi}T])_{\mathbb{Q}}^ {W_{L}}.\] Thus, it suffices to show that \(IR(T)_{\mathbb{Q}}^{W_{L}}=IR(T)_{\mathbb{Q}}\cap R(T)_{\mathbb{Q}}^{W_{L}}\), where \(I=(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}})\), but this follows from the faithful flatness of \(R(T)^{W_{L}}\hookrightarrow R(T)\) (cf. [12, Thm. 1.2] - see also the proof of [1, Prop. 1.2] resp. [1, Prop. 2.3.2] for a detailed argument in the Chow group case). ## 5 Motivic cohomology of quotients up to isogeny In the following let \(\tilde{B}=\operatorname{Spec}(k)\) be the spectrum of a field and assume \(S\) is geometrically regular and of finite type over \(\tilde{B}\). We will first answer our main question from the introduction (1.2) in the case of split reductive groups. Afterwards, we will show how to extend these arguments to arbitrary reductive groups over \(k\). In the end, we want to give some thoughts on generalizations of these results. ### The case of split reductive group schemes Let \(G\) be a split reductive \(S\)-group scheme, \(P\) resp. \(Q\) be parabolics inside \(G\) with Levi-components \(L\) resp.
\(M\). Let \(\varphi\colon L\to M\) be an isogeny. Then \(L\) acts on \(G\) via \((l,g)\mapsto lg\varphi(l)^{-1}\). We are interested in the quotient of this action, which we denote by \([G/_{\varphi}L]\), or rather its motive. To do so, we follow the idea of Brokemper in the proof of [1, Prop. 1.2]. Let \(T\) be a split maximal torus of \(G\) contained in \(L\). As \(\varphi\) is an isogeny, the image of \(T\) is again a split maximal torus. In particular, up to conjugation by an element \(g_{0}\in G(S)\), we may identify \(T\) with \(\varphi(T)\). The \(g_{0}\)-conjugation of \(G\) induces an isomorphism \(G\to G\) that is \(L\)-equivariant, where \(L\) acts on the right hand side via \((l,g)\mapsto lg\,g_{0}^{-1}\varphi(l)^{-1}g_{0}\). In particular, after replacing \(M\) resp. \(Q\) by their \(g_{0}^{-1}\)-conjugation, we may assume that \(\varphi(T)=T\). Then we have the following embedding \(T\hookrightarrow T\times T\), via \(t\mapsto(t,\varphi(t))\). The quotient under this embedding is \(T\times T/T\cong T\). Thus, the naturally induced morphism \([G/_{\varphi}T]\to[G/T\times T]\), where \(T\times T\) acts on \(G\) via \((t,t^{\prime},g)\mapsto tgt^{\prime-1}\), is a \(T\)-torsor. This leaves us with the following picture. For the morphism \(a\), we note that \(T\) is a split maximal torus inside \(L\) and \(L\) is reductive. Thus, we can apply Theorem 3.11 and see that \(a\) is Tate and further, we have \(R\Gamma_{S}([G/_{\varphi}L]\,,\mathbb{Q})\simeq R\Gamma_{S}([G/_{\varphi}T] \,,\mathbb{Q})^{W_{L}}\), where \(W_{L}\) denotes the Weyl group of \(T\) in \(L\). The morphism \(b\) is by the above a \(T\)-torsor. Therefore, we can use Corollary 4.3 to see that \(b\) is also Tate. Further, we can compute the motive resp. the motivic cohomology of \([G/_{\varphi}T]\) via the motive resp. the motivic cohomology of \([G/T\times T]\) using Proposition 4.2 and Proposition 4.5. But by invariance under extensions of unipotent groups, we can identify \(\operatorname{DM}([G/T\times T])\) with \(\operatorname{DM}([T\backslash G/B])\) (cf. Lemma 3.9). Therefore, with all of the above we see that the \(T\)-equivariant motivic cohomology resp. motive of the flag variety \(G/B\) yields results about the motivic cohomology resp. the motive of \([G/_{\varphi}L]\). But the author has shown12 in [23] that the motive of \([T\backslash G/B]\) is computed by \(M_{S}(G/B)\otimes M_{S}(BT)\). If \(S=\operatorname{Spec}(k)\), we have seen that \[M_{S}(\mathrm{B}T)\cong\bigotimes_{i=1}^{r}M_{S}(\mathrm{B}\,\mathbb{G}_{\mathrm{m}})=\bigotimes_{i=1}^{r}\bigoplus_{j\geq 0}\mathbb{Q}_{S}\langle j\rangle,\] which is completed Tate (cf. Example 2.8). As the motive of the flag variety \(G/B\) is also Tate (cf. [23]), we see that \(M_{S}([T\backslash G/B])\) is completed Tate. Summarizing the above yields the following theorem. **Theorem 5.1**.: _We have the following diagram of Tate maps_ _Further, if \(S=\operatorname{Spec}(k)\) the motives \(R\Gamma_{S}([G/_{\varphi}L]\,,\mathbb{Q})\) and \(M_{S}([G/_{\varphi}L])\) are completed Tate-motives in \(\operatorname{DM}(S)\). And, we can compute the motivic cohomology of \([G/_{\varphi}L]\) as_ \[A^{n}([G/_{\varphi}L])_{\mathbb{Q}}\cong\left(A_{T}^{n}(G/B)_{\mathbb{Q}}/ \hat{T}A_{T}^{n}(G/B)_{\mathbb{Q}}\right)^{W_{L}}\] Proof.: The first assertion is the discussion above. The second resp. third assertion follows again from the discussion above and Remark 2.13 resp. Corollary 4.6.
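To make the last isomorphism concrete, here is a small worked example of our own; it uses the explicit form of the quotient recalled in Remark 5.3 below and anticipates Example 5.4. Take \(G=L=\operatorname{SL}_{2}\) over \(S=\operatorname{Spec}(\mathbb{F}_{q})\) and let \(\varphi\) be the \(q\)-Frobenius. With \(t\) a generator of \(\hat{T}\) and the Weyl group acting by \(t\mapsto-t\), we get
\[S^{W_{G}}=\mathbb{Q}[t]^{\mathbb{Z}/2}=\mathbb{Q}[c],\quad c=t^{2},\qquad\varphi t=qt\ \Rightarrow\ \varphi c=q^{2}c,\]
\[A^{\bullet}([G/_{\varphi}G])_{\mathbb{Q}}\cong\mathbb{Q}[c]/\big(c-q^{2}c\big)=\mathbb{Q}[c]/\big((1-q^{2})c\big)\cong\mathbb{Q}.\]
Since \([G/_{\varphi}G]\simeq\mathrm{B}\,G^{F}\) (Example 5.4 below) and the rational Chow ring of the classifying stack of a finite group is \(\mathbb{Q}\) in degree \(0\) (positive degrees are torsion by a transfer argument), this is the expected answer.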
**Remark 5.2**.: The last isomorphism in Theorem 5.1 is also valid in the case where \(S\) is not the spectrum of a field, after replacing the Chow groups with the right motivic cohomology group as in Proposition 4.5. **Remark 5.3**.: Brokemper has shown that one can give a more explicit computation of the Chow ring of \([G/_{\varphi}L]\) using the computations of Brion [1] (cf. [1, Prop. 1.2]). To be more precise, we can write the last isomorphism of Theorem 5.1 as \[A^{\bullet}([G/_{\varphi}L])_{\mathbb{Q}}\cong S^{W_{L}}/(f-\varphi f\mid f\in S _{+}^{W_{G}}),\] where \(S=\operatorname{Sym}_{\mathbb{Q}}(\hat{T})\cong A_{T}^{\bullet}(*)_{\mathbb{Q}}\), \(S_{+}\) are the elements of positive degree and \(W_{G}\) is the Weyl group of \(T\) in \(G\). A more detailed computation can be found in the proof of [1, Prop. 1.2]. In particular, our motivic result recovers, up to computations of loc. cit. resp. [1], the rational version of Brokemper's result about \(A_{L}^{\bullet}(G)\). We want to give two motivating examples of quotients \([G/_{\varphi}L]\) as above that appear naturally, the classifying stack of finite groups of Lie-type and the stack of \(G\)-zips, and apply Theorem 5.1 to see that their motives are Tate. Both are examples in characteristic \(p>0\). In the last section, we want to use Theorem 5.1 to give an idea how we want to approach geometric representation theory of finite groups of Lie-type \(G^{F}\) by relating it motivically to geometric representation theory of the Langlands dual of \(G\) (see below for the notation). **Example 5.4**.: Let \(S=\operatorname{Spec}(\mathbb{F}_{q})\) be the spectrum of a finite field of characteristic \(p>0\). We set \(\varphi\colon G\to G\) to be the \(q\)-Frobenius. This is an isogeny and thus, we can apply Theorem 5.1 in this setting. Further, the stack \([G/_{\varphi}G]\) is isomorphic to \(\operatorname{B}G^{F}\), where \(G^{F}\) is the stabilizer group scheme of the neutral element (cf. [1, Lem. 2.1]). It is well known that \(G^{F}(\overline{\mathbb{F}}_{q})\cong G(\mathbb{F}_{q})\), where \(\overline{\mathbb{F}}_{q}\) denotes an algebraic closure of \(\mathbb{F}_{q}\). Thus, we see that the motive of the classifying stack of a finite group of Lie-type is Tate. Further, we are able to relate Tate-motives of \([T\backslash G/B]\) with Tate-motives of \(\operatorname{B}G^{F}\) via the diagram in Theorem 5.1. One of Brokemper's applications of his computations is the computation of the Chow ring of the stack of \(G\)-zips. In a similar fashion we will apply the above results and show that the motive of the stack of \(G\)-zips over a field is completed Tate. **Example 5.5**.: Let \(k\) be a field of characteristic \(p>0\) and let \(S=\operatorname{Spec}(k)\). Let \(G,P,Q\) be as above. Let us denote the unipotent radical of \(P\) resp. \(Q\) with \(R_{u}(P)\) resp. \(R_{u}(Q)\). Further, let \(\varphi\colon P/R_{u}(P)\to Q/R_{u}(Q)\) be an isogeny. The datum \(\mathcal{Z}\coloneqq(G,P,Q,\varphi)\) is called an _algebraic zip-datum_. To every algebraic zip-datum like above, we can associate the group \[E_{\mathcal{Z}}\coloneqq\{(p,q)\in P\times Q\mid\varphi(\bar{p})=\bar{q}\}.\] The group \(E_{\mathcal{Z}}\) acts on \(G\) via conjugation \((p,q).g\coloneqq pgq^{-1}\). The quotient stack \(G\text{-Zip}\coloneqq[G/E_{\mathcal{Z}}]\) is called the stack of _G-zips_. There are also alternative constructions using a Tannakian formalism on the stack of \(F\)-zips (cf. [20]). In _op.cit._ there is also an explicit description of the points of \(G\)-Zip.
Let \(L\subseteq P\) be a Levi-component of \(P\). Then, as seen in the proof of [1, Thm. 2.4.4], there is a split exact sequence \[1\to R_{u}(P)\times R_{u}(Q)\to E_{\mathcal{Z}}\to L\to 1,\] where the splitting is induced by \(L\hookrightarrow E_{\mathcal{Z}}\), \(l\mapsto(l,\varphi(l))\). Therefore, by homotopy invariance, we have \(M_{S}(G\text{-Zip})\simeq M_{S}([G/_{\varphi}L])\), which is completed Tate by Theorem 5.1 and the discussion before the theorem. ### Arbitrary reductive groups over fields For this section, we will assume that \(S=\operatorname{Spec}(k)\) is the spectrum of a field and \(G\) is a reductive \(k\)-group scheme with maximal torus \(T\). Let \(\bar{k}\) denote an algebraic closure of \(k\). The _Weyl group_\(\mathcal{W}\coloneqq\operatorname{Norm}_{G}(T)/T\) is a finite etale group scheme and \(W\coloneqq\mathcal{W}(\bar{k})\) is a finite group, called the _absolute Weyl group_. Note that if \(G\) is split reductive, then \(\mathcal{W}\) is the constant \(k\)-group scheme associated to \(W\). Any reductive group becomes split after passing to a finite Galois extension of \(k\) (cf. [12, Exp. XXII Cor. 2.4]). Let us set \(H\coloneqq G\otimes_{k}K\), where \(K/k\) is a finite Galois extension such that the maximal torus \(T_{K}\coloneqq T\otimes_{k}K\) is split. Let us denote by \(\operatorname{Gal}(K/k)\) the Galois group of \(K/k\). We want to extend our results of Section 5 to \(G\). To do so, we use that torsors under finite groups behave nicely (in the sense of Section 3.1) and that the natural projection \(H\to G\) is a \(\operatorname{Gal}(K/k)\)-torsor. Now let us consider the setting of Theorem 5.1, i.e. we let \(P,Q\) be parabolics in \(G\) with respective Levi parts \(L,M\). Further, let \(\varphi\colon L\to M\) be an isogeny. The base changes of \(P\) and \(Q\) to \(K\) stay parabolic in \(H\), and those of \(L\) and \(M\) stay the corresponding Levi components. Also, the base change of an isogeny is an isogeny and thus we have an action of \(L_{K}\) on \(H\) via \(\varphi_{K}\coloneqq\varphi\otimes\operatorname{id}_{K}\)-conjugation. This yields the following pullback diagram. The natural map \(p\colon\left[H/_{\varphi_{K}}L_{K}\right]\to\left[G/_{\varphi}L\right]\) is a \(\operatorname{Gal}(K/k)\)-torsor. By Lemma 3.5, we thus have that the map \(\operatorname{id}\to\left(p_{*}p^{*}\right)^{\operatorname{Gal}(K/k)}\) is an equivalence. Therefore, we have \[R\Gamma_{S}(\left[G/_{\varphi}L\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[H/ _{\varphi_{K}}L_{K}\right],\mathbb{Q})^{\operatorname{Gal}(K/k)}\] and Theorem 5.1 yields that \(R\Gamma_{S}(\left[G/_{\varphi}L\right],\mathbb{Q})\) is Tate. By smoothness of \(\left[G/_{\varphi}L\right]\) over \(S\), dualizing yields that \(M_{S}(\left[G/_{\varphi}L\right])\) is also Tate (cf. Remark 2.13). We can even go further. As in Theorem 5.1, we have a commutative diagram with pullback squares. Again, by Lemma 3.5, the identity in \(\operatorname{DM}(\left[G/_{\varphi}T\right])\) is equivalent to \(\left(g_{*}g^{*}\right)^{\operatorname{Gal}(K/k)}\). As \(*\)-push/pull commutes with limits, we can compute \(a_{*}\) resp. \(b_{*}\) as \(\left(f_{*}a_{*}^{\prime}\right)^{\operatorname{Gal}(K/k)}\) resp. \(\left(h_{*}b_{*}^{\prime}\right)^{\operatorname{Gal}(K/k)}\). By Theorem 5.1, \(a^{\prime}\) resp. \(b^{\prime}\) is Tate; by Lemma 3.2, \(f\) and \(h\) are also Tate; and as taking invariants under a finite group is given by extensions, we see that \(a\) and \(b\) are both Tate maps.
Finally, let us summarize our discussion above in the following corollary. **Corollary 5.6**.: _In the situation above, we have the following diagram of Tate maps_ _Further, the motives \(R\Gamma_{S}(\left[G/_{\varphi}L\right],\mathbb{Q})\) and \(M_{S}(\left[G/_{\varphi}L\right])\) are completed Tate-motives in \(\operatorname{DM}(S)\)._ Proof.: See discussion above. ### Generalizations In this section, we want to give an overview of the integral version of Theorem 3.11 and Theorem 5.1. We want to mention four questions that came up naturally during the work on this article that we want to address in the future. 1. Under what assumptions can we transport all of these results to motives defined via Spitzweck's motivic cohomology ring spectrum? 2. Is it enough to invert the residue characteristics of the base for all of our results? 3. Can the results of this article be extended to other cohomology theories? 4. Is it possible to extend Theorem 5.1 to arbitrary reductive group schemes over a \(k\)-scheme \(S\), where \(k\) is a field? Let us go a bit more into detail. So, let \(S=\operatorname{Spec}(k)\) be the spectrum of a field and \(X\) be a finite type \(S\)-scheme. Further, let \(G\) be a split reductive group over \(k\) with split maximal torus \(T\) and associated Weyl group \(W\). Assume \(G\) acts on \(X\). Question (1) is rather straightforward. For Chow groups, one can see that \(A^{\bullet}_{G}(X)\cong A^{\bullet}_{T}(X)^{W}\) holds integrally if and only if \(G\) is special. But if we assume that \(G\) is special, then any \(G\)-torsor is trivial _Zariski_-locally. If we define integral motives \(\operatorname{DM}_{\mathbb{Z}}\) on prestacks via right Kan extension of Spitzweck motives (cf. [10]), we see that \(\operatorname{DM}_{\mathbb{Z}}\) satisfies Nisnevich and in particular Zariski descent. Thus, up to technicalities, we expect that all the arguments after Section 3.2 go through. Crucially, Section 3.1 has to be worked out in this context, as both Ayoub and Cisinski-Deglise use rational coefficients. This is not surprising, as one can see that the particular motivic behavior of torsors under finite groups should yield etale descent. We hope that in the case of special groups there is a workaround. Question (2) is addressed in a similar fashion: instead of Spitzweck motives, we can use etale motives (cf. [12]). As these still satisfy etale descent, again all the arguments after Section 3.2 should go through. We expect that inverting the residue characteristics should be enough to recover the statements of Section 3.1 in this case. But still one needs to prove the necessary results, which we expect to be rather straightforward. Question (3) needs more careful treatment. The results about \(T\)-torsors of Section 4 can be extended to other oriented cohomology theories, as we have seen. Section 3.2 is more difficult as it boils down to vanishing results on cohomology theories. The key part is Proposition 3.10. We expect that this proposition still holds for modules over the etale \(K\)-theory ring spectrum but we did not check this thoroughly. If one wants to work with genuine \(K\)-theory, then this statement cannot be proven via etale descent and thus needs a more careful treatment. For other cohomology theories one can possibly give a precise vanishing assumption to extend the results of this article. For Question (4) let \(G\) be a reductive group scheme over a \(k\)-scheme \(S\). Assume for now that \(G\) admits a maximal torus.
Then all of our constructions make sense and, as any maximal torus becomes split after passage to a Galois cover of \(S\), we can use the same argumentation as in Section 5.2 to extend Theorem 5.1 to arbitrary reductive group schemes over \(S\). But the existence of maximal tori is not guaranteed (cf. [10] for examples of non-split reductive group schemes over \(\operatorname{Spec}(\mathbb{Z})\)). But in _op.cit._ there are only examples for non-classical reductive groups and the author is not aware of any other examples. Another approach that one could follow is to use the scheme of maximal tori. If a maximal torus \(T\) exists inside \(G\), then the scheme of maximal tori is isomorphic to \(G/\operatorname{Norm}_{G}(T)\). But again, we did not follow this approach any further.
2305.10809
CS-TRD: a Cross Sections Tree Ring Detection method
This work describes a Tree Ring Detection method for complete Cross-Sections of Trees (CS-TRD) that detects, processes and connects edges corresponding to the tree's growth rings. The method depends on the parameters for the Canny Devernay edge detector (sigma and two thresholds), a resize factor, the number of rays, and the pith location. The first five are fixed by default. The pith location can be marked manually or using an automatic pith detection algorithm. Besides the pith localization, CS-TRD is fully automated and achieves an F-Score of 89% in the UruDendro dataset (of Pinus taeda) and 97% in the Kennel dataset (of Abies alba) without specialized hardware requirements.
Henry Marichal, Diego Passarella, Gregory Randall
2023-05-18T08:43:57Z
http://arxiv.org/abs/2305.10809v2
# CS-TRD: a Cross Sections Tree Ring Detection method ###### Abstract This work describes a Tree Ring Detection method for complete Cross-Sections of trees (CS-TRD). The method is based on the detection, processing, and connection of edges corresponding to the tree's growth rings. The method depends on the parameters for the Canny Devernay edge detector (\(\sigma\) and two thresholds), a resize factor, the number of rays, and the pith location. The first five parameters are fixed by default. The pith location can be marked manually or using an automatic pith detection algorithm. Besides the pith localization, the CS-TRD method is fully automated and achieves an F-Score of 89% in the UruDendro dataset (of Pinus taeda) with a mean execution time of 17 seconds and of 97% in the Kennel dataset (of Abies alba) with an average execution time of 11 seconds. **Source Code** A Python 3.11 implementation of CS-TRD is available at the web page of this article1. Usage instructions are included in the README.md file of the archive. The associated online demo is accessible through the web site. Footnote 1: [https://ipolcore.ipol.im/demo/clientApp/demo.html?id=77777000390](https://ipolcore.ipol.im/demo/clientApp/demo.html?id=77777000390) image edge detection, dendrochronology, tree ring detection ## 1 Introduction Most of the available methods for dendrochronology use images taken from cores (small cylinders crossing all the tree growth rings), as opposed to complete transverse cross sections. The image analysis on cores is performed on rectangular divisions as illustrated in Figure 1. Using cores for the analysis presents some advantages. The core is a small piece of the tree, so extracting it keeps the tree alive. The rings are measured on a small portion that can be assumed as a sequence of bands with a repetitive contrast, simplifying the image analysis. The analysis of complete sections implies the felling of the tree and, from the image analysis point of view, includes the challenge of generating a pattern of concentric closed curves that represent the tree rings. Note in the examples shown in Figure 2 that several factors make the task difficult: wood knots, fungi appearing as black spots with shapes following radial directions, and cracks that can be very wide. Some applications need the analysis of whole cross-sections, as when we are interested in studying the angular homogeneity of the tree-ring pattern. An example of such a case is when we are interested in the detection of the so-called compression wood [6], for which the lack of homogeneity in the growing pattern produces differential mechanical properties in the wood. Several methods exist for automatically detecting tree rings in core images [18, 19, 24, 20]. As the core approach is more popular, most available datasets are of that type, and machine learning-based methods need those datasets for training. In particular, most of the machine learning-based approaches are, to the best of our knowledge, designed for core images. Core images give partial information on the tree-ring structure, which is important for some applications. This article presents a method for detecting tree rings on images of tree cross-sections. The approach takes advantage of the knowledge of the tree cross-section's general structure and the presence of redundant information on a radial profile for different angles around the tree's pith. This paper is organized as follows: Section 2 contextualizes this method with the previous work in the field.
Section 3 presents the proposed automatic cross-section tree-ring detection algorithm (CS-TRD). The implementation details are explained in Section 4. Section 5 briefly presents a dataset for developing and testing the proposed algorithm. Experimental results are shown in Section 6, and Section 7 concludes and discusses future work. ## 2 Antecedents Tree ring detection is an old and essential problem in forestry with multiple uses. Due to the particularity of the species of the concerned trees, many practitioners still use a manual approach, measuring the tree rings with a ruler or other (manual) tree-ring measuring system. This is a tedious and time-consuming task. Cerda et al. [1] proposed a solution for detecting entire growth rings based on the Generalized Hough Transform. This work already suggests some general considerations that lead to the principal steps of our approach, as illustrated in Figure 4. The method was tested on ten images; neither the code nor the data are publicly available. Figure 1: Examples of core tree-ring images taken from a dataset with 239 images [8]. Figure 2: Some examples of the images in the UruDendro dataset. Note the variability of the images and the presence of fungus (for example, in the image L02b), knots (for example, in images F07b and F03c), and cracks (for example, in images F02e and L03c). The first five images are from the same tree at different heights, as the text explains in Section 5. Norell [18] proposes a method to automatically compute the number of annual rings in end faces acquired in sawmill environments. The method applies the Grey Weighted Polar Distance Transform [19] to a rectangular section (core) that includes the pith and avoids knots or other disturbances. Norell used 24 images for training and 20 for testing the method, but neither the images nor the method's code are publicly available. Zhou et al. [24] proposed a method based on the traditional manual approach, i.e., tracing two perpendicular lines across the slice and counting the peaks using a watershed-based method. They show results on five discs. Neither the algorithm nor the data are available. Henkel et al. [11] propose a semiautomatic method for detecting tree rings on full tree cross sections using an Active Contours approach. The authors report good results on several examples, but neither the data nor the algorithm is available. Kennel et al. [13] use the Dual-Tree Complex Wavelet Transform [14] as part of an active contour approach. This method, which works on the entire cross-section of the tree, gives very good results on a set of 7 publicly available images. We call it the Kennel dataset and try our method on it in this work in order to compare our results with the ones reported by the authors on those images. To the best of our knowledge, the code is unavailable, so it is impossible to see how it works with our data. Makela et al. [15] proposed an automatic method based on Jacobi Sets for the location of the pith and the ring detection on full cross-sections of trees. Neither the code nor the data are publicly available. Fabijanska et al. [8] proposed a fully automatic image-based approach for detecting tree rings over core images. The method is based on image gradient peak detection and linking and is applied over a dataset with three wood species representing the ring-porous species. The same authors also proposed a deep convolutional neural network for detecting tree-rings over core images in [7].
Comparing both methods, they reported a precision of 43% and a recall of 51% for the classical approach and a precision of 97% and a recall of 96% for the deep learning approach. Neither the code nor the data are publicly available. In a recent work, Polek et al. [20] use a machine learning-based approach for automatically detecting tree rings of coniferous species but, as in most of the reviewed works, they work on cores instead of the whole cross-section. This is the most common approach, and most algorithms and manual protocols use this type of image input. But if the aim is to detect compression wood, we must mark the whole cross-section to study the asymmetries between rings. Gillert et al. [9] proposed a method for tree-ring detection over the whole cross-section but applied to microscopy images. They apply a deep learning approach using an Iterative Next Boundary Detection Network, trained and tested with microscopy images. There exist several commercially available dendrochronology software packages. Some consist of a set of tools that help the practitioners to trace the rings manually. Others are semiautomatic, including image-processing tools to propose the ring limits. The performance generally varies significantly with certain wood anatomical features linked to wood species, climate, etc. For example, MtreeRing [22] is built using the R statistical language. It uses mathematical morphology for noise reduction and includes several methods for helping in the detection of rings (watershed-based segmentation, Canny edge detector). Like many other algorithms, it proposes an interactive tool for manual marking. To the best of our knowledge, the code is not publicly available. The CooRecorder [17] is another software application of this class, with several tools to help practitioners in the dendrochronological task, for example, to precisely determine the earlywood-latewood limits, using a zoom visualization and interactive tools. All of these packages work on cores instead of the whole disc. Constantz et al. [3] develop a tool for measuring S. paniculatum rings. Their software measures the rings by constructing transects and computes the rings' areas. The input for this method is a sketch image in SVG format, with some information about the center and the rings represented by polylines, produced with Adobe Illustrator. ## 3 Approach Our tree-ring detection algorithm, called CS-TRD for Cross-Section Tree-Ring Detector, is heavily based on some structural characteristics of the problem: * The use of the whole horizontal cross-section of a tree (slice) instead of a wood dowel (or core), as most dendrochronology approaches do. * The following properties generally define the rings on a slice: 1. The rings are roughly concentric, even if their shape is irregular. This means that two rings can't cross. 2. Several rays can be traced outwards from the slice pith. Those rays will cross each ring only once. 3. We are interested only in the rings corresponding to the latewood to early wood transitions, namely the _annual rings_. ### Definitions To explain the approach, we need some naming definitions; see Figure 3. We call _spider web_ the global structure of the tree-rings we are searching for, which is depicted in a general way in Figure 3(a). It comprises a _center_, associated with the slice pith, which is the origin of a certain number of _rays_. The _rings_ are concentric and closed curves that don't cross each other. Each _ring_ is formed by a _curve_ of connected points.
Each _ray_ crosses a _curve_ only once. The _rings_ can be viewed as a flexible _curve_ of points with _nodes_ at the intersections with the _rays_. A _chain_ is a set of connected _nodes_. As Figure 3(b) illustrates, a _curve_ is a set of chained nodes (small green dots in the figure, noted \(P_{i}\)). Depending on the position of the _curve_ with respect to the _center_, some of those _points_ are _nodes_ (bigger black dots in the figure, denoted \(N_{i}\) hereafter). The _node_ can move along a _ray_ in a radial direction, but the movement of a _node_ in a tangential direction over the _chain_ is forbidden. In other words, _nodes_ can only slide along the _rays_. The bigger the number \(Nr\) of _rays_, the better the precision of the ring reconstruction. We fix \(Nr=360\). Note that this is the ideal setting. In real images, _rings_ can vanish without forming a closed curve, _cells_ can have very varied shapes given the deformation of the _rings_, chains can remain undetected, etc. Figure 3(c) illustrates the nomenclature used in this paper: _Chains_\(Ch_{k}\) and \(Ch_{k+1}\) intersect the _rays_\(R_{m-1}\), \(R_{m}\) and \(R_{m+1}\) in _nodes_\(N_{i-1}\), \(N_{i}\) and \(N_{i+1}\). Those _rays_ and _chains_ (as well as the four corresponding _nodes_) define _cells_\(C_{l-1}\), \(C_{l}\) and \(C_{l+1}\). In general, a _cell_ is limited by four _nodes_, but sometimes that is not the case. For example, when a _chain_ doesn't complete a _ring_ or is not well detected. During the detection process, the algorithm uses this terminology. We talk of _chains_ that merge to form a _ring_, of _rays_ that determine a sampling of the _curve_, forming _chains_, of the distribution of a particular measure on the _cells_ produced by a given set of _chains_ and _rays_, etc. ### Method Figure 4 illustrates the intermediate results of the proposed method described by Algorithm 1. The input has to be an image of a tree slice without background. To subtract the background, many methods can be used. We apply a deep learning-based approach [21] based on two-level nested U-structures (\(U^{2}Net\)). Figure 5 shows an example of such a procedure. Given a segmented image of a tree slice (i.e., an image without a background), we need to find the set of pixel chains representing the annual rings (dark to clear transitions). We also need the center \(c\) of the _spider web_ (which corresponds to the tree's pith) as input. Detecting this fundamental point is a problem that can be tackled automatically [4] or by manual marking. In this article, we consider that this point is given (in the demo, both options are available). Some algorithms have debug parameters. For example, in the function \(connect\_chains\) of Algorithm 1, it is possible to set a debug flag to save all the intermediate results. To do that, we need the location where the debugging results and the images at different stages will be saved (in some situations, debug results are saved by writing over the image). This paper does not discuss debug parameters because they are not crucial for understanding the method. The debug parameters are enabled via the debug flag. The first step in the pipeline corresponds to preprocessing the input image to increase the method's performance. Preprocessing. The size of acquired images can vary widely, and this has an impact on performance. On the one hand, the bigger the image, the slower the algorithm, as more data must be processed.
On the other hand, if the image is too small, the relevant structures will be challenging to detect. Figure 3: (a) The whole structure, called _spider web_, is formed by a _center_ (which corresponds to the slice pith), \(Nr\)_rays_ (in the drawing \(Nr=18\)) and the _rings_ (concentric curves). In the scheme, the _rings_ are circles, but in practice, they can be (strongly) deformed as long as they don't intersect another _ring_. Each ray intersects a ring only once in a point called _node_. The area limited by two consecutive _rays_ and two consecutive _rings_ is named a _cell_. (b) A curve is a set of connected _points_ (small green dots). Some of those _points_ are the intersection with _rays_, named _nodes_ (black dots). A chain is a set of connected _nodes_. In this case, the _node_\(N_{i}\) is the _point_\(p_{n}\). (c) The _chains_\(Ch_{k}\) and \(Ch_{k+1}\) intersect the _rays_\(R_{m-1}\), \(R_{m}\) and \(R_{m+1}\) in _nodes_\(N_{i-1}\), \(N_{i}\) and \(N_{i+1}\). Those _rays_ and _chains_ (as well as the four corresponding _nodes_) determine _cells_\(C_{l-1}\), \(C_{l}\) and \(C_{l+1}\). Figure 4: Principal steps of the CS-TRD tree-ring detection algorithm: (a) original image, (b) pre-processed image (resized, equalized, and converted to a grayscale image), (c) the output of the Canny Devernay edge detector, (d) edges filtered by the direction of the gradient, (e) set of detected chains, (f) connected chains, (g) post-processed chains and (h) detected tree-rings. ``` Input:\(Im_{in}\), // segmented input image. Background pixels are in white (255) \(c\), // position of the _pith_ in \(Im_{in}\): center of the _spider web_ \(\sigma\), // Canny edge detector gaussian kernel parameter \(th_{low}\), // low threshold on the modulus of the gradient. Canny edge detector parameter \(th_{high}\), // high threshold on the modulus of the gradient. Canny edge detector parameter \(height\), // height of the image after the resize step \(width\), // width of the image after the resize step \(\alpha\), // threshold on the collinearity of the edge filtering, see Equation (3) \(n_{r}\), // number of rays \(m_{c}\) // minimum chain length Output: A list \(l\_rings\) of \(K\) elements, where each element is a closed _chain_ of points in the image, representing a tree-ring. 1\(Im_{pre},c\leftarrow\text{preprocessing}(Im_{in},height,width,c)\) //see Algorithm 2 2\(m\_ch_{e},G_{x},G_{y}\leftarrow\text{canny\_deverney\_edge\_detector}(Im_{pre}, \sigma,th_{low},th_{high})\) // described in [10] 3\(l\_ch_{f}\leftarrow\text{filter\_edges}(m\_ch_{e},\,c,\,G_{x},\,G_{y},\, \alpha,\,Im_{pre})\) //see Algorithm 5 4\(l\_ch_{s},l\_nodes_{s}\leftarrow\text{sampling\_edges}(l\_ch_{f},\,c,\, nr,\,m_{c},\,Im_{pre})\) //see Algorithm 7 5\(l\_ch_{c},l\_nodes_{c}\leftarrow\text{connect\_chains}(l\_ch_{s},l\_nodes_{s},\,c,\,nr)\) //see Algorithm 8 6\(l\_ch_{p}\leftarrow\text{postprocessing}(l\_ch_{c},l\_nodes_{c},\,c)\) //see Algorithm 19 7\(l\_rings\leftarrow\text{chain\_to\_labelme\_json}(l\_ch_{p},\,height,\, width,\,c,\,Im_{in})\)// convert closed chains to json return\(l\_rings\) ``` **Algorithm 1**Tree-ring detection algorithm Figure 5: Background removal stage. Input (a) and output (b) using the code available from [21] Algorithm 2 shows the pseudo-code of the preprocessing stage. The first step is resizing the input image to a standard size of 1500x1500 pixels. In Section 6.2.1, we show a series of experiments that lead to choosing these dimensions.
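Before walking through the pseudo-code line by line, here is a compact Python sketch of the whole preprocessing stage (Algorithms 2 to 4 below). It is only an illustration under stated assumptions: the helper name `preprocess` is ours, the Pillow and OpenCV calls are the ones mentioned in the text, and the pith rescaling follows Equation (1) below.

```python
import cv2
import numpy as np
from PIL import Image


def preprocess(im_in, cy, cx, h_out=1500, w_out=1500):
    """Resize, grayscale, and CLAHE-equalize a segmented slice image.

    im_in: color image as a numpy array, background set to white (255).
    (cy, cx): pith location in pixels at the input resolution.
    Returns the preprocessed image and the rescaled pith coordinates.
    """
    h, w = im_in.shape[:2]
    # Resize with Pillow (anti-aliased); PIL expects a (width, height) tuple.
    im_r = np.array(Image.fromarray(im_in).resize((w_out, h_out), Image.LANCZOS))
    # Rescale the pith coordinates as in Equation (1).
    cy_out, cx_out = cy * h_out / h, cx * w_out / w
    # Grayscale conversion (the paper uses the BGR flag of OpenCV).
    im_g = cv2.cvtColor(im_r, cv2.COLOR_BGR2GRAY)
    # Neutralize the white background so it does not bias the equalization.
    mask = im_g == 255
    im_eq = im_g.copy()
    im_eq[mask] = int(im_g[~mask].mean())
    # Contrast Limited Adaptive Histogram Equalization, clip limit 10.
    im_eq = cv2.createCLAHE(clipLimit=10).apply(im_eq)
    im_eq[mask] = 255  # restore the white background
    return im_eq, cy_out, cx_out
```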
The size of the input image can vary, so it is zoomed in or out to a fixed size for the rest of the processing. Pith coordinates must be resized as well. This step can be turned off by the user in the demo. Lines 1 to 6 implement this logic. The resize function (Line 5) is shown in Algorithm 3. The dimensions of the input image can vary, so image resize (Line 1, Algorithm 3) is applied using the function _resize_ from the Pillow library [2]. The method involves filtering to avoid aliasing if the flag _image.ANTIALIAS_ is set. The center coordinates must be modified accordingly as well. To this aim, we use the following equations: \[cy_{output}=cy*\frac{height_{output}}{height}\ \ \ \ cx_{output}=cx*\frac{ width_{output}}{width} \tag{1}\] where (\(height\), \(width\)) are the input image dimensions, (\(height_{output}\), \(width_{output}\)) are the output image dimensions, and (\(cy\), \(cx\)) are the disc pith location coordinates in pixels at the original resolution. In line 7, the RGB image is converted to grayscale using the OpenCV [12] function: \[cv2.cvtColor(img,cv2.COLOR\_BGR2GRAY)\] Finally, in line 8, a histogram equalization step is applied to enhance contrast. The method is described in Algorithm 4. The first step (Line 1) changes the background pixels to the mean grayscale value to avoid undesirable background effects during equalization. Both the image with modified background and the background mask are returned. Then, in Line 2, the Contrast Limited Adaptive Histogram Equalization (CLAHE) [25] method for equalizing images is applied. We use the OpenCV implementation [12]2. The threshold for contrast limiting is set to 10 by means of the \(clipLimit\) parameter. Finally, in Line 3, the background of the equalized image is set to white (255). Footnote 2: [https://www.geeksforgeeks.org/clahe-histogram-equalization-opencv/](https://www.geeksforgeeks.org/clahe-histogram-equalization-opencv/) ``` Input:\(Im_{in}\), // input image without background. Background pixel set to white (255) Parameter: \(height_{output}\), // output image height, in pixels \(width_{output}\), // output image width, in pixels \(cy\), // pith's coordinate y, in pixels \(cx\), // pith's coordinate x, in pixels Output: Preprocessed image, pith coordinates scaled to the size of the preprocessed image 1 if None in [\(height_{output}\), \(width_{output}\)] then 2\(Im_{r},cy_{output},cx_{output}\leftarrow(Im_{in},cy,cx)\) 3 end if 4 else 5\(Im_{r},cy_{output},cx_{output}\leftarrow\) resize( \(Im_{in},height_{output},width_{output},cy,cx\)) // See Algorithm 3 6 end if 7\(Im_{g}\leftarrow\) rgb2gray(\(Im_{r}\)) 8\(Im_{pre}\leftarrow\) equalize(\(Im_{g}\)) // See Algorithm 4 return[\(Im_{pre},cy_{output},cx_{output}\)] ``` **Algorithm 2**preprocessing ``` Input:\(Im_{in}\), // the input image Parameter: \(height_{output}\): // output image height, in pixels \(width_{output}\): // output image width, in pixels \(cy\): // pith's coordinate y, in pixels \(cx\): // pith's coordinate x, in pixels Output: Resized image using the Pillow library 1\(Im_{r}\leftarrow\) resize_image_using_pil_lib(\(Im_{in},height_{output},width_{output}\)) 2\(height,width\leftarrow\) get_image_shape(\(Im_{in}\)) 3\(cy_{output},cx_{output}\leftarrow\) convert_center_coordinate_to_output_coordinate(\(cy,cx,height,width,height_{output},width_{output}\))// See Equation (1) return\([Im_{r},cy_{output},cx_{output}]\) ``` **Algorithm 3** resize ``` Input:\(Im_{g}\), // grayscale input image.
```
Input: Im_g // grayscale input image. The background is white (255)
Output: equalized image using the OpenCV CLAHE [25] method
1 Im_pre, mask ← change_background_intensity_to_mean(Im_g)
2 Im_pre ← equalize_image_using_clahe(Im_pre)
3 Im_pre ← change_background_to_value(Im_pre, mask, 255)
return [Im_pre]
```
**Algorithm 4** equalize

Canny-Devernay edge detector. Line 2 of Algorithm 1 corresponds to the edge detection stage. We apply the sub-pixel precision Canny-Devernay edge detector [5, 10]. The output of this step is a list of pixel chains corresponding to the edges present in the image. Besides some noise-derived ones, these edges can be grouped into the following classes:

* \(Edges_{T}\): edges produced by the tree growing process. This class includes the edges that form the rings. Considering a direction from the pith outward, these edges are of two types: those produced by early wood to late wood transitions, which appear in the images as clear-to-dark transitions, and the late wood to early wood transitions, which appear as dark-to-clear transitions. We are interested in detecting the former ones, hereon called annual rings.
* \(Edges_{R}\): mainly radial edges produced by cracks, fungi, or other phenomena.
* Other edges produced by wood knots.

The gradient vector is normal to the edge and encodes the local direction and sense of the transition. The Canny-Devernay filter outputs both the gradient of the image (two matrices with the \(x\) and \(y\) components of the gradient, named \(G_{x}\) and \(G_{y}\)) and the edge chains, in the form of a matrix called \(m\_ch_{e}\). Successive rows refer to chained pixels belonging to the same edge, and the row [-1,-1] marks the division between edges. The Canny-Devernay edge detector has the following parameters:

* \(\sigma\): the standard deviation of the Gaussian kernel.
* \(th_{low}\): low gradient threshold, applied to the gradient modulus in the two-threshold hysteresis filtering of the edge points.
* \(th_{high}\): high gradient threshold, applied to the gradient modulus in the two-threshold hysteresis filtering of the edge points.

To use the Canny-Devernay implementation from [10], we built a Python wrapper to execute that code. The code takes a PGM image as input, so we feed it the preprocessed image \(Im_{pre}\) saved to disk in that format. Regarding the output, \(m\_ch_{e}\) is a matrix in which each row holds the \((x,y)\) coordinates of an edge point, and each Devernay curve is separated from the next one by a \((-1,-1)\) row. Minor code modifications were needed in the IPOL implementation of the Canny-Devernay filter [10] to obtain the image gradient matrices \(G_{x}\) and \(G_{y}\) as output.

Filtering the edge chains. We filter out all the points of the edge chains for which the angle between the gradient vector and the direction of the ray touching that point is greater than \(\alpha\) (30 degrees in our experiments). The gradients of the \(Edges_{T}\) produced by the early wood transitions point inward, while the \(Edges_{R}\), whose gradient is roughly normal to the _rays_, are filtered out. Note that this process breaks an edge chain into several fragments. This is done by Algorithm 5.
Given the center \(c\) and a point \(p_{i}\) over an edge _curve_, the angle \(\delta(\vec{cp_{i}},\vec{G_{p_{i}}})\) between the vector \(\vec{cp_{i}}\) at the point \(p_{i}\) and the gradient vector \(\vec{G_{p_{i}}}\) (Figure 6) at the same point is given by:

\[\delta(\vec{cp_{i}},\vec{G_{p_{i}}})=\arccos\left(\frac{\vec{cp_{i}}\cdot\vec{G_{p_{i}}}}{\|\vec{cp_{i}}\|\|\vec{G_{p_{i}}}\|}\right) \tag{2}\]

We filter out all the _points_ \(p_{i}\) for which the angle \(\delta(\vec{cp_{i}},\vec{G_{p_{i}}})\) is greater than the parameter \(\alpha\):

\[\delta(\vec{cp_{i}},\vec{G_{p_{i}}})\geq\alpha \tag{3}\]

The filter_edges method is shown in Algorithm 5. It takes as input the Devernay edges \(m\_ch_{e}\), the pith center, the image gradient components in the form of the two matrices \(G_{x}\) and \(G_{y}\), and the preprocessed image \(Im_{pre}\). It needs the threshold \(\alpha\) of Equation (3) as a parameter. Lines 1 to 5 compute the angle between the vector \(\vec{cp_{i}}\) and the gradient \(\vec{G_{p_{i}}}\) at point \(p_{i}\). We use the matrix operations of the Python _numpy_ library to speed up computation. In line 1, we change the edge reference axis. Figure 6 shows the vectors \(\vec{Op_{i}}\) and \(\vec{cp_{i}}\), as well as the gradient \(\vec{G_{p_{i}}}\) at the edge point \(p_{i}\). The function _change_reference_axis_ changes the vector coordinate reference from \(\vec{Op_{i}}\) to \(\vec{cp_{i}}\) and produces a new matrix \(Xb\). This is done by subtracting the pith vector from each row. Matrix \(Xb\) still has the delimiting rows with the value [-1,-1] between edge curves. Each edge gradient is stored in the matrix \(G\) (line 2), keeping the same edge order as the matrix \(m\_ch_{e}\); this means that

\[p_{i}=m\_ch_{e}[i]\rightarrow\vec{G_{p_{i}}}=G[i]\]

where \(p_{i}\) is the i-th row of matrix \(m\_ch_{e}\) and \(\vec{G_{p_{i}}}\) is the i-th row of matrix \(G\). In lines 3 and 4, the matrices \(Xb.T\) (\(Xb\) transposed) and \(G\) are normalized, dividing each vector by its norm, which simplifies Equation (2):

\[\delta(\vec{cp_{i}},\vec{G_{p_{i}}})=\arccos\left(\frac{\vec{cp_{i}}\cdot\vec{G_{p_{i}}}}{\|\vec{cp_{i}}\|\|\vec{G_{p_{i}}}\|}\right)=\arccos\left(\frac{\vec{cp_{i}}}{\|\vec{cp_{i}}\|}\cdot\frac{\vec{G_{p_{i}}}}{\|\vec{G_{p_{i}}}\|}\right)=\arccos\left(\vec{cp_{i\_unit}}\cdot\vec{G_{p_{i}\_unit}}\right) \tag{4}\]

In line 5, Equation (4) is computed in matrix form, and the angle between the normalized vectors \(\vec{G_{p_{i}\_unit}}\) and \(\vec{cp_{i\_unit}}\) is returned in degrees in the matrix \(\theta\). In line 6, the edge filtering of Equation (3) is applied. If the edge point \(p_{i}\) is filtered out, then \(X_{edges\_filtered}[i]=[-1,-1]\). The edges are converted to **Curve** objects in line 7. We say that two edge pixels belong to the same edge if no row with values [-1,-1] exists between them in matrix \(X_{edges\_filtered}\). The object **Curve** inherits the properties of the class **LineString** from the **shapely** package, which is used in the _sampling edges_ stage. Finally, in lines 8 and 9, the curve corresponding to the border of the slice is computed and added to the curve list \(l\_ch_{f}\). In this context, we call _border_ the boundary between the segmented wood slice and the background.
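Lines 1 to 6 of Algorithm 5 can be condensed into a few vectorized numpy operations. The following is a hedged sketch of Equations (2) to (4); the variable names follow the text, but the released code also handles the [-1,-1] delimiter rows, omitted here for brevity.

```python
import numpy as np

def filter_edge_points(m_ch_e, grads, cy, cx, alpha=30.0):
    """m_ch_e, grads: (N, 2) arrays of (x, y) edge coordinates and their
    gradients (rows of G). Returns a boolean keep-mask per edge point."""
    xb = m_ch_e - np.array([cx, cy])                 # change_reference_axis
    xb_unit = xb / np.linalg.norm(xb, axis=1, keepdims=True)
    g_unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    cos_delta = np.clip(np.sum(xb_unit * g_unit, axis=1), -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_delta))         # Equation (4), in degrees
    return theta < alpha                             # Equation (3)
```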
The function \(get\_border\_curve\) is shown in Algorithm 6. We use a simple method to compute the border edges. First, we generate a mask, an image of the same dimensions as \(Im_{pre}\) enlarged by 3 rows and 3 columns before the first and after the last row and column, to avoid border effects during filtering. The mask image has two values: 0 for the region of the wood slice and 255 for the background. Lines 1 to 4 compute the mask image. In line 1, we threshold \(Im_{pre}\), masking all the pixels with a value equal to 255, in particular the background. Some internal pixels can also have a value equal to 255. To avoid those pixels in the mask, we blur it (line 2) using a Gaussian kernel with a high \(\sigma\) (in our implementation, \(\sigma=11\)) and set to 255 all the pixels with a value higher than 0 (lines 2 and 3). In line 4, the mask is padded with \(pad=3\). Finally, in line 5, we apply the OpenCV contour-finding method to get the border contour of the mask. The OpenCV implementation returns all the contours it finds, including the image border. We select the contour whose enclosed area is closest to half the image, a criterion that works well for this purpose. In line 6, the contour object is converted to a **Curve** object.

```
Input: m_ch_e, // matrix of edge curves
 c, // center of the spider web, in pixels: c_x and c_y
 G_x, // x component of the gradient, a matrix
 G_y, // y component of the gradient, a matrix
 Im_pre // preprocessed image
Parameter: alpha // threshold of the edge filter, Equation (3)
Output: A list l_ch_f^k, k = 1, 2, ..., N, where each element is a filtered edge curve
1 Xb ← change_reference_axis(m_ch_e, c_y, c_x)
2 G ← get_gradient_vector_for_each_edge_pixel(m_ch_e, G_x, G_y)
3 Xb_normalized ← normalized_row_matrix(Xb.T)
4 G_normalized ← normalized_row_matrix(G)
5 theta ← compute_angle_between_gradient_and_edges(Xb_normalized, G_normalized)
6 X_edges_filtered ← filter_edges_by_threshold(m_ch_e, theta, alpha)
7 l_ch_f ← convert_masked_pixels_to_curves(X_edges_filtered)
8 border_curve ← get_border_curve(Im_pre, l_ch_f) // see Algorithm 6
9 l_ch_f ← l_ch_f + border_curve
return l_ch_f
```
**Algorithm 5** filter_edges

```
Input: Im_pre, // preprocessed image
 l_ch_f // list of Curve objects
Output: border_curve
1 mask ← mask_background(Im_pre)
2 mask ← blur(mask)
3 mask ← thresholding(mask)
4 mask ← padding_mask(mask)
5 border_contour ← find_border_contour(mask, Im_pre)
6 border_curve ← contour_to_curve(border_contour, len(l_ch_f))
return border_curve
```
**Algorithm 6** get_border_curve

Figure 6: Coordinate reference of the edge point \(p_{i}\) and the vector \(\vec{Op_{i}}\). The edge filtering computation uses the vector \(\vec{cp_{i}}\). \(O\) represents the origin of the image coordinate axis; \(C\) represents the pith position.

Sampling edges. Given the set of filtered chained edge points \(l\_ch_{f}\), a list of _curves_, we sample each _curve_ using the number of rays \(Nr\). Algorithm 7 describes the procedure. Two parameters are involved: \(Nr\), the number of rays (360 by default), and \(min\_chain\_length\), the minimum number of nodes in a _chain_ (the object **Chain** is described in the following paragraph).
Every _chain_ has two endpoints, so we fix \(min\_chain\_length=2\). This algorithm produces as output two lists: one of **Chain** objects named \(l\_ch_{s}\) and one of **Node** objects named \(l\_nodes_{s}\), which includes all the nodes of all the chains. The object **Chain** contains a list of pointers to all the nodes belonging to that _chain_ (\(l\_nodes\)). This allows us to find all the nodes of a given _chain_. The object **Node** contains the identifier of the _chain_ to which it belongs (\(chain\_id\)). There is no _chain_ without nodes, nor nodes belonging to more than one _chain_. An object **Chain** has the following attributes:

* \(l\_nodes\): chained list of the nodes belonging to the chain.
* \(id\): identification of the chain.
* \(Nr\): total number of rays on the disk.
* \(extA\): first endpoint of the chain, named node A.
* \(extB\): second endpoint of the chain, named node B.
* \(type\): we define three chain types: border, normal, and center.
* \(B\_outward\): pointer to the next chain above node B.
* \(B\_inward\): pointer to the next chain below node B.
* \(A\_outward\): pointer to the next chain above node A.
* \(A\_inward\): pointer to the next chain below node A.

We use the concepts of _outward_ and _inward_ in the attributes of a chain. Both are related to a given endpoint (A or B). Given a _chain_ endpoint and the corresponding _ray_, we find the first _chain_ that intersects that _ray_ going from the chain toward the center (named here _inward_) and the first _chain_ that intersects that _ray_ going from the chain away from the center (named here _outward_). Figure 8 illustrates this. Chains are superposed over the gray-level image. The ray at endpoint A is in blue, the nodes are in red at the intersections between the rays and the chains, and the chains are in orange, black, and yellow. The orange and yellow chains are the _visible_ chains for the black chain at endpoint A (outward and inward, respectively); this concept is explained later. Every chain has two endpoint nodes, A and B. Endpoint A is always the furthest node clockwise, while endpoint B is the furthest node counterclockwise. An object **Node** has the following attributes:

* \((x,y)\): node coordinates. Floating point numbers.
* \(chain\_id\): identification of the chain to which the node belongs.
* \(radial\_distance\): Euclidean distance to the center. A floating point number.
* \(angle\): angular orientation of the ray passing through that node, in degrees. A floating point number.

Three metric distances between chains are defined. Given a chain endpoint \(Endpoint_{j}\) (the selected endpoint of the current chain \(Ch_{j}\)) and \(Endpoint_{k}\) (the selected endpoint of chain \(Ch_{k}\)), the distances are defined as follows (a code sketch of the three measures is given after the list):

* **Euclidean**. Given the endpoint cartesian coordinates \((x,y)\), the distance between endpoints is defined as
\[\sqrt{\left(x_{j}-x_{k}\right)^{2}+\left(y_{j}-y_{k}\right)^{2}} \tag{5}\]
where \((x_{j},y_{j})\) are the cartesian coordinates of \(Endpoint_{j}\) and \((x_{k},y_{k})\) are the cartesian coordinates of \(Endpoint_{k}\).
* **Radial Difference**. Given the endpoint Euclidean distances to the pith center, this distance is defined as
\[\|r_{j}-r_{k}\| \tag{6}\]
where \(r_{j}\) is the Euclidean distance of \(Endpoint_{j}\) to the pith center, and \(r_{k}\) is the Euclidean distance of \(Endpoint_{k}\) to the pith center.
* **Angular**. Given the endpoint ray support angles \(\theta\) (ray angular directions), this distance is defined as
\[\left(\theta_{j}-\theta_{k}+360\right)\bmod 360 \tag{7}\]
where \(\theta_{j}\) is the direction of the ray supporting \(Endpoint_{j}\) (in degrees), \(\theta_{k}\) is the direction of the ray supporting \(Endpoint_{k}\) (in degrees), and \(mod\) refers to the modulo operation. Figure 7 illustrates the angle \(\theta_{j}\) of \(Ray_{j}\) given the disk pith position \(C\).
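As announced above, the three distances translate directly into code. A minimal sketch, assuming **Node**-like endpoint objects with the attributes listed in the text:

```python
import math

def euclidean_distance(ep_j, ep_k):
    # Equation (5)
    return math.hypot(ep_j.x - ep_k.x, ep_j.y - ep_k.y)

def radial_difference(ep_j, ep_k):
    # Equation (6)
    return abs(ep_j.radial_distance - ep_k.radial_distance)

def angular_distance(ep_j, ep_k):
    # Equation (7), in degrees
    return (ep_j.angle - ep_k.angle + 360) % 360
```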
Algorithm 7 extracts the image dimensions from the preprocessed image \(Im_{pre}\). Then we proceed to build the rays. A ray object is a semi-line with one endpoint at the center \(c\) (the pith) and the other at the image border. This gives a list of \(Nr\) rays. Then, we compute the intersections between the _curves_ and the rays using the **shapely** Python library. Note that a _curve_ produced by Devernay is a set of chained pixels, some of which are also nodes, as shown in Figure 3(b). Once the nodes are found, we create a _chain_ including only the nodes, and not all the points of the corresponding Devernay _curve_. In this sense, a _chain_ is a sampled _curve_. If a _chain_ has fewer than \(min\_chain\_length\) nodes, we delete it. Finally, we build two artificial _chains_: one of type _center_, which has \(Nr\) nodes, all with the same \((x,y)\) coordinates but different angular orientations, and a second one corresponding to the disk border. Both artificial chains are useful at the connect-chains stage. The field _type_ in the **Chain** object identifies whether the _chain_ is a normal one or one of the two artificial chains just described.

Figure 7: Ray reference axis. \(C\) is the pith center and \(\theta_{j}\) is the angle of \(Ray_{j}\).

```
Input: l_ch_f, // list of curves
 c, // center of the spider web
 Im_pre // preprocessed image
Parameters: min_chain_length, // minimum length of a chain
 nr // number of total rays
Output: A list l_ch_s^k, k = 1, 2, ..., N, where each element is a chain; l_nodes_s^k, k = 1, 2, ..., N_n, where each element is a Node
1 height, width ← Im_pre.shape
2 l_rays ← build_rays(nr, height, width, c)
3 l_ch_s, l_nodes_s ← intersections_between_rays_and_devernay_curves(c, l_rays, l_ch_f, min_chain_length, nr, height, width)
4 l_ch_s, l_nodes_s ← generate_virtual_center_chain(c, nr, l_nodes_s, l_ch_s)
return l_ch_s, l_nodes_s
```
**Algorithm 7** sampling_edges
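The ray construction and the curve sampling of Algorithm 7 can be sketched with shapely **LineString** objects. This is a hedged sketch under assumed helper names; the far endpoint of each ray is simply placed outside the image.

```python
import numpy as np
from shapely.geometry import LineString

def build_rays(nr, height, width, cy, cx):
    # A ray is a semi-line from the pith; the image diagonal guarantees
    # the far endpoint lies outside the image.
    radius = float(np.hypot(height, width))
    rays = []
    for angle in np.arange(0.0, 360.0, 360.0 / nr):
        theta = np.radians(angle)
        end = (cx + radius * np.cos(theta), cy + radius * np.sin(theta))
        rays.append(LineString([(cx, cy), end]))
    return rays

def sample_curve(curve, rays):
    # A node is the intersection between a Devernay curve and a ray;
    # the chain keeps only these nodes, not all the curve points.
    return [curve.intersection(ray) for ray in rays if curve.intersects(ray)]
```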
Connect chains. We must now group this set of chains to form the rings. Some of these chains are spurious, produced by noise, small cracks, knots, etc., but most are part of the desired rings, as seen in Figure 4. To connect chains, we must decide whether the endpoints of two given chains can be connected, as illustrated by Figure 9. We use a support chain, \(Ch_{0}\) in the figure, to decide whether or not those chains must be connected.

Figure 8: A given chain (in black) with two endpoints A and B. Its nodes (in red) appear at the intersection between the Canny-Devernay curve and the rays. The ray at endpoint A is in blue. Other chains detected by Canny-Devernay are colored in white. Endpoint A's inward and outward chains are in yellow and orange, respectively.

Figure 9: An illustration of the _connectivity_ issue. (a) The question is whether endpoint \(A\) of \(Ch_{3}\) must be connected to endpoint \(B\) of \(Ch_{2}\) (red dashed line) or to endpoint \(B\) of \(Ch_{1}\) (blue dashed line). In figure (b), the same question can be posed for the connection between endpoint B of \(Ch_{1}\) and endpoint A of \(Ch_{2}\), but \(Ch_{1}\) and \(Ch_{2}\) intersect (their endpoints are crossed by the same _ray_), so this connection is forbidden. Note that we represent the connections by line segments for clarity, but in fact these are curves in the image space, as we interpolate between _chain_ endpoints in polar geometry.

Figure 10: For the _chain support_ \(Ch_{0}\), the set of _chain candidates_ is formed by \(Ch_{1}\), \(Ch_{2}\), \(Ch_{4}\), \(Ch_{5}\) and \(Ch_{6}\). _Chain_ \(Ch_{3}\) is shadowed by \(Ch_{1}\), but \(Ch_{5}\) is not shadowed by \(Ch_{6}\) because at least one endpoint of \(Ch_{5}\) is visible from \(Ch_{0}\). Note that a _chain_ becomes part of the _candidate chains_ set if at least one of its endpoints is visible from the _chain support_.

To group chains that belong to the same ring, we proceed as follows:

1. We order all the chains by length and begin processing the longest. The chain being processed is called the _chain support_, \(Ch_{i}\). Once we finish merging all the possible _candidate chains_ related to that one (\(candidates_{Ch_{i}}\)), we do the same with the next longest _chain_.
2. We find the chains that are visible from the _chain support_ inwards (i.e., in the direction from the _chain support_ to the center). The concept of _visibility_ here means that at least one endpoint of the _candidate chain_ is visible from the _chain support_. Visible means that a _ray_ that goes through the endpoint of the _candidate chain_ crosses the _chain support_ without crossing any other _chain_ in between. The set of _candidate chains_ of the _chain support_ \(Ch_{i}\) is named \(candidates_{Ch_{i}}\). This is illustrated by Figure 10, in which the _candidate chains_ generated inwards by \(Ch_{0}\) are:
\[candidates_{Ch_{0}}=\{Ch_{1},Ch_{2},Ch_{4},Ch_{5},Ch_{6}\}\]
_Chain_ \(Ch_{3}\) is shadowed by \(Ch_{1}\), while \(Ch_{5}\) is not shadowed by \(Ch_{6}\) because at least one of its endpoints is visible from \(Ch_{0}\). The same process is applied to the chains visible from the _chain support_ outwards.
3. We go through the set \(candidates_{Ch_{i}}\) searching for connections between them. By construction, the _chain support_ is not a candidate to be merged in this step. From the endpoint of a chain, we move forward angularly. The next endpoint of a non-intersecting _chain_ in the \(candidates_{Ch_{i}}\) set is a candidate to be connected to the first one. We say that two _chains_ intersect if there exists at least one _ray_ that crosses both _chains_. For example, in Figure 10, \(Ch_{6}\) intersects \(Ch_{5}\) and does not intersect \(Ch_{4}\). To decide if both chains must be connected, we measure the _connectivity goodness_ between them.
4. To define a notion of connectivity goodness, we combine three criteria:
1. _Radial tolerance for connecting chains_. The radial difference between the distance from each chain to be merged (measured at the endpoint to be connected) and the support chain must be small. For example, in Figure 11, if we want to connect node \(N_{i}\) of \(Ch_{l}\) and node \(N_{i+1}\) of \(Ch_{k}\), we must verify that
\[\delta R_{i}*(1-Th_{Radial\_tolerance})\leq\delta R_{i+1}\leq\delta R_{i}*(1+Th_{Radial\_tolerance})\]
where \(Th_{Radial\_tolerance}\) is a parameter of the algorithm. We call this condition _RadialTol_.
2. _Similar radial distances of nodes in both chains_. For each chain, we define a set of nodes. For the chain \(Ch_{j}\), this set is \(N_{j}=\{N_{j}^{0},N_{j}^{1},...,N_{j}^{n_{nodes}}\}\), where \(n_{nodes}\) is the number of nodes to be considered, a parameter. See Figure 12. We use the whole chain if it is shorter than \(n_{nodes}\). We measure \(\delta R_{i}\), the radial distance between a node in the given chain and the corresponding node on the same ray in the support chain, as illustrated in Figure 11. This defines two sets, one for each considered chain \(j\) and \(k\): \(Set_{j}=\{\delta R_{j}^{0},..,\delta R_{j}^{n_{nodes}}\}\) and \(Set_{k}=\{\delta R_{k}^{0},..,\delta R_{k}^{n_{nodes}}\}\). We calculate the mean and the standard deviation of \(Set_{j}\) (\(\mu_{j},\sigma_{j}\)) and of \(Set_{k}\) (\(\mu_{k},\sigma_{k}\)). The size of the distribution is defined by the parameter \(Th_{Distribution\_size}\). This defines a range of radial distances associated with each chain: \(Range_{j}=(\mu_{j}-Th_{Distribution\_size}*\sigma_{j},\mu_{j}+Th_{Distribution\_size}*\sigma_{j})\) and \(Range_{k}=(\mu_{k}-Th_{Distribution\_size}*\sigma_{k},\mu_{k}+Th_{Distribution\_size}*\sigma_{k})\). To connect both chains, there must be a non-null intersection between both distributions: \(Range_{j}\cap Range_{k}\neq\emptyset\). We call this condition _SimilarRadialDist_.
3. _Regularity of the derivative_. Suppose we have two chains \(Ch_{j}\) and \(Ch_{k}\) that can be connected, and a set of interpolated nodes between the endpoints of those chains (let us call \(Ch_{jk}\) the set of interpolated nodes between \(Ch_{j}\) and \(Ch_{k}\), indicating that they form a new "interpolating" chain). See Figure 12. The new virtual chain created by the connection between chains \(Ch_{j}\) and \(Ch_{k}\) encompasses the nodes of those two chains and the new interpolated nodes between them (\(Ch_{jk}\), colored in red in the figure). To test the regularity of the derivative, we define a set of nodes for each concerned chain. For the chain \(Ch_{j}\), this set is \(\{N_{j}^{0},N_{j}^{1},...,N_{j}^{n_{nodes}}\}\), where \(n_{nodes}\) is the number of nodes to be considered, a parameter (\(n_{nodes}=20\) in the current implementation). We use all its nodes if the chain is shorter than \(n_{nodes}\). For each chain, we compute the centered derivative at each node, \(\delta N^{s}=\frac{\|r_{s+1}-r_{s-1}\|}{2}\), where \(r_{s}\) is the radial distance of the node \(N^{s}\) to the center (i.e., the Euclidean distance between the node and the center of the _spider web_). Therefore, the radial distance to the center of node \(N^{s-1}\) is \(r_{s-1}\), and that of node \(N^{s+1}\) is \(r_{s+1}\). The set of derivatives for the nodes of the existing chains is \(Der(Ch_{j},Ch_{k})=\{\delta N_{j}^{0},...,\delta N_{j}^{n_{nodes}},\delta N_{k}^{0},...,\delta N_{k}^{n_{nodes}}\}\). The condition is asserted if the maximum of the derivatives in the interpolated chain is less than or equal to the maximum of the derivatives in the two neighboring chains times a given tolerance:
\[max(Der(Ch_{jk}))\leq max(Der(Ch_{j},Ch_{k}))\times Th_{Regular\_derivative}\]
where \(Th_{Regular\_derivative}\) is a parameter.
We call this condition _RegularDeriv_. In order to connect chains \(Ch_{j}\) and \(Ch_{k}\), the following condition must be met:
\[\textit{RegularDeriv}\wedge(\textit{SimilarRadialDist}\vee\textit{RadialTol}) \tag{8}\]
where \(\vee\) and \(\wedge\) stand for the logical _or_ and _and_ operators, respectively. Another condition must be met: no other chain must exist between the two chains to be connected. If another chain exists in between, it must be connected to the closer one. For example, in Figure 10, it is impossible to connect chains \(Ch_{3}\) and \(Ch_{5}\) because \(Ch_{4}\) appears between them. We call this condition _ExistChainOverlapping_. Consequently, Equation (8) is modified as follows:
\[\textbf{not}\ \textit{ExistChainOverlapping}\wedge\textit{RegularDeriv}\wedge(\textit{SimilarRadialDist}\vee\textit{RadialTol}) \tag{9}\]
The symbol **not** stands for the _not_ operator. A Python sketch of this combined check, together with the intersection matrix used by Algorithm 8, is given below. The method iterates this search for connectivity between chains over different neighborhood sizes. The parameter _NeighbourhoodSize_ defines the maximum allowed distance, measured in degrees, for connecting two chains. If the distance between two chain endpoints is longer than _NeighbourhoodSize_, those chains are not connected. The parameter _derivFromCenter_ controls how the interpolated nodes between two chains (the red ones in Figure 12) are estimated. If \(derivFromCenter=1\), the ray angle and the radial distance from the center are used to estimate the position of the interpolated nodes. If it is set to 0, the estimation is made by measuring the radial distance to the support chain. We iterate this process over the whole image using five parameters: \(Th_{Radial\_tolerance}\), \(Th_{Distribution\_size}\), \(Th_{Regular\_derivative}\), _NeighbourhoodSize_ and _derivFromCenter_. In each iteration, we relax the parameters. In the first iteration, there are a lot of small chains, but in the second and third iterations, the concerned chains are already longer and less noisy. Once the merging process is advanced, we can relax the parameters to connect more robust chains. Table 1 summarizes the parameter sets, one column per iteration.

Figure 11: Quantities used to measure the connectivity between _chains_. \(\delta R_{i}\) is the radial difference between two successive _chains_ along a _ray_ \(R_{i}\), and \(\delta N_{i}\) is the radial difference between two successive _nodes_ \(N_{i}\) and \(N_{i+1}\). Note that these nodes can be part of the same _chain_ or of two different _chains_ that may be merged. The support chain is \(Ch_{i}\); its visible chains are \(Ch_{j}\), \(Ch_{l}\) and \(Ch_{k}\). Chains \(Ch_{j}\) and \(Ch_{k}\) satisfy the similarity conditions.

5. We proceed in the same manner in the outward direction.

Figure 12: Nomenclature used for the connect-chains algorithm. Given the support chain \(Ch_{i}\), chains \(Ch_{j}\) and \(Ch_{k}\) are candidates to be connected. \(N_{j}^{n}\) are the nodes of \(Ch_{j}\), with \(n=0\) for the node corresponding to the endpoint to be connected. Similarly, we note \(N_{k}^{n}\) the nodes of \(Ch_{k}\). In red are the nodes created by an interpolation process between both endpoints. We represent the radial distance to the center of node \(N^{s}\) as \(r_{s}\).

The former ideas are implemented in Algorithms 8 and 9. Algorithm 8 defines the logic for iterating over the constraints defined in Table 1. In line 1, a square binary intersection matrix \(M\) is computed; precomputing \(M\) speeds up the procedure. Rows and columns of \(M\) span the chain list: chain \(Ch_{j}\) intersects chain \(Ch_{k}\) if \(M[j,k]=1\), and we say that two chains intersect if at least one ray crosses both chains.
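The following is a hedged Python sketch of the intersection matrix just described and of the combined check of Equation (9); the SimilarRadialDist test follows the range-overlap definition of item 2 above. Helper names are illustrative, not those of the released code.

```python
import numpy as np

def compute_intersection_matrix(chains):
    # Two chains intersect when at least one ray crosses both,
    # i.e. their node angles overlap.
    angle_sets = [{node.angle for node in ch.l_nodes} for ch in chains]
    n = len(chains)
    m = np.zeros((n, n), dtype=np.uint8)
    for j in range(n):
        for k in range(j, n):
            m[j, k] = m[k, j] = bool(angle_sets[j] & angle_sets[k])
    return m

def similar_radial_dist(set_j, set_k, th_distribution_size):
    # SimilarRadialDist: the intervals Range_j and Range_k must overlap.
    mu_j, sd_j = np.mean(set_j), np.std(set_j)
    mu_k, sd_k = np.mean(set_k), np.std(set_k)
    lo = max(mu_j - th_distribution_size * sd_j,
             mu_k - th_distribution_size * sd_k)
    hi = min(mu_j + th_distribution_size * sd_j,
             mu_k + th_distribution_size * sd_k)
    return lo <= hi

def connectivity_goodness(exist_chain_overlapping, regular_deriv,
                          similar_radial_dist_ok, radial_tol_ok):
    # Equation (9)
    return ((not exist_chain_overlapping) and regular_deriv
            and (similar_radial_dist_ok or radial_tol_ok))
```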
Lines 2 to 6 are iterated for each parameter set of Table 1. Line 3 defines the parameters for each iteration. The dictionary \(iteration\_params\) has keys for the node and chain lists. Both lists may be updated at each iteration because chains may be connected; when two chains are connected, \(M\) is updated as well. In the final iteration (\(i=9\)), the external border chain is added to the chain list in order to be used as a support chain. In line 4, the function which connects the chains is called, returning the updated node and chain lists and the matrix \(M\) after the connecting stage. Finally, in line 5, the node and chain lists are updated in the \(iteration\_params\) dictionary for the next iteration.

```
Input: l_ch_s, // chains list
 l_nodes_s, // nodes list
 c, // center of the spider web
 nr // number of total rays
Output: A list l_ch_c^f, f = 1, 2, ..., N_f, where each element is a chain; l_nodes_c^f, f = 1, 2, ..., N_f, where each element is a Node
1 M ← compute_intersection_matrix(l_ch_s, l_nodes_s, nr)
/* Loop implementing the connect-chains main logic, relaxing the restrictions at each iteration */
2 for i ← 1 to 9 do
3  iteration_params ← get_iteration_parameters(i) // Table (1)
4  l_ch_c, l_nodes_c, M ← connect_chain_main_logic(M, c, nr, iteration_params) // see Algorithm 9
5  update_list_for_next_iteration(l_ch_c, l_nodes_c)
6 end for
return l_ch_c, l_nodes_c
```
**Algorithm 8** Connect Chains

Algorithm 9 shows the main connectivity logic. The **State** class manages the support-chain iteration logic. It contains references to the lists of all the chains and nodes, and stores the similarity parameters and the intersection matrix \(M\). Essentially, the **State** class is the hub of our system, containing all the necessary information to operate. The _system_ comprises all the chains and nodes, plus the intersection matrix \(M\). The **State** class updates the chain and node lists and the matrix \(M\) whenever two chains are connected. This update is critical for our operation and signifies that the _system_ has been modified. Lines 1 and 2 are initializations. Initialization consists of:

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
\(Th_{Radial\_tolerance}\) & 0.1 & 0.2 & 0.1 & 0.2 & 0.1 & 0.2 & 0.1 & 0.2 & 0.2 \\
\hline
\(Th_{Distribution\_size}\) & 2 & 2 & 3 & 3 & 3 & 3 & 2 & 3 & 3 \\
\hline
\(Th_{Regular\_derivative}\) & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 1.5 & 2 & 2 & 2 \\
\hline
\(NeighbourhoodSize\) & 10 & 10 & 22 & 22 & 45 & 45 & 22 & 45 & 45 \\
\hline
\(derivFromCenter\) & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
\hline
\end{tabular}
\end{table}
Table 1: Connectivity parameters. Each column is the parameter set used at that iteration.

1. Sort the chain list by size (i.e., number of nodes) in descending order.
2. To speed up the search for visible chains from the chain support, we assign pointers to the _visible_ inward and outward chains at both endpoints (A and B) of each chain.

The loop between lines 4 and 20 is applied to all the chains as long as \(State_{i}\neq State_{i-1}\). The condition \(State_{i}=State_{i-1}\) is true when no connections are made during an iteration.
In line 4, we get a new support chain, \(Ch_{i}\), for the current iteration. The logic to get the next chain is grouped in the methods \(get\_next\_chain\) (line 4) and \(update\_system\_state\) (line 20), described in Algorithm 11 and Algorithm 10, respectively. Support chains are iterated following a neighborhood logic, for speed-up purposes, instead of iterating over the list sequentially. In line 5, the outward and inward visible chains are obtained and stored in the \(l\_s\_outward\) and \(l\_s\_inward\) lists. To this aim, we iterate over \(l\_ch_{s}\) and check whether the visibility chain pointers (\(B\_outward\), \(B\_inward\), \(A\_outward\), \(A\_inward\)) refer to \(Ch_{i}\). The loop between lines 7 and 19 explores the lists \(l\_s\_inward\) and \(l\_s\_outward\) with the iteration variable \(l\_candidates\_Ch_{i}\). First, the \(j\_pointer\) index is set to 0. Then, from lines 8 to 11, we set the variable \(location\) to signal whether \(l\_candidates\_Ch_{i}\) is the inward or the outward list. We iterate over the set \(l\_candidates\_Ch_{i}\) to look for similar chains, using the similarity criterion defined in Equation (9). The loop over the chains in the subset \(l\_candidates\_Ch_{i}\) goes from line 12 to 19. The current chain \(Ch_{j}\) inside the inner while loop is indexed by \(j\_pointer\). In line 14, all the chains in the subset \(l\_candidates\_Ch_{i}\) not intersecting chain \(Ch_{j}\) are selected. As rings do not intersect each other, candidates to be part of the same ring cannot intersect each other. Line 15 finds \(Ch_{k}^{b}\), the closest chain in \(l\_candidates\_Ch_{i}\) to endpoint B of \(Ch_{j}\) that satisfies the similarity constraints (Algorithm 16), and line 16 does the same for \(Ch_{k}^{a}\) concerning endpoint A of \(Ch_{j}\). Line 17 selects the one that is closest to its corresponding endpoint of \(Ch_{j}\). Line 18 calls the function that connects the closest one to the corresponding endpoint using the Euclidean distance between them (Algorithm 14); finally, in line 19, \(j\_pointer\) is updated. If two chains are connected during this iteration, then in the next iteration we iterate again over \(Ch_{j}\). Note that when two chains are connected, the candidate chain (\(Ch_{k}\)) is deleted from the list of candidate chains, and its nodes are added to chain \(Ch_{j}\). In line 20, we update the outer while-loop system variables to determine whether the process is finished (i.e., all chains are connected). In line 21, we iterate over all the chains in the list \(l\_ch_{s}\), and if a chain has enough nodes, we complete it, following Algorithm 12. Finally, we return the connected chain list and their nodes, \(l\_ch_{c}\) and \(l\_nodes_{c}\), respectively. The methods \(get\_next\_chain\) and \(update\_system\_state\) contain the logic to get \(Ch_{i}\) at the current iteration. The latter is described in Algorithm 10 (a method of \(State_{i}\)). As input, this function receives the support chain \(Ch_{i}\), the outward and inward candidate lists \(l\_s_{outward}\) and \(l\_s_{inward}\), and the system status object \(State_{i}\). This object mainly points to the important variables of the connecting module, such as the chain and node lists \(l\_ch_{s}\) and \(l\_nodes_{s}\). In line 1, the list \(l\_ch_{s}\) is extracted from \(State_{i}\). The system status changes if some chains are connected during the current iteration: if the chain list is longer at the beginning of the iteration than at the end, the system has changed. This is checked by the method \(system\_status\_change()\) of \(State_{i}\).
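A minimal sketch of this check, assuming the \(State\) attributes introduced in the text (\(l\_ch_{s}\), and \(size\_l\_chain\_init\) stored by Algorithm 11):

```python
class State:
    # Only the attributes relevant to the status check are sketched here.
    def __init__(self, l_ch_s):
        self.l_ch_s = l_ch_s                   # current chain list
        self.size_l_chain_init = len(l_ch_s)   # length at iteration start

    def system_status_change(self):
        # A connection deletes the merged chain, so the list gets shorter.
        return len(self.l_ch_s) < self.size_l_chain_init
```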
If the system status changed, lines 2 to 13 are executed. Because the system status has changed, the chains in \(l\_ch_{s}\) are no longer in order, so we sort them by size again (line 3). In line 4, we define a list \(l\_current\_iteration\) whose elements are all the chains involved in the current iteration: the ones belonging to the lists \(l\_s_{outward}\) and \(l\_s_{inward}\), as well as the support chain \(Ch_{i}\). In line 5, we sort them by size, and in line 6 we get the longest one, called \(longest\_chain\) (we index the list \(l\_current\_iteration\), which has all its elements sorted by size). If \(longest\_chain\) equals \(Ch_{i}\), we set as \(next\_chain\_index\) (for the next iteration) the index of the chain that follows the support chain \(Ch_{i}\) in size (line 8). If the support chain \(Ch_{i}\) is not the longest one (line 11), we set \(next\_chain\_index\) to the index of the chain that follows \(longest\_chain\) in size. Finally, if the system status did not change during the current iteration, in line 15 we repeat the same sentence as in line 8. The output \(next\_chain\_index\) is returned as an attribute of \(State_{i}\).

```
Input: M, // binary matrix with chain intersection info
 c, // center of the spider web
 nr // number of total rays
Parameters: l_ch_s, // chains list
 l_nodes_s, // nodes list
 th_radial_tolerance, // radial tolerance for connecting chains
 th_distribution_size, // standard deviations of the chain radial differences for connecting chains
 th_regular_derivative, // chain radial derivative threshold for connecting chains
 neighbourhood_size, // max angular distance allowed for connecting chains
 derivative_from_center // related to how nodes are interpolated
Output: A list l_ch_c^k, k = 1, 2, ..., N_f, where each element is a chain; l_nodes_c^k, k = 1, 2, ..., N_f, where each element is a Node
1 State_{i-1} ← 0
2 State_i ← init_system(l_ch_s, l_nodes_s, M, c, nr, th_radial_tolerance, th_distribution_size, th_regular_derivative, neighbourhood_size, derivative_from_center)
3 while State_i ≠ State_{i-1} do
4  Ch_i ← get_next_chain(State_i) // see Algorithm 11
5  l_s_outward, l_s_inward ← get_chains_in_and_out_wards(l_ch_s, Ch_i)
6  for l_candidates_Ch_i in (l_s_outward, l_s_inward) do
7   j_pointer ← 0
8   if l_candidates_Ch_i == l_s_inward then
9    location ← "inward"
10  else
11   location ← "outward"
12  while length(l_candidates_Ch_i) > j_pointer do
13   Ch_j ← l_candidates_Ch_i[j_pointer]
14   l_no_intersection_j ← get_non_intersection_chains(M, l_candidates_Ch_i, Ch_j)
15   Ch_k^b ← get_closest_chain_logic(State_i, l_candidates_Ch_i, Ch_j, l_no_intersection_j, Ch_i, location, B) // see Algorithm 15
16   Ch_k^a ← get_closest_chain_logic(State_i, l_candidates_Ch_i, Ch_j, l_no_intersection_j, Ch_i, location, A) // see Algorithm 15
17   Ch_k, endpoint ← select_closest_one(Ch_j, Ch_k^a, Ch_k^b)
18   connect_two_chains(State_i, Ch_j, Ch_k, l_candidates_Ch_i, endpoint, Ch_i) // see Algorithm 14
19   j_pointer ← update_pointer(Ch_j, Ch_k, l_candidates_Ch_i)
20 State_i, State_{i-1} ← update_system_status(State_i, Ch_i, l_s_outward, l_s_inward) // see Algorithm 10
21 l_ch_c, l_nodes_c ← iterate_over_chains_list_and_complete_them_if_met_conditions(State_i)
return l_ch_c, l_nodes_c
```
**Algorithm 9** Connect Chains Main Logic

```
Input: State_i, // class object with pointers to all the system objects
 Ch_i, // current support chain
 l_s_outward, // outward chain list
 l_s_inward // inward chain list
Output: chain for the next iteration, stored in the class State_i
1 l_ch_s ← State_i.get_list_chains()
2 if State_i.system_status_change() then
3  sort_chain_list_by_descending_size(l_ch_s)
4  l_current_iteration ← Ch_i + l_s_outward + l_s_inward
5  sort_chain_list_by_descending_size(l_current_iteration)
6  longest_chain ← l_current_iteration[0] // l_current_iteration is sorted by size
7  if Ch_i == longest_chain then
8   next_chain_index ← get_next_chain_index_in_list(l_ch_s, Ch_i)
9  end if
10  else
11   next_chain_index ← get_chain_index_in_list(l_ch_s, longest_chain)
12  end if
13 end if
14 else
15  next_chain_index ← get_next_chain_index_in_list(l_ch_s, Ch_i)
16 end if
17 State_i.next_chain_index ← next_chain_index
return
```
**Algorithm 10** update_system_status

Algorithm 11 implements the function \(get\_next\_chain\), executed at line 4 of Algorithm 9, in order to find the next support chain. It is a method of the class \(State_{i}\). In line 1, \(l\_ch_{s}\) is extracted from \(State_{i}\). In line 2, the next support chain \(Ch_{i}\) is extracted from the list \(l\_ch_{s}\) using the \(next\_chain\_index\) variable (output of Algorithm 10). In line 3, the size of the list \(l\_ch_{s}\) is stored in the variable \(size\_l\_chain\_init\), an attribute of \(State_{i}\). The longer the support chain, the better. So, in line 4, if \(Ch_{i}\) is large enough and no overlapping chains exist between its endpoints, the chain becomes a closed chain (ring) with size equal to \(Nr\), interpolating the nodes (Algorithm 12). Finally, we return the support chain \(Ch_{i}\) for the current iteration.

```
Input: State_i // class object with pointers to all the system objects
Output: next support chain
1 l_ch_s ← State_i.get_list_chains()
2 Ch_i ← l_ch_s[State_i.next_chain_index]
3 State_i.size_l_chain_init ← length(l_ch_s)
4 State_i.fill_chain_if_there_is_no_overlapping(Ch_i) // see Algorithm 12
return Ch_i
```
**Algorithm 11** get_next_chain

Algorithm 12 checks whether overlapping chains exist between the endpoints of a given chain and, if no overlapping chain exists, completes the chain. Lines 2 to 7 check the \(chain\) size: the function returns if the chain size is greater than or equal to the number of rays \(Nr\), or if \(chain\) is not closed. The class **Chain** has the method \(is\_closed()\), which returns True if the chain has more than \(threshold*Nr\) nodes; _threshold_ is a method parameter and, in line 5, is set to 0.9. In lines 8 and 10, we check that no other chain exists between the interpolated nodes; if one exists, we do not add new nodes to \(chain\). To check if a chain exists between both endpoints, we build a virtual band between the endpoints to be connected, as illustrated in Figure 13.
Let us name \(Ch_{j}\) and \(Ch_{k}\) the two chains to be connected, even if they can be part of the same (long) chain. Chain \(Ch_{i}\) is the support chain. Blue and green nodes define the virtual band between the endpoints to be connected. Red nodes are the nodes to be added to \(chain\) if there are no overlapping chains in the band. The width of the band is a percentage of the radial distance to the support chain \(Ch_{i}\). In our experiments, we set \(band\_width=0.1\) if the support chain is of type Normal and \(band\_width=0.05\) if the support chain is of type Center. The nodes in red are generated by interpolating between the endpoints with a line in polar coordinates (with origin in \(c\)). In line 8, we gather all the elements needed to check for overlapping chains: all the red nodes plus both endpoints are added to the list \(l\_nodes\), the support chain is \(Ch_{i}\), and \(endpoint\_type\) indicates the type of the \(Ch_{j}\) endpoint, in this case of type \(B\) (Figure 8). In line 9, the function \(exist\_chain\_overlapping\) checks whether overlapping chains exist in the defined band. We say that a chain exists in the band if some node within the band defined in Figure 13 belongs to a chain different from \(Ch_{j}\) or \(Ch_{k}\). In this line we pass \(chain\) twice because \(Ch_{j}\) is equal to \(Ch_{k}\) (Algorithm 13). Finally, if no overlapping chain exists, we add the red nodes to the global node list and to the inner \(chain\) node list (line 13). As mentioned, the \(l\_nodes\) list also includes both \(chain\) endpoints. The function \(add\_nodes\_list\_to\_system\) modifies the system in two ways: it incorporates the new nodes into the global node list (\(l\_nodes_{s}\)) and updates the visibility information of the chains that have endpoints on the rays in which new nodes were added.

```
Input: State_i, // class object with pointers to all the system objects
 chain // chain to be completed if conditions are met. Passed by reference
Output: void. If nodes are created, they are added to chain and State_i directly
1 l_ch_s ← State_i.get_list_chains()
2 if chain.size ≥ chain.Nr then
3  return
4 end if
5 if not chain.is_closed(threshold=0.9) then
6  return
7 end if
8 Ch_i, l_nodes, endpoint_type ← State_i.compute_all_elements_needed_to_check_if_exist_chain_overlapping(chain)
9 exist_chain ← exist_chain_overlapping(l_ch_s, l_nodes, chain, chain, endpoint_type, Ch_i) // see Algorithm 13
10 if exist_chain then
11  return
12 end if
13 State_i.add_nodes_list_to_system(chain, l_nodes)
return
```
**Algorithm 12** fill_chain_if_there_is_no_overlapping

Figure 13 describes how an overlapping chain is tested between two chains that are candidates to be connected, named here \(Ch_{j}\) and \(Ch_{k}\). Algorithm 13 shows the method. As input, it receives the chain list \(l\_ch_{s}\), over which we iterate to identify any chain overlapping a given band. The band is defined by a node list \(l\_nodes\), which includes the (interpolated) red nodes plus the \(Ch_{j}\) and \(Ch_{k}\) endpoint nodes (Figure 13). This band is built by the class **InfoVirtualBand**. The parameter \(band\_width\) is a percentage of the radial distance to the support chain \(Ch_{i}\): if \(Ch_{i}\) is of type center, \(band\_width\) is equal to 5%, otherwise to 10%. Once the width of the band is defined, we iterate over the nodes of \(l\_nodes\), generating two nodes for each one of them.
These two generated nodes belong to the same ray but have different radial distances to the center, as shown in the figure. Suppose the radial difference between the node belonging to \(l\_nodes\) and the node \(N_{i}\) over the support chain belonging to the same ray is \(\delta R_{i}\). Then the generated nodes have the following radial distances:

* \(R\left(N_{i}^{green}\right)\leftarrow\delta R_{i}*(1+band\_width)+R(N_{i})\)
* \(R\left(N_{i}^{blue}\right)\leftarrow\delta R_{i}*(1-band\_width)+R(N_{i})\)

where \(R(.)\) is the radial distance of a given node to the center, Equation (6). The band information (green and blue nodes) is stored in the \(info\_band\) object. The function \(exist\_chain\_in\_band\_logic\) returns the list of chains belonging to \(l\_ch_{s}\) that overlap the band defined by \(info\_band\). This is done by iterating over the chains belonging to \(l\_ch_{s}\) and checking whether they have nodes between the blue and green nodes. The chains that intersect the band are added to the list \(l\_chains\_in\_band\). Therefore, if the length of \(l\_chains\_in\_band\) is larger than 0, at least one overlapping chain exists over the given band.

```
Input: l_ch_s, // chain list
 l_nodes, // list of interpolated nodes plus the endpoints
 Ch_j, // source chain. See Figure 13
 Ch_k, // destination chain. See Figure 13
 endpoint_type, // source chain endpoint (A or B)
 Ch_i // support chain
Output: boolean. True if a chain belonging to l_ch_s exists in the band
1 info_band ← InfoVirtualBand(l_nodes, Ch_j, Ch_k, endpoint_type, Ch_i)
2 l_chains_in_band ← exist_chain_in_band_logic(l_ch_s, info_band)
3 exist_chain ← len(l_chains_in_band) > 0
return exist_chain
```
**Algorithm 13** exist_chain_overlapping
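A hedged sketch of the band construction just described; `r_node` and `r_support` stand for the radial distances (Equation (6)) of a node of \(l\_nodes\) and of the support-chain node on the same ray.

```python
def band_limits(r_node, r_support, band_width):
    """Return the inner and outer band radii for one interpolated node.
    band_width = 0.05 if the support chain is of type center, else 0.1."""
    delta_r = r_node - r_support
    upper = r_support + delta_r * (1 + band_width)   # outer band node
    lower = r_support + delta_r * (1 - band_width)   # inner band node
    return lower, upper

def node_in_band(r, lower, upper):
    # A foreign chain overlaps the band if one of its nodes falls inside.
    return lower <= r <= upper
```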
Algorithm 14 describes the procedure to connect two chains. In line 1, the new nodes connecting both chains are generated and added to chain \(Ch_{j}\). Nodes are generated through linear interpolation in polar coordinates. The chain visibility information over the rays in which new nodes are generated is also updated. In line 2, the nodes from chain \(Ch_{k}\) are added to chain \(Ch_{j}\), and the neighborhood information is updated, particularly the visible chains (as both chains are merged). The neighborhood chain list information is updated in line 3, and the chain \(Ch_{k}\) is deleted from all lists (line 4). The intersection matrix \(M\) is updated in line 5, as new intersections can appear, so the visibility chain pointers may need to be updated. Additionally, as one chain is deleted, the matrix \(M\) reduces its dimension by one. Finally, all chain ids are updated given the new situation (line 6). Chain ids are kept sequential, without holes between them, because the chain id is used for indexing the intersection matrix. All the objects involved in this logic are passed by reference and are updated, including the chain \(Ch_{j}\).

```
Input: State_i,
 Ch_j, // current chain to be connected
 Ch_k, // closest chain to be connected with Ch_j
 l_candidates_Ch_i, // set of chains from which to pick chains to connect with Ch_j
 endpoint, // Ch_j endpoint to be connected
 Ch_i // support chain
Output: void. Nodes are added to Ch_j and the system lists are updated in State_i
1 generate_new_nodes(State_i, Ch_j, Ch_k, endpoint, Ch_i)
2 updating_chain_nodes(State_i, Ch_j, Ch_k)
3 updating_chain_after_connect(State_i, Ch_j, Ch_k)
4 delete_closest_chain(State_i, Ch_k, l_candidates_Ch_i)
5 updating_intersection_matrix(State_i, Ch_j, Ch_k)
6 updating_chains_ids(State_i, Ch_k)
return
```
**Algorithm 14** connect_two_chains

Figure 13: Red nodes are the interpolated ones between the \(Ch_{j}\) and \(Ch_{k}\) chains. Blue chains are the ones that define the outer band (outward), while green defines the inward band. \(Ch_{i}\) is the support chain.

The method to find the closest candidate chain to be connected to chain \(Ch_{j}\), given a support chain \(Ch_{i}\), is implemented in \(get\_closest\_chain\_logic\) (Algorithm 15). It finds the chain \(Ch_{k}\) to be connected to the corresponding \(Ch_{j}\) endpoint and checks that a symmetric condition is fulfilled. The symmetric condition means that if chain \(Ch_{k}\) is the closest one to the \(Ch_{j}\) \(endpoint\), then \(Ch_{j}\) must be the closest one to the corresponding \(endpoint\) of \(Ch_{k}\). In line 2, \(get\_closest\_chain\) finds the chain nearest to the corresponding endpoint of \(Ch_{j}\), called \(Ch_{k}\), within the chain set \(l\_no\_intersection\_j\). In line 3, all the chains included in \(l\_candidates\_Ch_{i}\) that do not intersect \(Ch_{k}\) are added to the set \(l\_no\_intersection\_k\). From lines 4 to 9, the \(Ch_{k}\) endpoint type, named \(endpoint_{k}\), is defined. In line 10, the closest chain to \(Ch_{k}\), called \(symmetric\_chain\), is obtained from the set \(l\_no\_intersection\_k\). Finally, line 11 checks that \(symmetric\_chain\) is \(Ch_{j}\) and that the sum of the \(Ch_{k}\) and \(Ch_{j}\) lengths is smaller than \(Nr\). If all the former conditions are met, the chain \(Ch_{k}\) is returned.

```
Input: State_i,
 Ch_j, // current chain
 l_candidates_Ch_i, // set of chains visible from Ch_i
 l_no_intersection_j, // list of chains belonging to l_candidates_Ch_i that do not intersect Ch_j
 Ch_i, // support chain
 location, // location of the set l_candidates_Ch_i with respect to Ch_i (inward or outward)
 endpoint // Ch_j endpoint, A or B, to be connected
Output: closest chain to Ch_j
1 M ← State_i.M
2 Ch_k ← get_closest_chain(State_i, Ch_j, l_no_intersection_j, Ch_i, location, endpoint, M) // see Algorithm 16
3 l_no_intersection_k ← get_non_intersection_chains(M, l_candidates_Ch_i, Ch_k)
4 if endpoint == B then
5  endpoint_k ← A
6 end if
7 else
8  endpoint_k ← B
9 end if
10 symmetric_chain ← get_closest_chain(State_i, Ch_k, l_no_intersection_k, Ch_i, location, endpoint_k, M) // see Algorithm 16
11 if not (symmetric_chain == Ch_j and length(Ch_k) + length(Ch_j) ≤ Nr) then
12  Ch_k ← None
13 end if
return Ch_k
```
**Algorithm 15** get_closest_chain_logic
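The symmetric check of line 11 can be sketched as follows; `symmetric_chain` stands for the result of the second call to Algorithm 16, and the names are illustrative.

```python
def passes_symmetric_check(ch_j, ch_k, symmetric_chain, nr):
    # ch_k is kept only if ch_j is also the chain closest to ch_k's
    # opposite endpoint, and the merged chain fits in one turn (Nr nodes).
    return symmetric_chain is ch_j and ch_j.size + ch_k.size <= nr
```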
Algorithm 16 describes the logic to search for the closest candidate chain that satisfies the conditions described in item 3. In line 2, all the chains in the neighborhood are selected. The neighborhood is defined by the \(Ch_{j}\) endpoint and the \(neighbourhood\_size\) attribute of \(State_{i}\). For example, given endpoint A with an angle of 0 degrees and \(neighbourhood\_size=20\), all the chains included in \(l\_candidates\_Ch_{i}\) with an endpoint B angle in \([0-20,0]=[340,360]\) are selected and returned in ascending angular order with respect to the \(Ch_{j}\) endpoint. From lines 5 to 12, the main loop logic is defined. Two conditions allow exiting the loop: a chain satisfying the conditions of Equation (9) is found, or no chain in the set \(l\_sorted\_chains\_in\_neighbourhood\) satisfies the conditions. Equation (9) is implemented in the function \(connectivity\_goodness\_condition\) (line 7). If \(candidate\_chain\) satisfies the conditions, it could still happen that a chain exists in the subset \(l\_no\_intersection\_j\) that is closer to \(Ch_{j}\) in terms of the connectivity goodness conditions but further in angular distance. So in line 9, a control mechanism is added (Algorithm 17).

```
Input: State_i,
 Ch_j, // current chain
 l_no_intersection_j, // chains that do not intersect Ch_j, set of candidates to connect with Ch_j
 Ch_i, // support chain
 location, // inward or outward position of Ch_j with respect to the support chain
 endpoint, // Ch_j endpoint
 M // intersection matrix
Output: closest chain to Ch_j that satisfies the connectivity goodness conditions
1 neighbourhood_size ← State_i.neighbourhood_size
2 l_sorted_chains_in_neighbourhood ← get_chains_in_neighbourhood(neighbourhood_size, l_no_intersection_j, Ch_j, Ch_i, endpoint, location)
3 next_id ← 0
4 Ch_k ← None
5 while len(l_sorted_chains_in_neighbourhood) > next_id do
6  candidate_chain ← l_sorted_chains_in_neighbourhood[next_id]
7  pass_control, radial_distance ← connectivity_goodness_condition(State_i, Ch_j, candidate_chain, Ch_i, endpoint) // see Algorithm 18
8  if pass_control then
9   Ch_k ← get_the_closest_chain_by_radial_distance_that_does_not_intersect(Ch_j, endpoint, location, radial_distance, candidate_chain, M, l_sorted_chains_in_neighbourhood) // see Figure 14 and Algorithm 17
10  break
11 end if
12 next_id ← next_id + 1
end while
return Ch_k
```
**Algorithm 16** get_closest_chain

Figure 14: \(Ch_{i}\) is the support chain. The candidate chains for connection with \(Ch_{j}\) are \(Ch_{k}\) and \(Ch_{l}\). The angularly closest chain to \(Ch_{j}\) is the noisy \(Ch_{l}\), while \(Ch_{k}\) is the radially closest chain to \(Ch_{j}\); this means that \(\|r_{j}-r_{k}\|<\|r_{j}-r_{l}\|\), where \(r_{i}\) is the chain endpoint distance to the support chain.

The control mechanism (line 9, Algorithm 16) solving the issue shown in Figure 14 is implemented by Algorithm 17. In angular terms, the closest chain to \(Ch_{j}\) that satisfies Equation (9) is \(Ch_{l}\). However, another chain exists, \(Ch_{k}\), which is more similar but not the closest in terms of angular distance, Equation (7). To fix this, we get all the chains that intersect \(Ch_{l}\) and satisfy Equation (9) with \(Ch_{j}\), sort them by radial proximity to \(Ch_{j}\), Equation (6), and return as best candidate the closest one in terms of radial distance. Line 1 of Algorithm 17 gets the chains that intersect \(candidate\_chain\). Note that \(candidate\_chain\) is the closest chain in angular distance, Equation (7), to \(Ch_{j}\).
In line 2, the former chain subset, \(l\_intersections\_candidate\), is filtered by the condition of Equation (9). In line 3, the chains that satisfy that condition are sorted in ascending order by radial difference with the \(Ch_{j}\) endpoint. Therefore, in Figure 14, \(Ch_{k}\) would be the first element and \(Ch_{l}\) the second. In line 4, the chain radially closest to \(Ch_{j}\) is returned.

```
Input: State_i,
 Ch_j, // current chain
 Ch_i, // support chain
 endpoint, // Ch_j endpoint
 candidate_chain_radial_distance, // radial difference between the Ch_j and candidate_chain endpoints
 candidate_chain, // angularly closest chain to Ch_j
 M, // intersection matrix
 l_sorted_chains_in_neighbourhood // chains in the Ch_j endpoint neighborhood, sorted by angular distance
Output: closest chain to Ch_j that satisfies the connectivity goodness conditions
1 l_intersections_candidate ← intersection_chains(M, candidate_chain, l_sorted_chains_in_neighbourhood)
2 l_intersections_candidate_set ← get_all_chain_in_subset_that_satisfy_condition(State_i, Ch_j, Ch_i, endpoint, candidate_chain_radial_distance, candidate_chain, l_intersections_candidate)
3 sort_set_list_by_distance(l_intersections_candidate_set)
4 Ch_k ← l_intersections_candidate_set[0].cad
return Ch_k
```
**Algorithm 17** get_the_closest_chain_by_radial_distance_that_does_not_intersect
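A hedged sketch of the selection performed by Algorithm 17, assuming the candidates that already satisfy Equation (9) are given together with their radial distances to \(Ch_{j}\):

```python
def closest_by_radial_distance(candidates):
    """candidates: list of (chain, radial_distance) pairs that already
    satisfy the connectivity goodness condition with Ch_j."""
    if not candidates:
        return None
    # Prefer radial proximity (Equation (6)) over angular proximity.
    return min(candidates, key=lambda pair: pair[1])[0]
```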
```
Input:  State_i,
        Ch_j,             // current chain
        candidate_chain,  // chain closest to Ch_j
        Ch_i,             // support chain of Ch_j and candidate_chain
        endpoint          // Ch_j endpoint
Output: a boolean indicating if conditions are met, distribution_distance
        (radial difference between both chains)
    /* Parameter extraction, from Table 1 */
 1  th_radial_tolerance ← State_i.th_radial_tolerance
 2  th_distribution_size ← State_i.th_distribution_size
 3  th_regular_derivative ← State_i.th_regular_derivative
 4  derivative_from_center ← State_i.derivative_from_center
    /* Condition checks */
 5  distribution_distance ← None
 6  size_condition ← Ch_j.size + candidate_chain.size ≤ Nr
 7  endpoint_condition ← check_endpoints(Ch_i, Ch_j, candidate_chain, endpoint)
 8  similarity_condition, distribution_distance ← similarity_conditions(State_i,
        th_radial_tolerance, th_distribution_size, th_regular_derivative,
        derivative_from_center, Ch_i, Ch_j, candidate_chain, endpoint)  // Equation (9)
 9  check ← size_condition and endpoint_condition and similarity_condition
10  return check, distribution_distance
```
**Algorithm 18** connectivity_goodness_condition

Figure 15: Endpoint condition check. See the text for an explanation.

After the connect stage, some issues may remain; they are addressed in the postprocessing stage:

1. Some chains belonging to the same ring may remain without forming a closed chain. In many cases, this is due to small overlaps between chains. To solve this problem, we cut the overlapping chains so as to avoid intersections between them, and then try to reconnect the resulting chains that respect the connectivity goodness conditions. Figure 16 illustrates the problem.
2. Given two closed chains with a set of chains between them: suppose the total angular length of the non-overlapping chains between the rings is greater than 180 degrees. In that case, we consider that those uncompleted chains carry enough information about the ring, so we complete it. The completion is based on interpolation between both rings and the location of the existing chains. The chains that become part of the closed chain are the ones that meet the connectivity goodness conditions.
3. To test the connectivity goodness at this stage, we use the values in the last column of Table 1.

The method is described by Algorithm 19. It uses the center of the spider web and the chain and node lists. In line 1, the function is initialized:

* \(l\_ch_{c}\) is copied into a new list \(l\_ch_{p}\).
* Function variables are initialized as \(chain\_was\_completed=FALSE\), \(idx\_start=NONE\).

Figure 16: a) F03d disk after the connecting stage. b) There is a ring that cannot be closed because of chain intersection issues. c) The ring is closed after the postprocessing stage.

The main loop spans all the closed chains and includes lines 2 to 14. In line 3, the **DiskContext** object is instantiated. This object handles the logic to iterate over the regions delimited by the closed chains, going from the smaller to the bigger area (defined between the chain and the center). The two neighboring closed chains, and all the chains between them, are identified in line 5 (ctx.update()). Some information is stored in the following variables:

* \(inward\_ring\): the inward closed chain. If it is the first iteration, the chain is of type center (an artificial chain in the center with \(area=0\)).
* \(outward\_ring\): the outward closed chain. If the chain is of type border, this is the last iteration.
* \(l\_within\_chains\): the chain subset delimited by \(inward\_ring\) and \(outward\_ring\).

A ring defines an internal area from the chain to the center. All closed chains (rings) are sorted by their inner area. The current iteration index is stored in the variable _idx_ of the **DiskContext** object. The **shapely** Python library is used to get the chains in the regions between two rings. A region is determined by two shapely **Polygon** objects, one external and one internal. A **Polygon** is a list of points. Each closed chain is codified as a shapely **Polygon**. A method of the **Polygon** object allows us to find the set of uncompleted chains inside a region. The loop defined between lines 4 and 12 iterates over the closed chains. In line 6, the split-and-connect function is called. If a chain inside \(l\_within\_chains\) is closed during the call to \(split\_and\_connect\_chains\), we exit the inner loop; in that case, \(chain\_was\_completed\) is set to TRUE in line 6. The next iteration will work with the same \(inward\_ring\), but the formerly closed chain is used as \(outward\_ring\). The set \(l\_within\_chains\) is modified accordingly. In line 8, the \(idx\_start\) variable is set for the **DiskContext** object of the next iteration. The chains are connected if there is enough information between the inward and outward chains and the connectivity goodness conditions are met (line 10). In line 15, all the chains with enough nodes (more than \(0.95Nr\) nodes) are closed. In that case, new nodes are added to obtain a chain with Nr nodes. To this aim, we linearly interpolate between the inward and outward rings, going from one endpoint to the other. Finally, the list of all post-processed chains, both closed and not closed, \(l\_ch_{p}\), is returned. The method \(split\_and\_connect\_chains\) is described in Algorithm 20. It iterates over all the chains within a region to connect them. At every chain endpoint, the method cuts all the chains that intersect the ray passing through that endpoint and checks the connectivity goodness condition, Equation (8), between the divided chains to find connections between them. Notice that this module removes the chain overlapping constraint. The parameters used by this module are:

1. \(neighbourhood\_size=45\)
2. \(Th_{Radial\_tolerance}=0.2\)
3. \(Th_{Distribution\_size}=3\)
4. \(Th_{Regular\_derivative}=2\)

The parameter \(neighbourhood\_size\) defines the maximum angular distance (Equation (7)) within which to consider candidate chains departing from an endpoint, in both directions. Given a source chain (the current chain \(Ch_{j}\)), every chain in the region that overlaps \(Ch_{j}\) by more than \(neighbourhood\_size\) is not considered a candidate chain to connect, because a very long overlap suggests that the chain is probably part of another ring. In line 1, the method is initialized, and the variables \(connected\), \(completed\_chain\), and \(Ch_{j}\) are set to FALSE.
Also, the _chains_ in the \(l\_within\_chains\) list are sorted by size in descending order.
```
Input:  l_ch_c,     // chains list
        l_nodes_c,  // nodes list
        c           // center of the spider web
Output: a list of post-processed chains l_ch_p^k, k = 1, 2, ..., N_f
 1  l_ch_p ← initialization(l_ch_c)
 2  while True do
 3      ctx ← DiskContext(l_ch_p, idx_start)
 4      while len(ctx.completed_chains) > 0 do
 5          l_within_chains, inward_ring, outward_ring ← ctx.update()
 6          chain_was_completed ← split_and_connect_chains(l_within_chains, inward_ring,
                outward_ring, l_ch_p, l_nodes_c, ctx.neighbourhood_size)  // see Algorithm 20
 7          if chain_was_completed then
 8              idx_start ← ctx.idx
 9              break
10          connect_chains_if_there_is_enough_data(ctx, l_nodes_c, l_ch_p)
11          if ctx.exit() then
12              break
13      if not chain_was_completed then
14          break
15  complete_chains_if_required(l_ch_p)
16  return l_ch_p
```
**Algorithm 19** Postprocessing Main Logic

The chain nodes inside the region are stored in the \(l\_inward\_nodes\) list (line 2). The main loop, lines 3 to 15, iterates over the chains in \(l\_within\_chains\). The loop terminates when either the current chain \(Ch_{j}\) is closed or all chains in the list \(l\_within\_chains\) have been tested. In line 10, a new (non-treated) chain is extracted for the current iteration and stored in \(Ch_{j}\). The splitting and connecting logic, \(split\_and\_connect\_neighbouring\_chains\), is executed in line 13. The best candidate chain for endpoint A (\(Ch_{k}^{a}\)) is determined at this point, while the best candidate chain for endpoint B (\(Ch_{k}^{b}\)) is obtained in line 14. \(Ch_{i}^{a}\) and \(Ch_{i}^{b}\) are the support chains of chains \(Ch_{k}^{a}\) and \(Ch_{k}^{b}\), respectively. The radial distance (Equation (6)) of chain \(Ch_{k}^{a}\) (\(Ch_{k}^{b}\)) to \(Ch_{j}\) through endpoint \(A\) (\(B\)) is \(diff_{a}\) (\(diff_{b}\)). The radially closest candidate chain is connected in the function \(connect\_radially\_closest\_chain\) (line 15), and the maximum number of nodes (\(Nr=360\)) constraint is verified. The same node interpolation as in line 15 of Algorithm 19 is used.
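The choice made in \(connect\_radially\_closest\_chain\) can be sketched as follows; this is not the repository code, and the interface is a simplifying assumption (candidates as (name, size) pairs, with \(Nr\) bounding the merged size):

```
# Hedged sketch of the radially-closest connection choice: of the two best
# candidates found for endpoints A and B, connect through the one with the
# smaller radial difference, provided the merged chain fits in Nr nodes.
def connect_radially_closest(ch_j_size, cand_a, diff_a, cand_b, diff_b, nr=360):
    options = [(diff_a, 'A', cand_a), (diff_b, 'B', cand_b)]
    options = [(d, e, c) for d, e, c in options
               if c is not None and ch_j_size + c[1] <= nr]
    if not options:
        return None
    _, endpoint, cand = min(options, key=lambda t: t[0])
    return endpoint, cand[0]

print(connect_radially_closest(120, ('Ch_k_a', 90), 4.0, ('Ch_k_b', 80), 2.5))
# ('B', 'Ch_k_b') -- endpoint B has the smaller radial difference
```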
```
Input:  l_within_chains,  // uncompleted chains delimited by inward_ring and outward_ring
        inward_ring,      // inward ring of the region
        outward_ring,     // outward ring of the region
        l_ch_p,           // chain list
        l_nodes_c         // full nodes list
Parameter: neighbourhood_size  // size to search for chains that intersect the other endpoint
Output: boolean value indicating if a chain has been completed in the region
 1  connected, completed_chain, Ch_j ← initialization_step(l_within_chains)
 2  l_inward_nodes ← get_nodes_from_chain_list(l_within_chains)
 3  while True do
 4      if not connected then
 5          if Ch_j is not None and Ch_j.is_closed() then
 6              complete_chain_using_2_support_ring(inward_ring, outward_ring, Ch_j)
 7              completed_chain ← True
 8              Ch_j ← None
 9          else
10              Ch_j ← get_next_chain(l_within_chains)
11      if Ch_j == None then
12          break
13      Ch_k^a, diff_a, Ch_i^a ← split_and_connect_neighbouring_chains(l_inward_nodes,
            l_within_chains, Ch_j, A, outward_ring, inward_ring, neighbourhood_size)
14      Ch_k^b, diff_b, Ch_i^b ← split_and_connect_neighbouring_chains(l_inward_nodes,
            l_within_chains, Ch_j, B, outward_ring, inward_ring, neighbourhood_size,
            aux_chain=Ch_k^a)                       // see Algorithm 21
15      connected, Ch_i, endpoint ← connect_radially_closest_chain(Ch_j, Ch_k^a, diff_a,
            Ch_i^a, Ch_k^b, diff_b, Ch_i^b, l_ch_p, l_within_chains, l_nodes_c,
            inward_ring, outward_ring)
16  return completed_chain
```
**Algorithm 20** split_and_connect_chains

Given a source chain, \(Ch_{j}\), the logic for splitting neighborhood chains and searching for candidates is implemented by Algorithm 21. Chains that intersect the ray supporting the \(Ch_{j}\) endpoint are split. In line 1, the angle domain of \(Ch_{j}\) is stored in \(Ch_{j}\_angle\_domain\). In line 2, the variable \(Ch_{j}\_node\) stores the \(Ch_{j}\) node endpoint. The closest chain ring to the \(Ch_{j}\) endpoint (in Euclidean distance) is selected as the support chain, \(Ch_{i}\). From lines 4 to 7, we have the logic to get all chains in the region delimited by two rings that intersect the ray \(Ray_{e}\) supporting the endpoint. First, we store in \(l\_nodes\_ray\) all the nodes over \(Ray_{e}\), pinpointing the chains supporting those nodes, and keep those chains in \(l\_endpoint\_chains\). To be cut, the overlap between a chain in the set \(l\_endpoint\_chains\) and \(Ch_{j}\) must be smaller than \(neighbourhood\_size\). Otherwise, it is filtered out because that chain belongs to another ring (line 7). The method in line 8 effectively cuts the chosen chains. Once a chain is cut, it produces a \(sub\_chain\) not intersecting \(Ch_{j}\), stored as a candidate chain in \(l\_candidates\) (line 8) and in the list \(l\_no\_intersections\_j\) (line 9). In line 10, all chains that intersect \(Ch_{j}\) at the second endpoint and are in the neighborhood of the first endpoint are added to \(l\_candidates\). Also, the chains in the \(Ch_{j}\) chain neighborhood, which do not intersect its endpoint but intersect at the other endpoint, are split (line 11). In line 12, all chains in \(l\_no\_intersections\_j\) that are far away in terms of angular distance (Equation (7)) from the given endpoint of \(Ch_{j}\) are removed.
The nonintersecting chains in the endpoint neighborhood are stored in \(l\_filtered\_no\_intersection\_j\). In line 13, the chains in \(l\_filtered\_no\_intersection\_j\) are added to \(l\_candidates\). In line 14, all chains from \(l\_candidates\) which do not satisfy the connectivity goodness conditions of Equation (8) are discarded. In line 15, the method returns \(Ch_{k}\), the closest chain that meets the connectivity goodness conditions; \(diff\), the radial difference between the \(Ch_{k}\) and \(Ch_{j}\) endpoints (Equation (6)); and the support chain, \(Ch_{i}\).
```
Input:  l_within_nodes,    // nodes within region
        l_within_chains,   // uncompleted chains delimited by inward and outward rings
        Ch_j,              // current source chain, the one being connected if conditions are met
        endpoint,          // endpoint of source chain to find candidate chains to connect
        outward_ring,      // outward support chain ring
        inward_ring,       // inward support chain ring
        neighbourhood_size // total nodes size to search for chains that intersect the other endpoint
Output: candidate chain Ch_k, radial_distance, and closest support chain to endpoint, Ch_i
 1  Ch_j_angle_domain ← get_angle_domain(Ch_j)
 2  Ch_j_node ← get_node_endpoint(Ch_j, endpoint)
 3  Ch_i ← select_support_chain(outward_ring, inward_ring, Ch_j_node)
 4  l_nodes_ray ← select_nodes_within_region_over_ray(Ch_j, Ch_j_node, l_within_nodes)
 5  l_chain_id_ray ← extract_chains_ids_from_nodes(l_nodes_ray)
 6  l_endpoint_chains ← get_chains_from_ids(l_within_chains, l_chain_id_ray)
 7  l_filtered_chains ← remove_chains_with_higher_overlapping_threshold(Ch_j_angle_domain,
        l_endpoint_chains, neighbourhood_size)
 8  l_candidates ← split_intersecting_chains(Ch_j_node_angle, l_filtered_chains, Ch_j)
        // see Algorithm 22
 9  l_no_intersections_j ← get_chains_that_no_intersect_src_chain(Ch_j, Ch_j_angle_domain,
        l_within_chains, l_endpoint_chains)
10  add_chains_that_intersect_in_other_endpoint(l_within_chains, l_no_intersections_j,
        l_candidates, Ch_j, neighbourhood_size, endpoint)
11  l_candidates ← split_intersecting_chain_in_other_endpoint(endpoint, Ch_j,
        l_within_chains, l_within_nodes, l_candidates)
12  l_filtered_no_intersection_j ← filter_no_intersected_chain_far(l_no_intersections_j,
        Ch_j, endpoint, neighbourhood_size)
13  l_candidates ← l_candidates + l_filtered_no_intersection_j
14  l_ch_k_euclidean_distances, l_ch_k_radial_distances, l_ch_k ←
        get_chains_that_satisfy_similarity_conditions(Ch_i, Ch_j, l_candidates, endpoint)
15  Ch_k, diff ← select_closest_candidate_chain(l_ch_k, l_ch_k_euclidean_distances,
        l_ch_k_radial_distances, l_within_chains, aux_chain)
16  return Ch_k, diff, Ch_i
```
**Algorithm 21** split_and_connect_neighbouring_chains

The method \(split\_intersecting\_chains\) is described in Algorithm 22. Given an endpoint ray \(direction\), we iterate over all the intersecting chains in that direction. Given a chain to be split, \(inter\_chain\), and the node \(split\_node\), we divide the chain nodes into two chains, cutting the node list at the position of \(split\_node\). Remember that the list of nodes within a chain is sorted clockwise. After splitting the chain into \(sub\_ch1\) and \(sub\_ch2\), we select the sub-chain that does not intersect \(Ch_{j}\) (line 5). Then, if \(Ch_{k}\) intersects \(Ch_{j}\) at the other endpoint (which means that \(inter\_chain\) intersects \(Ch_{j}\) at both endpoints), we repeat the logic over the other endpoint, but for \(Ch_{k}\) instead of \(inter\_chain\). The split chain list is returned in \(l\_search\_chains\).
```
Input:  direction,         // endpoint direction for split chains
        l_filtered_chains, // list of chains to be split
        Ch_j               // source chain, the one being connected if conditions are met
Output: split chain list
 1  l_search_chains ← []
 2  for inter_chain in l_filtered_chains do
 3      split_node ← get_node_by_angle(direction)
 4      sub_ch1, sub_ch2 ← split_chain(inter_chain, split_node)
 5      Ch_k ← select_no_intersection_chain_at_endpoint(sub_ch1, sub_ch2, Ch_j, direction)
        /* Longest chains intersect two times */
 6      if intersection_between_chains(Ch_k, Ch_j) then
 7          split_node_2 ← get_node_by_angle(node_direction_2)
 8          sub_ch1, sub_ch2 ← split_chain(Ch_k, split_node_2)
 9          Ch_k ← select_no_intersection_chain_at_endpoint(sub_ch1, sub_ch2, Ch_j,
                node_direction_2)
10      change_id(Ch_k)
11      l_search_chains ← l_search_chains + Ch_k
12  return l_search_chains
```
**Algorithm 22** split_intersecting_chains

Another critical method from Algorithm 19 is \(connect\_chains\_if\_there\_is\_enough\_data\). When there is a unique chain longer than \(information\_threshold\) (180 in our experiments), we interpolate between its endpoints using both the inward and outward support chains. When there are several chains in the region, we get the largest subset of chains in the region that do not intersect each other. Suppose the chains in this subset cover an angular domain larger than \(information\_threshold\) (180 in our experiments). In that case, we iterate over the chains within the subset (sorted by size) and connect all the chains that satisfy the similarity condition, using the values in the last column of Table 1.

### Pith detection

The pith position is an input for the method. In the demo, it can be set manually or using the method proposed by Decelle et al. [4], which is available at the IPOL site.

## 4 Implementation

The implementation was made in Python 3.11.

### Input and Output

The demo requires as input a segmented image and the pith position. A command line execution example is:

```
$ python main.py --input IMAGE_PATH --cx CX --cy CY --output_dir OUTPUT_DIR --root REPO_ROOT_DIR
```

As output, the method returns a JSON file with the tree-ring positions in Labelme format [23]. The parameters of the program are the following:

* --input: path to the segmented image.
* --cx: pith x coordinate.
* --cy: pith y coordinate.
* --output_dir: directory where intermediate and final results are saved.
* --root: repository root path.

### Parameters

Table 2 summarises the parameters that the user can modify if needed. The program command line parameters are the following:

* --sigma: Gaussian filtering standard deviation \(\sigma\).
* --th_low: low threshold on the gradient module for the Canny Devernay filter.
* --th_high: high threshold on the gradient module for the Canny Devernay filter.
* --height: image height after the resizing process.
* --width: image width after the resizing process.
* --alpha: threshold on the collinearity of the edge filtering (Equation (3)).
* --nr: total number of rays.
* --min_chain_lenght: minimum chain length.

### Installation and Use

The main program language is Python. However, the edge detector stage uses C code from IPOL ([10]) and must be compiled. The source code is included in our repository because some minor modifications were made to extract the image gradient. The procedure to install the application is the following:

```
$ cd repo_root/
$ apt-get update && apt-get install -y $(cat .ipol/packages.txt) && rm -rf /var/lib/apt/lists/*
$ pip3 install --no-cache-dir -r requirements.txt
$ cd ./externas/devernay_1.0 && make clean && make
```

## 5 Datasets

To test the proposed method, we use two datasets:

1. The **UruDendro** dataset. An online database [16] with images of cross-sections of commercially grown _Pinus taeda_ trees from northern Uruguay, ranging from 13 to 24 years old, composed of twelve individual trees collected in February 2020 in Uruguay. Six trees correspond to a lumber company (denoted by the letter F), and the other six correspond to a plywood company (denoted by the letter L). Each company applied different silviculture practices. The individuals were identified by the letter of the company, a two-digit number, and a lowercase letter corresponding to the height where each cross-section was obtained. Heights were coded as follows: a = 10 cm above the ground, b = 165 cm, c = 200 cm, d = 400 cm, and e = 435 cm. The cross-sections were about 5 to 20 cm thick and were dried at room temperature without further preparation. As a consequence of the drying process, radial cracks and blue fungus stains developed in the cross-sections.
Surfaces were smoothed with a handheld planer and a rotary sander. Photographs were taken under different lighting conditions; cross-sections a, b, and e were photographed indoors and moistened to maximize the contrast between early- and late-wood. Pictures of dry cross-sections c and d were taken outdoors. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & stage & Parameter & Default \\ \hline Basic & edges detector & Gaussian filtering \(\sigma\) & 3 \\ \cline{2-4} & preprocessing & height & None \\ \cline{2-4} & & width & None \\ \cline{2-4} & filtering, sampling, connect & Pith Position & Required \\ \hline Advanced & edges detector & Gradient threshold low & 5 \\ \cline{2-4} & & Gradient threshold high & 15 \\ \cline{2-4} & edges filtering & collinearity threshold (\(\alpha\)) & 30\({}^{\circ}\) \\ \cline{2-4} & sampling & rays number (nr) & 360 \\ \cline{2-4} & & min chain length & 2 \\ \hline \end{tabular} \end{table} Table 2: Method parameters. Basic parameters can be modified by the user in the demo. The dataset has 64 images of different resolutions, described in Table 4. The collection contains several challenging features for automatic ring detection, including illumination and surface preparation variation, fungal infection (blue stains), knot formation, missing bark and interruptions in outer rings, and radial cracking. The proposed CS-TRD tree-ring detection method was checked against manual delineation of all rings by users of varying expertise using the Labelme tool [23]. At least two experts annotated all images. Figure 2 shows some images from the UruDendro dataset. 2. The **Kennel** dataset. Kennel et al. [13] made available a public dataset of 7 images of _Abies alba_ and presented a method for detecting tree rings. We were unable to process the annotations given by the authors. The characteristics of this dataset are described in Table 3. To evaluate the results, we label the dataset with the same procedure as for the UruDendro dataset. ## 6 Experiments and Results ### Metric To evaluate the method, we develop a metric based on the one proposed by Kennel et al. [13]. To decide whether a ring is detected, we define an influence area for each ring as the set of pixels closest to that ring. For each ray, the frontier is the middle point between the nodes of consecutive ground truth rings. Figure 17.b shows the influence area for disk F03d. Each ground truth ring is colored in black and is the center of its influence area. Figure 17.a shows the red detections and the green ground truth marks for the same image. The influence area associates a detected curve with a ground truth ring. In both cases, the nodes are associated with the \(Nr\) rays. Given a ground truth ring, we assign it to the closest detection using: \[Dist=\sqrt{\frac{1}{Nr}\sum_{i=0}^{Nr-1}\left(dt_{i}-gt_{i}\right)^{2}} \tag{10}\] where \(i\) represents the ray direction, \(dt_{i}\) is the radial distance (Equation (6)) of the detected node \(i\), and \(gt_{i}\) is the radial distance (Equation (6)) of the corresponding ground truth node \(i\). The closest detection can be extremely far away. To assign a detected curve to a ground truth ring, we must guarantee that the given chain is the closest one to the ring and that it is close enough. To this aim, we use the influence area of each ground truth ring (see Figure 17). Given a detected curve, we compute the proportion of nodes of that chain that belong to the influence region of the closest ring.
If that measure exceeds a parameter (\(th\_pre=60\%\)), we assign the detected curve to the ground truth ring. If not, the detected curve is not assigned to any ground truth ring. In other words, at least 60% of the nodes of a detected curve must be in the influence area of the ground truth ring for the curve to be assigned to it and for us to say that we have detected that ring (hence to declare a true positive). \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Image** & **Marks** & **Rings** & **Height (pixels)** & **Width (pixels)** \\ \hline **AbiesAlba1** & \(4\) & \(52\) & \(1280\) & \(1280\) \\ \hline **AbiesAlba2** & \(2\) & \(22\) & \(1280\) & \(1280\) \\ \hline **AbiesAlba3** & \(3\) & \(27\) & \(1280\) & \(1280\) \\ \hline **AbiesAlba4** & \(1\) & \(12\) & \(1024\) & \(1024\) \\ \hline **AbiesAlba5** & \(3\) & \(30\) & \(1280\) & \(1280\) \\ \hline **AbiesAlba6** & \(2\) & \(21\) & \(1280\) & \(1280\) \\ \hline **AbiesAlba7** & \(1\) & \(48\) & \(1280\) & \(1280\) \\ \hline \end{tabular} \end{table} Table 3: The Kennel dataset: the name and dimensions of each image, as well as the number of expert marks and the number of rings in each one. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Image** & **Marks** & **Rings** & **Height (pixels)** & **Width (pixels)** \\ \hline \end{tabular} \end{table} Table 4: The UruDendro dataset: the name and dimensions of each image, as well as the number of expert marks and the number of rings in each one. Figure 17.c shows the error in pixels between the ground truth rings and the detected curves assigned to them. The red color represents a low error, while the yellow-green color represents a high error. Note how the error is concentrated around the knot, which perturbs the precise detection of some rings. Once all the detected chains are assigned to the ground truth rings, we calculate the following indicators: 1. True Positive (TP): the detected closed chain is assigned to a ground truth ring. 2. False Positive (FP): the detected closed chain is not assigned to any ground truth ring. 3. False Negative (FN): a ground truth ring is not assigned to any detected closed chain. Finally, the Precision measurement is given by \(P=\frac{TP}{TP+FP}\), the Recall measurement by \(R=\frac{TP}{TP+FN}\), and the F-Score by \(F=\frac{2PR}{P+R}\). Results for the Kennel dataset are shown in Table 6 and for the UruDendro dataset in Table 7. For example, in the image _F03d_, the method fails to detect two ground truth rings, so \(FN=2\). The other rings are correctly detected. The tables also show the execution time for each image and the RMSE error (Equation (10)) between the detected and ground truth rings. #### 6.2.1 Edge detector optimization stage The algorithm relies heavily on the edge detector stage. The first experiment tests different \(\sigma\) values for the Canny Devernay edge detector to find the one that maximizes the F-Score on the UruDendro dataset. This dataset presents significant variations in image resolution and allows us to study the global performance with different dimensions of the input images. We compute the average F-Score for the original image sizes and when all the images in the dataset are scaled to several sizes: 640x640, 1000x1000, 1500x1500. Results are shown in Figure 18. The best result (average F-Score of 0.89) is obtained for size 1500x1500 and \(\sigma=3.0\). The execution time varies with image size, as shown in the figure.
The average execution time for the 1500x1500 size is 17 seconds. The same experiment is done over the Kennel et al. [13] dataset. Results are shown in Figure 19. As before, the best F-Score is obtained for the 1500x1500 resolution, but with \(\sigma=2.5\). The lower optimal \(\sigma\) can be related to the Kennel dataset having images with more rings per disk, 30 on average, while the UruDendro dataset has 19 rings per disk on average. The more rings in a disk, the thinner they are. Table 5 summarizes the results of this experiment for both datasets. Figure 19: Experiment results over the Kennel dataset. Each curve represents a different image resolution: 640x640, 1000x1000, 1500x1500, and original resolution. The blue curve refers to the original image size. Figure 18: Experiment results over the UruDendro dataset. Each curve represents a different image size: 640x640, 1000x1000, 1500x1500, and original resolution. The blue curve refers to the original image size. #### 6.2.2 Pith position sensitivity The next experiment measures how sensitive the method is to errors in the pith estimation. Figure 20 shows the 48 different pith positions used in this experiment. We selected eight different pith positions over six rays. These radially displaced pith positions are selected as follows: * Three positions are marked inside ring 1, with an error over the ray direction of 25%, 50%, and 75%. * One is marked on ring 1. * Three positions are marked between the first and second rings, with increasing errors of 25% over the ray direction. * Another position is marked on ring 2. We run the algorithm for each disk with each of these pith positions, giving 48 results. We get the average RMSE and F-Score measures over the six ray directions for each radially displaced pith position, i.e., the mean for the six pith positions that are 25% off the center, and so on. In this manner, we have two vectors (one for RMSE and the other for the F-Score) with eight coordinates each. Experiments are made over the UruDendro dataset, using an image size of 1500x1500 and \(\sigma=3.0\). Figure 21 shows the average F-Score over the whole dataset for each error position, as well as the average RMSE over the same dataset. As expected, the F-Score decreases as the error in the pith estimation increases, while the RMSE is less sensitive to pith error. #### 6.2.3 Metric precision threshold In this experiment, we see how the performance varies with different values of \(th\_pre\). This parameter controls the proportion of nodes of a detected ring that must lie within the Influence Area for the ring to be considered in the detection-to-ground-truth assignment step. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline dataset & image sizes & \(\sigma\) & P & R & F & RMSE & ExecTime(s) \\ \hline UruDendro dataset & 1500x1500 & 3.0 & 0.95 & 0.86 & 0.89 & 5.27 & 17.3 \\ \hline Kennel dataset & 1500x1500 & 2.5 & 0.97 & 0.97 & 0.97 & 2.4 & 11.1 \\ \hline \end{tabular} \end{table} Table 5: Mean performance values for both datasets at the optimal image resolution. Figure 20: Pith position experiment. Given six ray directions, eight different pith positions are marked. The method is executed for each marked pith position. Ground truth rings are marked in green. Figure 22 and Figure 23 show results for the UruDendro and Kennel datasets, respectively. As can be expected, higher precision implies higher RMSE but lower F-Score. Given these results, we fix \(th\_pre=60\%\) as a default value, which seems a good compromise.
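The assignment metric of Section 6.1 and the aggregate scores reported in the tables below reduce to a few lines of code. The following is a minimal sketch (not the repository implementation; representing each ring as a list of \(Nr\) radial distances, one per ray, is an assumption):

```
# Sketch of Equation (10) and the precision/recall/F-score used in Tables 6-7.
import math

def ring_rmse(detected, ground_truth):
    """Equation (10): RMSE between node radial distances along the Nr rays."""
    nr = len(ground_truth)
    return math.sqrt(sum((d - g) ** 2 for d, g in zip(detected, ground_truth)) / nr)

def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(ring_rmse([10.0, 12.0, 11.0], [10.0, 11.0, 11.0]))  # ~0.577
print(round(f_score(tp=18, fp=1, fn=2), 2))               # 0.92
```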
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline **Name** & **TP** & **FP** & **TN** & **FN** & **P** & **R** & **F** & **RMSE** & **Time (sec.)** \\ \hline AbiesAlba1 & 49 & 1 & 0 & 3 & 0.98 & 0.94 & 0.96 & 3.66 & 18.01 \\ \hline AbiesAlba2 & 20 & 0 & 0 & 2 & 1.00 & 0.91 & 0.95 & 0.95 & 9.21 \\ \hline AbiesAlba3 & 26 & 1 & 0 & 1 & 0.96 & 0.96 & 0.96 & 1.30 & 8.93 \\ \hline AbiesAlba4 & 11 & 0 & 0 & 1 & 1.00 & 0.92 & 0.96 & 5.88 & 8.96 \\ \hline AbiesAlba5 & 30 & 1 & 0 & 0 & 0.97 & 1.00 & 0.98 & 1.29 & 9.06 \\ \hline AbiesAlba6 & 20 & 0 & 0 & 1 & 1.00 & 0.95 & 0.98 & 1.26 & 7.63 \\ \hline AbiesAlba7 & 45 & 0 & 0 & 3 & 1.00 & 0.94 & 0.97 & 3.58 & 13.78 \\ \hline Average & & & & & 0.99 & 0.95 & 0.97 & 2.56 & 10.80 \\ \hline \end{tabular} \end{table} Table 6: Results on the Kennel dataset with \(th\_pre=60\%\). Images resized to 1500x1500 and edge detector parameter \(\sigma=2.5\). Figure 23: Performance metrics computed for different values of the \(th\_pre\) parameter on the Kennel dataset. Figure 24: Method result for disk AbiesAlba1 (zoom in over the pith center). a) Filter stage output. b) Chain stage output. c) Postprocessing stage output. At a) and b), we can see how the method fails to detect edges near the pith. This is likely because the \(\sigma\) threshold is too high for this resolution. At c), we can see that the red chain was not closed because its size is smaller than \(information\_threshold\) (180). Figure 25 illustrates some results of the CS-TRD tree-ring detection algorithm over the UruDendro dataset. Disks F02a, F02b, F02c, F02d, F02e, F03c, and L03c have an F-Score above 93%, which indicates that the method detects almost all the disk rings. Metric results over the database are shown in Table 7. The algorithm successfully detects rings over cracks (disks F02a, F02b, and F02e) and knots (disk L03c). Figure 26 illustrates how the method behaves in the presence of knots. It fails to detect the first (pith) and third rings. In addition, it detects a false ring over the knot and fails to detect the last ring. Despite these errors, the method succeeds in detecting 18 rings, giving an F-Score of 90%. Figure 27 illustrates our method's results for disk L09e. Despite the presence of two important cracks and some fungus stains, the method successfully detects 13/15 rings, giving an F-Score of 93%. As seen in Table 7, the CS-TRD algorithm generally works well, even though it has problems for some images. Let's discuss some examples, such as images L02b, F07e, and L02d. Figure 28 illustrates the results for disk L02b. Figure 28c shows the detected rings in red and the ground truth in green. Four detections are closed curves and are judged correct (TP), while two are judged incorrect (FP). Counting from the center to the border, the first detection is correct, and the next two are incorrect, corresponding to the second and third rings. Analyzing the chain stage output shown in Figure 28a, it seems clear that there is not enough edge information to see the rings due to the fungus stain. A similar situation happens for disk F07e, Figure 29. There is a strong fungus stain presence, which means that some rings do not have enough edges to form a closed curve. The method results are slightly better for disk L02d, with an F-Score of 58%, in the presence of the same fungus stain issue as the former disks.
Figure 30a illustrates this case and how the fungus perturbs the edge detection step in the middle of the disk. ## 7 Conclusions and future work An automatic method (besides the pith detection, for which an automatic algorithm exists [4]) for tree-ring detection in cross-section wood images is presented, which achieves an F-Score of 97% on the Kennel dataset and an F-Score of 89% on the (more difficult) UruDendro dataset. The method runs in an average of 17 seconds per image on the UruDendro dataset and 11 seconds on the Kennel dataset. Compared with the 3 hours on average that each annotator needed to delineate a disk manually, this is a vast improvement. The CS-TRD method can be fully implemented in C++ to accelerate the execution time compared to a Python implementation3. This will allow using the method in real-time applications. Footnote 3: [https://medium.com/agents-and-robots/the-bitter-truth-python-3-11-vs-cython-vs-c-performance-for-simulations-babc85cdef5](https://medium.com/agents-and-robots/the-bitter-truth-python-3-11-vs-cython-vs-c-performance-for-simulations-babc85cdef5) In the future, we will include the automatic detection of the pith, extend the method to other tree species, and explore machine-learning techniques to learn the patterns in the data. Figure 25: Some results for the UruDendro dataset. Figure 28: Method result for disk L02b. Note how the fungus stain perturbs the edge detection step. Figure 27: Method result for disk L09e. Note how the method succeeds in detecting almost all the rings (FN=2 and FP=0) despite the cracks and fungus stain. Figure 26: Method result for disk F04c. Note how the knot perturbs the edge detection step. Figure 30: Method result for disk L02d. Note how the fungus stain perturbs the edge detection step. Figure 29: Method result for disk F07e. Note how the fungus stain perturbs the edge detection step. Figure 31: Results over images from the Kennel dataset with 1500x1500 image size and \(\sigma=2.5\). ### Image Credits Images from the UruDendro dataset. Images taken from [8] (original images from the Kennel dataset).
2310.12270
Towards Simpler Sorting Networks and Monotone Circuits for Majority
In this paper, we study the problem of computing the majority function by low-depth monotone circuits and a related problem of constructing low-depth sorting networks. We consider both the classical setting with elementary operations of arity $2$ and the generalized setting with operations of arity $k$, where $k$ is a parameter. For both problems and both settings, there are various constructions known, the minimal known depth being logarithmic. However, there is currently no known construction that simultaneously achieves sub-log-squared depth, effective constructability, simplicity, and has a potential to be used in practice. In this paper we make progress towards the resolution of this problem. For computing majority by standard monotone circuits (gates of arity 2) we provide an explicit monotone circuit of depth $O(\log_2^{5/3} n)$. The construction is a combination of several known and not too complicated ideas. For arbitrary arity of gates $k$ we provide a new sorting network architecture inspired by a representation of the inputs as a high-dimensional cube. As a result we provide a simple construction that improves the previous upper bound of $4 \log_k^2 n$ to $2 \log_k^2 n$. We prove a similar bound for the depth of the circuit computing majority of $n$ bits consisting of gates computing majority of $k$ bits. Note that for both problems there is an explicit construction of depth $O(\log_k n)$ known, but the construction is complicated and the constant hidden in the $O$-notation is huge.
Natalia Dobrokhotova-Maikova, Alexander Kozachinskiy, Vladimir Podolskii
2023-10-18T19:10:59Z
http://arxiv.org/abs/2310.12270v1
# Towards Simpler Sorting Networks and Monotone Circuits for Majority ###### Abstract In this paper, we study the problem of computing the majority function by low-depth monotone circuits and a related problem of constructing low-depth sorting networks. We consider both the classical setting with elementary operations of arity 2 and the generalized setting with operations of arity \(k\), where \(k\) is a parameter. For both problems and both settings, there are various constructions known, the minimal known depth being logarithmic. However, there is currently no known construction that simultaneously achieves sub-log-squared depth, effective constructability, simplicity, and has a potential to be used in practice. In this paper we make progress towards the resolution of this problem. For computing majority by standard monotone circuits (gates of arity 2) we provide an explicit monotone circuit of depth \(O(\log_{2}^{5/3}n)\). The construction is a combination of several known and not too complicated ideas. For arbitrary arity of gates \(k\) we provide a new sorting network architecture inspired by a representation of the inputs as a high-dimensional cube. As a result we provide a simple construction that improves the previous upper bound of \(4\log_{k}^{2}n\) to \(2\log_{k}^{2}n\). We prove a similar bound for the depth of the circuit computing majority of \(n\) bits consisting of gates computing majority of \(k\) bits. Note that for both problems there is an explicit construction of depth \(O(\log_{k}n)\) known, but the construction is complicated and the constant hidden in the \(O\)-notation is huge. ## 1 Introduction A sorting network receives an array of numbers and outputs the same numbers in non-decreasing order. It consists of _comparators_, each of which is given some fixed pair of array entries as an input, and it swaps them if they are not in non-decreasing order. The main parameters of a sorting network are the size, that is, the number of comparators, and the depth, that is, the number of layers in the network, where each layer consists of several comparators applied to disjoint pairs of variables. Sorting networks are a classical model in theoretical computer science with vast literature devoted to them, see, for example, [4, 1, 22, 26, 24, 17, 28, 7]; see also Knuth's book [18] and the book by Baddar and Batcher [3]. Despite considerable efforts, there are still many open problems related to sorting networks. In this paper, our main interest is the depth of sorting networks. There is a related setting of computing the majority function by monotone Boolean circuits. The majority function receives as input a sequence of \(n\) bits and outputs \(1\) if and only if more than a half of the inputs are \(1\)'s. Monotone Boolean circuits consist of AND and OR gates of fan-in \(2\). Constructing a monotone Boolean circuit for the majority function can only be easier than constructing a sorting network. This is because a sorting network can be transformed into a monotone Boolean circuit which computes majority and has the same depth. Indeed, if we restrict inputs to \(\{0,1\}^{n}\), then each comparator can be simulated by a pair of AND and OR gates (AND computes the minimum of two Boolean inputs and OR computes the maximum), and the majority is just the median bit of the sorted array. For the depth of sorting networks, there are several simple and practical constructions of depth \(\Theta(\log^{2}n)\)[18, 4, 24].
A construction with \(O(\log n)\) depth was given by Ajtai, Komlós and Szemerédi [1] and is usually referred to as the AKS sorting network. Although their bound on the depth is asymptotically optimal, the construction is very complicated and impractical due to a large constant hidden in the O-notation. There are some simplifications and improvements of this construction [26, 28], but the construction is still elaborate and is not practical. As for the lower bounds, there is a folklore \((2-o(1))\log_{2}n\) depth lower bound for networks sorting \(n\) numbers. It was improved by Yao [34] and later by Kahale et al. [17], with the current record of about \(3.27\log_{2}n\). As we discussed above, any construction of a sorting network translates into a monotone circuit for majority of the same depth. In particular, we get an \(O(\log n)\)-depth monotone circuit for majority from the AKS sorting network. Yet again, the resulting circuit has the same disadvantages as the AKS construction. But in contrast to sorting networks, there is an alternative construction of a monotone depth-\(O(\log n)\) Boolean circuit for majority due to Valiant [32]. His construction is simple and has a reasonable constant hidden in the O-notation, but it is randomized. It was partially derandomized and made closer to practice by Hoory, Magen and Pitassi [14]. But still, all known fully deterministic constructions that are simple and practical are of depth \(\Theta(\log^{2}n)\). Thus, there is an open problem, for both sorting networks and monotone circuits for majority, to come up with a simple and deterministic construction of sub-log-squared depth. One potential approach to this is to consider sorting networks with comparators that have \(k>2\) inputs. We will call them \(k\)-sorting networks. They have appeared in the literature since the 70s; the setting is mentioned already in Knuth's book [18, Problem 5.3.4.54], followed by numerous works [30, 25, 5, 23, 10, 21, 29, 13, 35]. They are usually studied to better understand the structure of ordinary sorting networks (for example, a version of the AKS sorting network with an improved constant relies on \(k\)-sorting networks in intermediate constructions [8]). In particular, \(k\)-sorting networks are closely related to recursive constructions of sorting networks. Having a good construction of a \(k\)-sorting network, one can apply it to its own comparators, getting a construction with smaller \(k\), until eventually \(k\) becomes \(2\), and we get an ordinary sorting network. Parker and Parberry [25] constructed a simple and potentially practical \(k\)-sorting network of depth \(\leqslant 4\log_{k}^{2}n\) (in the case when \(n\) is an integer power of \(k\)). At the same time, as Chvátal shows in his lecture notes [8], the AKS sorting network also generalizes to this setting, giving a construction of depth \(O(\log_{k}n)\). However, as with the AKS sorting network itself, this construction is complicated and impractical. So the search for simple constructions continues. As for the lower bounds, any \(k\)-sorting network with \(n\) inputs must have depth at least \(\log_{k}n\), because otherwise the outputs cannot be connected to all \(n\) inputs. Dobrokhotova-Maikova et al. [11] improved this bound to roughly \(2\log_{k}n\). They also found optimal values of \(k\) for small values of the depth \(d\).
More specifically, for sorting networks of depth \(d=1,2\) they show that \(k\) cannot be smaller than \(n\), for \(d=3\) the optimal value is \(k=\left\lceil\frac{n}{2}\right\rceil\), and for \(d=4\) the optimal value is \(k=\Theta(n^{2/3})\). These results indicate that small-depth \(k\)-sorting networks are not enough for the iterative approach to a sub-log-squared sorting network, and we need either good \(k\)-sorting network constructions of depth greater than \(4\), or additional ideas. Just as with sorting networks, we can consider circuits for the majority function that are constructed from _threshold_ gates of fan-in at most \(k\). A threshold gate is a Boolean function that first sorts its input bits in non-decreasing order, and then outputs the \(i\)th one from the beginning, for some fixed \(1\leq i\leq k\). For \(k=2\), AND and OR are the only two threshold functions. In general, there are \(k\) threshold functions of fan-in \(k\). By taking one copy of each, we get a comparator of arity \(k\). Thus, as in the case \(k=2\), a \(k\)-sorting network can be transformed into a circuit of the same depth which computes majority and consists of threshold gates of fan-in \(k\). In other words, constructing a \(k\)-sorting network can only be harder than constructing a circuit for majority with threshold gates of fan-in \(k\). There is a line of work, initiated by Kulikov and Podolskii [20], which addresses the following question: given \(d\) and \(n\), what is the minimal \(k\) for which there exists a circuit with threshold gates of fan-in \(k\) which has depth \(d\) and computes majority on \(n\) bits? The paper [20] shows that, up to a polylogarithmic factor, \(k\geq n^{14/(7d+6)}\). In subsequent works, special attention was given to the case \(d=2\). In this case, the lower bound of [20] is \(k\geq n^{14/20}\). It was improved to \(k\geq n^{4/5}\) by Engels et al. [12]. Then a linear lower bound \(k\geq n/2-o(n)\) was obtained by Hrubes et al. [15]. An upper bound \(k\leq 2n/3+O(1)\) was given in [27]. Let us also mention an upper bound \(k\leq n-2\) for circuits that only use majority gates [19, 2]. Now, for \(d\geq 3\) the situation is less clear. For \(d=3\), the paper [20] gave an upper bound \(k=O(n^{2/3})\). In turn, their lower bound in this case is of order \(n^{14/27}\). We are not aware of any non-trivial upper bound for \(d\geq 4\). Our results.In this paper we make progress towards better constructions of monotone circuits for majority and sorting networks. First, we give an explicit and reasonably simple construction of a monotone circuit for majority of depth \(O(\log^{5/3}n)\). **Theorem 1**.: _There is a polynomial-time constructible monotone circuit for majority of polynomial size and depth \(O(\log^{5/3}n)\)._ Our proof combines several relatively simple steps. We start with a partial derandomization of Valiant's construction using some ideas from the paper by Cohen et al. [9]. Next, we apply two operations to the resulting randomized circuit several times. The first of them is a brute-force derandomization that searches through all possible random bits of the randomized circuit. The second one is a composition with a \(k\)-sorting network of depth \(O(\log_{k}^{2}n)\). For such a network we can use either the construction of Parker and Parberry [25] or, for a better constant, our next result. In our second result we come up with a new architecture for \(k\)-sorting networks.
As an application of this architecture we construct a \(k\)-sorting network of depth \(2\log_{k}^{2}n\), improving the constant compared to the result of [25]. More precisely, we prove the following theorem. **Theorem 2**.: _For any \(n\) and for any \(k\) such that \(\log k=\omega(\log\log n)\) (or, to put it differently, \(k\) grows faster than any \(\mathsf{polylog}(n)\)), there exists a \(k\)-sorting network of depth at most \((2+o(1))\log_{k}^{2}n\)._ The key idea behind this construction is to represent the input array as a hypercube of high dimension and sort various sections of this cube. We note that the idea of representing an array as a multidimensional structure is not new; for example, Leighton [22] in his ColumnSort represented the array as a two-dimensional table. However, in our construction it is important that we use dimension greater than \(2\), since we use the fact that the sections of the cube have non-trivial intersections. On the conceptual level, the main novelty in our construction is the notion of \(s\)-sorting. We call an array \(s\)-sorted if the whole array is sorted correctly apart from some interval of length at most \(s\). Most (if not all) log-squared-depth sorting network constructions adopt the divide-and-conquer strategy. The \(O(\log_{k}^{2}n)\)-depth construction in [25] is no exception: to sort an array of size \(n\), it splits the array into subarrays of size \(n/k\), sorts them recursively, and merges them afterward. However, merging \(k\) subarrays using a \(k\)-sorting network is relatively expensive. To improve over the previous construction, we work with \(s\)-sorted subarrays instead. We show how to merge them effectively (using the hypercube idea) and then show how we can build a recursive construction based on them. To additionally illustrate applications of our construction, we consider constant-depth sorting networks and circuits for majority. We show that there is a \(\operatorname{MAJ}_{k}\)-circuit for \(\operatorname{MAJ}_{n}\) with \(k=O(n^{3/5})\). For a second application, we address the question of \(k\)-sorting networks for \(k=O(n^{1/2})\). In [18], Knuth posed the problem of constructing a minimal-depth \(k\)-sorting network for inputs of size \(k^{2}\). Parker and Parberry [25] gave a construction of depth \(9\). We improve this to depth \(8\) at the cost of using comparators of size \(O(k)\) for input size \(k^{2}\). The rest of the paper is organized as follows. In Section 2 we provide the necessary preliminary information. In Section 3 we construct a monotone circuit for majority of depth \(O(\log^{5/3}n)\). In Section 4 we provide a new construction of \(k\)-sorting networks and deduce the corollaries. In Section 5 we discuss some open problems. ## 2 Preliminaries
If \(S\subseteq[n]\) is a comparator from the \(i\)th layer, then it is applied to the entries \(\{A_{i}[j]\mid j\in S\}\). It sorts their values in the non-decreasing order and puts the results into the entries \(\{A_{i+1}[j]\in A_{i+1}\mid i\in S\}\). We say that a network is _sorting_ if for any input \(A_{1}\) the array \(A_{d+1}\) is sorted. We reserve the name _sorting network_ for \(2\)-sorting networks. It is well known that to check that the sorting network sorts all possible inputs, it is enough to check that it sorts just \(0/1\)-inputs. **Lemma 3** (Zero-one principle [18]).: _A network with \(n\) inputs sorts all integer sequences in the non-decreasing order if and only if it sorts all sequences from \(\{0,1\}^{n}\) in the non-decreasing order._ By this principle, when constructing sorting networks, we can assume that each input cell receives either \(0\) or \(1\). The following simple observation will be useful to us. **Lemma 4**.: _If \(t\) largest or \(t\) smallest entries in the array are positioned correctly (i.e., in the last \(t\) cells and in the first \(t\) cells, respectively), then after the application of several comparators they are still positioned correctly._ Proof.: We can show by induction on \(i\) that the smallest and the largest entries do not move if they are already positioned correctly. The key observation is that if some of these entries are inputted into one of the comparators \(S\), they will not be moved. ### From Sorting Networks to Majority Circuits We use the standard notion of Boolean circuits (see, e.g. [16]). As inputs, we allow Boolean variables and Boolean constants \(0\) and \(1\). The size of the circuit is the number of gates in it. Given a \(k\)-sorting network we can get a circuit computing majority from it. More specifically, restrict the inputs to the network to \(\{0,1\}^{n}\) and consider one \(k\)-comparator \(S\). Note that its \(k\)th output is equal to \(1\) if and only if there is at least one \(1\) in the input. In other words, the \(k\)th output is equal to OR of input bits. Its \((k-1)\)th output is equal to \(1\) if and only if there are at least two \(1\)s in the input. More generally, it is easy to see that the \((k-i)\)th output of the \(k\)-comparator outputs a threshold function \[\operatorname{THR}_{k}^{i}(x)=\begin{cases}1&\text{if }|x|>i,\\ 0&\text{otherwise},\end{cases}\] where \(|x|\) denotes the weight of the vector \(x\in\{0,1\}^{k}\), that is, the number of \(1\)s in it. We reserve the notation \(\operatorname{MAJ}_{k}(x)\) for the function \(\operatorname{THR}_{k}^{k/2}(x)\). We can substitute each comparator in the network by \(k\) majority functions. Note that by adding several constants \(0\) or \(1\) as inputs to the gate we can convert any \(\operatorname{THR}_{k}^{i}\) function into \(\operatorname{MAJ}_{k^{\prime}}\) with \(k^{\prime}\leqslant 2k\). Now, it remains to observe that the median bit in the output array computes exactly \(\operatorname{MAJ}_{n}\). Thus, as a result, we get the following lemma. **Lemma 5**.: _Any \(k\)-sorting network of depth \(d\) and size \(s\) can be effectively converted into a circuit of depth \(d\) and size \(ks\) consisting of \(\operatorname{MAJ}_{2k}\) gates and computing majority. 
In the case \(k=2\), we get just a monotone circuit consisting of \(\operatorname{AND}\) and \(\operatorname{OR}\)._ ### Approximate Majority By \(\varepsilon\)-approximate majority function \(\operatorname{MAJ}_{n}^{\varepsilon}\) we denote the partial function that outputs \(\operatorname{MAJ}_{n}\) of its input but is defined only on the inputs where the fraction of ones in it is bounded away by \(\varepsilon\) from \(1/2\). We need the following known result. **Theorem 6** ([33]).: _For any constant \(\varepsilon>0\), one can compute \(\operatorname{MAJ}_{n}^{\varepsilon}\) explicitly by a monotone circuit of size \(\text{poly}(n)\) and depth \(O(\log n)\)._ ### \(t\)-Wise Independent Hash Functions We need the notion of \(t\)-wise independent hash functions. **Definition 7**.: For integers \(N\) and \(t\) such that \(t\leqslant N\), a family of function \(\mathscr{H}=\{h\colon[N]\to[N]\}\) is \(t\)-wise independent if for all distinct \(x_{1},\ldots,x_{t}\in[N]\) the random variables \(h(x_{1}),\ldots,h(x_{t})\) are independent and uniformly distributed in \([N]\), when \(h\in\mathscr{H}\) is drawn uniformly. **Theorem 8** ([31]).: _For every integer \(n\) and \(t\) such that \(t\leqslant 2^{n}\) there is a family of \(k\)-wise independent functions \(\mathscr{H}=\{h\colon\{0,1\}^{n}\to\{0,1\}^{n}\}\) such that choosing a random function from \(\mathscr{H}\) takes \(nt\) random bits and evaluating a function from \(\mathscr{H}\) takes time \(\text{poly}(n,t)\)._ **Theorem 9** ([6]).: _Let \(X\) be the average of \(N\)\(t\)-wise independent random variables \(X_{1},\ldots,X_{N}\in[0,1]\) for even \(t\). Then for any \(\varepsilon>0\) we have_ \[\Pr\left[|X-\mathrm{E}[X]|\geqslant\varepsilon\right]\leqslant 1.1\left(\frac{t }{N\varepsilon^{2}}\right)^{t/2}.\] ## 3 Sub-log-squared Circuit for Majority In this section, we provide a proof of Theorem 1. Our goal is to compute \(\mathrm{MAJ}_{n}\) by an explicit circuit of polynomial size and \(o(\log^{2}n)\) depth. We assume for convenience that \(n\) is odd (for even \(n\) we can consider a circuit for \(n+1\) and substitute one variable by a constant). We start with some inferior circuit and perform several operations that allow us to gradually improve the parameters. However, on our way, we need to consider randomized circuits as well, and apart from size and depth, we will also be interested in the number of random bits and the error probability. More specifically, a circuit is an \((s,d,r,\mathsf{err})\)-circuit for majority if its size is at most \(2^{s}\), depth is at most \(d\), we can construct a circuit using at most \(r\) random bits and the error probability on each input is at most \(2^{-\mathsf{err}}\). Here all parameters are functions in the number of inputs \(n\) (we write \(\mathsf{err}=\infty\) when the circuit is correct with probability 1). All circuits we are going to consider are effectively constructible: there is an algorithm that given the values of random bits constructs a circuit in polynomial time in the size of the circuit. Given a circuit with some parameters, we will use two operations to obtain new circuits. We are introducing these operations in the next two lemmas. Their effect on the circuit is summarized in the table below. 
\begin{tabular}{|c|c|c|} \hline Initial circuit & Brute-force derandomization & Downward self-reduction \\ \hline \(s(n)\) & \(O(s(n)+r(n))\) & \(O(\log n)+s(2k)\) \\ \hline \(d(n)\) & \(d(n)+O(r(n))\) & \(O\left(\left(\frac{\log_{2}n}{\log_{2}k}\right)^{2}d(2k)\right)\) \\ \hline \(r(n)\) & \(0\) & \(r(2k)\) \\ \hline \(\mathsf{err}(n)\) & \(\infty\) & \(\mathsf{err}(2k)-O(\log n)\) \\ \hline \end{tabular}

**Lemma 10** (Brute-force derandomization).: _If there is an \((s,d,r,2)\)-circuit \(C\), then there is an \((O(s+r),d+O(r),0,\infty)\)-circuit._

This lemma allows us to get rid of randomness but increases the depth and the size of the circuit if \(r\) is large.

Proof.: Consider a randomized circuit \(C_{y}(x)\), where \(x\in\{0,1\}^{n}\) is an input and \(y\in\{0,1\}^{r}\) is the sequence of random bits. Assume \(C_{y}(x)\) has the parameters as in the statement of the lemma. Consider circuits \(C_{y}(x)\) for all possible values of \(y\) and observe that for any \(x\) the fraction of circuits that output \(\mathrm{MAJ}_{n}(x)\) is at least \(1-1/4=3/4\). Thus, if we feed \(C_{y}(x)\) for all \(y\) into a circuit from Theorem 6 computing \(\mathrm{MAJ}_{2^{r}}^{\varepsilon}\) (with, say, \(\varepsilon=1/4\)), the output is exactly \(\mathrm{MAJ}_{n}(x)\). The size of the resulting circuit is at most \(2^{r}\cdot 2^{s}+\mathrm{poly}(2^{r})\), where the first term corresponds to computing \(C_{y}(x)\) for all \(y\) and the second term corresponds to computing \(\mathrm{MAJ}_{2^{r}}^{\varepsilon}\). Thus, the size is \(2^{O(s+r)}\). Since all \(C_{y}(x)\) can be computed in parallel, the depth of the circuit is at most \(d+O(r)\). The resulting circuit does not use random bits and is always correct.

**Lemma 11** (Downward self-reduction).: _If there is an \((s(n),d(n),r(n),\mathsf{err}(n))\)-circuit \(C\), then for any \(k<n\) there is an \((O(\log n)+s(2k),O(\log_{k}^{2}n\cdot d(2k)),r(2k),\mathsf{err}(2k)-O(\log n))\)-circuit._

This operation increases the depth (if \(d(n)\) is sub-log-squared), but allows us to reduce the other parameters.

Proof.: Consider a \(k\)-sorting network of depth \(O(\log_{k}^{2}n)\), given by [25] or by our Theorem 2 (the latter allows only for limited values of \(k\), but the values we will actually use in the construction below are within the limits). By Lemma 5 this network gives us a monotone circuit with the same parameters consisting of \(\mathrm{MAJ}_{2k}\) gates computing \(\mathrm{MAJ}_{n}\); denote this circuit by \(C(x)\), where \(x\in\{0,1\}^{n}\). Consider an \((s(2k),d(2k),r(2k),\mathsf{err}(2k))\)-circuit \(C_{y}\) on \(2k\) inputs, where \(y\in\{0,1\}^{r(2k)}\). Fix \(y\) and substitute each \(\operatorname{MAJ}_{2k}\) gate in \(C\) by \(C_{y}\). Denote the resulting circuit by \(D_{y}(x)\). This is a standard monotone Boolean circuit; its size is \(\operatorname{poly}(n)\cdot 2^{s(2k)}\), its depth is \(O(\log_{k}^{2}n\cdot d(2k))\) and the number of random bits is \(r(2k)\). It remains to show that the error probability is not too large. For this, fix some input \(x\in\{0,1\}^{n}\). Consider all \(\operatorname{MAJ}_{2k}\) gates in \(C(x)\) and denote their inputs when \(x\) is fed to \(C\) by \(z^{1},z^{2},\ldots,z^{t}\). Here \(t\) is the size of \(C\) and is polynomial in \(n\). For each \(z^{i}\) the probability over random \(y\) that \(C_{y}(z^{i})\) computes \(\operatorname{MAJ}_{2k}(z^{i})\) incorrectly is at most \(2^{-\mathsf{err}(2k)}\).
By the union bound, with probability at least \(1-t2^{-\mathsf{err}(2k)}\) we have \(C_{y}(z^{i})=\operatorname{MAJ}_{2k}(z^{i})\) for all \(i\) and thus \(D_{y}(x)\) computes \(\operatorname{MAJ}_{n}(x)\) correctly. Thus, the probability of error of the resulting circuit is at most \[t\cdot 2^{-\mathsf{err}(2k)}=2^{-\mathsf{err}(2k)+O(\log n)}.\]

Now we describe our starting circuit. Interestingly, it is constructed as a partial derandomization of Valiant's construction.

**Lemma 12**.: _There is an explicit circuit for majority with parameters \((O(\log n),O(\log n),O(\log^{3}n),\Omega(\log^{2}n))\)._

We provide the proof of Lemma 12 in Section 3.1 below, but before that, we explain how to finish the construction of the desired circuit for \(\operatorname{MAJ}_{n}\). Starting with the circuit provided by Lemma 12, we first apply downward self-reduction with the parameter \(k\) satisfying \(\log k=C\sqrt{\log n}\) for some big enough constant \(C>0\), then we apply brute-force derandomization, and then we apply downward self-reduction again with \(k\) satisfying \(\log k=\log^{2/3}n\). We summarize the changes in the parameters after each step in the table below.

\begin{tabular}{|l||l|l|l|l|} \hline & Initial & Step 1 & Step 2 & Step 3 \\ & circuit & & & \\ \hline & & Self-reduction & Brute-force & Self-reduction \\ & & with & derandomization & with \\ & & \(\log k=C\sqrt{\log n}\) & & \(\log k=\log^{2/3}n\) \\ \hline \(s(n)\) & \(O(\log n)\) & \(O(\log n)\) & \(O(\log^{3/2}n)\) & \(O(\log n)\) \\ \hline \(d(n)\) & \(O(\log n)\) & \(O(\log^{3/2}n)\) & \(O(\log^{3/2}n)\) & \(O(\log^{5/3}n)\) \\ \hline \(r(n)\) & \(O(\log^{3}n)\) & \(O(\log^{3/2}n)\) & \(0\) & \(0\) \\ \hline \(\mathsf{err}(n)\) & \(\Omega(\log^{2}n)\) & \(\Omega(\log n)\) & \(\infty\) & \(\infty\) \\ \hline \end{tabular}

**Remark 13**.: _Note that with the two operations in hand, there are not that many options to apply them to a given initial construction. It is not hard to check that applying downward self-reduction two times in a row is not better than applying it once with the appropriate value of \(k\). Clearly, there is no need to apply the derandomization step twice. From this, it is not hard to see that our sequence of operations is actually optimal. Once the optimal sequence of operations is established, it is not hard to check that our choice of parameters in downward self-reductions is optimal as well._

### Proof of Lemma 12

In this subsection, we are going to prove Lemma 12. The high-level idea is to partially derandomize Valiant's construction. To make the presentation self-contained we first recall the idea behind this construction. Suppose we have independent random bits \(x,y,z\) that are equal to \(1\) with probability \(p\) and consider \(\operatorname{MAJ}_{3}(x,y,z)\). It is not hard to see that it outputs \(1\) with probability \(f(p)=p^{3}+3p^{2}(1-p)\). Consider \(p=\frac{1}{2}+\varepsilon\) for some \(\varepsilon>0\) and denote \(\varepsilon^{\prime}=f(p)-\frac{1}{2}\). Then \[\varepsilon^{\prime}=f(p)-\frac{1}{2}=f(p)-f(\frac{1}{2})=f^{\prime}(\alpha)(p-\frac{1}{2})=f^{\prime}(\alpha)\varepsilon\] for some \(\alpha\in[\frac{1}{2},p]\). Note that \(f^{\prime}(p)=6p-6p^{2}=6p(1-p)\). It is easy to see that for \(\alpha\in[\frac{1}{2},\frac{2}{3}]\) we have \(f^{\prime}(\alpha)\geqslant\frac{4}{3}\). Thus, for \(p\in[\frac{1}{2},\frac{2}{3}]\) we have \(\varepsilon^{\prime}\geqslant\frac{4}{3}\varepsilon\).
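As a quick numerical check of this amplification step (the concrete value of \(n\) and the stopping threshold \(2/3\) below are our own illustrative choices):

```python
import math

def f(p):
    # Pr[MAJ_3 = 1] for three independent bits, each equal to 1 w.p. p
    return p ** 3 + 3 * p ** 2 * (1 - p)

n = 10 ** 6
p, levels = 0.5 + 1 / n, 0
while p < 2 / 3:
    q = f(p)
    # the bias eps = p - 1/2 grows by a factor of at least 4/3 per level
    assert q - 0.5 >= (4 / 3) * (p - 0.5) - 1e-12
    p, levels = q, levels + 1

print(levels, math.ceil(math.log(n / 6, 4 / 3)))  # O(log n) levels suffice
```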
Now, we can use this in the following way. Consider \(\operatorname{MAJ}_{n}\) for odd \(n\) and consider its arbitrary input \(x\). Without loss of generality, assume that \(\mathrm{MAJ}_{n}(x)=1\). If we draw one variable from \(x\) uniformly at random, it is equal to \(1\) with probability at least \(\frac{1}{2}+\frac{1}{n}\). Consider a \(\mathrm{MAJ}_{3}\) gate and feed to it three independently and uniformly drawn input variables. By the analysis above the output of such a \(\mathrm{MAJ}_{3}\) gate is equal to \(1\) with probability at least \(\frac{1}{2}+\frac{4}{3}\cdot\frac{1}{n}\). Now we can repeat this: consider three such \(\mathrm{MAJ}_{3}\) gates and feed their outputs to another \(\mathrm{MAJ}_{3}\) gate. The result is equal to \(1\) with probability at least \(\frac{1}{2}+\left(\frac{4}{3}\right)^{2}\frac{1}{n}\). After \(O(\log n)\) many iterations, we get an \(O(\log n)\)-depth randomized circuit consisting of \(\mathrm{MAJ}_{3}\) gates that outputs the correct value with probability at least \(\frac{2}{3}\). Valiant's argument further improves this probability, but we will not need this part of the argument. The randomized circuit above uses too many random bits. Now we are going to modify the construction in a way that uses randomness more efficiently. We will use some ideas from [9]. Construct the following circuit consisting of \(\mathrm{MAJ}_{3}\) gates. The circuit contains \(\Theta(\log n)\) layers, each containing \(N=n^{3}\) gates. The bottom layer consists of input variables, each repeated \(\frac{N}{n}=n^{2}\) times (it is redundant to copy variables several times; we do this exclusively for the sake of uniformity of the construction). In other layers, each gate computes the \(\mathrm{MAJ}_{3}\) function of some gates from the previous layer. To assign the inputs to each gate, for each layer \(j\) we draw three fresh (and independent of each other) \(t\)-wise independent hash functions \(f_{j},g_{j},h_{j}\colon[N]\to[N]\) for \(t=\Theta(\log n)\). For a gate with number \(i\) in layer \(j\) we set its inputs to be gates with numbers \(f_{j}(i)\), \(g_{j}(i)\) and \(h_{j}(i)\) in layer \((j-1)\).
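The shape of this construction is easy to express in code. The sketch below uses plain random tables as a stand-in for the \(t\)-wise independent family of Theorem 8, so it illustrates the wiring only, not the derandomization; the repetition pattern of the input variables is also simplified:

```python
import random

def build_wiring(N, depth, rng):
    # for each layer, three independently drawn maps [N] -> [N]
    return [tuple([rng.randrange(N) for _ in range(N)] for _ in range(3))
            for _ in range(depth)]

def evaluate(x, N, wiring):
    # bottom layer: the n input variables, copied to fill N cells
    level = [x[i % len(x)] for i in range(N)]
    for f, g, h in wiring:
        level = [sorted((level[f[i]], level[g[i]], level[h[i]]))[1]  # MAJ_3
                 for i in range(N)]
    return level

rng = random.Random(0)
wiring = build_wiring(N=27, depth=10, rng=rng)
out = evaluate([1, 0, 1], N=27, wiring=wiring)
```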
Before we finish the construction of the circuit, let us analyze the current part. Consider some input \(x\in\{0,1\}^{n}\) and assume without loss of generality that \(\mathrm{MAJ}_{n}(x)=1\). Denote by \(\frac{1}{2}+\varepsilon_{i}\) the fraction of gates on level \(i\) that output \(1\). For \(i=1\) we have \(\varepsilon_{1}\geqslant\frac{1}{n}\). Each gate on level \(i\) receives three independent inputs from the previous level. Thus, the probability that it outputs \(1\) is at least \(\frac{1}{2}+\frac{4}{3}\varepsilon_{i-1}\) (we have shown this above only when the fraction \(\frac{1}{2}+\varepsilon_{i-1}\) is at most \(\frac{2}{3}\), but these values of \(\varepsilon_{i-1}\) are enough for our construction as well). Thus, the expected fraction of ones in level \(i\) is also at least \(\frac{1}{2}+\frac{4}{3}\varepsilon_{i-1}\). Now we would like to use a concentration inequality to show that with high probability the fraction of correct values is not much smaller than its expectation. Note that the outputs of the gates on level \(i\) are \(t\)-wise independent. Let \(\varepsilon=\frac{1}{6n}\) and denote by \(X_{i}\) the output of the \(i\)-th gate. Then by Theorem 9 we have \[\Pr\left[\left|\sum_{i}X_{i}/N-(\frac{1}{2}+(4/3)\varepsilon_{j-1})\right|\geqslant\varepsilon\right]\leqslant 1.1\left(\frac{t}{N\varepsilon^{2}}\right)^{t/2}=2^{-\Theta(\log^{2}n)}.\] By the union bound, the probability that on each level \(\varepsilon_{j}\geqslant\frac{4}{3}\varepsilon_{j-1}-\varepsilon\) is at least \[1-O(\log n)\cdot 2^{-\Theta(\log^{2}n)}=1-2^{-\Theta(\log^{2}n)}.\] Thus, we can show by induction on \(j\) that with probability at least \(1-2^{-\Theta(\log^{2}n)}\) we have \[\varepsilon_{j}\geqslant\frac{4}{3}\varepsilon_{j-1}-\varepsilon\geqslant\frac{7}{6}\varepsilon_{j-1}+\frac{1}{6}\varepsilon_{j-1}-\frac{1}{6n}\geqslant\frac{7}{6}\varepsilon_{j-1},\] where in the last inequality we use that by the induction hypothesis we have \(\varepsilon_{j-1}\geqslant\left(\frac{7}{6}\right)^{j-1}\cdot\varepsilon_{1}\geqslant\frac{1}{n}\). Thus, just like in Valiant's argument, after \(O(\log n)\) iterations, with probability \(1-2^{-\Theta(\log^{2}n)}\), we have \(\varepsilon_{j}\geqslant\frac{1}{6}\), i.e., the fraction of ones on the last layer is at least \(\frac{2}{3}\). At this point, it remains to apply to the last layer a circuit from Theorem 6. It is easy to see that the size of the resulting circuit is \(\operatorname{poly}(n)\), the depth is \(O(\log n)\), and the error probability is \(2^{-\Theta(\log^{2}n)}\). As for the random bits, note that in the construction we need \(O(\log n)\) \(t\)-wise independent hash functions from \([N]\) to \([N]\). By Theorem 8 there are families of such functions defined using \(O(t\log N)\) random bits. In total we need \[O(\log n)\cdot O(t\log N)=O(\log^{3}n)\] random bits. This finishes the proof of Lemma 12.

**Remark 14**.: _Instead of applying a circuit for Approximate Majority to the last layer, we could do the following: sample \(m=O(\log^{2}n)\) gates from the last layer uniformly at random and then compute the majority on these \(m\) gates using some simple circuit of depth \(O(\log^{2}m)\). By Chernoff's inequality, this adds at most \(2^{-\Omega(m)}=2^{-\Theta(\log^{2}n)}\) to the error probability, and we need \(O(\log^{3}n)\) random bits. In turn, the increase in depth and size is negligible._

## 4 \(k\)-Sorting Network Construction

### Proof Strategy

Before we proceed to the proof we would like to illustrate the idea considering some specific value of \(k\). For convenience, we assume that \(n\) is a cube of a natural number.

**Lemma 15**.: _Assume that \(n=t^{3}\) for natural \(t\). Then there is a depth-4 \(k\)-sorting network with \(k=2t^{2}=2n^{2/3}\)._

We present the proof using a geometric interpretation of an input array as a three-dimensional cube. However, note that a similar result is implicit in [22] and it is essentially the same construction, just in different terms. We also note that it is known that this is the optimal (up to a constant factor) value of \(k\) for depth-4 sorting networks [11].

Proof.: We represent entries of an input array as a 3-dimensional cube with side \(t\) (see Figure 1a). We place the first \(t^{2}\) entries of an array in the bottom layer of the cube, the next \(t^{2}\) entries in the second layer of the cube and so on. In each layer the entries are positioned row by row. To be more precise, assume that the array \(A\) is enumerated as \([a_{1},\dots,a_{n}]\). We reenumerate the same array as \[[a_{111},a_{112},\ldots,a_{11t},a_{121},\ldots,a_{12t},\ldots,a_{ttt}].\] That is, entries of an array are enumerated by sequences \((x,y,z)\in\{1,\ldots,t\}^{3}\) in the lexicographic order.
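Concretely, with 0-based indices the correspondence between flat array positions and cube coordinates reads as follows (a small helper sketch of ours):

```python
def index_to_xyz(i, t):
    # the first t*t entries form the bottom layer (fixed x),
    # each layer is filled row by row
    x, rest = divmod(i, t * t)
    y, z = divmod(rest, t)
    return x, y, z

def xyz_to_index(x, y, z, t):
    return x * t * t + y * t + z

t = 4
assert all(xyz_to_index(*index_to_xyz(i, t), t) == i for i in range(t ** 3))
```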
In Figure 1, \(a_{xyz}\) corresponds to a subcube with coordinates \((x,y,z)\). In the first layer of the sorting network we split the cube into vertical slices of width 1 and feed each slice to a \(t^{2}\)-comparator (see Figure 1b). To be more precise, for each \(i=1,\ldots,t\) we feed entries \(a_{xyi}\) for all \(x,y\) into one comparator. On the second layer of the network we split the cube into vertical slices of width 1 in another direction and feed each slice to a \(t^{2}\)-comparator (see Figure 1c). In other words, for each \(i=1,\ldots,t\) we feed entries \(a_{xiz}\) for all \(x,z\) into one comparator. On the third layer we split the cube into horizontal slices of width 2 (for odd \(t\) the last slice is of width 1) and feed the slices to comparators of arity at most \(2t^{2}\) (see Figure 1d). Finally, on the fourth layer of the network we split the cube into horizontal slices of width 2 again, but now the first slice is of width 1 (for even \(t\) the last slice is of width 1 as well). Thus, the slices on this layer are shifted compared to the previous one (see Figure 1e). It remains to prove that this sorting network sorts correctly. Consider any input \(x\in\{0,1\}^{n}\). Note that the cube consists of \(t^{2}\) vertical columns with \(t\) entries in each column: each column \(A_{yz}\) is obtained by fixing \(y\) and \(z\) in \(a_{xyz}\) and considering all possible \(x\). We are interested in the weight \(w_{yz}\) of each column, that is, the number of 1s in it. For the input \(A\) the weights of the columns can be any numbers from 0 to \(t\). Now consider the array after the first layer of the network. Note that now each vertical slice of the first layer of the network is sorted. This means that in each of these slices in the first several rows (from bottom to top) there are only 0s, then there might be a row containing both 0s and 1s, and then all remaining rows contain 1s. In particular, the weights of two columns in the same slice differ by at most 1. Now consider the second layer of the network and consider two different slices \(S_{i}=\{a_{x,i,z}\mid x,z\in[t]\}\) and \(S_{j}=\{a_{x,j,z}\mid x,z\in[t]\}\). Note that each of them contains exactly one column from each slice of the first layer. We know that the weights of the columns in the same slice of the first layer differ by at most 1. Thus, in total, the number of 1s in two slices of the second layer differs by at most \(t\). In other words, for each \(z\) the first slice contains the column \(A_{iz}\) and the second slice contains the column \(A_{jz}\). We know that on the input of the second layer of the network \(|w_{iz}-w_{jz}|\leqslant 1\). Thus, \[|\sum_{z}w_{iz}-\sum_{z}w_{jz}|\leqslant t.\] Denote by \(r_{i}\) the number of rows in slice \(S_{i}\) that consist of only 1s after the second layer of the network. We just showed that, compared to \(S_{j}\), the slice \(S_{i}\) can have one extra row of 1s, one fewer row of 1s, or anything in between. Overall, for the number \(r_{j}\) of rows consisting of 1s in \(S_{j}\) we have \(|r_{i}-r_{j}|\leqslant 1\). As a result, the weights of columns in slices \(S_{i}\) and \(S_{j}\) can differ by at most 2. Since this is true for any \(i\) and \(j\), we have that the weights of all columns in the cube after the second layer of the sorting network differ by at most 2. To put it another way, there is a horizontal slice of width 2, such that below this slice we have only 0s and above this slice we have only 1s. Thus it remains to sort the entries of this slice.
Note that on layers 3 and 4 of the network there is a comparator that sorts exactly this slice. Note that by Lemma 4 all other comparators of layers 3 and 4 do not harm the sorting. This argument can be extended to cubes of arbitrary dimension \(d\). More specifically, for \(n=t^{d}\) and for \(k=(d-1)t^{d-1}\) we can represent entries of an input array as a \(d\)-dimensional cube with side \(t\), sort 'vertical' slices (we need to fix one of the coordinates in \(d\)-dimensional space as vertical) in all \(d-1\) directions and then sort horizontal slices. This results in \((d-1)\) layers of the sorting network, and for horizontal slices we need recursive calls for the arrays of size approximately \(2dt^{d-1}\). Actually, it is expensive to make two recursive calls for horizontal layers; instead, we use an additional trick to make just one recursive call. Although our \(k\)-sorting network construction can be expressed in terms of high dimensional hypercubes, we prefer to give a more general exposition, using the concept of _\(s\)-sorted arrays_.

### Merging \(s\)-Sorted Arrays

The following definition plays a key role in our sorting network construction.

**Definition 16**.: A \(0/1\)-array \(A\) of length \(n\) is _\(s\)-sorted_ if there is an integer interval \(I=\{i,\ldots,i+s-1\}\subseteq[n]\), such that \(A[j]=0\) for \(j<i\) and \(A[j]=1\) for \(j\geqslant i+s\). We call \(I\) the _unsorted interval_.

As an immediate corollary of Lemma 4, we get the following.

**Corollary 17**.: _Suppose a sorting network gets an \(s\)-sorted array with unsorted interval \(I\). Then the output is also \(s\)-sorted with \(I\) as an unsorted interval._

We give a construction of a depth-1 sorting network that "merges" \(p\) arrays of length \(n\) that are already \(s\)-sorted into one array which is \((sp+O(np^{2}/k))\)-sorted, where \(k\) is the arity of the sorting network.

**Lemma 18**.: _Assume that \(k\geqslant tp\) for some integers \(t\) and \(p\). Suppose we have \(p\) \(s\)-sorted arrays of size \(n\) each. Assume additionally that \(n\) is divisible by \(t\). Then there is a depth-1 \(k\)-sorting network that merges these arrays into one array of size \(np\) that is \((sp+2\frac{np}{t})\)-sorted. If additionally we assume that \(s\) is divisible by \(n/t\), then the resulting array is \((sp+\frac{np}{t})\)-sorted._

Proof.: Represent each array as a table with \(\frac{n}{t}\) columns and \(t\) rows. We assume the following ordering on the entries of this table: to compare two entries, we first compare the indices of their rows, and then the indices of their columns. Position the tables one under another in a unified table with \(tp\) rows. Note that \(tp\leqslant k\) and apply a \(k\)-comparator to each column in parallel. We claim that the resulting array in the large table is \((sp+2\frac{np}{t})\)-sorted. To see that, observe that in each small table, an unsorted interval of length at most \(s\) occupies at most \(\left\lceil\frac{st}{n}\right\rceil+1\) rows (any other row either consists entirely of 0s or entirely of 1s). In the large table, this gives us at most \(p\left(\left\lceil\frac{st}{n}\right\rceil+1\right)\) non-constant rows. After sorting each column individually, 0-rows will move to the top, 1-rows will move to the bottom and all other \(p\left(\left\lceil\frac{st}{n}\right\rceil+1\right)\) rows will be in between them.
They constitute an unsorted interval and the size of it is at most \[\frac{n}{t}\cdot p\left(\left\lceil\frac{st}{n}\right\rceil+1\right).\] For general \(s\) we can upper bound this as follows: \[\frac{n}{t}\cdot p\left(\left\lceil\frac{st}{n}\right\rceil+1\right)\leqslant\frac{n}{t}\cdot p\left(\frac{st}{n}+2\right)=sp+2\frac{np}{t}.\] If \(s\) is divisible by \(n/t\), note that we can just drop the rounding operation and the size of an unsorted interval is at most \[\frac{n}{t}\cdot p\left(\left\lceil\frac{st}{n}\right\rceil+1\right)=\frac{n}{t}\cdot p\left(\frac{st}{n}+1\right)=sp+\frac{np}{t}.\]
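A direct implementation of this merge is short. The following Python sketch (helper names ours) views each array as a table with \(t\) rows filled row by row, stacks the \(p\) tables, and sorts every column with one comparator:

```python
# Minimal sketch of the depth-1 merge from Lemma 18; requires len(arrays[0])
# divisible by t, and one k-comparator per column with k >= t * p.

def merge_sorted_blocks(arrays, t):
    p, n = len(arrays), len(arrays[0])
    cols = n // t
    # stacked table: row r of array a sits at stacked row a*t + r
    table = [arr[r * cols:(r + 1) * cols] for arr in arrays for r in range(t)]
    for c in range(cols):                      # one comparator per column
        col = sorted(table[r][c] for r in range(t * p))
        for r in range(t * p):
            table[r][c] = col[r]
    return [v for row in table for v in row]   # read the table row by row
```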
Applying the previous lemma several times, we get the following.

**Lemma 19**.: _Consider arbitrary \(n\) and \(k\) and denote \(t=\left\lfloor\sqrt{k}\right\rfloor\). Then there is a \(k\)-sorting network of depth \(\left\lceil\log_{t}n\right\rceil-1\) that on any input outputs an \(s\)-sorted array for \(s\leqslant\frac{2\left\lceil\log_{t}n\right\rceil n}{t}\)._

Proof.: Denote \(d=\lceil\log_{t}n\rceil\) and observe that \(n\leqslant t^{d}\). Introduce the following notation: \[n_{i}=\begin{cases}t^{i+1}&\text{for }i=1,\ldots,d-2,\\ t^{d-1}p&\text{for }i=d-1,\end{cases}\] where \(p\) is such that \(t^{d-1}(p-1)<n\leqslant t^{d-1}p\). In particular, since \(p\geqslant 2\), we have \(p-1\geqslant p/2\) and \[n>t^{d-1}(p-1)\geqslant t^{d-1}p/2.\] For the convenience of presentation, we add \(t^{d-1}p-n\) dummy inputs equal to \(1\) to the end of the array to make the size of the input equal to \(t^{d-1}p\). By Lemma 4 these inputs will never change their position and can be removed from the sorting network. We start with an unsorted array as an input and repeatedly apply Lemma 18 to get an array consisting of blocks that are \(s\)-sorted for some \(s\). More specifically, after level \(i\) of the network we get blocks of size \(n_{i}\) that are \(s_{i}\)-sorted for \[s_{i}=\begin{cases}(i-1)t^{i}&\text{for }i=1,\ldots,d-2,\\ (d-2)t^{d-2}p&\text{for }i=d-1.\end{cases}\] On the first step we split the input into blocks of size \(t^{2}\) and apply comparators to them; the resulting blocks are \(0\)-sorted. On the \(i\)-th step for \(i=2,\ldots,d-1\) we already have blocks of size \(n_{i-1}=t^{i}\) from the previous step that are \(s_{i-1}\)-sorted for \(s_{i-1}=(i-2)t^{i-1}\). Note that \(n_{i-1}=t^{i}\) is divisible by \(t\) and \(s_{i-1}\) is divisible by \(n_{i-1}/t=t^{i-1}\). We apply Lemma 18 and for \(i<d-1\) get blocks of size \(n_{i-1}t=n_{i}\) that are \(s\)-sorted for \(s=s_{i-1}t+n_{i-1}=(i-1)t^{i}\). For \(i=d-1\) we have just \(p\) subarrays to merge and after the step we get the whole array of size \(t^{d-1}p\) that is \(s\)-sorted for \(s=(d-3)t^{d-2}p+\frac{t^{d-1}p}{t}=(d-2)t^{d-2}p\). Finally, observe that \[s\leqslant(d-2)t^{d-2}p\leqslant d\frac{2n}{t}\] as desired.

### Computing Majority

Before constructing a sorting network we solve the simpler task of computing the majority function.

**Theorem 20**.: _For any \(n\) and for any \(k\) such that \(\log k=\omega(\log\log n)\) (or, to put it differently, \(k\) is growing faster than any \(\mathsf{polylog}(n)\)), there exists a \(\mathrm{MAJ}_{k}\)-circuit for \(\mathrm{MAJ}_{n}\) of depth at most \((2+o(1))\log_{k}^{2}n\)._

The rest of the section is devoted to the proof of Theorem 20. First observe that to compute \(\mathrm{MAJ}_{n}\) correctly by a monotone circuit it is enough to compute it correctly on minterm and maxterm inputs: the computation on other inputs follows by monotonicity. Thus, we can assume in our construction that the input contains almost the same number of 0s and 1s. We will construct a sorting network that sorts all such inputs correctly. From the sorting network we get a circuit of the same depth. Suppose we need to sort an array of size \(n\) with approximately the same number of 0s and 1s. We apply Lemma 19 to the array. This results in a \(Y\)-sorted array for \(Y=\frac{2\lceil\log_{t}n\rceil n}{t}\), where \(t=\lfloor\sqrt{k}\rfloor\). Since the number of 0s and 1s in the array is approximately equal, the smallest \(\frac{n}{2}-Y\) and the largest \(\frac{n}{2}-Y\) elements are sorted correctly (otherwise, the length of the unsorted interval would be larger than \(Y\)). Thus, it remains to sort a specific interval of length \(2Y\) and we can do it recursively. Overall, we get the following recursive relation: \[T(n)\leqslant\lceil\log_{t}n\rceil-1+T\left(2Y\right)\leqslant\log_{t}n+T\left(\frac{4\lceil\log_{t}n\rceil n}{t}\right).\] To solve this recursive relation we use the following lemma.

**Lemma 21**.: _Assume that \(\log k=\omega(\log\log n)\). Suppose that \(T(n)=\text{const}\) for \(n\) up to some constant and_ \[T(n)\leqslant 2\log_{k}n+C+T\left(\left\lceil\frac{D(\log_{k}n)n}{\sqrt{k}}\right\rceil\right)\] _for some constants \(C\) and \(D>0\). Then \(T(n)\leqslant(2+o(1))\log_{k}^{2}n\)._

Proof.: To simplify the presentation, we ignore the rounding of the argument of \(T\) first, and address it later. Denote \(\alpha=\frac{\sqrt{k}}{D\log_{k}n}\). We have \[T(n) \leqslant 2\log_{k}n+C+T\left(\frac{n}{\alpha}\right)\] \[\leqslant 2\log_{k}n+C+2\log_{k}\frac{n}{\alpha}+C+T\left(\frac{n}{\alpha^{2}}\right)\] \[\leqslant 2\sum_{i=0}^{\log_{\alpha}n}\left(\log_{k}\frac{n}{\alpha^{i}}+C\right)\] \[=2\left(\log_{k}n+\left(\log_{k}n-\log_{k}\alpha\right)+\left(\log_{k}n-2\log_{k}\alpha\right)+\ldots+0\right)+2C\log_{\alpha}n\] \[\leq 2\frac{\log_{k}n}{\log_{k}\alpha}\frac{\log_{k}n}{2}+2C\log_{\alpha}n=\log_{k}^{2}n\log_{\alpha}k+2C\log_{\alpha}n.\] It is easy to see that the term \(2C\log_{\alpha}n\) is negligible, since \(\alpha\gg k^{1/3}\). We analyze the \(\log_{\alpha}k\) factor separately: \[\log_{\alpha}k =\log_{\frac{\sqrt{k}}{D\log_{k}n}}k=\frac{\log_{2}k}{\log_{2}\frac{\sqrt{k}}{D\log_{k}n}}\leqslant\frac{\log_{2}k}{\frac{1}{2}\log_{2}k-D-\log_{2}\log_{k}n}\] \[=\frac{\log_{2}k}{\frac{1}{2}\log_{2}k-D-\log_{2}\log_{2}n+\log_{2}\log_{2}k}.\] For \(\log k=\omega(\log\log n)\) this term is \(2+o(1)\) and we have \[T(n)\leqslant(2+o(1))\log_{k}^{2}n.\] To address the rounding operation, note that \(\left\lceil\frac{n}{\alpha}\right\rceil\leqslant\frac{n}{\alpha}+1\leqslant\frac{2n}{\alpha}\) for \(\frac{n}{\alpha}\geqslant 1\). Thus, in the presence of rounding we will have \(\sum_{i}\log_{k}\frac{2n}{\alpha}\) in the calculation above instead of \(\sum_{i}\log_{k}\frac{n}{\alpha}\). This amounts to substituting \(D\) by \(2D\) and does not change the result of the calculation since \(D\) is an arbitrary constant.
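To see the constant \(2+o(1)\) emerge numerically, one can unroll the recursion in log-space; in the sketch below the values of \(C\), \(D\) and the base case are arbitrary choices of ours, and rounding is ignored as in the first part of the proof:

```python
import math

def T(L, logk, C=1.0, D=1.0):
    # L = log2 of the current input size, logk = log2 k
    if L <= logk:
        return 0.0
    lk = L / logk                                  # log_k n
    L_next = math.log2(D * lk) + L - logk / 2      # log2(D (log_k n) n / sqrt(k))
    return 2 * lk + C + T(L_next, logk, C, D)

for logk in (20, 40, 80):                          # log_2 k, with log_k n = 10
    L = 10 * logk
    print(logk, T(L, logk) / (L / logk) ** 2)      # ratio of T(n) to log_k^2 n
```

The printed ratio decreases towards \(2\) as \(\log k\) grows relative to \(\log\log n\), matching the hypothesis of the lemma.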
### Constructing Sorting Network

In this section, we finish the proof of Theorem 2. We adopt the same strategy as for the computation of majority. More specifically, we apply Lemma 19 recursively to get an \(s\)-sorted array for smaller and smaller \(s\). However, now our task is more tricky. In the proof of Theorem 20, when we get to an \(s\)-sorted array we know exactly where the unsorted interval is located (in the middle of the array). However, now we need to sort arbitrary input arrays and an unsorted interval can be anywhere. We construct the network recursively. We assume that at the beginning of each step, we have an \(s\)-sorted array (at the beginning of the process \(s=n\)). Denote the unsorted interval by \(A\), \(|A|\leqslant s\). Split the array into consecutive blocks \(B_{1},\ldots,B_{p}\) of size \(s\) (the last block \(B_{p}\) might be smaller). The recursive step consists of two stages. In the first stage, we split the array into blocks \(B_{1}\cup B_{2}\), \(B_{3}\cup B_{4}\), and so on, each block of size \(2s\) (the last block might be smaller). In the second stage, we split the array into blocks \(B_{1}\), \(B_{2}\cup B_{3}\), \(B_{4}\cup B_{5}\), and so on (again the last block might be smaller than \(2s\)). Before describing each of the stages, observe that either in the first stage or in the second stage (or in both) the interval \(A\) falls completely into one of the blocks. Indeed, \(A\) can intersect with at most two consecutive blocks \(B_{i}\), \(B_{i+1}\), and in one of the stages they form a single block. In the first stage, we apply Lemma 19 to each of the blocks \(B_{1}\cup B_{2},B_{3}\cup B_{4},\ldots\) separately. As a result, each block is \(s^{\prime}\)-sorted for \(s^{\prime}\leq\frac{4\left\lceil\log_{\left\lfloor\sqrt{k}\right\rfloor}n\right\rceil s}{\left\lfloor\sqrt{k}\right\rfloor}\). Moreover, if a block consisted only of \(0\)s or only of \(1\)s, then it does not change. If \(A\) is contained in one of the blocks of the first stage, we are already done: there is only one initially unsorted block, which by Lemma 19 is \(s^{\prime}\)-sorted after the stage. By Corollary 17 this property remains true after the additional comparators we apply for the other case. If \(A\) is split between two blocks of the first stage, then after the stage we have two consecutive unsorted blocks, each of them \(s^{\prime}\)-sorted. Denote the unsorted parts by \(C_{1},C_{2}\). Note that by Corollary 17, \(C_{1},C_{2}\subseteq A\) and thus \(C_{1}\) and \(C_{2}\) fall into one block of the second stage. It is tempting to apply Lemma 19 to the blocks of the second stage as well. However, this application is too expensive and will not result in the desired bound. Instead we do the following. We represent each block of the second stage (of size at most \(2s\)) as a table with \(p=\left\lceil 2s/k\right\rceil\) columns and \(k\) rows, filled in row by row from top to bottom. For convenience, if the last row is not complete, we add dummy variables equal to \(1\) to complete the row. Each of the intervals \(C_{1},C_{2}\) occupies at most \(\left\lceil s^{\prime}/p\right\rceil+1\) rows. There might be another row that contains a switch between blocks \(B_{i}\) and \(B_{i+1}\). Every other row consists either entirely of \(0\)s or entirely of \(1\)s. Denote the number of all-\(0\) rows by \(a\) and the number of all-\(1\) rows by \(b\). We apply a comparator to each column separately. As a result, each column will contain \(a\) zeros in the beginning, \(b\) ones in the end and some part in between. The number of rows in the middle part is at most \(2\left\lceil s^{\prime}/p\right\rceil+3\). The number of entries in these rows is at most \[s^{\prime\prime}=p(2\left\lceil s^{\prime}/p\right\rceil+3)\leqslant 2s^{\prime}+5p\leqslant 3s^{\prime}\] for large enough input size. Thus, after the second stage we get an \(s^{\prime\prime}\)-sorted array and we are done with the recursion step.
Thus, we get that \(s^{\prime\prime}\leqslant\frac{12\lceil\log_{t}n\rceil s}{t}\) and we get the following recursive relation: \[T(n)\leqslant\log_{t}n+T\left(\frac{12\lceil\log_{t}n\rceil n}{t}\right).\] We apply Lemma 21 again to get \(T(n)\leqslant(2+o(1))\log_{k}^{2}n\). This finishes the proof of Theorem 2.

### Other Applications

In this section we give two more examples of results that follow from our construction.

**Lemma 22**.: _There is a \(\operatorname{MAJ}_{k}\)-circuit of depth \(4\) computing \(\operatorname{MAJ}_{n}\) for \(k=O(n^{3/5})\)._

Proof.: Denote \(r=\lceil n^{1/5}\rceil\). For simplicity we pad the input with constants \(0\) and \(1\) to make the size of the array \(r^{5}\) without changing the output of the majority. We will use \(k\)-sorters for \(k=4r^{3}\). As in the proof of Theorem 20 it is enough to compute \(\operatorname{MAJ}_{n}\) on minterms and maxterms, thus we can assume that there is an approximately equal number of \(0\)s and \(1\)s in the input. We will build a \(k\)-sorting network and the existence of the circuit follows. On the first layer of the network we split the input into blocks of size \(r^{3}\) and sort them. On the second layer we use Lemma 18 with \(p=r\) and \(t=r^{2}\). As a result we get blocks of size \(r^{4}\) that are \(r^{2}\)-sorted. On the third layer we apply Lemma 18 again with the same values of \(p\) and \(t\). As a result, the whole input is now \(2r^{3}\)-sorted. On the last layer of the network, just as in the proof of Theorem 20, we apply a \(4r^{3}\)-comparator to the middle of the array.

In [18] Knuth posed the problem of constructing a minimal depth \(k\)-sorting network for an input of size \(k^{2}\). Parker and Parbery [25] gave a construction of depth \(9\). Here we slightly improve on this at the cost of using comparators of size \(O(k)\).

**Lemma 23**.: _There is a \(k\)-sorting network of depth \(8\) that sorts an array of size \(n\) with \(k=O(n^{1/2})\)._

Proof.: As usual, pad the array with constants to make \(n=r^{4}\) for some integer \(r\); thus \(k=O(r^{2})\). We follow the same strategy as in Section 4.4. First we apply Lemma 19, which uses three layers of the network and results in an \(s\)-sorted array for \(s=O(r^{3})\). Then, we apply Lemma 19 again to the blocks of size \(O(r^{3})\) to get a network of depth 2 that results in each block being \(O(r^{2})\)-sorted. Then we apply one more layer to merge unsorted intervals in different blocks to get an array that is \(O(r^{2})\)-sorted. Finally, we again split the array into blocks, this time of size \(O(r^{2})\), to complete the sorting using two layers. In total we use \(3+2+1+2=8\) layers.

## 5 Conclusion

The obvious open problems are to come up with explicit constructions of sorting networks and monotone circuits for majority of smaller depth. One specific problem is to extend our \(O(\log^{5/3}n)\) construction to sorting networks. The obstacle that we encountered is that there is no randomized construction of a low-depth sorting network that we can use as a start. Another interesting question is to extend our \(O(\log^{5/3}n)\) construction to get a \(\operatorname{MAJ}_{k}\)-circuit for \(\operatorname{MAJ}_{n}\) of depth \(O(\log_{k}^{5/3}n)\). Such a construction could be used instead of the \(O(\log_{k}^{2}n)\)-depth circuit in downward self-reduction to further improve the upper bound. Again, the obvious obstacle is that it is not clear how to get a starting construction.
2303.00994
Fast Randomized Subspace System Identification for Large I/O Data
In this article, a novel fast randomized subspace system identification method for estimating combined deterministic-stochastic LTI state-space models is proposed. The algorithm is especially well-suited to identify high-order and multi-scale systems with both fast and slow dynamics, which typically require a large number of input-output data samples for accurate identification using traditional subspace methods. Instead of working with such large matrices, the dataset is compressed using randomized methods, which preserve the range-spaces of these matrices almost surely. A novel identification algorithm using this compressed dataset is proposed. This method enables the handling of extremely large datasets, which often make conventional algorithms like N4SID, MOESP, etc. run out of computer memory. Moreover, the proposed method outperforms these algorithms in terms of memory-cost, data-movement, flop count and computation time for cases where these algorithms still work in spite of large data sizes. The effectiveness of the proposed algorithm is established by theoretical analysis and various real and simulated case studies.
Vatsal Kedia, Debraj Chakraborty
2023-03-02T06:07:18Z
http://arxiv.org/abs/2303.00994v2
Fast Multivariable Subspace Identification (FMSID) of Combined Deterministic-Stochastic/General LTI Systems for Large Input-Output Data

###### Abstract

In this article, a novel fast subspace identification method for estimating combined deterministic-stochastic LTI state-space models corresponding to large input-output data is proposed. The algorithm achieves lower runtime RAM usage and reduced data movement between slow (RAM) and fast memory (processor cache), and introduces a novel fast method to estimate the input (\(B\)), feedforward (\(D\)) and steady-state Kalman gain (\(K\)) matrices. By design, the proposed algorithm is especially well-suited to identify multi-scale systems with both fast and slow dynamics. Identification of these systems requires high-frequency data recordings over prolonged periods, leading to large input-output data sizes. For such large data sizes, the proposed algorithm outperforms conventional subspace methods like N4SID and MOESP in terms of memory-cost, flop-count, and computation time. The effectiveness of the proposed algorithm is established by theoretical analysis, various case studies including estimation of practical systems like a nuclear reactor, and by comparison with existing fast subspace methods in the literature.

## I Introduction

Due to the easy availability of sensor readings and the simultaneous development of highly precise identification algorithms (e.g. see [1] and the references therein), data-driven system identification has seen widespread adoption. Undoubtedly, permanent storage on local hard-drives or on the cloud has become cheap and accessible [2]. On the other hand, advances in industrial sensor technology have made long sustained recordings of industrial processes feasible. This has led to wide availability of large amounts of system level input-output data. This data can potentially be used for developing accurate models of the underlying dynamical systems. Conventional system identification algorithms running on personal computers require access to the stored data by copying it to temporary storage such as random access memory (RAM) and then to processor cache memory (PCM). However, processor caches remain relatively expensive and of limited capacity (see Fig. 1). This necessitates frequent transfer of portions of the data between RAM and PCM, thereby degrading algorithm performance. Hence, in this paper, we propose a novel subspace identification algorithm which can handle extremely large input-output datasets with limited cache capacity at a much higher speed as compared to conventional methods. Input-output data is collected from real-time processes in sampled form. Tuning the sampling frequency and the time period over which the data is collected is a simple way to regulate data size for system identification. It is known that the sampling time plays a very crucial role in identifying the underlying model [1]. Conventionally, the sampling frequency is chosen to be around ten times the "guessed" bandwidth of the system ([1], pg. 452). In other words, the sampling frequency is determined by the fastest eigenvalue of the system. On the other hand, the total duration of the collected data is guided by the slowest eigenvalue [3]. Hence, for systems which have both very slow as well as very fast modes, the total number of samples required to identify all the modes becomes very large.
Fig. 1: Memory hierarchy [31]

For example, in PHWR nuclear reactors the fastest time-constants are of the order of 0.05 seconds, while the slowest oscillations due to Xenon occur over 20 hours [4]. A quick calculation shows that sampling at 100 times per second for five days (roughly six times the slowest time constant) leads to a single signal producing a \(100\times 60\times 60\times 24\times 5=43200000\) sized vector. Other examples exhibiting fast and slow dynamics include blast furnaces [5], reactive distillation columns [6], batteries [7], etc. Sub-sampling and/or reducing the recording duration risks mis-identification of the modes. This results in system identification tasks with necessarily very large input-output data sizes. In addition to the scenario mentioned above, there are several other reasons for large input-output data sizes. Theoretically, consistency and normality of the estimates hold asymptotically (see [1], [8], [9]). Hence, even in practice, the data size should be large enough so that the asymptotic results hold approximately. Moreover, since the error covariance is inversely proportional to the sample size [1], robustness of the estimates in the presence of noise requires adequately long sample sizes. Further, various assumptions in popular subspace identification approaches, such as the inputs and additive noises being uncorrelated [10], require large data lengths. An early example of identification of state space models of linear dynamical systems is the Ho-Kalman algorithm [11] based on impulse response data. Subsequently, subspace identification methods based on input-output data were developed ([1, 10]). Variations of subspace based methods include Canonical Variate Analysis (CVA) [12], the Multivariable Output-Error State Space (MOESP) method [13] and Numerical algorithms for State Space System Identification (N4SID) [10]. All subspace identification algorithms involve QR and/or SVD decompositions to be performed on data matrices. When dealing with large sample sizes as mentioned above, this leads to prohibitively large time/space complexity. For further discussion let us define the algorithm computation-time as [14], \[T_{algo}=\#flops\times\gamma+\#messages\times\alpha+\#words\times\beta \tag{1}\] where \(\gamma\) denotes the time per flop, \(\alpha\) denotes the latency and \(\beta\) is the inverse of the memory bandwidth. The last two terms of the above equation constitute the communication between slow (RAM) and fast memory (PCM). The \(\#words\) is calculated as the total number of words read and written between RAM and PCM, while \(\#messages\) is calculated as the total number of data packets moved between PCM and RAM during reads and writes ([14, 15]).
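For intuition, (1) can be evaluated directly; the hardware constants in the sketch below are illustrative assumptions of ours, not measurements:

```python
GAMMA = 1e-10   # seconds per flop
ALPHA = 1e-6    # latency per message (RAM <-> PCM)
BETA = 1e-9     # seconds per word moved (inverse bandwidth)

def t_algo(flops, messages, words):
    # the cost model of (1)
    return flops * GAMMA + messages * ALPHA + words * BETA

# with large data, an O(N^2) communication term swamps an O(N) flop term
N = 10 ** 7
print(t_algo(flops=100 * N, messages=10 ** 4, words=N ** 2 // 10 ** 3))
```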
In the above context, fast subspace identification has also been investigated in [16, 17, 18], where the focus has been only on the usage of faster QR decomposition methods. In these papers, algorithmic performance was typically characterized by flop counts. However, little or no attention was given to memory usage and data movement, which play crucial roles in determining the computation time for large data matrices (see (1)). In [16], a fast method based on the Schur decomposition and the bi-Lanczos algorithm was proposed to estimate the \(\{A,C\}\) pair only. In [17] and [19], a more general algorithm was proposed to compute Cholesky factors from the data matrices. The algorithm is based on displacement structure and the generalized Schur algorithm [20]. However, these methods have been shown to produce inaccurate estimates for some commonly encountered types of data matrices (e.g. see Example 2 in [19]). In [18], fast multi-order subspace identification was proposed based on only output data (i.e. it was assumed that \(u(t)=0\)). The main focus of that paper was faster estimation of the system matrix (\(A\in\mathbb{R}^{n\times n}\)) at multiple orders (where \(n\) is high) from the observability matrix (\(\Theta\)), which is smaller in size as compared to the data matrix. But the major computational bottleneck comes from the QR decomposition, which depends on the data size \(N\) (since \(N\gg n\)). Although in [17] and [18] the authors have mentioned that for large datasets iterative QR can be used by formulating block matrices, no analysis and/or clarity on the implementation was presented. A review of subspace identification techniques from an industrial process viewpoint can be found in [21], where challenges in identifying systems having both very fast and slow dynamics are pointed out. To the best of the authors' knowledge, the presence of such widely separated timescales and the consequent need for processing large data sizes always lead to degraded efficiency in known algorithms. To address the above issues, we propose a fast subspace identification method for combined LTI deterministic-stochastic systems which simultaneously reduces the data movement between RAM and PCM, for any available PCM size. The proposed algorithm: (i) improves RAM runtime usage, (ii) decreases flop count, and (iii) reduces the amount of data transfers required between RAM and PCM. The primary idea behind the algorithm is partitioning the input/output (I/O) data (say in \(\mathbb{R}^{m\times N}\)) into smaller matrices (say in \(\mathbb{R}^{m\times N_{d}}\)) such that these smaller matrices fit exactly into the available cache memory. This leads to full utilization of the PCM, resulting in reduced communication (see (1)). The required operations such as QR and SVD are done sequentially on these smaller \(\mathbb{R}^{m\times N_{d}}\) sized matrices [14]. Firstly, these computations require less RAM usage during runtime due to iterative updates. Additionally, estimating the \(\{B,D,K\}\) matrices in conventional methods like N4SID and MOESP depends on the I/O data size (\(N\)), making it computationally expensive. A new method to identify the \(\{B,D,K\}\) matrices is introduced which is independent of \(N\) and which takes fewer flops and less computation time as compared to the known methods. The primary idea behind the fast estimation of the \(\{B,D,K\}\) matrices is to exploit the structure of the involved matrices. Our algorithm outperforms the conventional subspace methods (i) N4SID and (ii) MOESP for large sample sizes. Our main contributions are, 1. Combined deterministic-stochastic system identification for large I/O data size. 2. Fast QR decomposition due to reduced data movement between PCM and RAM. 3. Fast \(\{B,D,K\}\) estimates independent of the data size \(N\). 4. Lower runtime RAM usage for QR decomposition due to iterative updates. A preliminary version of this work has been accepted for publication at the 2022 American Control Conference (ACC) [22]. This paper includes the following major extensions compared to [22]: 1. Identification of the stochastic part (section III-E). 2. Theoretical analysis for the proposed algorithm proving that the proposed algorithm is fast as compared to conventional algorithms (Theorem 1) for large data sizes.
3. The proposed algorithm is applied to identify a model of a real-world pressurized heavy water nuclear reactor (PHWR) bulk power variation (section V).

The paper is organized as follows. In section II, conventional subspace identification for combined deterministic-stochastic systems and related issues are reviewed briefly. A fast subspace identification algorithm for a general LTI model is presented in section III. In the next section, the algorithm performance of the proposed method for combined deterministic-stochastic subspace identification is presented. In section V, various case studies are presented to validate the proposed algorithm. The paper is concluded in section VI with future directions.

## II Preliminaries and Problem Formulation

We assume that the input \(u(t)\in\mathbb{R}^{m}\) and the output \(y(t)\in\mathbb{R}^{p}\) of the following \(n^{th}\) order discrete time LTI system of form (2) are recorded up to \(N_{t}\) samples, i.e. \(\{u(i),y(i)\}\) \(\forall i\in\{0,1,\dots,N_{t}-1\}\): \[\begin{split}& x(t+1)=Ax(t)+Bu(t)+Ke(t)\\ & y(t)=Cx(t)+Du(t)+e(t)\end{split} \tag{2}\] Here \(e(t)\in\mathbb{R}^{p}\) is known as the innovations process vector and is assumed to be a white noise sequence with zero mean and finite covariance, i.e. \(\mathbb{E}\{e(t_{1})e^{T}(t_{2})\}=\eta\delta_{t_{1}t_{2}}\) for all time instants \(t_{1}\) and \(t_{2}\), where \(\eta>0\) and \(\delta\) is the Kronecker delta function. The system parameters \(\{A,B,C,D\}\) are of appropriate dimensions: \(A\in\mathbb{R}^{n\times n}\), \(B\in\mathbb{R}^{n\times m}\), \(C\in\mathbb{R}^{p\times n}\), \(D\in\mathbb{R}^{p\times m}\), while the Kalman gain is denoted by \(K\in\mathbb{R}^{n\times p}\). The objective of any subspace identification algorithm is to estimate the model order \(n\) and system parameters \(\{A,B,C,D,K\}\) up to similarity transforms. In the next subsection we briefly review some conventional subspace algorithms based on [23].

### _Conventional subspace algorithms_

Given the I/O time-series data sequence \(\{u(i),y(i)\}\) \(\forall i\in\{0,1,\dots,N_{t}-1\}\), a prediction horizon \(k\) is chosen such that \(k>n\) and the block size \(N:=N_{t}-2k+1\) is defined. Using this data, the following matrices are created: the past input block Hankel matrix (\(U_{p}\)) \(\in\mathbb{R}^{km\times N}\) \[U_{p}:=\begin{bmatrix}u(0)&u(1)&\dots&u(N-1)\\ u(1)&u(2)&\dots&u(N)\\ \vdots&\vdots&\ddots&\vdots\\ u(k-1)&u(k)&\dots&u(k+N-2)\end{bmatrix}\] and the future input block Hankel matrix (\(U_{f}\)) \(\in\mathbb{R}^{km\times N}\) \[U_{f}:=\begin{bmatrix}u(k)&u(k+1)&\dots&u(k+N-1)\\ u(k+1)&u(k+2)&\dots&u(k+N)\\ \vdots&\vdots&\ddots&\vdots\\ u(2k-1)&u(2k)&\dots&u(2k+N-2)\end{bmatrix}\] Similarly, \(Y_{p},Y_{f}\in\mathbb{R}^{kp\times N}\) are created using the past/future output data. Although no recordings of noise are assumed to be available, for the sake of notational convenience, similar matrices are also defined for the corresponding past and future innovations processes: \(E_{p},E_{f}\in\mathbb{R}^{kp\times N}\). The past input and output data are combined into \(W_{p}:=\begin{bmatrix}U_{p}^{T}&Y_{p}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{k(m+p)\times N}\).
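A minimal NumPy sketch of this construction (the helper function is ours; variable names follow the text):

```python
import numpy as np

def block_hankel(u, first, k, N):
    # u: (m, Nt) data; block row i stacks u(first+i), ..., u(first+i+N-1)
    m = u.shape[0]
    H = np.empty((k * m, N))
    for i in range(k):
        H[i * m:(i + 1) * m, :] = u[:, first + i:first + i + N]
    return H

m, k, Nt = 2, 10, 1000
N = Nt - 2 * k + 1
u = np.random.randn(m, Nt)
U_p = block_hankel(u, 0, k, N)   # past inputs,   km x N
U_f = block_hankel(u, k, k, N)   # future inputs, km x N
```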
We further denote \(\Theta_{k}\in\mathbb{R}^{kp\times n}\) as the extended observability matrix, \(\Psi_{k}\in\mathbb{R}^{kp\times km}\) as the impulse response Toeplitz matrix, and \(\Phi_{k}\in\mathbb{R}^{kp\times kp}\) as the noise impulse response Toeplitz matrix, as shown below. \[\Theta_{k}=\begin{bmatrix}C\\ CA\\ CA^{2}\\ \vdots\\ CA^{k-1}\end{bmatrix} \Psi_{k}=\begin{bmatrix}D&0&\dots&0\\ CB&D&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ CA^{k-2}B&\dots&CB&D\end{bmatrix}\] \[\Phi_{k}=\begin{bmatrix}I&0&\dots&0\\ CK&I&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ CA^{k-2}K&\dots&CK&I\end{bmatrix}\]

**Assumption 1**.: _[_23_]_ _Following are assumed:_ 1. _The input_ \(u(t)\) _is persistently exciting of order_ \(2k\)_._ 2. _The input_ \(u(t)\) _is uncorrelated with innovations_ \(e(t)\)_._ 3. _No feedback from the output_ \(y(t)\) _to the input_ \(u(t)\) _exists._ 4. _Eigenvalues of_ \((A-KC)\) _are stable._ 5. _The pair_ \(\{A,C\}\) _is observable and the pair_ \(\{A,[B,K]\}\) _is controllable._

Using (2) recursively and with the data matrices as defined above, we get \[Y_{f}=\Theta_{k}X_{f}+\Psi_{k}U_{f}+\Phi_{k}E_{f}. \tag{3}\] Under the assumptions listed above and for large prediction horizons \(k\), it can be shown [23] that \(X_{f}=L_{p}W_{p}\) for \(L_{p}:=\begin{bmatrix}\Upsilon_{k}&\Upsilon_{k}^{e}\end{bmatrix}\in\mathbb{R}^{n\times k(m+p)}\), where \[\Upsilon_{k}=\begin{bmatrix}\bar{A}^{k-1}\bar{B}&\bar{A}^{k-2}\bar{B}&\dots&\bar{B}\end{bmatrix}\in\mathbb{R}^{n\times km}\] is the modified reversed extended controllability matrix and \[\Upsilon_{k}^{e}:=\begin{bmatrix}\bar{A}^{k-1}K&\bar{A}^{k-2}K&\dots&K\end{bmatrix}\in\mathbb{R}^{n\times kp}\] is the modified reversed extended stochastic controllability matrix, where \(\bar{A}:=A-KC\) and \(\bar{B}:=B-KD\). Thereby (3) reduces to \[Y_{f}=\Theta_{k}L_{p}W_{p}+\Psi_{k}U_{f}+\Phi_{k}E_{f}. \tag{4}\] Subspace algorithms use orthogonal and/or oblique projections to extract subspaces that contain system related information like the extended observability matrix (\(\Theta_{k}\)) and/or a Kalman state sequence (\(X_{f}\)). One of the methods to accomplish this is to orthogonally project \(Y_{f}\) onto the joint span of \(W_{p}\) and \(U_{f}\) as follows: \[\begin{split} Y_{f}/\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}&=\Theta_{k}L_{p}W_{p}/\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}+\Psi_{k}U_{f}/\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}+\Phi_{k}E_{f}/\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}\\ &=\Theta_{k}L_{p}W_{p}+\Psi_{k}U_{f}\end{split} \tag{5}\] The third term in the above equation becomes zero because \(E_{f}\) is not correlated with \(W_{p}\) and \(U_{f}\) in open-loop [23]. Now \(Y_{f}\) orthogonally projected onto the joint span of \(W_{p}\) and \(U_{f}\) can also be written as \[\begin{split} Y_{f}/\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}&=Y_{f}/_{U_{f}}W_{p}+Y_{f}/_{W_{p}}U_{f}\\ &=\underbrace{L_{W_{p}}W_{p}}_{:=\zeta}+L_{U_{f}}U_{f}\end{split} \tag{6}\] On comparing equations (5) and (6) we get \(\zeta=\Theta_{k}L_{p}W_{p}\). This \(\zeta\) is defined (see (6)) as the oblique projection of \(Y_{f}\) along \(U_{f}\) onto \(W_{p}\) and hence can be computed from data. An efficient way to calculate such orthogonal/oblique projections is using the \(QR\) decomposition.

#### III-A1 QR step

Perform LQ decomposition on \(H:=\begin{bmatrix}U_{f}^{T}&W_{p}^{T}&Y_{f}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{2k(m+p)\times N}\) to obtain the decomposition of \(Y_{f}\) as shown in (4).
The LQ decomposition of \(H\) can be written as \[H=\begin{bmatrix}U_{f}\\ W_{p}\\ Y_{f}\end{bmatrix} =\begin{bmatrix}R_{11}&0&0\\ R_{21}&R_{22}&0\\ R_{31}&R_{32}&R_{33}\end{bmatrix}\begin{bmatrix}Q_{1}^{T}\\ Q_{2}^{T}\\ Q_{3}^{T}\end{bmatrix} \tag{7}\] Hence, from the LQ decomposition \(Y_{f}\) becomes \[Y_{f}=\underbrace{R_{31}Q_{1}^{T}+R_{32}Q_{2}^{T}}_{I}+\underbrace{R_{33}Q_{3}^{T}}_{II} \tag{8}\] Now, substituting the values of \(Q_{1}^{T}\) and \(Q_{2}^{T}\) in terms of the \(R\) factors from the LQ decomposition (see (7)) into (8) we get \[Y_{f} =R_{32}R_{22}^{\dagger}W_{p}+(R_{31}-R_{32}R_{22}^{\dagger}R_{21})R_{11}^{-1}U_{f}+R_{33}Q_{3}^{T} \tag{9}\] The above equation is purely in terms of known data matrices. Now, the orthogonal projection of \(Y_{f}\) onto the joint span of \(W_{p}\) and \(U_{f}\) becomes (since \(Q_{3}\) is orthogonal to \(Q_{1}\) and \(Q_{2}\); see (8)) \[Y_{f}\begin{bmatrix}W_{p}\\ U_{f}\end{bmatrix}=R_{32}R_{22}^{\dagger}W_{p}+(R_{31}-R_{32}R_{22}^{\dagger}R_{21})R_{11}^{-1}U_{f} \tag{10}\] On comparing (6) and (10), \[\bar{L}_{p}:=L_{W_{p}}=R_{32}R_{22}^{\dagger} \tag{11}\] Now, using (3) and (5), the first terms of \(Y_{f}\) can be equated as \[\zeta:=\underbrace{\Theta_{k}X_{f}}_{theoretical}=\underbrace{\bar{L}_{p}W_{p}}_{data} \tag{12}\] where \(\zeta\in\mathbb{R}^{kp\times N}\).

#### III-A2 SVD step

Next we calculate the SVD of \(\zeta\) as follows: \[\zeta =\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}\begin{bmatrix}S_{1}&0\\ 0&S_{2}\end{bmatrix}\begin{bmatrix}V_{1}^{T}\\ V_{2}^{T}\end{bmatrix}=U_{1}S_{1}V_{1}^{T}+\underbrace{U_{2}S_{2}V_{2}^{T}}_{noise}\approx U_{1}S_{1}V_{1}^{T}=\underbrace{U_{1}S_{1}^{1/2}}_{\Theta_{k}}\underbrace{S_{1}^{1/2}V_{1}^{T}}_{\hat{X}_{f}} \tag{13}\] The second term is ignored assuming that the noise component is negligible as compared to the system contribution. The order of the system is determined from \(rank(\zeta)\) = \(rank(\Theta_{k})\) = \(n\).

### _Estimating system matrices_

A brief overview of the N4SID and MOESP classes of algorithms is presented below.

#### III-B1 N4SID

System parameters are found by using the estimate of the state vector \(X_{f}\) and solving the equation below in a least squares sense, assuming the model is given in process form: \[\begin{bmatrix}\bar{X}_{k+1}\\ \bar{Y}_{k}\end{bmatrix}=\begin{bmatrix}A&B\\ C&D\end{bmatrix}\begin{bmatrix}\bar{X}_{k}\\ \bar{U}_{k}\end{bmatrix}+\begin{bmatrix}\bar{W}_{k}\\ \bar{V}_{k}\end{bmatrix} \tag{14}\] where \(\bar{X}_{k+1},\bar{X}_{k}\in\mathbb{R}^{n\times(N-1)}\), \(\bar{U}_{k}\in\mathbb{R}^{m\times(N-1)}\) and \(\bar{Y}_{k}\in\mathbb{R}^{p\times(N-1)}\). The second term on the RHS of the above equation is the residual of the least squares solution, where \(\bar{W}_{k}\in\mathbb{R}^{n\times(N-1)}\) and \(\bar{V}_{k}\in\mathbb{R}^{p\times(N-1)}\) are matrices corresponding to the process noise \(w(k)\) and measurement noise \(v(k)\) respectively. The residuals can be used to estimate the covariance matrices \(\{Q,S,R\}\), from which \(K\) can be estimated easily. For more details on the implementation and the structure of \(\bar{X}_{k+1}\), \(\bar{X}_{k}\), \(\bar{U}_{k}\), \(\bar{Y}_{k}\), \(\bar{W}_{k}\) and \(\bar{V}_{k}\), refer to [10, 24].
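Putting the QR and SVD steps above together, a compact NumPy sketch (implementation details such as the thin QR of \(H^{T}\) and the pseudo-inverse are our choices):

```python
import numpy as np

def subspace_projection(Uf, Wp, Yf, n):
    H = np.vstack([Uf, Wp, Yf])
    R = np.linalg.qr(H.T, mode='r')    # LQ of H via QR of H^T; L = R^T
    L = R.T
    r1, r2 = Uf.shape[0], Wp.shape[0]
    R22 = L[r1:r1 + r2, r1:r1 + r2]
    R32 = L[r1 + r2:, r1:r1 + r2]
    zeta = R32 @ np.linalg.pinv(R22) @ Wp          # (11)-(12)
    U, S, Vt = np.linalg.svd(zeta, full_matrices=False)
    Theta_k = U[:, :n] * np.sqrt(S[:n])            # (13): U1 S1^(1/2)
    Xf_hat = np.sqrt(S[:n])[:, None] * Vt[:n, :]   # (13): S1^(1/2) V1^T
    return Theta_k, Xf_hat
```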
#### III-B2 MOESP

Once we have obtained the oblique projection \(\zeta\) (see (12)), the SVD of the weighted \(\zeta\) defined by \(\zeta_{MOESP}:=W_{1}\zeta W_{2}\) is calculated, where \(W_{1}=I\) and \(W_{2}=\Pi_{U_{f}}^{\perp}\) (an orthogonal projection matrix): \[\zeta_{MOESP}=\zeta\Pi_{U_{f}}^{\perp}=U_{1}\Sigma_{1}V_{1m}^{T} \tag{15}\] The extended observability matrix \(\Theta_{k}\) can be obtained from the above equation as \[\Theta_{k}=U_{1}\Sigma_{1}^{1/2} \tag{16}\] Now, \(A\) is estimated using the shift invariance property on \(\Theta_{k}\) and \(C\) is estimated by reading the first \(p\) rows of \(\Theta_{k}\): \[\hat{A}=\underset{A\in\mathbb{R}^{n\times n}}{\arg\min}\,||\Theta_{k}^{\downarrow}A-\Theta_{k}^{\uparrow}||_{F}=(\Theta_{k}^{\downarrow})^{\dagger}\Theta_{k}^{\uparrow} \tag{17}\] and \[\hat{C}=\Theta_{k}(1:p,:) \tag{18}\] where \(\Theta_{k}^{\downarrow}:=\Theta_{k}(1:p(k-1),:)\) and \(\Theta_{k}^{\uparrow}:=\Theta_{k}(p+1:kp,:)\).

**NOTE 1**.: _Once \(A\) and \(C\) are fixed, the state space basis of the system is fixed and thus \(\{B,D,K\}\) are uniquely defined._

We review two different approaches to estimate \(\{B,D,K\}\): firstly, MATLAB's n4sid routine, which involves the full data to estimate \(\{B,D,K\}\), and secondly the Verhaegen approach, which makes use of the full data to estimate \(K\). \((i)\) MATLAB's n4sid routine [26] uses 'MOESP' based on [13] and computes the weighting matrices for calculating the SVD. Now, the estimation of \(\{B,D\}\) is done by solving the set of \(N\) linear equations (an overdetermined system) given by \[y=(C(qI-A)^{-1}B+D)u \tag{19}\] where \(u\) and \(y\) denote the input and output data sequences respectively. The estimation of \(K\) is done using residuals, similar to N4SID. For details see [1] and [26]. \((ii)\) According to Verhaegen, \(\{B,D\}\) are estimated using a different approach as compared to MATLAB-MOESP (see equation (45) in [27]). From the SVD (see (13)), define \(U_{2}^{T}:=\begin{bmatrix}L_{1}&L_{2}&\ldots&L_{k}\end{bmatrix}\in\mathbb{R}^{(kp-n)\times km}\) and \(U_{2}^{T}R_{31}R_{11}^{-1}:=\begin{bmatrix}M_{1}&M_{2}&\ldots&M_{k}\end{bmatrix}\in\mathbb{R}^{(kp-n)\times km}\). Then, solve the following overdetermined system of linear equations to estimate \(\{B,D\}\): \[\begin{bmatrix}M_{1}\\ M_{2}\\ \vdots\\ M_{k-1}\\ M_{k}\end{bmatrix}=\begin{bmatrix}L_{1}&L_{2}&\ldots&L_{k-1}&L_{k}\\ L_{2}&L_{3}&\ldots&L_{k}&0\\ \vdots&\vdots&\iddots&\vdots\\ L_{k-1}&L_{k}&&\\ L_{k}&0&\ldots&0&0\end{bmatrix}\begin{bmatrix}I&0\\ 0&\Theta_{k}^{\downarrow}\end{bmatrix}\begin{bmatrix}D\\ B\end{bmatrix} \tag{20}\] We briefly outline the estimation of \(K\) according to Verhaegen [28]. The algorithm is as follows: 1. Estimate the initial condition \(\hat{x}_{d}(0)\) from data using \(\{\hat{A},\hat{B},\hat{C},\hat{D}\}\) as \(\hat{x}_{d}(0)=\hat{\Theta}_{N}^{\dagger}(Y-\hat{Y}_{x_{d}(0)})\) [29]. 2. Calculate the deterministic state sequence \(x_{d}(t)\) and output \(y_{d}(t)\) using \(\{\hat{A},\hat{B},\hat{C},\hat{D}\}\) and the initial condition \(\hat{x}_{d}(0)\). 3. Estimate the covariances using \(y_{s}(t)\) and \(\Theta_{k}\) as defined in [28], where \(y_{s}(t)=y(t)-y_{d}(t)\). 4. Solve the Riccati equation and then estimate \(K\). An important observation is that the above method needs reconstruction of the states \(x_{d}(t)\) to estimate \(K\). The issue is that all these methods make use of the full data to estimate \(\{B,D,K\}\).
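By contrast, the \(\{A,C\}\) estimates in (16)-(18) involve only the small matrix \(\Theta_{k}\) and amount to a few lines (a sketch; the helper name is ours):

```python
import numpy as np

def estimate_A_C(Theta_k, p):
    # (17): shift invariance, pinv of Theta_k without its last p rows
    #       applied to Theta_k without its first p rows
    A_hat = np.linalg.pinv(Theta_k[:-p, :]) @ Theta_k[p:, :]
    C_hat = Theta_k[:p, :]          # (18): the first p rows
    return A_hat, C_hat
```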
The major steps of conventional subspace identification are presented in Algorithm 1.

```
1 Formulate data matrices from input-output data.
2 Perform LQ decomposition on \(H=\begin{bmatrix}U_{f}^{T}&U_{p}^{T}&Y_{p}^{T}&Y_{f}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{2k(m+p)\times N}\) (see (7)).
3 Perform SVD on \(\zeta=\bar{L}_{p}W_{p}\in\mathbb{R}^{kp\times N}\) to estimate \(\Theta_{k}\) and/or \(X_{f}\) (see (13)).
4 Estimate \(\{A,B,C,D,K\}\) using an N4SID- or MOESP-type algorithm (refer to Section II-B).
```
**Algorithm 1** Conventional subspace identification

### _Main issues_

The following are the major issues that we address in this paper:

1. The LQ decomposition of the large input-output data matrix (\(H\)).
2. After the SVD, the estimation of \(\{B,D,K\}\) involves the full data, while \(\{A,C\}\) are estimated using the smaller matrix \(\Theta_{k}\).

To address issue 1, we use a sequential QR decomposition based on [14], for two reasons: first, we need only the \(R\) factor for further computation, and sequential QR allows us to discard the intermediate \(Q\) factors; second, the algorithm requires the minimum possible data movement between slow and fast memory (it is communication optimal). Therefore, this approach is faster and saves space compared to conventional QR. To address issue 2, we propose a novel method that exploits the structure of various matrices. A brief overview of sequential QR is presented below; for details, refer to [14, 15].

#### II-C1 Sequential QR decomposition

The algorithm is as follows. Given a tall matrix \(A\in\mathbb{R}^{N\times m}\), i.e., with \(N\gg m\), let \(A\) be partitioned into \(d\) block rows as \(A:=\begin{bmatrix}A_{1}^{T}&A_{2}^{T}&\ldots&A_{d}^{T}\end{bmatrix}^{T}\), where \(A_{i}\in\mathbb{R}^{N_{d}\times m}\) and \(N_{d}:=N/d\in\mathbb{Z}\). The factor \(d\) can be chosen such that \(N_{d}\geq m\) and an \(N_{d}\times m\) block fits into PCM. For example, consider \(d=3\), \(A:=\begin{bmatrix}A_{1}\\ A_{2}\\ A_{3}\end{bmatrix}\). Begin with the QR decomposition of \(A_{1}\): \(A_{1}=Q_{1}R_{1}\), so that \(A=\begin{bmatrix}Q_{1}R_{1}\\ A_{2}\\ A_{3}\end{bmatrix}=\begin{bmatrix}Q_{1}&\\ &I\end{bmatrix}\begin{bmatrix}R_{1}\\ A_{2}\\ A_{3}\end{bmatrix}\). Next, combine \(R_{1}\) and \(A_{2}\) for a QR decomposition, \(\begin{bmatrix}R_{1}\\ A_{2}\end{bmatrix}=Q_{2}R_{2}\), so that \(\begin{bmatrix}R_{1}\\ A_{2}\\ A_{3}\end{bmatrix}=\begin{bmatrix}Q_{2}R_{2}\\ A_{3}\end{bmatrix}=\begin{bmatrix}Q_{2}&\\ &I\end{bmatrix}\begin{bmatrix}R_{2}\\ A_{3}\end{bmatrix}\). Continue this sequential QR until all the blocks \(A_{i}\) are processed; finally, \(A=QR_{3}\), and at the end of the \(d=3\) iterations we obtain \(R_{3}\). At each iteration, only \(R_{i}\) is required for further calculation, and it is updated iteratively (a short sketch is given below, after Assumption 2).

#### II-C2 Full fast memory utilization

Let \(W\) denote the size of the fast memory (in floating-point words). There exists an optimal choice of \(d\) in the sense of the minimum possible data movement between slow and fast memory [15], defined as

\[d^{*}=\frac{Nm}{W-m(m+1)/2} \tag{21}\]

**Assumption 2**.: \(W\geq\frac{Nm}{d}+\frac{m(m+1)}{2}\)

The second term in the above assumption accounts for storing the previous block's \(R\) factor.
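A minimal numpy sketch of the sequential reduced QR described in II-C1 (illustrative; the blocks may be streamed from slow memory one at a time):

```python
import numpy as np

def sequential_reduced_qr(block_iter):
    """Sequential 'reduced' QR of a tall matrix supplied block by block,
    as in the d=3 example above: only the R factor is kept and the Q_i
    are discarded."""
    R = None
    for Ai in block_iter:                        # Ai is one N_d x m block
        stacked = Ai if R is None else np.vstack([R, Ai])
        R = np.linalg.qr(stacked, mode='r')      # reduced QR, keep R only
    return R                                     # R_d, with A = Q R_d
```

Because each iteration only stacks the previous small factor \(R\) on top of the next block, the resident data is of the order of \(N_{d}m+m(m+1)/2\) words, consistent with Assumption 2.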
### _Algorithm performance_

We evaluate the algorithm performance based on the following performance metrics:

1. Memory-cost (denoted by \(M\)) is defined as the RAM space needed for the algorithm to identify the system parameters. Let one unit of space be required to store one word [14]; for example, a matrix \(A\in\mathbb{R}^{m\times n}\) takes \(mn\) units of space.
2. Flop count (denoted by \(F\)) is defined as the number of additions and multiplications required to estimate the system matrices \(\{A,B,C,D,K\}\) once the QR and SVD steps are done (Section II-B).
3. Data movement between PCM (fast memory) and RAM (slow memory) is characterized by the number of words moved (\(\#words\)) and the number of messages (\(\#messages\)).

**Definition 1**.: The algorithm cost (\(C\)) is defined as the sum of the three criteria mentioned above:

\[C=M+F+(\#words+\#messages) \tag{22}\]

### _Problem Formulation_

Based on the above criteria, we define the following problem:

**Problem 1**.: Design a sequential system identification algorithm that reduces the cost defined in (22) while minimizing the error between the predicted and actual output (MSE minimization),

\[MSE=\frac{1}{N_{v}}\sum_{i=1}^{N_{v}}(y_{i}-\hat{y}_{i})^{2}\]

where \(y_{i}\) is the actual output and \(\hat{y}_{i}\) is the predicted output, \(i\in\{1,2,\ldots,N_{v}\}\).

For the conventional methods (see the previous section), the cost defined in (22) can increase substantially for large \(N\):

1. The memory-cost (\(M_{conv}\)) is largely due to the input-output data, the formulation of big matrices like \(U_{p}\), \(U_{f}\), \(Y_{p}\), \(Y_{f}\) and \(W_{p}\), and intermediate matrices like \(\zeta\). The space required is \((p+m)N\) for the input-output data, \(2k(m+p)N\) for the QR decomposition (see (7)), \(k(m+p)N\) for \(W_{p}\) and \(kpN\) to store \(\zeta\). Adding all of these, we get \[M_{conv}\approx(p+m+3km+4kp)N\] (23) An additional memory-cost of \(O(N)\) is needed for the stochastic part, i.e., \(K\), in all the algorithms; we omit the exact calculations.
2. The flop-count (\(F_{conv}\)) to estimate \(\{A,B,C,D,K\}\) is dominated by a computational complexity of at least \(O(N)\) in all conventional methods, \[F_{conv}\approx O(N)\] (24) For example, N4SID uses least squares with \(N\) equations (see (14)) and computes \(\{Q,S,R\}\) from the model residuals, which involves \(O(N)\) computation to estimate \(K\); estimation of \(\{B,D,K\}\) by the MOESP approach also involves \(O(N)\) computation.
3. Data moved: \(\#words\) and \(\#messages\) are dominated by the QR and SVD steps. The data movement turns out to be approximately \(O(N^{2})\), due to the size of the \(Q\) matrix from the QR decomposition in (7) and of the right singular vector matrix \(V^{T}\) from the SVD in (13).

## III Fast Subspace Identification

To address the issues discussed in Section II-C, we propose an algorithm based on sequential QR decomposition, an efficient SVD computation, and a new, faster method to estimate the \(\{B,D,K\}\) matrices. In the proposed method, \(\{A,B,C,D,K\}\) are estimated independently of the I/O data size (\(N\)), which is very large.

### _Modified sequential reduced QR_

Since we need only the \(R\) factor from the QR decomposition for further computation (see (11)), we can use the sequential QR method, which at the end of the sequence gives the \(R\) matrix while all the \(Q_{i}\) are discarded. Now, \(H^{T}=QR\) implies \(H=R^{T}Q^{T}\), so the lower-triangular factor is \(L=R^{T}=\begin{bmatrix}R_{11}&0&0\\ R_{21}&R_{22}&0\\ R_{31}&R_{32}&R_{33}\end{bmatrix}\), where \(L\in\mathbb{R}^{2k(m+p)\times 2k(m+p)}\). Let the number of partitions \(d\) be such that \(N_{d}\geq 2k(m+p)\) and Assumption 2 is satisfied. Then \(H=\begin{bmatrix}H_{1}&H_{2}&\ldots&H_{d}\end{bmatrix}\) and the computation is done on \(H_{i}^{T}\in\mathbb{R}^{N_{d}\times 2k(m+p)}\) for all \(i\in\{1,d\}\). We formulate \(H_{i}^{T}\) directly from the input-output data, i.e., from the
smaller matrices \(U_{p_{i}}^{T}\), \(U_{f_{i}}^{T}\), \(Y_{p_{i}}^{T}\) and \(Y_{f_{i}}^{T}\), such that \(H_{i}^{T}:=\begin{bmatrix}U_{f_{i}}^{T}&U_{p_{i}}^{T}&Y_{p_{i}}^{T}&Y_{f_{i}}^{T}\end{bmatrix}\). Then we perform the sequential "reduced" QR as given by Algorithm 2.

```
1 Generate \(H_{i}\in\mathbb{R}^{2k(m+p)\times N_{d}}\) sequentially from input-output data, \(\forall i\in[1,d]\).
2 \(i\gets 1\);
3 while \(i\neq d\) do
4   if \(i=1\) then
5     \(R_{1}=reducedqr(H_{1}^{T})\);
6   else
7     \(R_{i+1}=reducedqr\left(\begin{bmatrix}R_{i}\\ H_{i+1}^{T}\end{bmatrix}\right)\);
8   \(i\gets i+1\);
9 After \(d\) steps, \(L=R_{d}^{T}\); use this to extract \(\bar{L}_{p}\) using (11).
```
**Algorithm 2** Modified sequential reduced QR

In the method above, assuming \(W\geq k(m+p)\{2N_{d}+2k(m+p)+1\}\) (see Assumption 2), the QR computation is done on smaller matrices (\(H_{i}\in\mathbb{R}^{2k(m+p)\times N_{d}}\)) in fast memory, with full fast-memory utilization. Therefore, the data movement between slow and fast memory is reduced, leading in turn to a fast QR implementation (see Section II-C1) and a smaller algorithm computation time \(T_{algo}\) (see (1)). Define \(T_{QR}:=T_{algo}\) for Householder QR, \(T_{SQR}:=T_{algo}\) for sequential QR (\(SQR\)), and \(T_{MSQR}:=T_{algo}\) for the modified sequential reduced QR (\(MSQR\)).

**Lemma 1**.: _For the proposed fast subspace identification, \(T_{QR}>T_{SQR}>T_{MSQR}\)._

Proof.: It has been shown in [14] (see Table 2.1 therein) that sequential QR communicates less (reduced data movement between slow and fast memory) than Householder QR, i.e., \(\#words_{QR}>\#words_{SQR}\) and \(\#messages_{QR}>\#messages_{SQR}\), while \(\#flops\) is the same in both cases. Assuming \(\alpha,\beta\) and \(\gamma\) to be the same in both cases, we can then deduce from (1) that \(T_{QR}>T_{SQR}\), which proves the first inequality. Now, \(\#words=\#words_{read}+\#words_{write}\). The modified sequential QR reads the data matrices sequentially, hence \(\#words_{read}\) is the same for \(MSQR\) and \(SQR\), while \(MSQR\) additionally discards the \(Q_{i}\), \(\forall i\in\{1,d\}\), making the number of words moved during the write cycle smaller than for \(SQR\), i.e., \(\#words_{write_{SQR}}>\#words_{write_{MSQR}}\). Therefore, using the same argument as above, \(T_{SQR}>T_{MSQR}\). Hence, \(T_{QR}>T_{SQR}>T_{MSQR}\).

### _Sequential SVD computation_

As mentioned in the preliminaries, the next step is to calculate the oblique projection \(\zeta\) (see (12)) and then perform the SVD according to (13). Since we are interested only in the left singular vectors of \(\zeta\) (see (16)), calculating \(\zeta\in\mathbb{R}^{kp\times N}\) first and then performing the SVD is inefficient. Hence, we propose an efficient method using the MSQR decomposition proposed above. The computation of \(\zeta^{T}=W_{p}^{T}\bar{L}_{p}^{T}\in\mathbb{R}^{N\times kp}\) involves two steps, namely block multiplication and sequential QR.

Step 1 (Block multiplication): We formulate \(W_{p_{i}}^{T}\) directly from the input-output data, i.e., \(W_{p_{i}}^{T}=\begin{bmatrix}U_{p_{i}}^{T}&Y_{p_{i}}^{T}\end{bmatrix}\), \(\forall i\in\{1,d\}\), where \(W_{p_{i}}^{T}\in\mathbb{R}^{N_{d}\times k(m+p)}\). Now, \(\zeta_{i}^{T}\) can be computed as

\[\zeta_{i}^{T}=W_{p_{i}}^{T}\bar{L}_{p}^{T} \tag{25}\]

Step 2 (Modified sequential reduced QR): Perform QR on \(\zeta_{i}^{T}\), \(\forall i\in\{1,d\}\), sequentially, as explained in the above subsection (updating \(R_{i}\) iteratively while discarding \(Q_{i}\)).
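A numpy sketch of this two-step accumulation (illustrative; \(\bar{L}_{p}\) is assumed to have already been extracted via (11), and each \(W_{p_{i}}\) is formed directly from the I/O data):

```python
import numpy as np

def R_zeta_from_blocks(Wp_blocks, Lp):
    """Steps 1-2 above: form zeta_i^T = W_{p_i}^T Lp^T block by block and
    accumulate its R factor with the sequential reduced QR."""
    R = None
    for Wpi in Wp_blocks:                        # Wpi is W_{p_i}, k(m+p) x N_d
        zeta_i_T = Wpi.T @ Lp.T                  # Eq. (25)
        stacked = zeta_i_T if R is None else np.vstack([R, zeta_i_T])
        R = np.linalg.qr(stacked, mode='r')
    return R                                     # R_zeta, with zeta^T = Q_zeta R_zeta
```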
At the end of the \(d\) steps we get \(R_{\zeta}\in\mathbb{R}^{kp\times kp}\), i.e.,

\[\zeta^{T}=Q_{\zeta}R_{\zeta} \tag{26}\]

Now, perform the SVD of \(R_{\zeta}\),

\[R_{\zeta}=U_{r}\Sigma_{r}V_{r}^{T}=\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}\begin{bmatrix}\Sigma_{1}&0\\ 0&0\end{bmatrix}\begin{bmatrix}V_{1}^{T}\\ V_{2}^{T}\end{bmatrix}=U_{1}\Sigma_{1}V_{1}^{T} \tag{27}\]

**Lemma 2**.: _The left singular vectors of \(\zeta\) are equal to the right singular vectors of \(R_{\zeta}\). The singular values of \(\zeta\) and \(R_{\zeta}\) are the same._

Proof.: Using (26) and (27),

\[\zeta=R_{\zeta}^{T}Q_{\zeta}^{T}=V_{1}\Sigma_{1}U_{1}^{T}Q_{\zeta}^{T}=V_{1}\Sigma_{1}(U_{1}^{T}Q_{\zeta}^{T})=V_{1}\Sigma_{1}\bar{U}_{1}^{T} \tag{28}\]

The last step follows from the fact that the product of orthogonal matrices is an orthogonal matrix.

So, instead of computing the SVD of \(\zeta\), we perform the SVD of \(R_{\zeta}\) to estimate \(\Theta_{k}\), which can be computed using (27) as

\[\Theta_{k}=V_{1}\Sigma_{1}^{1/2} \tag{29}\]

Recall that to estimate \(\Theta_{k}\) we need only the left singular vectors (see (16)). The \(\Theta_{k}\) estimate in (29) is the same as in the MOESP case (see (15)), since multiplication by an orthogonal matrix from the right in (28) does not affect the estimate:

\[\zeta_{MOESP}=\zeta\Pi_{U_{f}}^{\perp}=V_{1}\Sigma_{1}\bar{U}_{1}^{T}\Pi_{U_{f}}^{\perp}=V_{1}\Sigma_{1}\bar{U}_{1m}^{T} \tag{30}\]

### _Estimating model order, A and C_

The model order (\(n\)) can be estimated from \(\text{rank}(R_{\zeta})\); in other words, the estimated order is equal to the number of significant singular values of \(R_{\zeta}\). This is usually inferred from a plot of the logarithms of the singular values in \(\Sigma_{r}\), as obtained in (27). Then, \(A\) and \(C\) are estimated using the \(\Theta_{k}\) obtained in (29), as in (17) and (18).

### _Estimating B and D_

We propose a new fast method to estimate \(B\) and \(D\) that does not depend on the I/O data size (\(N\)). First, \(\Psi_{k}\) is estimated from the already-computed LQ decomposition by comparing the second terms of (5) and (10),

\[\Psi_{k}=(R_{31}-R_{32}R_{22}^{\dagger}R_{21})R_{11}^{-1} \tag{31}\]

Then, \(\{B,D\}\) can be estimated using \(\Psi_{k}\) and the \(\Theta_{k}^{\downarrow}\) obtained from (29). We extract the first \(m\) columns of the \(\Psi_{k}\) estimated in (31) and compare them with the first block column of \(\Psi_{k}\) from (3). Therefore,

\[M:=\underbrace{\Psi_{k}(:,1:m)}_{\text{from (31)}}=\underbrace{\begin{bmatrix}D\\ CB\\ \vdots\\ CA^{k-2}B\end{bmatrix}}_{\text{structure of (3)}}=\begin{bmatrix}D\\ \Theta_{k}^{\downarrow}B\end{bmatrix} \tag{32}\]

so that \(\hat{D}=M(1:p,:)\) and \(\hat{B}=(\Theta_{k}^{\downarrow})^{\dagger}M(p+1:kp,:)\), as sketched below.
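A numpy sketch of this \(\{B,D\}\) step (illustrative; the slicing assumes the block structure in (32), and \(L\) is the lower-triangular factor produced by Algorithm 2):

```python
import numpy as np

def estimate_B_D(L, Theta_k, k, m, p):
    """Psi_k from the R blocks of the LQ decomposition, Eq. (31), then D
    and B read off its first block column via Eq. (32)."""
    r1, r2 = k * m, k * (m + p)                  # rows of the U_f and W_p blocks
    R11, R21 = L[:r1, :r1], L[r1:r1 + r2, :r1]
    R22 = L[r1:r1 + r2, r1:r1 + r2]
    R31, R32 = L[r1 + r2:, :r1], L[r1 + r2:, r1:r1 + r2]
    Psi_k = (R31 - R32 @ np.linalg.pinv(R22) @ R21) @ np.linalg.inv(R11)
    M = Psi_k[:, :m]                             # first block column, Eq. (32)
    D_hat = M[:p, :]
    Theta_down = Theta_k[:p * (k - 1), :]
    B_hat, *_ = np.linalg.lstsq(Theta_down, M[p:, :], rcond=None)
    return B_hat, D_hat
```

Note that every matrix involved has dimensions depending only on \(k\), \(m\), \(p\) and \(n\), not on \(N\).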
### _Estimating K_

From (38), \(\frac{1}{N}R_{33}R_{33}^{T}=\Phi_{k}\Omega\Phi_{k}^{T}\), the Cholesky factor becomes

\[\frac{1}{\sqrt{N}}R_{33}=\Phi_{k}\begin{bmatrix}\omega_{1}&0&\ldots&0\\ 0&\omega_{2}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\omega_{k}\end{bmatrix} \tag{39}\]

such that \(\omega_{i}\omega_{i}^{T}=\Omega_{i}\), \(\forall i\in\{1,k\}\), with \(\omega_{i}\in\mathbb{R}^{p\times p}\). Define \(\tilde{R}_{33}:=\frac{1}{\sqrt{N}}R_{33}\); substituting the structure of \(\Phi_{k}\) above, we get

\[\tilde{R}_{33}=\begin{bmatrix}I&0&\ldots&0\\ CK&I&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ CA^{k-2}K&\ldots&CK&I\end{bmatrix}\begin{bmatrix}\omega_{1}&0&\ldots&0\\ 0&\omega_{2}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\omega_{k}\end{bmatrix}=\begin{bmatrix}\omega_{1}&0&\ldots&0\\ CK\omega_{1}&\omega_{2}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ CA^{k-2}K\omega_{1}&CA^{k-3}K\omega_{2}&\ldots&\omega_{k}\end{bmatrix} \tag{40}\]

Now we exploit the structure of the above equation to estimate \(K\). Define

\[P_{k}:=\underbrace{\tilde{R}_{33}(:,1:p)}_{\text{from LQ (8)}}=\underbrace{\begin{bmatrix}\omega_{1}\\ CK\omega_{1}\\ \vdots\\ CA^{k-2}K\omega_{1}\end{bmatrix}}_{\text{structure of (40)}}=\begin{bmatrix}\omega_{1}\\ \Theta_{k}^{\downarrow}K\omega_{1}\end{bmatrix}\]

Hence, \(\hat{\omega}_{1}=P_{k}(1:p,:)\) and, since \(P_{k}(p+1:kp,:)=\Theta_{k}^{\downarrow}K\omega_{1}\), the Kalman gain can be estimated as \(\hat{K}=(\Theta_{k}^{\downarrow})^{\dagger}P_{k}(p+1:kp,:)\hat{\omega}_{1}^{-1}\) (Algorithm 3); this procedure involves only small matrices and no pass over the full data, as sketched below.
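A numpy sketch of this \(K\) step (illustrative; it assumes the block structure of \(P_{k}\) derived above and an invertible \(\hat{\omega}_{1}\)):

```python
import numpy as np

def estimate_K(R33, Theta_k, k, p, N):
    """Read off omega_1 from the first block of P_k = R33(:,1:p)/sqrt(N)
    and recover K by least squares against Theta_k(1:p(k-1),:)."""
    P_k = (R33 / np.sqrt(N))[:, :p]              # first p columns of R33_tilde
    omega1 = P_k[:p, :]
    Theta_down = Theta_k[:p * (k - 1), :]
    K_omega1, *_ = np.linalg.lstsq(Theta_down, P_k[p:, :], rcond=None)
    return K_omega1 @ np.linalg.inv(omega1)      # K_hat
```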
## IV Performance Analysis

### _Memory-cost (\(M_{p}\))_

**Lemma 4**.: _For the proposed method, the memory-cost is less than that of the conventional method, i.e., \(M_{p}<M_{conv}\)._

Proof.: Adding all terms of the RHS column in Table I and ignoring lower-order terms, we get,

\[M_{p}\approx(p+m)N+(3m+4p)kN_{d} \tag{43}\]

Using (23) and (43), it is easy to deduce that \(M_{p}<M_{conv}\) (due to the iterative update and the discarding of the \(Q_{i}\) in the proposed method). We define the % reduction in memory-cost as,

\[\%Reduction=\frac{M_{conv}-M_{p}}{M_{conv}}\times 100=\frac{k(3m+4p)(N-N_{d})}{(p+m+3km+4kp)N}\times 100=\frac{k(3m+4p)(d-1)}{(p+m+3km+4kp)d}\times 100 \tag{44}\]

where the last equality follows on substituting \(N_{d}=N/d\). In the case \(N\gg N_{d}\), i.e., \(d\gg 1\), the % reduction in memory-cost becomes

\[\%Reduction=\frac{k(3m+4p)}{(p+m+3km+4kp)}\times 100 \tag{45}\]

### _Flop-count (\(F_{p}\))_

The flop-count is considered only after the QR and SVD steps have been implemented (see Table II). In the proposed method, \(\{A,B,C,D,K\}\) are estimated independently of the I/O data size \(N\).

**Lemma 5**.: _The flop-count for the proposed method is independent of the data size \(N\), i.e., \(F_{p}\approx O(k^{3}m^{3})\), and \(F_{p}<F_{conv}\)._

Proof.: It is easy to see that to estimate \(\{A,B,C,D\}\) we need to count the flops only for \(A\) (17), \(\Psi_{k}\) (31) and the matrix multiplication in (35) used to estimate \(B\), since \(C\) and \(D\) are directly read off from the intermediate calculations. In the proposed method, \(K\) is estimated using Algorithm 3, for which only steps 2 and 5 contribute to the flop count; for calculation details, refer to Table II. Therefore, under the assumption \(m\geq p\), estimating \(\{A,B,C,D,K\}\) requires

\[F_{p}\approx O(k^{3}m^{3}) \tag{46}\]

Therefore, \(F_{p}\) is independent of the data size \(N\). Using (24) and (46), the % reduction in flop-count can be written as

\[\%Reduction=\frac{F_{conv}-F_{p}}{F_{conv}}\times 100=\frac{O(N)-O(k^{3}m^{3})}{O(N)}\times 100 \tag{47}\]

Since \(km\ll N\), we have \(F_{p}<F_{conv}\).

### _Data moved_

It has been shown in [14] that sequential QR takes asymptotically fewer \(\#messages\) and fewer \(\#words\) moved compared to block Householder QR, resulting in faster computation. In our case, we have applied the modified sequential QR twice: (i) to obtain \(R_{d}\) from \(H\), and (ii) to obtain the left singular vectors of \(\zeta\), i.e., \(R_{\zeta}\). Since all the computation is done in fast memory, the required data movement is reduced for our algorithm.

**Lemma 6**.: _For the proposed algorithm, \((\#words+\#messages)_{p}\) moved is of \(O(N)\). Also, \((\#words+\#messages)_{p}\ll(\#words+\#messages)_{conv}\)._

Proof.: \(\#words\) is dominated by reading the data matrices sequentially, which is approximately \(O(N)\), while \(\#messages\) depends on the number of partitions, i.e., it is approximately \(O(d)\). Hence, the data moved is approximately \(O(N)\). It has been shown in Section II-E that \((\#words+\#messages)_{conv}\) is of \(O(N^{2})\). Hence, \((\#words+\#messages)_{p}\ll(\#words+\#messages)_{conv}\).

A comparison of all the performance criteria for combined deterministic-stochastic identification is shown in Table III, with details about the intermediate steps in Table IV. Finally, from Lemma 4, Lemma 5 and Lemma 6, it follows that \(C_{p}<C_{conv}\).

**Theorem 1**.: _For combined deterministic-stochastic subspace identification, the algorithm cost of the proposed method is less than that of the conventional algorithm, i.e., \(C_{p}<C_{conv}\)._
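As a concrete numerical check of (45) (an illustrative computation, not taken from the source tables):

```python
def pct_reduction(k, m, p):
    """Asymptotic memory-cost reduction of Eq. (45), valid for d >> 1."""
    return 100.0 * k * (3 * m + 4 * p) / (p + m + 3 * k * m + 4 * k * p)

print(pct_reduction(k=10, m=1, p=1))   # ~97.2% reduction for a SISO setup
```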
**NOTE 3**.: _In [30], an alternate method is proposed for estimating \(K\) which, like the method proposed in this article, is independent of the data size \(N\). However, for that method the memory-cost \(M^{K}\) turns out to be \(np(k-1)^{2}+p^{2}(k-1)^{2}+(kp)^{2}+pn\); the major memory-cost comes from the matrix formulation in equation (29) of [30]. Further, equations (28) and (29) in [30] require solving a least-squares problem with \(p(k-1)^{2}\) equations and \(pn\) unknowns, along with some intermediate calculations, to finally estimate \(K\). Hence we can conclude that the proposed method to estimate \(K\) is more efficient in both memory-cost and flop-count compared to [30]._

## V Case studies

All simulations were performed on an Intel Core-i7 (9th generation) with a Level-2 cache of 2 MiB (fast memory), 32 GB of RAM (slow memory) and a 1 TB hard drive, using MATLAB 2020b. We have defined the algorithm cost \(C\) (22), which includes \(\#messages\) and \(\#words\); however, these quantities cannot be measured directly in the experiments we perform next. On the other hand, the algorithm computation time \(T_{algo}\) (see (1)) can be measured easily using "tic-toc" in MATLAB, so we use \(T_{algo}\) as a proxy for the flops, \(\#messages\) and \(\#words\) in the numerical experiments below. Further, since \(W\) is unknown, the optimal value of \(d\) in (21) is also not known; instead, we choose \(d\) empirically based on the I/O data size. For all the case studies, the memory-cost was computed theoretically based on the results derived above, while the actual computation time was measured experimentally using the "tic-toc" command in MATLAB. The average computation time (ACT) was calculated by averaging the time taken over 100 simulation runs.

### _Synthetic models_

1. Consider the following SISO system \[A=\begin{bmatrix}-0.03&0.9849\\ 0&-10\end{bmatrix},\ B=\begin{bmatrix}\ddagger\\ \end{bmatrix},\ C=\begin{bmatrix}2.4622&2.5\end{bmatrix},\ D=\begin{bmatrix}0\end{bmatrix},\ K^{T}=\begin{bmatrix}1&2\end{bmatrix}\] In this case, \(e\) is a white-noise sequence with zero mean and \(\Omega\) = 1.0079e-5. The sampling time, based on the fast dynamics, is taken as \(\Delta_{t}\) = 0.01 sec, while the number of training samples is \(N_{t}\) = 50019, with \(k\) = 10, \(N\) = 50000 and \(d\) = 40; this implies \(N_{d}\) = 1250 \(<N\) and \(N_{v}\) = 15000. The performance comparison is given in Table V, and the estimated model is validated by plotting \(y\) and \(\hat{y}\), as shown in Fig. 1.

2. Consider the following MIMO system \[A=\begin{bmatrix}-5.2070&-1.8545&3.9312\\ -1.9278&-5.3306&-2.5527\\ 3.9688&-2.5146&-6.5024\end{bmatrix},\ B=\begin{bmatrix}1&0\\ 1&1\\ 0&1\end{bmatrix},\ C=\begin{bmatrix}1&0&1\\ 1&1&0\end{bmatrix},\ D=\begin{bmatrix}0\end{bmatrix},\ K=\begin{bmatrix}0&0.2\\ 0.1&0\\ 0.1&0\end{bmatrix}\] In this case, \(e\) is a white-noise sequence with zero mean, var(\(e_{1}\)) = 1.0031e-05 and var(\(e_{2}\)) = 1.0073e-05. The sampling time, based on the fast dynamics, is taken as \(\Delta_{t}\) = 0.01 sec, while the number of training samples is \(N_{t}\) = 32119, with \(k\) = 10, \(N\) = 32100 and \(d\) = 50; this implies \(N_{d}\) = 642 \(<N\) and \(N_{v}\) = 10000. The performance comparison is given in Table VI, and the estimated model was validated by plotting \(y\) and \(\hat{y}\), as shown in Fig. 3.

The next case study involves a system with both fast and slow dynamics: in PHWR nuclear reactors, the fastest time constants are of the order of 0.05 seconds, while the slowest oscillations, due to Xenon, occur over 20 hours [4]. For more details about the system dynamics considered in this case study, the reader is referred to [32]. We estimate only the closed-loop model parameters. **Bulk power model:** This is a \(7^{th}\)-order SISO model with 6 zeros.
The fastest pole is \(\lambda_{fastest}=-6.834\), while the slow dynamics are dominated by \(\lambda_{slowest}=-0.02739\). The sampling time, based on the fast dynamics, is taken as \(\Delta_{t}\) = 0.02 sec, while the number of training samples is \(N_{t}\) = 22599, with \(k\) = 50, \(N\) = 22500 and \(d\) = 100; this implies \(N_{d}\) = 225 \(\ll N\) and \(N_{v}\) = 7500. The estimated model was validated by plotting \(y\) and \(\hat{y}\), as shown in Fig. 4. In this example, we have considered three subcases (a), (b) and (c) with different noise variances 1.0124e-x, where x = 25, 15 and 10, respectively. The performance comparison is given in Table VII; for all cases, the bold font in Table VII denotes the best performance in terms of the least MSE, computation time and memory space. These case studies also verify that, as the number of training samples \(N_{t}\) increases, the conventional subspace method takes a significantly longer computation time compared to the proposed method. A significant reduction in memory-cost, along with a comparable MSE, was observed.

## VI Conclusion and Future Work

A novel fast subspace identification algorithm to identify a combined deterministic-stochastic LTI state-space model has been presented. The proposed algorithm outperforms the conventional subspace methods in terms of memory-cost (\(M\)), flop-count (\(F\)) and computation time, while an MSE comparable to, or slightly higher than, that of the conventional methods was observed in the experiments. It should further be noted that here we have considered the PCM as the fast memory and the RAM as the slow memory; when the data set is so large that it cannot fit into the RAM, the same concept can be extended by treating the RAM as the fast memory and the SSD/HDD as the slow memory.

## VII Acknowledgment

The authors gratefully acknowledge the contribution of Dr P. Vaswani and the Nuclear Power Corporation of India Ltd. in providing us with the bulk power model of the Indian PHWR.
2310.11779
A Multivariate Skew-Normal-Tukey-h Distribution
We introduce a new family of multivariate distributions by taking the component-wise Tukey-h transformation of a random vector following a skew-normal distribution. The proposed distribution is named the skew-normal-Tukey-h distribution and is an extension of the skew-normal distribution for handling heavy-tailed data. We compare this proposed distribution to the skew-t distribution, which is another extension of the skew-normal distribution for modeling tail-thickness, and demonstrate that when there are substantial differences in marginal kurtosis, the proposed distribution is more appropriate. Moreover, we derive many appealing stochastic properties of the proposed distribution and provide a methodology for the estimation of the parameters in which the computational requirement increases linearly with the dimension. Using simulations, as well as a wine and a wind speed data application, we illustrate how to draw inferences based on the multivariate skew-normal-Tukey-h distribution.
Sagnik Mondal, Marc G. Genton
2023-10-18T08:17:12Z
http://arxiv.org/abs/2310.11779v1
# A Multivariate Skew-Normal-Tukey-\(h\) Distribution

Sagnik Mondal\({}^{1}\) and Marc G. Genton\({}^{1}\)

\({}^{1}\)Statistics Program, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia. E-mail: [email protected], [email protected]

**Abstract:** We introduce a new family of multivariate distributions by taking the component-wise Tukey-\(h\) transformation of a random vector following a skew-normal distribution. The proposed distribution is named the skew-normal-Tukey-\(h\) distribution and is an extension of the skew-normal distribution for handling heavy-tailed data. We compare this proposed distribution to the skew-\(t\) distribution, which is another extension of the skew-normal distribution for modeling tail-thickness, and demonstrate that when there are substantial differences in marginal kurtosis, the proposed distribution is more appropriate. Moreover, we derive many appealing stochastic properties of the proposed distribution and provide a methodology for the estimation of the parameters in which the computational requirement increases linearly with the dimension. Using simulations, as well as a wine and a wind speed data application, we illustrate how to draw inferences based on the multivariate skew-normal-Tukey-\(h\) distribution.

**Keywords:** Heavy-tails; Lambert's-\(W\); Non-Gaussian distribution; Skew-normal; Skew-\(t\); Tukey-\(h\).

## 1 Introduction

In recent decades, there has been a growing interest in developing parametric multivariate distributions flexible enough to handle skewness and tail-thickness for various statistical applications. In a multivariate setup, two of the most popular methods to introduce both skewness and tail-thickness are:

1. **Perturbation of symmetry** of an elliptically contoured distribution which is capable of capturing tail-thickness. Examples of such distributions include the multivariate skew-\(t\) distribution (Azzalini and Capitanio, 2003) and the multivariate extended skew-\(t\) distribution (Arellano-Valle and Genton, 2010b).

2. **Transformation** of a random vector following some elliptically contoured distribution for imposing skewness and tail-thickness. Examples of such transformations are the Tukey \(g\)-and-\(h\) transformation (Field and Genton, 2006) and the Sinh-Arcsinh transformation (Jones and Pewsey, 2009) in the multivariate case, and the Lambert's-\(W\) transformation (Goerg, 2011) in the univariate case.

The primary parametric model obtained by perturbing the symmetry of an elliptically contoured distribution, which instigated the research in this area, is the multivariate skew-normal distribution introduced by Azzalini and Dalla Valle (1996). Many distributions such as the multivariate skew-\(t\) distribution, the multivariate extended skew-normal distribution, and the multivariate extended skew-\(t\) distribution were built upon the foundation of the skew-normal distribution. These distributions can be viewed as special cases of the multivariate unified skew-elliptical distribution studied by Arellano-Valle and Genton (2010a). For more on these types of distributions, readers are referred to the books by Genton (2004) and Azzalini and Capitanio (2014), and to a recent review by Azzalini (2022).
Since the skew-normal distribution is obtained by perturbing the symmetry of the Gaussian distribution and the skew-\(t\) distribution is obtained by perturbing the symmetry of the Student's-\(t\) distribution, the skew-normal distribution is not capable of handling tail-thickness while the skew-\(t\) distribution is more apt for modeling heavy-tailed data. However, one shortcoming of the skew-\(t\) distribution is that it cannot handle different tail-thickness for different marginals, since the tail-thickness is controlled only by one parameter. There has been a proposal by Miller (1968) to introduce a multivariate Student's-\(t\) distribution with different tail-thickness parameters for different marginals. However, the probability density function (pdf) of the proposed distribution involves complicated hypergeometric functions that make inference with such a distribution very challenging. The second approach above for introducing a skewed and heavy-tailed distribution is to use some non-linear transformation on a light-tailed elliptically symmetric random variable. The Lambert's-\(W\) transformation, proposed by Goerg (2011) in the univariate case, can impose both skewness and tail-thickness on a Gaussian random variable using a single parameter. However, as this transformation is not one-to-one, the pdf of its multivariate extension becomes almost impossible to track down, especially for higher dimensions. Goerg (2015) solved this issue by slightly changing the Lambert's-\(W\) transformation, making it one-to-one. This modified transformation is a generalized version of the Tukey-\(h\) transformation. Although Goerg (2015) proposed this new distribution in the univariate setting, he only briefly mentioned how it can be extended to the multivariate setting by applying this transformation component-wise. Other examples include the Sinh-Arcsinh (SAS) transformation and the Tukey \(g\)-and-\(h\) transformation, which are monotonic and control skewness and tail-thickness with separate parameters. Field and Genton (2006) presented a multivariate \(g\)-and-\(h\) distribution which is based on the component-wise Tukey's \(g\)-and-\(h\) transformation of a random vector following a Gaussian distribution. As a result, it permits different kurtosis for different marginals. However, one drawback of this distribution lies in drawing inferences: since the inverse of Tukey's \(g\)-and-\(h\) transformation does not have a closed form, the likelihood function cannot be readily calculated. Moreover, for parameter estimation, some definitions of multivariate quantiles are needed. This can be computationally challenging when the dimension is high because the number of directions in which the quantiles have to be computed grows exponentially with dimension. Jones and Pewsey (2009) discussed mainly the univariate SAS distribution and its various stochastic and inferential properties. The idea of the multivariate expansion of this family has also been discussed by Jones and Pewsey (2009); it consists in using the transformation on the marginals of a standardized but correlated multivariate Gaussian distribution. A similar approach has been taken by Rubio et al. (2016), who proposed a distribution that is capable of modeling higher skewness than the original SAS distribution by applying the two-piece transformation to the symmetric SAS distribution. Yan et al. (2020) used the SAS distribution in the context of a bivariate random field for wind data and discussed how to draw inference based on it.
However, inference in the multivariate scenario is yet to be thoroughly explored. In this article, we propose a new multivariate distribution by combining these two techniques, the perturbation of symmetry for skewness and the transformation for tail-thickness. We introduce the skew-normal-Tukey-\(h\) distribution by taking the Tukey-\(h\) transformation on the components of a skew-normal random vector to introduce tail-thickness on each component. Moreover, by changing the marginal kurtosis parameter, we can have different kurtosis for different marginals. We study some basic statistical properties of the skew-normal-Tukey-\(h\) distribution. Furthermore, we discuss how to draw inferences based on this distribution. We compare the proposed distribution with the skew-\(t\) distribution since both of them are extensions of the skew-normal distribution for handling heavy-tailed data. Finally, we justify in which scenarios the skew-normal-Tukey-\(h\) distribution is more appropriate compared to the skew-\(t\) distribution using a simulation study and two data applications. It should be pointed out that the aforementioned two methods for constructing skewed and heavy-tailed distributions are not exhaustive. There exists a variety of proposals in the statistics literature. For example, distributions studied by Branco and Dey (2001) and Wang et al. (2004) are very similar to the definition of the skew-normal distribution. Genton and Loperfido (2005) proposed a definition of generalized skew-elliptical distributions which bring such different skewed distributions defined by perturbation of symmetry under one umbrella. Another avenue for the introduction of skewness and tail-thickness was explored by Forbes and Wraith (2014) and further generalized by Wraith and Forbes (2015) under the name of location-scale mixtures of Gaussian distributions. Various other non-Gaussian distributions for modeling skewed and heavy-tailed data can also be obtained using the theory of copulas (Sklar, 1959). We refer interested readers to the books by Joe (1997) and Nelsen (2007), and the references therein, for more details on copulas. These are some other examples of parametric families proposed for modeling various skewed and heavy-tailed or light-tailed data. The rest of the article is organized as follows. In Section 2, we formally define the skew-normal-Tukey-\(h\) distribution, whereas various of its stochastic properties are discussed in Section 3. In Section 4, we illustrate how to draw inferences based on the skew-normal Tukey-\(h\) distribution. In Sections 5 and 6, we present simulation studies and two applications to wine data and to wind speed data showing when the skew-normal-Tukey-\(h\) distribution is more appropriate compared to the skew-\(t\) distribution. Finally, in Section 7, we conclude our article and discuss some avenues for future research work. ## 2 Multivariate Skew-Normal-Tukey-\(h\) Distribution In this section, we define the multivariate skew-normal-Tukey-\(h\) distribution. We start by defining an alternative parameterization of the multivariate skew-normal distribution. ### Skew-Normal Distribution The multivariate skew-normal distribution was introduced by Azzalini and Dalla Valle (1996) and later studied in Azzalini and Capitanio (1999). 
A random vector \(\mathbf{Y}\in\mathbb{R}^{p}\) is said to have a multivariate skew-normal distribution with location parameter \(\mathbf{\xi}\in\mathbb{R}^{p}\), symmetric positive definite scale parameter \(\mathbf{\Omega}\in\mathbb{R}^{p\times p}\), and skewness parameter \(\mathbf{\alpha}\in\mathbb{R}^{p}\), if its pdf is \[f_{\mathbf{Y}}(\mathbf{y})=2\phi_{p}\left(\mathbf{y};\mathbf{\xi},\mathbf{\Omega}\right)\Phi\{\mathbf{ \alpha}^{\top}\mathbf{\omega}^{-1}(\mathbf{y}-\mathbf{\xi})\},\quad\mathbf{y}\in\mathbb{R}^{p}, \tag{1}\] where \(\phi_{p}(\cdot;\mathbf{\mu},\mathbf{\Sigma})\) is the pdf of a \(p\)-dimensional normal distribution with mean \(\mathbf{\mu}\in\mathbb{R}^{p}\) and positive definite covariance matrix \(\mathbf{\Sigma}\in\mathbb{R}^{p\times p}\), and \(\mathbf{\omega}=\text{diag}(\mathbf{\Omega})^{1/2}\). Here, and from now on, we call this distribution with the parameterization in Equation (1) the Azzalini skew-normal (\(\mathcal{ASN}\)) distribution and we denote it by \(\mathbf{Y}\sim\mathcal{ASN}_{p}(\mathbf{\xi},\mathbf{\Omega},\mathbf{\alpha})\). As used in Mondal et al. (2023), the \(\mathcal{ASN}_{p}(\mathbf{\xi},\mathbf{\Omega},\mathbf{\alpha})\) distribution can be reparameterized by means of the relations \(\mathbf{\Omega}=\mathbf{\Psi}+\mathbf{\eta}\mathbf{\eta}^{\top}\) and \(\mathbf{\alpha}=(1+\mathbf{\eta}^{\top}\mathbf{\Psi}^{-1}\mathbf{\eta})^{-1/2}\mathbf{\omega}\mathbf{ \Psi}^{-1}\mathbf{\eta}\), where \(\mathbf{\Psi}\in\mathbb{R}^{p\times p}\) is a symmetric positive definite matrix, \(\mathbf{\eta}\in\mathbb{R}^{p}\) and \(\mathbf{\omega}=\text{diag}(\sqrt{\Psi_{11}+\eta_{1}^{2}},\dots,\sqrt{\Psi_{pp}+ \eta_{p}^{2}})\), with \(\Psi_{ii}\) and \(\eta_{i}\) being the \(i\)th diagonal element of \(\mathbf{\Psi}\) and \(\mathbf{\eta}\), respectively, for \(i=1,\dots,p\). Conversely, by letting \(\mathbf{\omega}=\text{diag}(\mathbf{\Omega})^{1/2}\), \(\bar{\mathbf{\Omega}}=\mathbf{\omega}^{-1}\mathbf{\Omega}\mathbf{\omega}^{-1}\) and \(\mathbf{\delta}=(1+\mathbf{\alpha}^{\top}\bar{\mathbf{\Omega}}\mathbf{\alpha})^{-1/2}\bar{ \mathbf{\Omega}}\mathbf{\alpha}\), we have \(\mathbf{\Psi}=\mathbf{\omega}(\bar{\mathbf{\Omega}}^{-1}+\mathbf{\alpha}\mathbf{\alpha}^{\top})^ {-1}\mathbf{\omega}=\mathbf{\omega}(\bar{\mathbf{\Omega}}-\mathbf{\delta}\mathbf{\delta}^{\top}) \mathbf{\omega}\) and \(\mathbf{\eta}=\mathbf{\omega}\mathbf{\delta}.\) With this alternative parameterization, the pdf of \(\mathbf{Y}\) from Equation (1) is \[f_{\mathbf{Y}}(\mathbf{y})=2\phi_{p}\left(\mathbf{y};\mathbf{\xi},\mathbf{\Psi}+\mathbf{\eta}\mathbf{\eta} ^{\top}\right)\Phi\Bigg{\{}\frac{\mathbf{\eta}^{\top}\mathbf{\Psi}^{-1}(\mathbf{y}-\mathbf{ \xi})}{\sqrt{1+\mathbf{\eta}^{\top}\mathbf{\Psi}^{-1}\mathbf{\eta}}}\Bigg{\}},\quad\mathbf{y} \in\mathbb{R}^{p}. \tag{2}\] Azzalini and Dalla Valle (1996) used this parameterization up to minor differences. Moreover, Adcock and Shutes (2001), Adcock (2004), and Adcock (2005) have also used the same parameterization. With this parameterization, a \(p\)-variate random vector \(\mathbf{Y}\) is said to have a skew-normal (\(\mathcal{SN}\)) distribution with location parameter \(\mathbf{\xi}\in\mathbb{R}^{p}\), symmetric positive definite scale matrix \(\mathbf{\Psi}\in\mathbb{R}^{p\times p}\), and skewness parameter \(\mathbf{\eta}\in\mathbb{R}^{p}\) if its pdf is given by Equation (2). We denote it by \(\mathbf{Y}\sim\mathcal{SN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta})\). 
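For concreteness, the density in Equation (2) can be evaluated numerically as follows (a minimal numpy/scipy sketch; the function name and interface are ours):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def sn_pdf(y, xi, Psi, eta):
    """Density of SN_p(xi, Psi, eta) as in Equation (2)."""
    Omega = Psi + np.outer(eta, eta)              # scale matrix Psi + eta eta^T
    Psi_inv_eta = np.linalg.solve(Psi, eta)       # Psi^{-1} eta
    arg = Psi_inv_eta @ (y - xi) / np.sqrt(1.0 + eta @ Psi_inv_eta)
    return 2.0 * multivariate_normal.pdf(y, mean=xi, cov=Omega) * norm.cdf(arg)
```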
Many interesting properties of the \(\mathcal{SN}\) distribution with the parameterization in Equation (2) have been derived in Mondal et al. (2023). The following results are given here as they will be useful later on, while their proofs can be found in Mondal et al. (2023):

* _Stochastic representation of \(\mathcal{SN}\) distribution:_ If \(\mathbf{Y}\sim\mathcal{SN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta})\), then \(\mathbf{Y}=\mathbf{\xi}+U\mathbf{\eta}+\mathbf{W}\), where \(U\) and \(\mathbf{W}\) are independently distributed, with half-normal \(U\) denoted by \(U\sim\mathcal{HN}(0,1)\), and \(\mathbf{W}\sim\mathcal{N}_{p}(\mathbf{0},\mathbf{\Psi})\).
* _Affine transformation of the \(\mathcal{SN}\) distribution:_ If \(\mathbf{Y}\sim\mathcal{SN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta})\), then for any fixed vector \(\mathbf{a}\in\mathbb{R}^{q}\) and any fixed matrix \(\mathbf{B}\in\mathbb{R}^{q\times p}\) of full row rank and \(q\leq p\): \(\mathbf{a}+\mathbf{B}\mathbf{Y}\sim\mathcal{SN}_{q}(\mathbf{a}+\mathbf{B}\mathbf{\xi},\mathbf{B}\mathbf{\Psi}\mathbf{B}^{\top},\mathbf{B}\mathbf{\eta})\).
* _Marginal distributions of the \(\mathcal{SN}\) distribution:_ Let \(\mathbf{Y}\sim\mathcal{SN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta})\) and consider the partition of \(\mathbf{Y}=(\mathbf{Y}_{1}^{\top},\mathbf{Y}_{2}^{\top})^{\top}\) with \(\mathbf{Y}_{i}\) of size \(p_{i}\) (\(i=1,2\)) and such that \(p_{1}+p_{2}=p\), with corresponding partitions of the parameters in blocks of matching sizes. Then \(\mathbf{Y}_{i}\sim\mathcal{SN}_{p_{i}}(\mathbf{\xi}_{i},\mathbf{\Psi}_{ii},\mathbf{\eta}_{i}),i=1,2\).

The \(\mathcal{ASN}\) and \(\mathcal{SN}\) parameterizations describe the same distribution but the simplicity of the marginal distributions in the \(\mathcal{SN}\) parameterization (see above) will prove useful for inferential purposes later on.

### Skew-Normal-Tukey-\(h\) Distribution

We introduce tail-thickness in the skew-normal distribution by taking the Tukey-\(h\) transformation of all the components of a random vector following a \(\mathcal{SN}\) distribution. The Tukey-\(h\) transformation is

\[\tau_{h}(x)=x\exp(hx^{2}/2),\quad x\in\mathbb{R},\quad h\geq 0. \tag{3}\]

Moreover, for \(\mathbf{x}=(x_{1},\dots,x_{p})^{\top}\in\mathbb{R}^{p}\), we define

\[\mathbf{\tau}_{\mathbf{h}}(\mathbf{x})=\{\tau_{h_{1}}(x_{1}),\dots,\tau_{h_{p}}(x_{p})\}^{\top},\quad\mathbf{h}=(h_{1},\dots,h_{p})^{\top},\,h_{i}\geq 0,i=1,\dots,p. \tag{4}\]

**Definition 1** (Skew-normal-Tukey-\(h\) distribution).: _A random vector \(\mathbf{Y}\in\mathbb{R}^{p}\) with the stochastic representation \(\mathbf{Y}=\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z})\), where \(\mathbf{Z}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\) and \(\bar{\mathbf{\Psi}}\) is a \(p\times p\) correlation matrix, is said to have a multivariate skew-normal-Tukey-\(h\) distribution. Here \(\mathbf{\xi}\in\mathbb{R}^{p}\) is the location parameter, \(\mathbf{\omega}=\text{diag}(\omega_{11},\ldots,\omega_{pp})\) is a \(p\times p\) diagonal scale matrix such that \(\omega_{ii}>0\), \(i=1,\ldots,p\), \(\mathbf{\eta}\in\mathbb{R}^{p}\) is the skewness parameter, and \(\mathbf{h}\) is the tail-thickness parameter vector such that \(\mathbf{h}=(h_{1},\ldots,h_{p})^{\top}\in\mathbb{R}^{p}\), \(h_{i}\geq 0\), \(i=1,\ldots,p\).
We denote \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\)._

We define the \(\mathcal{SNTH}\) distribution with a correlation matrix \(\bar{\mathbf{\Psi}}\) and a diagonal scale matrix \(\mathbf{\omega}\). The \(\bar{\mathbf{\Psi}}\) parameter governs the dependence structure in the model and \(\mathbf{\omega}\) is a diagonal matrix consisting of the marginal scale parameters. To make all the parameters identifiable, we restrict \(\bar{\mathbf{\Psi}}\) to be a correlation matrix. It is immediate from the definition of the \(\mathcal{SNTH}\) distribution that when \(\mathbf{h}=\mathbf{0}\) the \(\mathcal{SNTH}\) distribution reduces to the \(\mathcal{SN}\) distribution. The Tukey-\(h\) transformation applied on the marginals of the skew-normal distribution imposes tail-thickness in the distribution. Moreover, since we can vary the components of the \(\mathbf{h}\) parameter over the marginals, the resulting distribution can have different kurtosis for different marginals. In this way, we propose an extension of the skew-normal distribution, capable of handling different marginal tail-thickness. In that sense, the \(\mathcal{SNTH}\) distribution is different from the skew-\(t\) distribution. The skew-\(t\) distribution can also be thought of as an extension of the skew-normal distribution for modeling tail-thickness in the data, but it is incapable of capturing different kurtosis for different marginals. It should be pointed out that the proposed \(\mathcal{SNTH}\) distribution belongs to the Lambert-\(W\times F\) family of distributions (Goerg, 2015), where \(F\) represents the cumulative distribution function of the skew-normal distribution. The main difference is that Goerg (2015) proposed the location-scale Lambert-\(W\times F\) distribution with \(\mu_{X}=\mathbb{E}(X)\) as the location parameter and \(\sigma_{X}=\sqrt{\mathbb{V}\text{ar}(X)}\) as the scale parameter, \(X\sim F\), and the transformation is applied on \((X-\mu_{X})/\sigma_{X}\). For defining the \(\mathcal{SNTH}\) distribution, we start with a "standard" skew-normal distribution and apply the Tukey-\(h\) transformation on it, and then we use a location-scale transformation on the transformed random variable. This construction translates directly into a sampling scheme, sketched below.
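A minimal numpy sketch of that sampling scheme, using the stochastic representation of the \(\mathcal{SN}\) distribution recalled in Section 2.1 (function names are ours; \(\mathbf{\omega}\) is passed as the vector of its diagonal entries):

```python
import numpy as np

def rsnth(n, xi, omega, Psi_bar, eta, h, rng=None):
    """Draws from SNTH_p(xi, omega, Psi_bar, eta, h) via Definition 1:
    Z = U*eta + W with U ~ HN(0,1) and W ~ N_p(0, Psi_bar), then the
    component-wise Tukey-h transform and the location-scale map."""
    rng = np.random.default_rng() if rng is None else rng
    p = len(xi)
    U = np.abs(rng.standard_normal((n, 1)))            # half-normal draws
    W = rng.multivariate_normal(np.zeros(p), Psi_bar, size=n)
    Z = U * eta + W                                    # Z ~ SN_p(0, Psi_bar, eta)
    T = Z * np.exp(h * Z**2 / 2.0)                     # tau_h of Eq. (3), component-wise
    return xi + omega * T
```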
## 3 Properties of the \(\mathcal{SNTH}\) Distribution

We outline some basic probabilistic properties of the \(\mathcal{SNTH}\) distribution, such as its pdf, cumulative distribution function (cdf), moments, marginal and conditional distributions, and canonical form. Due to the \(\mathcal{SNTH}\) definition using the \(\mathcal{SN}\) distribution, many of the appealing properties of the \(\mathcal{SN}\) distribution are transferred to the \(\mathcal{SNTH}\) distribution. This is one of the reasons we defined the \(\mathcal{SNTH}\) with the \(\mathcal{SN}\) distribution parameterized as in Equation (2).

### Probability Density Function of \(\mathcal{SNTH}\)

The univariate \(\mathcal{SNTH}\) pdf can be found using Theorem 1 of Goerg (2015) with \(F\) as the skew-normal distribution; the next proposition extends this result to the multivariate setup.

**Proposition 1**.: _The pdf of \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) is, for \(\mathbf{y}\in\mathbb{R}^{p}\):_

\[f_{\mathbf{Y}}(\mathbf{y})=2\phi_{p}\{\mathbf{g}(\mathbf{y});\mathbf{0},(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})\}\Phi\Bigg\{\frac{\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{g}(\mathbf{y})}{\sqrt{1+\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta}}}\Bigg\}\prod_{i=1}^{p}\Bigg\{\frac{1}{\omega_{ii}}\left(\frac{\exp[\frac{1}{2}W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}+\exp[W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}\right)\Bigg\}, \tag{5}\]

_where \(\mathbf{g}(\mathbf{y})=\{g_{1}(y_{1}),\ldots,g_{p}(y_{p})\}^{\top}\), \(g_{i}(y_{i})=(\frac{y_{i}-\xi_{i}}{\omega_{ii}})\exp[-\frac{1}{2}W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]\), \(i=1,\ldots,p\), and \(W_{0}(\cdot)\) is the principal branch of the Lambert's-\(W\) function._

**Proof.** Consider the transformation \(z=x\exp(hx^{2}/2)\). Then \(hz^{2}=hx^{2}\exp(hx^{2})\Rightarrow hx^{2}=W_{0}(hz^{2})\Rightarrow x=z\exp\{-W_{0}(hz^{2})/2\}\), where \(W_{0}(\cdot)\) is the principal branch of the Lambert's-\(W\) function (Corless et al., 1996). This essentially means that \(W_{0}(\cdot)\) is the inverse function of \(f(x)=x\exp(x)\), \(x\in\mathbb{R}\). Although the inverse of \(f(x)\) is not unique when \(x<0\), it is unique when \(x>0\). For us, the argument of \(W_{0}(\cdot)\) is \(hz^{2}\geq 0\), which makes the inverse of the Tukey-\(h\) transformation unique (see also Lemma 5 in Goerg (2015)). Hence, the inverse of the Tukey-\(h\) transformation (3) is

\[\tau_{h}^{-1}(z)=z\exp\{-W_{0}(hz^{2})/2\}, \tag{6}\]

and it is a one-to-one function, as it should be since \(\tau_{h}(z)\) is one-to-one for \(h\geq 0\). Moreover,

\[\frac{\partial}{\partial z}\tau_{h}^{-1}(z)=\frac{\exp\{W_{0}(hz^{2})/2\}}{hz^{2}+\exp\{W_{0}(hz^{2})\}},\]

which is obtained using the fact that \(W_{0}^{\prime}(z)=1/[z+\exp\{W_{0}(z)\}]\). With the forms of \(\tau_{h}^{-1}(z)\) and \(\frac{\partial\tau_{h}^{-1}(z)}{\partial z}\), it is straightforward to show that the pdf of \(\mathbf{Y}\) reduces to Equation (5). \(\square\)

The pdf of the \(\mathcal{SNTH}\) distribution is given in closed form in Proposition 1 and involves the principal branch \(W_{0}(\cdot)\) of the Lambert's-\(W\) function. Although \(W_{0}(\cdot)\) does not have a closed form, it is a well-studied function that has already been implemented in many software packages, including in R (R Core Team, 2022) through the LambertW package of Goerg (2011). This is an advantage of the \(\mathcal{SNTH}\) distribution over the multivariate Tukey \(g\)-and-\(h\) distribution, in the sense that the inverse of the Tukey \(g\)-and-\(h\) transformation is not in closed form. As a result, the computation of the probability density function and the log-likelihood of the \(\mathcal{SNTH}\) distribution is somewhat simpler compared to that of the multivariate Tukey \(g\)-and-\(h\) distribution.
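A minimal numpy/scipy sketch of (5)-(6), using the principal branch of the Lambert's-\(W\) function from scipy.special (the function names are ours, and \(\mathbf{\omega}\) is passed as the vector of its diagonal entries):

```python
import numpy as np
from scipy.special import lambertw
from scipy.stats import multivariate_normal, norm

def tau_h_inv(z, h):
    """Inverse Tukey-h transform of Eq. (6), via the principal branch W_0."""
    return z * np.exp(-np.real(lambertw(h * z**2)) / 2.0)

def snth_pdf(y, xi, omega, Psi_bar, eta, h):
    """Density of SNTH_p at a point y, as in Eq. (5)."""
    x = (y - xi) / omega
    g = tau_h_inv(x, h)                                      # g(y), component-wise
    w = np.real(lambertw(h * x**2))
    jac = np.exp(w / 2.0) / (h * x**2 + np.exp(w)) / omega   # product term of (5)
    Omega = Psi_bar + np.outer(eta, eta)
    Pinv_eta = np.linalg.solve(Psi_bar, eta)
    arg = Pinv_eta @ g / np.sqrt(1.0 + eta @ Pinv_eta)
    sn_part = 2.0 * multivariate_normal.pdf(g, mean=np.zeros(len(y)),
                                            cov=Omega) * norm.cdf(arg)
    return sn_part * np.prod(jac)
```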
To illustrate the effects of the skewness and the tail-thickness parameters of the \(\mathcal{SNTH}\) distribution, we present the contour plots of \(\mathcal{SNTH}_{2}(\mathbf{0},\text{diag}(1,1),\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) probability densities with \(\bar{\mathbf{\Psi}}=\left(\begin{smallmatrix}1&0.4\\ 0.4&1\end{smallmatrix}\right)\) for four different pairs of \(\mathbf{\eta}\) and \(\mathbf{h}\): \(\mathbf{\eta}=(0,0)^{\top}\) and \(\mathbf{h}=(0,0)^{\top}\), corresponding to a normal density; \(\mathbf{\eta}=(0,0)^{\top}\) and \(\mathbf{h}=(0.05,0.1)^{\top}\), corresponding to a normal-Tukey-\(h\) density; \(\mathbf{\eta}=(-1,2)^{\top}\) and \(\mathbf{h}=(0,0)^{\top}\), corresponding to a \(\mathcal{SN}\) density; and \(\mathbf{\eta}=(-1,2)^{\top}\) and \(\mathbf{h}=(0.05,0.1)^{\top}\), corresponding to a \(\mathcal{SNTH}\) density. For comparison, we also plot the density contours of a skew-\(t\) distribution with \(\mathbf{\xi}=(0,0)^{\top}\), \(\mathbf{\Omega}=\left(\begin{smallmatrix}2&-1.6\\ -1.6&5\end{smallmatrix}\right)\), \(\mathbf{\alpha}=(-1.02,2.15)^{\top}\), and \(\nu=5\), and of a Student's \(t\) distribution with these same parameters (i.e., the same skew-\(t\) with \(\mathbf{\alpha}=\mathbf{0}\)). The \(\mathbf{\Omega}\) and \(\mathbf{\alpha}\) parameters are obtained so that they correspond to \(\bar{\mathbf{\Psi}}=\left(\begin{smallmatrix}1&0.4\\ 0.4&1\end{smallmatrix}\right)\) and \(\mathbf{\eta}=(-1,2)^{\top}\), using the relationship between the parameters of the \(\mathcal{ASN}\) and the \(\mathcal{SN}\) parameterizations. All the density contours are plotted in Figure 1; the contours are drawn for the levels with approximate coverage probabilities \(0.05\), \(0.25\), \(0.5\), \(0.75\), and \(0.95\).

Figure 1: Bivariate probability density contours of various distributions. Contours are given so that their coverage probabilities are approximately 0.05, 0.25, 0.5, 0.75, and 0.95.

The density contour plots in the first row correspond to the density contours of the second row when the corresponding skewness parameters are set to zero. Although the contours in the first row are all symmetric, their symmetry differs from each other: more precisely, in Figure 1, the normal and the Student's \(t\) probability contours are centrally symmetric, whereas the normal-Tukey-\(h\) probability contours are sign-invariant symmetric, which is a special case of central symmetry. It can be concluded from Figure 1 that the shapes of the Student's \(t\) and skew-\(t\) density contours are similar to those of the normal and the skew-normal densities, respectively, with more spacing between the different levels for the former, due to thicker tails. The contours of the normal-Tukey-\(h\) density and the \(\mathcal{SNTH}\) density look similar to the normal and the skew-normal density contours, respectively, but the former have been stretched along the two axes. Since the extent of this stretching can be different along the two axes, the \(\mathcal{SNTH}\) density contours can represent a variety of shapes with changes in the skewness and the tail-thickness parameters.
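Contour plots such as those in Figure 1 can be reproduced from the density (5); a sketch, reusing snth_pdf from above with the parameters of the \(\mathcal{SNTH}\) panel (the grid and levels are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

xi, omega = np.zeros(2), np.ones(2)
Psi_bar = np.array([[1.0, 0.4], [0.4, 1.0]])
eta, h = np.array([-1.0, 2.0]), np.array([0.05, 0.1])

xs = np.linspace(-4.0, 4.0, 201)
X, Y = np.meshgrid(xs, xs)
Z = np.array([[snth_pdf(np.array([x, y]), xi, omega, Psi_bar, eta, h)
               for x in xs] for y in xs])   # density evaluated on the grid
plt.contour(X, Y, Z)
plt.title("SNTH density contours")
plt.show()
```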
### Cumulative Distribution Function of \(\mathcal{SNTH}\)

The cdf of the \(\mathcal{SNTH}\) distribution can be obtained in closed form involving the principal branch \(W_{0}(\cdot)\) of the Lambert's-\(W\) function, as shown next.

**Proposition 2**.: _The cdf of \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) is \(F_{\mathbf{Y}}(\mathbf{y})=2\Phi_{p+1}(\mathbf{y}_{**};\mathbf{0},\mathbf{\Omega}_{**})\), where \(\Phi_{p+1}\) is the multivariate Gaussian cdf of dimension \(p+1\), \(\mathbf{y}_{**}=\left\{\tau_{h_{1}}^{-1}\left(\frac{y_{1}-\xi_{1}}{\omega_{11}}\right),\ldots,\tau_{h_{p}}^{-1}\left(\frac{y_{p}-\xi_{p}}{\omega_{pp}}\right),0\right\}^{\top}\) and \(\mathbf{\Omega}_{**}=\left(\begin{smallmatrix}\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top}&-\mathbf{\eta}\\ -\mathbf{\eta}^{\top}&1\end{smallmatrix}\right)\)._

**Proof.** Let \(\mathbf{Y}=\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z})\), where \(\mathbf{Z}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\). Then the cdf of \(\mathbf{Y}\) is

\[F_{\mathbf{Y}}(\mathbf{y})=\mathbb{P}(Y_{1}\leq y_{1},\ldots,Y_{p}\leq y_{p})=\mathbb{P}\left[Z_{1}\leq\tau_{h_{1}}^{-1}\left(\frac{y_{1}-\xi_{1}}{\omega_{11}}\right),\ldots,Z_{p}\leq\tau_{h_{p}}^{-1}\left(\frac{y_{p}-\xi_{p}}{\omega_{pp}}\right)\right]=F_{\mathbf{Z}}\left\{\tau_{h_{1}}^{-1}\left(\frac{y_{1}-\xi_{1}}{\omega_{11}}\right),\ldots,\tau_{h_{p}}^{-1}\left(\frac{y_{p}-\xi_{p}}{\omega_{pp}}\right)\right\}=2\Phi_{p+1}(\mathbf{y}_{**};\mathbf{0},\mathbf{\Omega}_{**}),\quad\mathbf{y}\in\mathbb{R}^{p},\]

where \(\tau_{h}^{-1}(z)\) is given in Equation (6). The cdf of \(\mathbf{Z}\), \(F_{\mathbf{Z}}(\cdot)\), is obtained using Proposition 12 of Mondal et al. (2023). \(\square\)
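Proposition 2 makes numerical evaluation of the cdf straightforward; a minimal scipy sketch (function names are ours, and the inverse transform is as in (6)):

```python
import numpy as np
from scipy.special import lambertw
from scipy.stats import multivariate_normal

def tau_h_inv(z, h):
    """Inverse Tukey-h transform of Eq. (6)."""
    return z * np.exp(-np.real(lambertw(h * z**2)) / 2.0)

def snth_cdf(y, xi, omega, Psi_bar, eta, h):
    """CDF of SNTH_p as in Proposition 2, via a (p+1)-dimensional normal cdf."""
    g = tau_h_inv((y - xi) / omega, h)
    y_star = np.append(g, 0.0)                      # the vector y_** of Prop. 2
    Omega_star = np.block([[Psi_bar + np.outer(eta, eta), -eta[:, None]],
                           [-eta[None, :], np.ones((1, 1))]])
    return 2.0 * multivariate_normal.cdf(y_star, mean=np.zeros(len(y) + 1),
                                         cov=Omega_star)
```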
### Marginal Distributions of \(\mathcal{SNTH}\)

Similar to the \(\mathcal{SN}\) distribution, the marginals of the \(\mathcal{SNTH}\) distribution are also from the same family, as shown in the next proposition.

**Proposition 3**.: _Let \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) and consider the partition \(\mathbf{Y}=(\mathbf{Y}_{1}^{\top},\mathbf{Y}_{2}^{\top})^{\top}\) with \(\mathbf{Y}_{i}\) of size \(p_{i}\) (\(i=1,2\)) such that \(p_{1}+p_{2}=p\), with corresponding partitions of the parameters in blocks of matching sizes, as follows:_
\[\mathbf{\xi}=\left(\begin{matrix}\mathbf{\xi}_{1}\\ \mathbf{\xi}_{2}\end{matrix}\right),\quad\mathbf{\omega}=\left(\begin{matrix}\mathbf{\omega}_{11}&\mathbf{0}\\ \mathbf{0}&\mathbf{\omega}_{22}\end{matrix}\right),\quad\bar{\mathbf{\Psi}}=\left(\begin{matrix}\bar{\mathbf{\Psi}}_{11}&\bar{\mathbf{\Psi}}_{12}\\ \bar{\mathbf{\Psi}}_{21}&\bar{\mathbf{\Psi}}_{22}\end{matrix}\right),\quad\mathbf{\eta}=\left(\begin{matrix}\mathbf{\eta}_{1}\\ \mathbf{\eta}_{2}\end{matrix}\right),\quad\mathbf{h}=\left(\begin{matrix}\mathbf{h}_{1}\\ \mathbf{h}_{2}\end{matrix}\right).\]
_Then \(\mathbf{Y}_{i}\sim\mathcal{SNTH}_{p_{i}}(\mathbf{\xi}_{i},\mathbf{\omega}_{ii},\bar{\mathbf{\Psi}}_{ii},\mathbf{\eta}_{i},\mathbf{h}_{i})\), \(i=1,2\)._

**Proof.** Since \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\), by definition there exists a random vector \(\mathbf{Z}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\) such that \(\mathbf{Y}=\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z})\). Consider the partition \(\mathbf{Z}=(\mathbf{Z}_{1}^{\top},\mathbf{Z}_{2}^{\top})^{\top}\), similar to \(\mathbf{Y}\). Then \(\mathbf{Z}_{i}\sim\mathcal{SN}_{p_{i}}(\mathbf{0},\bar{\mathbf{\Psi}}_{ii},\mathbf{\eta}_{i})\), \(i=1,2\), and \(\mathbf{Y}_{i}=\mathbf{\xi}_{i}+\mathbf{\omega}_{ii}\mathbf{\tau}_{\mathbf{h}_{i}}(\mathbf{Z}_{i})\). Hence, \(\mathbf{Y}_{i}\sim\mathcal{SNTH}_{p_{i}}(\mathbf{\xi}_{i},\mathbf{\omega}_{ii},\bar{\mathbf{\Psi}}_{ii},\mathbf{\eta}_{i},\mathbf{h}_{i})\), \(i=1,2\). \(\Box\)

Although the marginals of the \(\mathcal{SNTH}\) remain in the same family, the same cannot be said for a general affine transformation of the \(\mathcal{SNTH}\) distribution: the distribution of an arbitrary affine transformation of a \(\mathcal{SNTH}\) random vector is not of a known type.

### Mean and Variance-Covariance of \(\mathcal{SNTH}\)

The mean vector and the variance-covariance matrix of the \(\mathcal{SNTH}\) distribution can be obtained in closed form. The next proposition presents these results.

**Proposition 4**.: _Let \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\). The mean vector \(\mathbf{\mu}=\mathbb{E}(\mathbf{Y})\) and variance-covariance matrix \(\mathbf{\Sigma}=(\sigma_{ij})=\mathbb{V}\text{ar}(\mathbf{Y})\) are given by:_
\[\mu_{i}=\xi_{i}+\omega_{ii}\sqrt{\frac{2}{\pi}}\frac{\eta_{i}}{\sqrt{1-h_{i}}\{1-h_{i}(1+\eta_{i}^{2})\}},\quad\text{if }h_{i}<\frac{1}{1+\eta_{i}^{2}},\]
\[\sigma_{ii}=\omega_{ii}^{2}\left[\frac{1+\eta_{i}^{2}}{\{1-2h_{i}(1+\eta_{i}^{2})\}^{3/2}}-\frac{2}{\pi}\frac{\eta_{i}^{2}}{(1-h_{i})\{1-h_{i}(1+\eta_{i}^{2})\}^{2}}\right],\quad\text{if }h_{i}<\frac{1}{2(1+\eta_{i}^{2})},\]
\[\sigma_{ij}=\omega_{ii}\omega_{jj}\Bigg[\frac{\sqrt{\det(\mathbf{A}^{(ij)})}}{\sqrt{\det(\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})}}a_{12}^{(ij)}-\frac{2}{\pi}\frac{\eta_{i}\eta_{j}}{\sqrt{(1-h_{i})(1-h_{j})}\{1-h_{i}(1+\eta_{i}^{2})\}\{1-h_{j}(1+\eta_{j}^{2})\}}\Bigg],\]
\[\text{if }\mathbf{A}^{(ij)}\text{ is positive definite},\]
_where \(\mathbf{\eta}_{i,j}=(\eta_{i},\eta_{j})^{\top}\), \(\bar{\mathbf{\Psi}}_{i,j}=\left(\begin{smallmatrix}1&\bar{\mathbf{\Psi}}_{ij}\\ \bar{\mathbf{\Psi}}_{ij}&1\end{smallmatrix}\right)\), \(\mathbf{A}^{(ij)}=\{(\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})^{-1}-\mathbf{H}_{i,j}\}^{-1}=\begin{pmatrix}a_{11}^{(ij)}&a_{12}^{(ij)}\\ a_{12}^{(ij)}&a_{22}^{(ij)}\end{pmatrix}\), \(\mathbf{H}_{i,j}=\text{diag}(h_{i},h_{j})\), \(i\neq j\), and \(i,j=1,\ldots,p\)._

**Proof.** Since \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\), \(\mathbf{Y}\) can be written as \(\mathbf{Y}=\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z})\), where \(\mathbf{Z}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\).
Then, using the fact that \(Z_{i}\sim\mathcal{SN}(0,1,\eta_{i})\):
\[\mathbb{E}\{\tau_{h_{i}}(Z_{i})\}=\int_{\mathbb{R}}x\exp(h_{i}x^{2}/2)2\phi(x;0,1+\eta_{i}^{2})\Phi\left(\frac{\eta_{i}x}{\sqrt{1+\eta_{i}^{2}}}\right)\text{d}x\]
\[=\int_{\mathbb{R}}\frac{\sqrt{1+\eta_{i}^{2}}}{\sqrt{1-h_{i}(1+\eta_{i}^{2})}}t\frac{2}{\sqrt{2\pi}\sqrt{1+\eta_{i}^{2}}}\exp(-t^{2}/2)\Phi\left(\frac{\eta_{i}t}{\sqrt{1-h_{i}(1+\eta_{i}^{2})}}\right)\frac{\sqrt{1+\eta_{i}^{2}}}{\sqrt{1-h_{i}(1+\eta_{i}^{2})}}\text{d}t\]
\[\quad\Bigg(\text{using the change of variable }t=\frac{\sqrt{1-h_{i}(1+\eta_{i}^{2})}}{\sqrt{1+\eta_{i}^{2}}}x\Bigg)\]
\[=\frac{\sqrt{1+\eta_{i}^{2}}}{1-h_{i}(1+\eta_{i}^{2})}\mathbb{E}(X_{i})\quad\text{with }X_{i}\sim\mathcal{ASN}\left(0,1,\frac{\eta_{i}}{\sqrt{1-h_{i}(1+\eta_{i}^{2})}}\right)\]
\[=\sqrt{\frac{2}{\pi}}\frac{\eta_{i}}{\sqrt{1-h_{i}}\{1-h_{i}(1+\eta_{i}^{2})\}},\quad h_{i}(1+\eta_{i}^{2})<1,\ i=1,\ldots,p;\]
\[\mathbb{E}[\{\tau_{h_{i}}(Z_{i})\}^{2}]=\int_{\mathbb{R}}x^{2}\exp(h_{i}x^{2})2\phi(x;0,1+\eta_{i}^{2})\Phi\left(\frac{\eta_{i}x}{\sqrt{1+\eta_{i}^{2}}}\right)\text{d}x\]
\[=\int_{\mathbb{R}}\frac{1+\eta_{i}^{2}}{1-2h_{i}(1+\eta_{i}^{2})}t^{2}\frac{2}{\sqrt{2\pi}\sqrt{1+\eta_{i}^{2}}}\exp(-t^{2}/2)\Phi\left(\frac{\eta_{i}t}{\sqrt{1-2h_{i}(1+\eta_{i}^{2})}}\right)\frac{\sqrt{1+\eta_{i}^{2}}}{\sqrt{1-2h_{i}(1+\eta_{i}^{2})}}\text{d}t\]
\[\quad\left(\text{using the change of variable }t=\frac{\sqrt{1-2h_{i}(1+\eta_{i}^{2})}}{\sqrt{1+\eta_{i}^{2}}}x\right)\]
\[=\frac{1+\eta_{i}^{2}}{\{1-2h_{i}(1+\eta_{i}^{2})\}^{3/2}}\mathbb{E}(X_{i}^{2})\quad\text{with }X_{i}\sim\mathcal{ASN}\left(0,1,\frac{\eta_{i}}{\sqrt{1-2h_{i}(1+\eta_{i}^{2})}}\right)\]
\[=\frac{1+\eta_{i}^{2}}{\{1-2h_{i}(1+\eta_{i}^{2})\}^{3/2}},\quad 2h_{i}(1+\eta_{i}^{2})<1,\;i=1,\ldots,p.\]
Hence:
\[\mathbb{V}\text{ar}\{\tau_{h_{i}}(Z_{i})\}=\frac{1+\eta_{i}^{2}}{\{1-2h_{i}(1+\eta_{i}^{2})\}^{3/2}}-\frac{2}{\pi}\frac{\eta_{i}^{2}}{(1-h_{i})\{1-h_{i}(1+\eta_{i}^{2})\}^{2}},\;h_{i}<\frac{1}{2(1+\eta_{i}^{2})},\;i=1,\ldots,p.\]
For the cross moments,
\[\mathbb{E}\{\tau_{h_{i}}(Z_{i})\tau_{h_{j}}(Z_{j})\}=\int_{\mathbb{R}^{2}}x_{1}x_{2}\exp\{(h_{i}x_{1}^{2}+h_{j}x_{2}^{2})/2\}2\phi_{2}(\mathbf{x};\mathbf{0},\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})\Phi\left(\frac{\mathbf{\eta}_{i,j}^{\top}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{x}}{\sqrt{1+\mathbf{\eta}_{i,j}^{\top}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{\eta}_{i,j}}}\right)\text{d}\mathbf{x}\]
\[=\int_{\mathbb{R}^{2}}x_{1}x_{2}\frac{\sqrt{\det(\mathbf{A}^{(ij)})}}{\sqrt{\det(\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})}}2\phi_{2}(\mathbf{x};\mathbf{0},\mathbf{A}^{(ij)})\Phi\left(\frac{\mathbf{\eta}_{i,j}^{\top}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{\omega}_{\mathbf{A}^{(ij)}}\mathbf{\omega}_{\mathbf{A}^{(ij)}}^{-1}\mathbf{x}}{\sqrt{1+\mathbf{\eta}_{i,j}^{\top}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{\eta}_{i,j}}}\right)\text{d}\mathbf{x}\]
\[=\frac{\sqrt{\det(\mathbf{A}^{(ij)})}}{\sqrt{\det(\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})}}\mathbb{E}(X_{i}X_{j})\]
\[=\frac{\sqrt{\det(\mathbf{A}^{(ij)})}}{\sqrt{\det(\bar{\mathbf{\Psi}}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top})}}a_{12}^{(ij)},\quad\text{if }\mathbf{A}^{(ij)}\text{ is positive definite},\;i\neq j,\;i,j=1,\ldots,p,\]
where \((X_{i},X_{j})^{\top}\sim\mathcal{ASN}_{2}\left(\mathbf{0},\mathbf{A}^{(ij)},\frac{\mathbf{\omega}_{\mathbf{A}^{(ij)}}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{\eta}_{i,j}}{\sqrt{1+\mathbf{\eta}_{i,j}^{\top}\bar{\mathbf{\Psi}}_{i,j}^{-1}\mathbf{\eta}_{i,j}}}\right)\), \(\mathbf{\omega}_{\mathbf{A}^{(ij)}}=\{\text{diag}(\mathbf{A}^{(ij)})\}^{1/2}\), and the last equality uses the fact that \(\mathbb{E}(X_{i}X_{j})=a_{12}^{(ij)}\), the second-order raw moment matrix of a centered \(\mathcal{ASN}\) random vector being equal to its scale matrix.
The moments related to the \(\mathcal{ASN}\) distribution are obtained from Chapter 2 (univariate) and Chapter 5 (multivariate) of Azzalini and Capitanio (2014). The rest of the proof is straightforward and hence omitted. \(\square\)

To this point, we have closed-form expressions of the mean vector and the variance-covariance matrix for the \(\mathcal{SNTH}\) distribution. However, we cannot obtain a closed-form expression for its moment generating function or characteristic function. This is because the distribution of a general affine transformation of the \(\mathcal{SNTH}\) distribution is not known.

### Marginal Skewness and Kurtosis of \(\mathcal{SNTH}\)

Here we discuss some results related to the skewness and kurtosis of the \(\mathcal{SNTH}\) distribution. Mardia's measures of multivariate skewness and kurtosis (Mardia, 1970) for the \(\mathcal{SNTH}\) distribution cannot be derived in closed form. However, their univariate counterparts can be derived. Similar to the skew-\(t\) distribution, the Pearson's measures of skewness and excess kurtosis are also unbounded for the univariate \(\mathcal{SNTH}\) distribution, suggesting that it is also the case in the multivariate setting.

**Proposition 5**.: _The Pearson's measures of skewness and excess kurtosis of \(Y\sim\mathcal{SNTH}_{1}(0,1,1,\eta,h)\) are \(\gamma_{1}=\mu_{3}/\mu_{2}^{3/2}\) and \(\gamma_{2}=\mu_{4}/\mu_{2}^{2}-3\), where \(\mu_{2}=\mathbb{V}\text{ar}(Y)\), \(\mu_{3}=\mathbb{E}\{Y-\mathbb{E}(Y)\}^{3}=\mathbb{E}(Y^{3})-3\mathbb{E}(Y^{2})\mathbb{E}(Y)+2\mathbb{E}(Y)^{3}\), and \(\mu_{4}=\mathbb{E}\{Y-\mathbb{E}(Y)\}^{4}=\mathbb{E}(Y^{4})-4\mathbb{E}(Y^{3})\mathbb{E}(Y)+6\mathbb{E}(Y^{2})\mathbb{E}(Y)^{2}-3\mathbb{E}(Y)^{4}\), with:_
\[\mathbb{E}(Y^{3})=\sqrt{\frac{2}{\pi}}\frac{(1+\eta^{2})^{3/2}}{\{1-3h(1+\eta^{2})\}^{2}}\Bigg[\frac{2\eta^{3}+3\eta\{1-3h(1+\eta^{2})\}}{\{(1+\eta^{2})(1-3h)\}^{3/2}}\Bigg],\quad h<\frac{1}{3(1+\eta^{2})},\]
\[\mathbb{E}(Y^{4})=\frac{3(1+\eta^{2})^{2}}{\{1-4h(1+\eta^{2})\}^{5/2}},\quad h<\frac{1}{4(1+\eta^{2})}.\]

**Proof.** The expressions of \(\mathbb{E}(Y)\), \(\mathbb{E}(Y^{2})\), and \(\mathbb{V}\text{ar}(Y)\) are given in Proposition 4. Since \(Y\sim\mathcal{SNTH}_{1}(0,1,1,\eta,h)\), we have \(Y=\tau_{h}(Z)\), where \(Z\sim\mathcal{SN}(0,1,\eta)\).
Hence,
\[\mathbb{E}(Y^{3})=\int_{\mathbb{R}}x^{3}\exp(3hx^{2}/2)2\phi(x;0,1+\eta^{2})\Phi\left(\frac{\eta x}{\sqrt{1+\eta^{2}}}\right)\mathrm{d}x\]
\[=\frac{1}{\sqrt{1-3h(1+\eta^{2})}}\int_{\mathbb{R}}x^{3}2\phi\left(x;0,\frac{1+\eta^{2}}{1-3h(1+\eta^{2})}\right)\Phi\Bigg(\frac{\eta}{\{1-3h(1+\eta^{2})\}^{1/2}}\frac{(1+\eta^{2})^{-1/2}}{\{1-3h(1+\eta^{2})\}^{-1/2}}x\Bigg)\mathrm{d}x\]
\[=\frac{1}{\sqrt{1-3h(1+\eta^{2})}}\mathbb{E}(X^{3})\quad\text{with }X\sim\mathcal{ASN}\left(0,\frac{1+\eta^{2}}{1-3h(1+\eta^{2})},\frac{\eta}{\{1-3h(1+\eta^{2})\}^{1/2}}\right)\]
\[=\sqrt{\frac{2}{\pi}}\frac{(1+\eta^{2})^{3/2}}{\{1-3h(1+\eta^{2})\}^{2}}\Bigg[\frac{2\eta^{3}+3\eta\{1-3h(1+\eta^{2})\}}{\{(1+\eta^{2})(1-3h)\}^{3/2}}\Bigg],\quad h<\frac{1}{3(1+\eta^{2})},\]
and
\[\mathbb{E}(Y^{4})=\int_{\mathbb{R}}x^{4}\exp(2hx^{2})2\phi(x;0,1+\eta^{2})\Phi\left(\frac{\eta x}{\sqrt{1+\eta^{2}}}\right)\mathrm{d}x\]
\[=\frac{1}{\sqrt{1-4h(1+\eta^{2})}}\int_{\mathbb{R}}x^{4}2\phi\left(x;0,\frac{1+\eta^{2}}{1-4h(1+\eta^{2})}\right)\Phi\Bigg\{\frac{\eta}{\{1-4h(1+\eta^{2})\}^{1/2}}\frac{(1+\eta^{2})^{-1/2}}{\{1-4h(1+\eta^{2})\}^{-1/2}}x\Bigg\}\mathrm{d}x\]
\[=\frac{1}{\sqrt{1-4h(1+\eta^{2})}}\mathbb{E}(X^{4})\quad\text{with }X\sim\mathcal{ASN}\left(0,\frac{1+\eta^{2}}{1-4h(1+\eta^{2})},\frac{\eta}{\{1-4h(1+\eta^{2})\}^{1/2}}\right)\]
\[=\frac{3(1+\eta^{2})^{2}}{\{1-4h(1+\eta^{2})\}^{5/2}},\quad h<\frac{1}{4(1+\eta^{2})}.\]
The \(3^{\text{rd}}\) and \(4^{\text{th}}\) order moments of the \(\mathcal{ASN}\) distribution are obtained from Chapter 2 of Azzalini and Capitanio (2014). \(\Box\)

We provide plots of the \(\gamma_{1}\) and \(\gamma_{2}\) measures for the \(\mathcal{SNTH}_{1}(0,1,1,\eta,h)\) distribution against \(\eta\) and \(h\), for different fixed \(h\) and \(\eta\), respectively, in Figure 2.

Figure 2: Plots of the measures of skewness and kurtosis for the \(\mathcal{SNTH}_{1}(0,1,1,\eta,h)\) distribution.

From the plots, it is clear that the parameter \(\eta\) dictates the extent of skewness in the distribution. Moreover, for a fixed \(\eta\), the extent of skewness increases with an increase in \(h\), and vice-versa. Similarly, the extent of the tail-thickness is dictated by the parameter \(h\), and for a fixed \(h\), the tail-thickness increases with an increase in \(\eta\), and vice-versa. Here we only plot \(\gamma_{2}\) against \(h\) for positive \(\eta\), as \(\gamma_{2}\) is only a function of \(\eta^{2}\). The plots show how the effects of \(\eta\) and \(h\) on skewness and kurtosis are intertwined. Nevertheless, we associate the parameter \(\eta\) with the skewness and the parameter \(h\) with the tail-thickness of the \(\mathcal{SNTH}\) distribution. It is also worth pointing out from the plots that the \(\gamma_{2}\) measure cannot be less than zero for the \(\mathcal{SNTH}\) distribution. Hence, the \(\mathcal{SNTH}\) distribution is not suitable for scenarios where the tail-thickness of the data is less than that of the Gaussian distribution.

The \(h\) parameter of the \(\mathcal{SNTH}\) distribution is the counterpart of the \(\nu\) parameter of the skew-\(t\) distribution, since these two parameters primarily control the tail-thickness in their respective distributions. The relationship between \(h\) and \(\nu\) is studied here using two simulation experiments. In the first experiment we simulate \(500\) realizations from \(\mathcal{SNTH}_{1}(0,1,1,1.5,h)\), where \(h\) varies in the interval \([0.02,1]\).
We fit the skew-\(t\) distribution to the simulated \(\mathcal{SNTH}\) data for varying \(h\) with the R (R Core Team, 2022) package sn (Azzalini, 2015) and note the estimate of \(\nu\). For each \(h\), we repeat this experiment \(100\) times and present the boxplots of the \(\nu\) estimates as a function of \(h\) in Figure 3(a). Moreover, the means of the estimates are indicated by the red dots. Similar experiment results are provided in Figure 3(b), where we present the boxplots of the \(100\) estimates of \(h\) obtained by fitting the \(\mathcal{SNTH}\) distribution to \(100\) replicates of size \(500\) from the skew-\(t\) distribution with location, scale, and skewness parameter as \(0\), \(1\), and \(1.5\), with varying degrees of freedom \(\nu\in[0.7,5.3]\).

Figure 3: Boxplots of the estimated \(\nu\) parameter against the true \(h\) parameter in (a) and the estimated \(h\) against the true \(\nu\) parameter in (b). The red dots in each plot correspond to the means of the estimates based on \(100\) replicates.

From the two boxplots in Figure 3, we can see how the two tail-thickness parameters of the \(\mathcal{SNTH}\) and the skew-\(t\) are related. As \(\nu\) in the skew-\(t\) distribution increases, the kurtosis decreases, and that corresponds to a decrease in \(h\) in the \(\mathcal{SNTH}\) distribution, and vice-versa.

A similar experiment, done in the bivariate case, yields some interesting results. In this experiment, we simulate \(500\) realizations from \(\mathcal{SNTH}_{2}\left(\left(\begin{smallmatrix}0\\ 0\end{smallmatrix}\right),\mathbf{I}_{2},\left(\begin{smallmatrix}1&0.3\\ 0.3&1\end{smallmatrix}\right),\left(\begin{smallmatrix}-1.5\\ 2\end{smallmatrix}\right),\left(\begin{smallmatrix}h_{1}\\ h_{2}\end{smallmatrix}\right)\right)\) with varying \(h_{2}\in[0.01,1]\), for fixed \(h_{1}\in\{0.2,0.4,0.6,0.8,1\}\). We fit a bivariate skew-\(t\) distribution to the \(\mathcal{SNTH}\) observations and note the estimate of \(\nu\). Based on \(100\) replicates, we plot the median of the \(\hat{\nu}\)s against \(h_{2}\) for different \(h_{1}\) in Figure 4. Moreover, we smooth the curves using local polynomial fitting.

Figure 4: Curves obtained by smoothing the median of \(\hat{\nu}\)s from the fitted bivariate skew-\(t\) based on \(100\) replicates from \(\mathcal{SNTH}_{2}\), as a function of the true \(h_{2}\) for different values of \(h_{1}\).

From this plot we see that a particular \(\hat{\nu}\) can be obtained for different pairs of \(h_{1}\) and \(h_{2}\). To emphasize that, we have highlighted the line \(\hat{\nu}=1\), which cuts all the curves in the plot. From here we conclude that the skew-\(t\) distribution is not suitable for scenarios where there is a great disparity between the marginal kurtosis values. When \(h_{2}\) is very small and \(h_{1}\) is large, the skew-\(t\) model puts more emphasis on \(h_{1}\), and the overall estimate of \(\nu\) in that case becomes small, which corresponds to a heavier tail in the fitted distribution. As \(h_{2}\) increases, the true distribution becomes more heavy-tailed but the fitted distribution becomes less heavy-tailed.

### Conditional Distribution of \(\mathcal{SNTH}\)

Before deriving the conditional distribution of the \(\mathcal{SNTH}\) family, we first discuss the corresponding result for the conditional distribution of the \(\mathcal{SN}\) distribution.
To do that, we need to revisit the family of the extended skew-normal distribution (Adcock and Shutes, 2001; Arnold and Beaver, 2000; Capitanio et al., 2003; Arellano-Valle and Genton, 2010b), but with the \(\mathbf{\Psi}\)-\(\mathbf{\eta}\) parameterization, similar to the definition of the \(\mathcal{SN}\) distribution in Section 2.1. A \(p\)-variate random vector \(\mathbf{Y}\) has an extended skew-normal distribution if its pdf is
\[f_{\mathbf{Y}}(\mathbf{y})=\frac{1}{\Phi(\tau)}\phi_{p}(\mathbf{y};\mathbf{\xi}+\tau\mathbf{\eta},\mathbf{\Psi}+\mathbf{\eta}\mathbf{\eta}^{\top})\Phi\left\{\frac{\tau+\mathbf{\eta}^{\top}\mathbf{\Psi}^{-1}(\mathbf{y}-\mathbf{\xi})}{\sqrt{1+\mathbf{\eta}^{\top}\mathbf{\Psi}^{-1}\mathbf{\eta}}}\right\},\quad\mathbf{y}\in\mathbb{R}^{p}, \tag{7}\]
where \(\mathbf{\xi}\in\mathbb{R}^{p}\) is the location parameter, \(\mathbf{\Psi}\in\mathbb{R}^{p\times p}\) is the symmetric positive definite scale matrix, \(\mathbf{\eta}\in\mathbb{R}^{p}\) is the skewness parameter, and \(\tau\in\mathbb{R}\) is the extension parameter. We denote \(\mathbf{Y}\sim\mathcal{ESN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta},\tau)\). From the pdf of the \(\mathcal{ESN}\) distribution in Equation (7) we see that, when the extension parameter \(\tau=0\), the \(\mathcal{ESN}\) distribution reduces to the \(\mathcal{SN}\) distribution. Like the \(\mathcal{SN}\) distribution, a random vector \(\mathbf{Y}\sim\mathcal{ESN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta},\tau)\) also has a concise stochastic representation
\[\mathbf{Y}=\mathbf{\xi}+\tau\mathbf{\eta}+\mathbf{\eta}U+\mathbf{W}, \tag{8}\]
where \(U\overset{d}{=}(Z|Z+\tau>0)\), \(Z\sim\mathcal{N}(0,1)\), \(\mathbf{W}\sim\mathcal{N}_{p}(\mathbf{0},\mathbf{\Psi})\), and \(Z\) and \(\mathbf{W}\) are independently distributed. The last statement is directly obtained from Proposition 1 of Arellano-Valle and Genton (2010b) (see their Equation (10) with \(\nu\to\infty\)). As a consequence of this stochastic representation, the marginals of the \(\mathcal{ESN}\) distribution also remain in the same family, and the parameters of the marginal distribution are just the corresponding marginal parameters, similar to the \(\mathcal{SN}\) distribution. We need this definition of the \(\mathcal{ESN}\) distribution because the conditionals of the \(\mathcal{SN}\) family belong to the \(\mathcal{ESN}\) family. Let \(\mathbf{Y}\sim\mathcal{SN}_{p}(\mathbf{\xi},\mathbf{\Psi},\mathbf{\eta})\), and consider the partition \(\mathbf{Y}=(\mathbf{Y}_{1}^{\top},\mathbf{Y}_{2}^{\top})^{\top}\) with \(\mathbf{Y}_{i}\) of size \(p_{i}\) (\(i=1,2\)) such that \(p_{1}+p_{2}=p\), with corresponding partitions of the parameters in blocks of matching sizes.
Then the conditional distribution of \(\mathbf{Y}_{1}\) given \(\mathbf{Y}_{2}=\mathbf{y}_{2}\), \(\mathbf{y}_{2}\in\mathbb{R}^{p_{2}}\), is \[(\mathbf{Y}_{1}|\mathbf{Y}_{2}=\mathbf{y}_{2})\sim\mathcal{ESN}_{p_{1}}(\mathbf{\xi}_{1.2}, \bar{\mathbf{\Psi}}_{11.2},\bar{\mathbf{\eta}}_{1.2},\bar{\tau}_{1.2}), \tag{9}\] where \(\mathbf{\xi}_{1.2}=\mathbf{\xi}_{1}+\mathbf{\Psi}_{12}\mathbf{\Psi}_{22}^{-1}(\mathbf{y}_{2}-\mathbf{ \xi}_{2})\), \(\mathbf{\Psi}_{11.2}=\mathbf{\Psi}_{11}-\mathbf{\Psi}_{12}\mathbf{\Psi}_{22}^{-1}\mathbf{\Psi}_{21}\), \(\mathbf{\eta}_{1.2}=\mathbf{\eta}_{1}-\mathbf{\Psi}_{12}\mathbf{\Psi}_{22}^{-1}\mathbf{\eta}_{2}\), \[\bar{\mathbf{\eta}}_{1.2}=\frac{\mathbf{\eta}_{1.2}}{\sqrt{1+\mathbf{\eta}_{2}^{\top}\mathbf{ \Psi}_{22}^{-1}\mathbf{\eta}_{2}}},\;\text{and}\;\bar{\tau}_{1.2}=\frac{\mathbf{\eta} _{2}^{\top}\mathbf{\Psi}_{22}^{-1}(\mathbf{y}_{2}-\mathbf{\xi}_{2})}{\sqrt{1+\mathbf{\eta}_{2 }^{\top}\mathbf{\Psi}_{22}^{-1}\mathbf{\eta}_{2}}}.\] This result can be verified by the fact that the conditional distribution of the \(\mathcal{ASN}\) family belongs to the extended skew-normal distribution proposed by Arellano-Valle and Genton (2010b) (see Section 5.3.2 in Azzalini and Capitanio (2014)) and by reparameterizing to the \(\mathbf{\Psi}\)-\(\mathbf{\eta}\) parameterization. In the next proposition we derive the conditional distribution of the \(\mathcal{SNT}\mathcal{H}\) family. We show that the conditional distributions of the \(\mathcal{SN}\) family and the \(\mathcal{SNT}\mathcal{H}\) family are related. **Proposition 6**.: _Let \(\mathbf{Y}\sim\mathcal{SNT}\mathcal{H}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}}, \mathbf{\eta},\mathbf{h})\), and consider the partition of \(\mathbf{Y}=(\mathbf{Y}_{1}^{\top},\mathbf{Y}_{2}^{\top})^{\top}\) with \(\mathbf{Y}_{i}\) of size \(p_{i}\) (\(i=1,2\)) and such that \(p_{1}+p_{2}=p\), with corresponding partitions of the parameters in blocks of matching sizes. 
Then the conditional distribution of \(\mathbf{Y}_{1}\) given \(\mathbf{Y}_{2}=\mathbf{y}_{2}\) is_
\[(\mathbf{Y}_{1}|\mathbf{Y}_{2}=\mathbf{y}_{2})\overset{d}{=}\mathbf{\tau}_{\mathbf{h}_{1}}(\mathbf{Y}_{0}),\quad\mathbf{Y}_{0}\sim\mathcal{ESN}_{p_{1}}(\mathbf{\xi}_{1.2},\bar{\mathbf{\Psi}}_{11.2},\bar{\mathbf{\eta}}_{1.2},\bar{\tau}_{1.2}),\]
_where \(\mathbf{\xi}_{1.2}=\bar{\mathbf{\Psi}}_{12}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{g}_{2}(\mathbf{y}_{2})\), \(\bar{\mathbf{\Psi}}_{11.2}=\bar{\mathbf{\Psi}}_{11}-\bar{\mathbf{\Psi}}_{12}\bar{\mathbf{\Psi}}_{22}^{-1}\bar{\mathbf{\Psi}}_{21}\), \(\mathbf{\tau}_{\mathbf{h}_{1}}(\cdot)\) is the same as in Equation (4), \(\mathbf{g}(\mathbf{y})\) is the same as in Equation (5), partitioned as \(\mathbf{g}(\mathbf{y})=\{\mathbf{g}_{1}(\mathbf{y}_{1}),\mathbf{g}_{2}(\mathbf{y}_{2})\}^{\top}\) with \(\mathbf{g}_{1}(\mathbf{y}_{1})=\{g_{1}(y_{1}),\ldots,g_{p_{1}}(y_{p_{1}})\}^{\top}\) and \(\mathbf{g}_{2}(\mathbf{y}_{2})=\{g_{p_{1}+1}(y_{p_{1}+1}),\ldots,g_{p}(y_{p})\}^{\top}\), and_
\[\bar{\mathbf{\eta}}_{1.2}=\frac{\mathbf{\eta}_{1}-\bar{\mathbf{\Psi}}_{12}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{\eta}_{2}}{\sqrt{1+\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{\eta}_{2}}},\quad\bar{\tau}_{1.2}=\frac{\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{g}_{2}(\mathbf{y}_{2})}{\sqrt{1+\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{\eta}_{2}}}.\]

**Proof.** From Proposition 3, the marginal pdf of \(\mathbf{Y}_{2}\) is
\[f_{\mathbf{Y}_{2}}(\mathbf{y}_{2})=2\phi_{p_{2}}\{\mathbf{g}_{2}(\mathbf{y}_{2});\mathbf{0},(\bar{\mathbf{\Psi}}_{22}+\mathbf{\eta}_{2}\mathbf{\eta}_{2}^{\top})\}\Phi\Bigg\{\frac{\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{g}_{2}(\mathbf{y}_{2})}{\sqrt{1+\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{\eta}_{2}}}\Bigg\}\times\prod_{i=p_{1}+1}^{p}\Bigg\{\frac{1}{\omega_{ii}}\left(\frac{\exp[\frac{1}{2}W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}+\exp[W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}\right)\Bigg\},\quad\mathbf{y}_{2}\in\mathbb{R}^{p_{2}}.\]
Hence, the conditional pdf of \(\mathbf{Y}_{1}|\mathbf{Y}_{2}=\mathbf{y}_{2}\) is
\[f_{\mathbf{Y}_{1}|\mathbf{Y}_{2}=\mathbf{y}_{2}}(\mathbf{y}_{1})=\frac{f_{\mathbf{Y}}(\mathbf{y})}{f_{\mathbf{Y}_{2}}(\mathbf{y}_{2})}=\frac{\phi_{p}\{\mathbf{g}(\mathbf{y});\mathbf{0},(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})\}\Phi\Big\{\frac{\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{g}(\mathbf{y})}{\sqrt{1+\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta}}}\Big\}}{\phi_{p_{2}}\{\mathbf{g}_{2}(\mathbf{y}_{2});\mathbf{0},(\bar{\mathbf{\Psi}}_{22}+\mathbf{\eta}_{2}\mathbf{\eta}_{2}^{\top})\}\Phi\Big\{\frac{\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{g}_{2}(\mathbf{y}_{2})}{\sqrt{1+\mathbf{\eta}_{2}^{\top}\bar{\mathbf{\Psi}}_{22}^{-1}\mathbf{\eta}_{2}}}\Big\}}\times\prod_{i=1}^{p_{1}}\Bigg\{\frac{1}{\omega_{ii}}\left(\frac{\exp[\frac{1}{2}W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}+\exp[W_{0}\{h_{i}(\frac{y_{i}-\xi_{i}}{\omega_{ii}})^{2}\}]}\right)\Bigg\},\quad\mathbf{y}_{1}\in\mathbb{R}^{p_{1}}.\]
From the pdf given above, we can see that it is the density function of \(\mathbf{\tau}_{\mathbf{h}_{1}}(\mathbf{Y}_{0})\), where \(\mathbf{Y}_{0}\overset{d}{=}(\mathbf{Z}_{1}|\mathbf{Z}_{2}=\mathbf{g}_{2}(\mathbf{y}_{2}))\) and \(\mathbf{Z}=(\mathbf{Z}_{1}^{\top},\mathbf{Z}_{2}^{\top})^{\top}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\). Hence, from Equation (9), we have \(\mathbf{Y}_{0}\sim\mathcal{ESN}_{p_{1}}(\mathbf{\xi}_{1.2},\bar{\mathbf{\Psi}}_{11.2},\bar{\mathbf{\eta}}_{1.2},\bar{\tau}_{1.2})\). \(\square\)
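Proposition 6 is easy to operationalize: given the transformed conditioning values \(\mathbf{g}_{2}(\mathbf{y}_{2})\), the \(\mathcal{ESN}\) parameters of \(\mathbf{Y}_{0}\) are plain matrix algebra. Below is a minimal Python sketch; the function name and index arguments `idx1`, `idx2` (the positions of the two blocks) are ours.

```python
import numpy as np

def snth_conditional_params(Psi_bar, eta, g2_y2, idx1, idx2):
    # ESN_{p1} parameters of Y0 in Proposition 6.
    P11 = Psi_bar[np.ix_(idx1, idx1)]
    P12 = Psi_bar[np.ix_(idx1, idx2)]
    P21 = Psi_bar[np.ix_(idx2, idx1)]
    P22_inv = np.linalg.inv(Psi_bar[np.ix_(idx2, idx2)])
    e1, e2 = np.asarray(eta)[idx1], np.asarray(eta)[idx2]
    denom = np.sqrt(1.0 + e2 @ P22_inv @ e2)
    xi_12 = P12 @ P22_inv @ g2_y2                # location
    Psi_112 = P11 - P12 @ P22_inv @ P21          # scale (Schur complement)
    eta_12 = (e1 - P12 @ P22_inv @ e2) / denom   # skewness
    tau_12 = (e2 @ P22_inv @ g2_y2) / denom      # extension
    return xi_12, Psi_112, eta_12, tau_12
```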
Since the conditional distribution of the \(\mathcal{SNTH}\) family can be viewed as a component-wise Tukey-\(h\) transformation of the \(\mathcal{ESN}\), closed-form expressions of its mean vector and variance-covariance matrix can be derived. The conditional mean and the variance-covariance matrix will be helpful for using the \(\mathcal{SNTH}\) model for various formal statistical purposes such as regression modeling, time-series analysis, and spatial modeling. In the next three propositions we provide the mathematical expressions of the elements of the conditional mean vector and the conditional variance-covariance matrix. The proofs of Propositions 8 and 9 below are very similar to the proof of Proposition 7; hence they are omitted in the main article and are given in Sections S1 and S2 in the supplementary material.

**Proposition 7**.: _Let \(\mathbf{Y}_{0}\) be defined as in Proposition 6. The mean vector \(\mathbf{\mu}=\mathbb{E}\{\mathbf{\tau}_{\mathbf{h}_{1}}(\mathbf{Y}_{0})\}\) is given by:_
\[\mu_{i}=\frac{1}{\sqrt{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}}\exp\left[\frac{(\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}})^{2}h_{i}}{2\{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}\}}\right]\frac{\Phi(\tilde{\tau}_{i})}{\Phi(\bar{\tau}_{1.2})}\left\{\tilde{\xi}_{i}+\tilde{\omega}_{i}\tilde{\delta}_{i}\frac{\phi(\tilde{\tau}_{i})}{\Phi(\tilde{\tau}_{i})}\right\},\]
_where \(\mathbf{\xi}_{1.2}=(\xi_{1.2_{1}},\ldots,\xi_{1.2_{p_{1}}})^{\top}\), \(\text{diag}(\bar{\mathbf{\Psi}}_{11.2})=(\bar{\Psi}_{11.2_{11}},\ldots,\bar{\Psi}_{11.2_{p_{1}p_{1}}})^{\top}\), \(\bar{\mathbf{\eta}}_{1.2}=(\bar{\eta}_{1.2_{1}},\ldots,\bar{\eta}_{1.2_{p_{1}}})^{\top}\),_
\[\tilde{\xi}_{i}=\frac{\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}}}{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}},\quad\tilde{\omega}_{i}=\sqrt{\frac{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}},\quad\tilde{\alpha}_{i}=\frac{\bar{\eta}_{1.2_{i}}}{\sqrt{\bar{\Psi}_{11.2_{ii}}}}\frac{1}{\sqrt{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}},\]
\[\tilde{\alpha}_{0_{i}}=\frac{\bar{\tau}_{1.2}\sqrt{\bar{\Psi}_{11.2_{ii}}}+\frac{\bar{\eta}_{1.2_{i}}}{\sqrt{\bar{\Psi}_{11.2_{ii}}}}\left\{\frac{\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}}+\xi_{1.2_{i}}(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}\right\}}{\sqrt{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}},\]
_\(\tilde{\delta}_{i}=\frac{\tilde{\alpha}_{i}}{\sqrt{1+\tilde{\alpha}_{i}^{2}}}\), \(\tilde{\tau}_{i}=\frac{\tilde{\alpha}_{0_{i}}}{\sqrt{1+\tilde{\alpha}_{i}^{2}}}\), and \(h_{i}<\frac{1}{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}\), \(i=1,\ldots,p_{1}\)._

**Proof.** From Equation (8) it can be established that \(Y_{0_{i}}\sim\mathcal{ESN}_{1}(\xi_{1.2_{i}},\bar{\Psi}_{11.2_{ii}},\bar{\eta}_{1.2_{i}},\bar{\tau}_{1.2})\), \(i=1,\ldots,p_{1}\).
Then,
\[\mu_{i}=\mathbb{E}\{\tau_{h_{i}}(Y_{0_{i}})\}=\int_{\mathbb{R}}x\exp(h_{i}x^{2}/2)\frac{1}{\Phi(\bar{\tau}_{1.2})}\phi(x;\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}},\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})\Phi\left\{\frac{\bar{\tau}_{1.2}+\bar{\eta}_{1.2_{i}}(x-\xi_{1.2_{i}})/\bar{\Psi}_{11.2_{ii}}}{\sqrt{1+\bar{\eta}_{1.2_{i}}^{2}/\bar{\Psi}_{11.2_{ii}}}}\right\}\text{d}x\]
\[=\exp\left[\frac{(\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}})^{2}h_{i}}{2\{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}\}}\right]\frac{1}{\Phi(\bar{\tau}_{1.2})}\frac{1}{\sqrt{2\pi}\sqrt{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}}\int_{\mathbb{R}}x\exp\left\{-\frac{1}{2}\frac{(x-\tilde{\xi}_{i})^{2}}{\tilde{\omega}_{i}^{2}}\right\}\Phi\left\{\frac{\bar{\tau}_{1.2}+\bar{\eta}_{1.2_{i}}(x-\xi_{1.2_{i}})/\bar{\Psi}_{11.2_{ii}}}{\sqrt{1+\bar{\eta}_{1.2_{i}}^{2}/\bar{\Psi}_{11.2_{ii}}}}\right\}\text{d}x\]
\[=\exp\left[\frac{(\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}})^{2}h_{i}}{2\{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}\}}\right]\frac{1}{\sqrt{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}}\frac{\Phi(\tilde{\tau}_{i})}{\Phi(\bar{\tau}_{1.2})}\int_{\mathbb{R}}x\frac{1}{\Phi(\tilde{\tau}_{i})}\phi(x;\tilde{\xi}_{i},\tilde{\omega}_{i}^{2})\Phi\{\tilde{\alpha}_{0_{i}}+\tilde{\alpha}_{i}\tilde{\omega}_{i}^{-1}(x-\tilde{\xi}_{i})\}\text{d}x\]
\[=\frac{1}{\sqrt{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}}\exp\left[\frac{(\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}})^{2}h_{i}}{2\{1-(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}\}}\right]\frac{\Phi(\tilde{\tau}_{i})}{\Phi(\bar{\tau}_{1.2})}\left\{\tilde{\xi}_{i}+\tilde{\omega}_{i}\tilde{\delta}_{i}\frac{\phi(\tilde{\tau}_{i})}{\Phi(\tilde{\tau}_{i})}\right\},\]
which is the stated expression; the last step uses the mean of the \(\mathcal{ESN}\) distribution. \(\square\)

**Proposition 8**.: _Let \(\mathbf{Y}_{0}\) be defined as in Proposition 6, and let \(\mathbf{\Sigma}=(\sigma_{ij})=\mathbb{V}\text{ar}\{\mathbf{\tau}_{\mathbf{h}_{1}}(\mathbf{Y}_{0})\}\).
Then:_
\[\sigma_{ii}=\frac{1}{\sqrt{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}}\exp\left\{\frac{(\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}})^{2}h_{i}}{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}\right\}\frac{\Phi(\tilde{\tau}_{i})}{\Phi(\bar{\tau}_{1.2})}\times\left\{\tilde{\xi}_{i}^{2}+\tilde{\omega}_{i}^{2}-\tilde{\tau}_{i}\frac{\phi(\tilde{\tau}_{i})}{\Phi(\tilde{\tau}_{i})}\tilde{\omega}_{i}^{2}\tilde{\delta}_{i}^{2}+2\frac{\phi(\tilde{\tau}_{i})}{\Phi(\tilde{\tau}_{i})}\tilde{\xi}_{i}\tilde{\omega}_{i}\tilde{\delta}_{i}\right\}-\mu_{i}^{2},\]
_where \(\mathbf{\xi}_{1.2}=(\xi_{1.2_{1}},\ldots,\xi_{1.2_{p_{1}}})^{\top}\), \(\text{diag}(\bar{\mathbf{\Psi}}_{11.2})=(\bar{\Psi}_{11.2_{11}},\ldots,\bar{\Psi}_{11.2_{p_{1}p_{1}}})^{\top}\), \(\bar{\mathbf{\eta}}_{1.2}=(\bar{\eta}_{1.2_{1}},\ldots,\bar{\eta}_{1.2_{p_{1}}})^{\top}\),_
\[\tilde{\xi}_{i}=\frac{\xi_{1.2_{i}}+\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}}}{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}},\quad\tilde{\omega}_{i}=\sqrt{\frac{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}},\quad\tilde{\alpha}_{i}=\frac{\bar{\eta}_{1.2_{i}}}{\sqrt{\bar{\Psi}_{11.2_{ii}}}}\frac{1}{\sqrt{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}},\]
\[\tilde{\alpha}_{0_{i}}=\frac{\bar{\tau}_{1.2}\sqrt{\bar{\Psi}_{11.2_{ii}}}+\frac{\bar{\eta}_{1.2_{i}}}{\sqrt{\bar{\Psi}_{11.2_{ii}}}}\left\{\frac{\bar{\tau}_{1.2}\bar{\eta}_{1.2_{i}}+2\xi_{1.2_{i}}(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}{1-2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})h_{i}}\right\}}{\sqrt{\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2}}},\]
_\(\tilde{\delta}_{i}=\frac{\tilde{\alpha}_{i}}{\sqrt{1+\tilde{\alpha}_{i}^{2}}}\), \(\tilde{\tau}_{i}=\frac{\tilde{\alpha}_{0_{i}}}{\sqrt{1+\tilde{\alpha}_{i}^{2}}}\), \(\mu_{i}\) is the same as in Proposition 7, and \(h_{i}<\frac{1}{2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})}\), \(i=1,\ldots,p_{1}\)._

**Proposition 9**.: _Let \(\mathbf{Y}_{0}\) be defined as in Proposition 6, and let \(\mathbf{\Sigma}=(\sigma_{ij})=\mathbb{V}\text{ar}\{\mathbf{\tau}_{\mathbf{h}_{1}}(\mathbf{Y}_{0})\}\).
Then:_
\[\sigma_{ij}=\frac{\sqrt{\det\{(\mathbf{\Omega}_{i,j}^{-1}-\mathbf{H}_{i,j})^{-1}\}}}{\sqrt{\det(\mathbf{\Omega}_{i,j})}}\exp\left[-\frac{1}{2}\{\tilde{\mathbf{\mu}}_{i,j}^{\top}\mathbf{\Omega}_{i,j}^{-1}\tilde{\mathbf{\mu}}_{i,j}-\tilde{\mathbf{\mu}}_{i,j}^{\top}(\mathbf{\Omega}_{i,j}-\mathbf{\Omega}_{i,j}\mathbf{H}_{i,j}\mathbf{\Omega}_{i,j})^{-1}\tilde{\mathbf{\mu}}_{i,j}\}\right]\]
\[\quad\times\frac{\Phi(\tilde{\tau}_{i,j})}{\Phi(\bar{\tau}_{1.2})}\Bigg\{(\tilde{\mathbf{\Omega}}_{i,j})_{12}-\tilde{\tau}_{i,j}\frac{\phi(\tilde{\tau}_{i,j})}{\Phi(\tilde{\tau}_{i,j})}(\tilde{\mathbf{\omega}}_{i,j})_{11}(\tilde{\mathbf{\omega}}_{i,j})_{22}(\tilde{\mathbf{\delta}}_{i,j})_{1}(\tilde{\mathbf{\delta}}_{i,j})_{2}+\xi_{1.2_{i}}\xi_{1.2_{j}}\]
\[\quad\qquad+\frac{\phi(\tilde{\tau}_{i,j})}{\Phi(\tilde{\tau}_{i,j})}\xi_{1.2_{i}}(\tilde{\mathbf{\omega}}_{i,j})_{22}(\tilde{\mathbf{\delta}}_{i,j})_{2}+\frac{\phi(\tilde{\tau}_{i,j})}{\Phi(\tilde{\tau}_{i,j})}\xi_{1.2_{j}}(\tilde{\mathbf{\omega}}_{i,j})_{11}(\tilde{\mathbf{\delta}}_{i,j})_{1}\Bigg\}-\mu_{i}\mu_{j},\]
_where \(\mathbf{\xi}_{i,j}=(\xi_{1.2_{i}},\xi_{1.2_{j}})^{\top}\), \(\mathbf{\Psi}_{i,j}=\left(\begin{smallmatrix}\bar{\Psi}_{11.2_{ii}}&\bar{\Psi}_{11.2_{ij}}\\ \bar{\Psi}_{11.2_{ij}}&\bar{\Psi}_{11.2_{jj}}\end{smallmatrix}\right)\), \(\mathbf{\eta}_{i,j}=(\bar{\eta}_{1.2_{i}},\bar{\eta}_{1.2_{j}})^{\top}\), \(\mathbf{\Omega}_{i,j}=\mathbf{\Psi}_{i,j}+\mathbf{\eta}_{i,j}\mathbf{\eta}_{i,j}^{\top}\), \(\tilde{\mathbf{\mu}}_{i,j}=\mathbf{\xi}_{i,j}+\bar{\tau}_{1.2}\mathbf{\eta}_{i,j}\), \(\mathbf{H}_{i,j}=\left(\begin{smallmatrix}h_{i}&0\\ 0&h_{j}\end{smallmatrix}\right)\), \(\tilde{\mathbf{\xi}}_{i,j}=(\mathbf{I}_{2}-\mathbf{\Omega}_{i,j}\mathbf{H}_{i,j})^{-1}\tilde{\mathbf{\mu}}_{i,j}\), \(\tilde{\mathbf{\Omega}}_{i,j}=(\mathbf{\Omega}_{i,j}^{-1}-\mathbf{H}_{i,j})^{-1}\), \(\tilde{\alpha}_{0_{i,j}}=\frac{\bar{\tau}_{1.2}+\mathbf{\eta}_{i,j}^{\top}\mathbf{\Psi}_{i,j}^{-1}(\tilde{\mathbf{\xi}}_{i,j}-\mathbf{\xi}_{i,j})}{\sqrt{1+\mathbf{\eta}_{i,j}^{\top}\mathbf{\Psi}_{i,j}^{-1}\mathbf{\eta}_{i,j}}}\), \(\tilde{\mathbf{\alpha}}_{i,j}=\frac{\tilde{\mathbf{\omega}}_{i,j}\mathbf{\Psi}_{i,j}^{-1}\mathbf{\eta}_{i,j}}{\sqrt{1+\mathbf{\eta}_{i,j}^{\top}\mathbf{\Psi}_{i,j}^{-1}\mathbf{\eta}_{i,j}}}\), \(\tilde{\mathbf{\omega}}_{i,j}=\{\text{diag}(\tilde{\mathbf{\Omega}}_{i,j})\}^{1/2}\), \(\bar{\tilde{\mathbf{\Omega}}}_{i,j}=\tilde{\mathbf{\omega}}_{i,j}^{-1}\tilde{\mathbf{\Omega}}_{i,j}\tilde{\mathbf{\omega}}_{i,j}^{-1}\), \(\tilde{\mathbf{\delta}}_{i,j}=(1+\tilde{\mathbf{\alpha}}_{i,j}^{\top}\bar{\tilde{\mathbf{\Omega}}}_{i,j}\tilde{\mathbf{\alpha}}_{i,j})^{-1/2}\bar{\tilde{\mathbf{\Omega}}}_{i,j}\tilde{\mathbf{\alpha}}_{i,j}\), \(\tilde{\tau}_{i,j}=\tilde{\alpha}_{0_{i,j}}(1+\tilde{\mathbf{\alpha}}_{i,j}^{\top}\bar{\tilde{\mathbf{\Omega}}}_{i,j}\tilde{\mathbf{\alpha}}_{i,j})^{-1/2}\), \(\mu_{i}\), \(\mu_{j}\) are the same as in Proposition 7, and \(h_{i}<\frac{1}{2(\bar{\Psi}_{11.2_{ii}}+\bar{\eta}_{1.2_{i}}^{2})}\), \(h_{j}<\frac{1}{2(\bar{\Psi}_{11.2_{jj}}+\bar{\eta}_{1.2_{j}}^{2})}\), \(i\neq j\), \(i,j=1,\ldots,p_{1}\)._

### Canonical Form of \(\mathcal{SNTH}\)

The canonical form of the \(\mathcal{ASN}\) or the \(\mathcal{SN}\) distribution is useful for deriving Mardia's measures of multivariate skewness and kurtosis (Mardia, 1970) and the measures of multivariate skewness and kurtosis introduced by Malkovich and Afifi (1973), since they are invariant under affine transformations of the variable. Moreover, using the canonical form, the unique mode of the \(\mathcal{ASN}\) distribution can be derived; see Proposition 5.14 in Azzalini and Capitanio (2014). Hence, the canonical form is used mainly to reduce the dimensionality of various problems when applicable.
For the \(\mathcal{SNTH}\) distribution, we define the canonical form by taking the component-wise Tukey-\(h\) transformation of the canonical form of the latent \(\mathcal{SN}\) random vector.

**Proposition 10**.: _Suppose \(\mathbf{Y}\sim\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\). We define the canonical form of the \(\mathcal{SNTH}\) by the distribution of_
\[\mathbf{\omega}^{-1}(\mathbf{Y}^{*}-\mathbf{\xi})=\mathbf{\tau}_{\mathbf{h}}[\mathbf{H}^{*}\mathbf{\tau}_{\mathbf{h}}^{-1}\{\mathbf{\omega}^{-1}(\mathbf{Y}-\mathbf{\xi})\}]\sim\mathcal{SNTH}_{p}(\mathbf{0},\mathbf{I}_{p},\mathbf{\eta}^{*},\mathbf{h}),\]
_where \(\mathbf{\tau}_{\mathbf{h}}^{-1}(\mathbf{z})=\{\tau_{h_{1}}^{-1}(z_{1}),\ldots,\tau_{h_{p}}^{-1}(z_{p})\}^{\top}\), \(\tau_{h}^{-1}(z)\) is the same as in Equation (6), \(\mathbf{\eta}^{*}=(\eta^{*},0,\ldots,0)^{\top}\), \(\eta^{*}=\sqrt{\mathbf{\alpha}^{\top}\bar{\mathbf{\Omega}}\mathbf{\alpha}}\), \(\mathbf{\Omega}=\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top}\), \(\mathbf{\alpha}=(1+\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta})^{-1/2}\{\text{diag}(\mathbf{\Omega})\}^{1/2}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta}\), \(\bar{\mathbf{\Omega}}=\{\text{diag}(\mathbf{\Omega})\}^{-1/2}\mathbf{\Omega}\{\text{diag}(\mathbf{\Omega})\}^{-1/2}\), \(\mathbf{H}^{*}=\left(\begin{smallmatrix}\sqrt{1+\mathbf{\alpha}^{\top}\bar{\mathbf{\Omega}}\mathbf{\alpha}}&\mathbf{0}^{\top}\\ \mathbf{0}&\mathbf{I}_{p-1}\end{smallmatrix}\right)\mathbf{H}\), \(\mathbf{H}=\mathbf{Q}\mathbf{\Omega}^{-1/2}\), \(\mathbf{Q}\) is obtained from the spectral decomposition \(\mathbf{Q}^{\top}\mathbf{\Lambda}\mathbf{Q}=\mathbf{\Omega}^{-1/2}\mathbf{\Sigma}\mathbf{\Omega}^{-1/2}\), and \(\mathbf{\Sigma}=\bar{\mathbf{\Psi}}+\left(1-\frac{2}{\pi}\right)\mathbf{\eta}\mathbf{\eta}^{\top}\)._

**Proof.** We have \(\mathbf{Y}=\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z})\), where \(\mathbf{Z}\sim\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\). Let \(\mathbf{Z}^{*}=\mathbf{H}^{*}\mathbf{Z}\) be the canonical transform of \(\mathbf{Z}\); then \(\mathbf{Z}^{*}\sim\mathcal{SN}_{p}(\mathbf{0},\mathbf{I}_{p},\mathbf{\eta}^{*})\). Here, \(\mathbf{H}^{*}=\left(\begin{smallmatrix}\sqrt{1+\mathbf{\alpha}^{\top}\bar{\mathbf{\Omega}}\mathbf{\alpha}}&\mathbf{0}^{\top}\\ \mathbf{0}&\mathbf{I}_{p-1}\end{smallmatrix}\right)\mathbf{H}\), \(\mathbf{H}=\mathbf{Q}\mathbf{\Omega}^{-1/2}\), \(\mathbf{Q}\) is obtained from the spectral decomposition \(\mathbf{Q}^{\top}\mathbf{\Lambda}\mathbf{Q}=\mathbf{\Omega}^{-1/2}\mathbf{\Sigma}\mathbf{\Omega}^{-1/2}\), and \(\mathbf{\Sigma}=\mathbb{V}\text{ar}(\mathbf{Z})=\bar{\mathbf{\Psi}}+\left(1-\frac{2}{\pi}\right)\mathbf{\eta}\mathbf{\eta}^{\top}\). Hence, \(\mathbf{\omega}^{-1}(\mathbf{Y}^{*}-\mathbf{\xi})=\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z}^{*})\sim\mathcal{SNTH}_{p}(\mathbf{0},\mathbf{I}_{p},\mathbf{\eta}^{*},\mathbf{h})\). \(\square\)

Since the canonical form of the \(\mathcal{SNTH}\) distribution is not exactly an affine transformation, it cannot be used for deriving the measures of multivariate skewness and kurtosis introduced by Mardia (1970) and Malkovich and Afifi (1973). However, it can be used for reducing the dimensionality of the problem, when applicable, such as for simulating observations from the \(\mathcal{SNTH}\) distribution.
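In particular, simulation only requires the latent \(\mathcal{SN}\) vector and the component-wise transform \(\mathbf{\tau}_{\mathbf{h}}\). Here is a minimal Python sketch, using the half-normal stochastic representation of the \(\mathcal{SN}\) distribution (Equation (8) with \(\tau=0\)); the routine names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_h(z, h):
    # Component-wise Tukey-h transform tau_h(z) = z * exp(h * z^2 / 2).
    return z * np.exp(h * z**2 / 2.0)

def rsnth(n, xi, omega_diag, Psi_bar, eta, h):
    # Y = xi + omega * tau_h(Z) with Z ~ SN_p(0, Psi_bar, eta),
    # simulated as Z = eta * |U| + W, U ~ N(0, 1), W ~ N_p(0, Psi_bar).
    p = len(xi)
    U = np.abs(rng.standard_normal((n, 1)))                 # half-normal draws
    W = rng.multivariate_normal(np.zeros(p), Psi_bar, size=n)
    Z = U * np.asarray(eta) + W
    return np.asarray(xi) + np.asarray(omega_diag) * tau_h(Z, np.asarray(h))
```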
## 4 Inference for the \(\mathcal{SNTH}\) Distribution

In this section, we discuss how to estimate parameters and perform tests for the \(\mathcal{SNTH}\) distribution.

### Parameter Estimation for the \(\mathcal{SNTH}\) Distribution

To estimate the parameters of the \(\mathcal{SNTH}\) distribution, we use the method of maximum likelihood. Suppose \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\) is a random sample of size \(n\) from the \(\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) distribution with \(\mathbf{Y}_{i}=(Y_{i1},\ldots,Y_{ip})^{\top}\), \(i=1,\ldots,n\). For an observed sample \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\), with \(\mathbf{y}_{i}=(y_{i1},\ldots,y_{ip})^{\top}\), \(i=1,\ldots,n\), the log-likelihood function based on Equation (5) is
\[\ell(\mathbf{\theta})=n\log(2)-\frac{np}{2}\log(2\pi)-\frac{n}{2}\log\{\det(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})\}-\frac{1}{2}\sum_{i=1}^{n}\mathbf{g}(\mathbf{y}_{i})^{\top}(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})^{-1}\mathbf{g}(\mathbf{y}_{i}) \tag{10}\]
\[+\sum_{i=1}^{n}\log\Phi\left\{\frac{\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{g}(\mathbf{y}_{i})}{\sqrt{1+\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta}}}\right\}-n\sum_{j=1}^{p}\log(\omega_{jj})+\sum_{i=1}^{n}\sum_{j=1}^{p}\frac{1}{2}W_{0}\left\{h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}\right\}\]
\[-\sum_{i=1}^{n}\sum_{j=1}^{p}\log\left(h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}+\exp\left[W_{0}\left\{h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}\right\}\right]\right),\]
where \(\mathbf{\theta}=(\mathbf{\xi}^{\top},\text{diag}(\mathbf{\omega})^{\top},\text{vech}(\bar{\mathbf{\Psi}})^{\top},\mathbf{\eta}^{\top},\mathbf{h}^{\top})^{\top}\) and \(\text{vech}(\bar{\mathbf{\Psi}})\) is the vector of all the upper off-diagonal elements of \(\bar{\mathbf{\Psi}}\). We estimate the parameters in \(\mathbf{\theta}\) by maximizing \(\ell(\mathbf{\theta})\) with respect to \(\mathbf{\theta}\). This maximization cannot be done analytically and has to be done numerically. Hence, for a \(p\)-dimensional problem, we need to perform a \(\{4p+p(p-1)/2\}\)-dimensional numerical optimization, which becomes difficult when \(p\) is large.

We can tackle this problem in a different way. Since \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\overset{\text{i.i.d.}}{\sim}\mathcal{SNTH}_{p}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\), from Proposition 3 we also have that \(Y_{1j},\ldots,Y_{nj}\overset{\text{i.i.d.}}{\sim}\mathcal{SNTH}_{1}(\xi_{j},\omega_{jj},1,\eta_{j},h_{j})\), \(j=1,\ldots,p\). Based on the \(j^{\text{th}}\) marginal data, the marginal log-likelihood function is
\[\ell_{j}(\xi_{j},\omega_{jj},\eta_{j},h_{j})=n\log(2)-\frac{n}{2}\log(2\pi)-\frac{n}{2}\log(1+\eta_{j}^{2})-\frac{1}{2}\sum_{i=1}^{n}\frac{g_{j}(y_{ij})^{2}}{1+\eta_{j}^{2}} \tag{11}\]
\[+\sum_{i=1}^{n}\log\Phi\left\{\frac{\eta_{j}g_{j}(y_{ij})}{\sqrt{1+\eta_{j}^{2}}}\right\}-n\log(\omega_{jj})+\sum_{i=1}^{n}\frac{1}{2}W_{0}\left\{h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}\right\}\]
\[-\sum_{i=1}^{n}\log\left(h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}+\exp\left[W_{0}\left\{h_{j}\left(\frac{y_{ij}-\xi_{j}}{\omega_{jj}}\right)^{2}\right\}\right]\right),\]
\(j=1,\ldots,p\).
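As a concrete illustration, the marginal log-likelihood \(\ell_{j}\) of Equation (11) can be coded directly; the following is a hedged Python sketch (the function is ours, and we assume \(h_{j}>0\)). With its sign flipped, it can be handed to a numerical optimizer such as `scipy.optimize.minimize`.

```python
import numpy as np
from scipy.special import lambertw
from scipy.stats import norm

def marginal_loglik(theta, y):
    # theta = (xi, omega, eta, h) for one margin; Equation (11), h > 0.
    xi, om, eta, h = theta
    x = (y - xi) / om
    w = np.real(lambertw(h * x**2))       # W0{h ((y - xi)/omega)^2}
    g = np.sign(x) * np.sqrt(w / h)       # g_j(y) = tau_h^{-1}((y - xi)/omega)
    n = len(y)
    return (n * np.log(2) - n / 2 * np.log(2 * np.pi)
            - n / 2 * np.log(1 + eta**2)
            - 0.5 * np.sum(g**2) / (1 + eta**2)
            + np.sum(norm.logcdf(eta * g / np.sqrt(1 + eta**2)))
            - n * np.log(om) + 0.5 * np.sum(w)
            - np.sum(np.log(h * x**2 + np.exp(w))))
```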
We estimate \(\xi_{j}\), \(\omega_{jj}\), \(\eta_{j}\), and \(h_{j}\) by maximizing the log-likelihood function for the \(j^{\text{th}}\) marginal, \(\ell_{j}(\xi_{j},\omega_{jj},\eta_{j},h_{j})\), \(j=1,\ldots,p\). Therefore, by performing a four-dimensional numerical optimization \(p\) times, we obtain the marginal maximum likelihood estimates (MLEs) for \(\mathbf{\xi}\), \(\mathbf{\omega}\), \(\mathbf{\eta}\), and \(\mathbf{h}\). At this point, we are yet to obtain the estimate for \(\bar{\mathbf{\Psi}}\). From the definition of the \(\mathcal{SNTH}\) distribution, we have \(\mathbf{Y}_{i}\overset{d}{=}\mathbf{\xi}+\mathbf{\omega}\mathbf{\tau}_{\mathbf{h}}(\mathbf{Z}_{i})\), \(i=1,\ldots,n\), with \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\overset{\text{i.i.d.}}{\sim}\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\mathbf{\eta})\). With the marginal MLEs \(\widehat{\mathbf{\xi}}\), \(\widehat{\mathbf{\omega}}\), \(\widehat{\mathbf{\eta}}\), and \(\widehat{\mathbf{h}}\) of \(\mathbf{\xi}\), \(\mathbf{\omega}\), \(\mathbf{\eta}\), and \(\mathbf{h}\), we can compute estimates of the latent \(\mathcal{SN}\) observations: \(\widehat{\mathbf{Z}}_{i}=\mathbf{\tau}_{\widehat{\mathbf{h}}}^{-1}\{\widehat{\mathbf{\omega}}^{-1}(\mathbf{Y}_{i}-\widehat{\mathbf{\xi}})\}\), \(i=1,\ldots,n\), are the estimates for \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\). Assuming that \(\widehat{\mathbf{Z}}_{1},\ldots,\widehat{\mathbf{Z}}_{n}\overset{\text{i.i.d.}}{\sim}\mathcal{SN}_{p}(\mathbf{0},\bar{\mathbf{\Psi}},\widehat{\mathbf{\eta}})\), we can estimate \(\bar{\mathbf{\Psi}}\). We use the EM algorithm for the \(\mathcal{SN}\) distribution for estimating \(\bar{\mathbf{\Psi}}\), keeping the location and the skewness parameter fixed at \(\mathbf{0}\) and \(\widehat{\mathbf{\eta}}\). The EM algorithm does not ensure that the estimate of \(\bar{\mathbf{\Psi}}\) will be a correlation matrix, but the estimate is a covariance matrix, which can easily be converted to its corresponding correlation matrix. We use this correlation matrix as an estimate for \(\bar{\mathbf{\Psi}}\). In the next section, we justify the effectiveness of the described method for estimating parameters using a simulation study.

Moreover, if we use the marginal MLEs of \(\mathbf{\xi}\), \(\mathbf{\omega}\), \(\mathbf{\eta}\), and \(\mathbf{h}\) and the estimate of \(\bar{\mathbf{\Psi}}\) obtained from the EM algorithm as the initial value for the numerical maximization of \(\ell(\mathbf{\theta})\) in Equation (10), we converge to the joint MLEs of \(\mathbf{\theta}\) in very few iterations. Although it does not completely remove the need for high-dimensional numerical maximization, this specific selection of initial values greatly reduces the run-time of the numerical maximization. Moreover, we show in our simulation study that the initial parameter values obtained in the aforementioned way are close to the joint MLEs and can be used directly for high-dimensional problems, as the computation required for these initial estimates is linear in \(p\). In the next subsection, we describe the EM algorithm for the \(\mathcal{SN}\) distribution in detail. Note that instead of computing the marginal MLEs of the parameters, one can use the iterative generalized method of moments (IGMM) estimators proposed by Goerg (2011). IGMM is also based on estimates of the latent observations, and from there estimating the parameters corresponding to the latent random vector.
While using the IGMM estimators for the \(\mathcal{SNTH}\) distribution, one has to keep in mind that the location and the scale parameters used in its definition are not the mean and the marginal standard deviation of the latent random vectors, unlike the proposal of Goerg (2011). The IGMM has to be adapted accordingly to obtain the correct estimates of the parameters.

### EM Algorithm for the \(\mathcal{SN}\) Distribution

The EM algorithm for the skew-normal distribution is a well-researched topic. Interested readers are directed to the recent paper by Abe et al. (2021) and the references therein for more on this topic. In this section, we put forward an EM algorithm for the skew-normal distribution with the \(\bar{\mathbf{\Psi}}\)-\(\mathbf{\eta}\) parameterization (see (2)), which is new in the literature. Moreover, we are only concerned with the scenario where we need to estimate the scale parameter \(\mathbf{\Psi}\) while the location \(\mathbf{\xi}=\mathbf{0}\) and the skewness parameter \(\mathbf{\eta}\) are known. Consider a random sample \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\stackrel{\text{i.i.d.}}{\sim}\mathcal{SN}_{p}(\mathbf{0},\mathbf{\Psi},\mathbf{\eta}_{0})\), where \(\mathbf{\eta}_{0}\) is given. The log-likelihood of an observed sample \(\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\) is
\[\ell(\mathbf{\Psi})=-\frac{np}{2}\log(2\pi)-\frac{n}{2}\log\{\det(\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})\}-\frac{1}{2}\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}(\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})^{-1}\mathbf{z}_{i}+\sum_{i=1}^{n}\log\left\{2\Phi\left(\frac{\mathbf{\eta}_{0}^{\top}\mathbf{\Psi}^{-1}\mathbf{z}_{i}}{\sqrt{1+\mathbf{\eta}_{0}^{\top}\mathbf{\Psi}^{-1}\mathbf{\eta}_{0}}}\right)\right\}.\]
Using the stochastic representation of the \(\mathcal{SN}\) distribution we can represent \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\) as \((\mathbf{Z}_{i}|U_{i}=u_{i})\stackrel{\text{ind.}}{\sim}\mathcal{N}_{p}(u_{i}\mathbf{\eta}_{0},\mathbf{\Psi})\), \(U_{i}\stackrel{\text{i.i.d.}}{\sim}\mathcal{HN}(0,1)\), \(i=1,\ldots,n\), and obtain the conditional pdf of \((U_{i}|\mathbf{Z}_{i}=\mathbf{z}_{i})\) as
\[f_{U_{i}|(\mathbf{Z}_{i}=\mathbf{z}_{i})}(u)\propto\phi_{p}(\mathbf{z}_{i};u\mathbf{\eta}_{0},\mathbf{\Psi})\phi(u;0,1)\]
\[=\phi_{p}(\mathbf{z}_{i};\mathbf{0},\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})\phi\left\{u;\mathbf{\eta}_{0}^{\top}(\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})^{-1}\mathbf{z}_{i},1-\mathbf{\eta}_{0}^{\top}(\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})^{-1}\mathbf{\eta}_{0}\right\}\]
\[=\phi_{p}(\mathbf{z}_{i};\mathbf{0},\mathbf{\Psi}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top})\phi\left(u;\tau_{i},\frac{1}{1+\alpha^{2}}\right),\quad u>0,\quad i=1,\ldots,n,\]
where \(\alpha^{2}=\mathbf{\eta}_{0}^{\top}\mathbf{\Psi}^{-1}\mathbf{\eta}_{0}\) and \(\tau_{i}=\frac{\mathbf{\eta}_{0}^{\top}\mathbf{\Psi}^{-1}\mathbf{z}_{i}}{1+\alpha^{2}}\).
Hence, the conditional distribution of the latent variables \(U_{i}\) given the observable \(\mathbf{Z}_{i}\) is \[(U_{i}|\mathbf{Z}_{i}=\mathbf{z}_{i})\stackrel{{\text{i.i.d.}}}{{\sim}} \mathcal{TN}\left(0;\tau_{i},\frac{1}{1+\alpha^{2}}\right),\quad i=1,\ldots,n.\] Moreover, the first and second order raw moments of \((U_{i}|\mathbf{Z}_{i}=\mathbf{z}_{i})\) are \[v_{1i}=\mathbb{E}(U_{i}|\mathbf{Z}_{i}=\mathbf{z}_{i})=\frac{\bar{\tau}_{i}+\frac{\phi (\bar{\tau}_{i})}{\Phi(\bar{\tau}_{i})}}{\sqrt{1+\alpha^{2}}},\quad v_{2i}= \mathbb{E}(U_{i}^{2}|\mathbf{Z}_{i}=\mathbf{z}_{i})=\frac{1+\bar{\tau}_{i}^{2}+\bar{ \tau}_{i}\frac{\phi(\bar{\tau}_{i})}{\Phi(\bar{\tau}_{i})}}{1+\alpha^{2}}, \quad i=1,\ldots,n,\] where \(\bar{\tau}_{i}=\sqrt{1+\alpha^{2}}\tau_{i}=\frac{\mathbf{\eta}_{0}^{\top}\mathbf{\Psi }^{-1}\mathbf{z}_{i}}{\sqrt{1+\alpha^{2}}}\). From the hierarchical representation above, the complete log-likelihood for \(\mathbf{\Psi}\) based on the observed data \(\mathbf{z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{n})^{\top}\) and the missing data \(\mathbf{u}=(u_{1},\ldots,u_{n})^{\top}\) is \[\ell_{c}(\mathbf{\Psi}|\mathbf{z},\mathbf{u}) =-\frac{np}{2}\log(2\pi)+\frac{n}{2}\log\{\det(\mathbf{\Lambda})\}- \frac{1}{2}\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}\mathbf{\Lambda}\mathbf{z}_{i}+\mathbf{\eta}_{0}^ {\top}\mathbf{\Lambda}\sum_{i=1}^{n}u_{i}\mathbf{z}_{i}\] \[\quad-\frac{1}{2}\mathbf{\eta}_{0}^{\top}\mathbf{\Lambda}\mathbf{\eta}_{0}\sum _{i=1}^{n}u_{i}^{2}+\frac{n}{2}\log\left(\frac{2}{\pi}\right)-\frac{1}{2}\sum_{ i=1}^{n}u_{i}^{2},\] where \(\mathbf{\Lambda}=\mathbf{\Psi}^{-1}\). Let \(\mathbf{Z}=(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n})^{\top}\) be the observable random sample and \(\mathbf{U}=(U_{1},\ldots,U_{n})^{\top}\) be the latent random sample. Then the E-Step at the \((k+1)^{\text{th}}\) iteration of the EM algorithm is \[Q(\mathbf{\Psi}|\mathbf{\Psi}^{(k)}) =\mathbb{E}_{\mathbf{\Psi}^{(k)}}\{\ell_{c}(\mathbf{\Psi}|\mathbf{Z},\mathbf{U})| \mathbf{Z}=\mathbf{z}\}\] \[=-\frac{np}{2}\log(2\pi)+\frac{n}{2}\log\{\det(\mathbf{\Lambda})\}- \frac{1}{2}\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}\mathbf{\Lambda}\mathbf{z}_{i}+\mathbf{\eta}_{0}^ {\top}\mathbf{\Lambda}\sum_{i=1}^{n}v_{1i}^{(k)}\mathbf{z}_{i}\] \[\quad-\frac{1}{2}\mathbf{\eta}_{0}^{\top}\mathbf{\Lambda}\mathbf{\eta}_{0} \sum_{i=1}^{n}v_{2i}^{(k)}+\frac{n}{2}\log\left(\frac{2}{\pi}\right)-\frac{1}{2} \sum_{i=1}^{n}v_{2i}^{(k)},\] where \(\mathbf{\Psi}^{(k)}\) is the estimated value of \(\mathbf{\Psi}\) in the \(k^{\text{th}}\) step, \(\mathbf{\Lambda}^{(k)}=\{\mathbf{\Psi}^{(k)}\}^{-1}\), \[v_{1i}^{(k)}=\frac{\bar{\tau}_{i}^{(k)}+\frac{\phi(\bar{\tau}_{i}^{(k)})}{\Phi( \bar{\tau}_{i}^{(k)})}}{\sqrt{1+\{\alpha^{(k)}\}^{2}}},\quad v_{2i}^{(k)}= \frac{1+\{\bar{\tau}_{i}^{(k)}\}^{2}+\bar{\tau}_{i}^{(k)}\frac{\phi(\bar{\tau}_{i}^ {(k)})}{\Phi(\bar{\tau}_{i}^{(k)})}}{1+\{\alpha^{(k)}\}^{2}},\] \(\bar{\tau}_{i}^{(k)}=[1+\{\alpha^{(k)}\}^{2}]^{-1/2}\mathbf{\eta}_{0}^{\top}\mathbf{\Lambda} ^{(k)}\mathbf{z}_{i}\), \(\alpha^{(k)}=\sqrt{\mathbf{\eta}_{0}^{\top}\mathbf{\Lambda}^{(k)}\mathbf{\eta}_{0}}\). To get the \((k+1)^{\text{th}}\) estimate of \(\mathbf{\Psi}\), we maximize \(Q(\mathbf{\Psi}|\mathbf{\Psi}^{(k)})\) with respect to \(\mathbf{\Psi}\) and update \(\mathbf{\Psi}^{(k+1)}=\text{argmax}\{Q(\mathbf{\Psi}|\mathbf{\Psi}^{(k)})\}\). 
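For concreteness, one EM iteration can be sketched as follows (Python, with our own function names; the closed-form expression used for the update is the \(\mathbf{\Psi}^{(k+1)}\) derived immediately below).

```python
import numpy as np
from scipy.stats import norm

def em_step(Psi, Z, eta0):
    # One EM iteration for Psi in SN_p(0, Psi, eta0), eta0 held fixed;
    # Z is the n x p matrix of (estimated) latent observations.
    Lam = np.linalg.inv(Psi)
    alpha2 = eta0 @ Lam @ eta0
    tbar = (Z @ Lam @ eta0) / np.sqrt(1.0 + alpha2)       # \bar{tau}_i
    r = norm.pdf(tbar) / norm.cdf(tbar)
    v1 = (tbar + r) / np.sqrt(1.0 + alpha2)               # E(U_i | Z_i = z_i)
    v2 = (1.0 + tbar**2 + tbar * r) / (1.0 + alpha2)      # E(U_i^2 | Z_i = z_i)
    n = Z.shape[0]
    cross = (v1[:, None] * Z).mean(axis=0)                # (1/n) sum_i v1_i z_i
    return ((Z.T @ Z) / n + np.outer(eta0, eta0) * v2.mean()
            - np.outer(eta0, cross) - np.outer(cross, eta0))
```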
Since \(\bar{\mathbf{\Psi}}\) is a symmetric positive definite matrix according to our definition of the \(\mathcal{SN}\) distribution, we can write \(\mathbf{\Psi}^{-1}=\mathbf{\Lambda}=\mathbf{C}^{\top}\mathbf{C}\), where \(\mathbf{C}\in\mathbb{R}^{p\times p}\) is a nonsingular matrix. Hence, \[Q(\mathbf{\Psi}|\mathbf{\Psi}^{(k)})\propto\frac{n}{2}\log\{\det(\mathbf{C} ^{\top}\mathbf{C})\}-\frac{1}{2}\sum_{i=1}^{n}\mathbf{z}_{i}^{\top}\mathbf{C}^{\top}\mathbf{C} \mathbf{z}_{i}+\mathbf{\eta}_{0}^{\top}\mathbf{C}^{\top}\mathbf{C}\sum_{i=1}^{n}v_{1i}^{(k)} \mathbf{z}_{i}-\frac{1}{2}\mathbf{\eta}_{0}^{\top}\mathbf{C}^{\top}\mathbf{C}\mathbf{\eta}_{0}\sum _{i=1}^{n}v_{2i}^{(k)}\] \[\Rightarrow\frac{\partial Q(\mathbf{\Psi}|\mathbf{\Psi}^{(k)})}{\partial \mathbf{C}}=n(\mathbf{C}^{\top})^{-1}-\mathbf{C}\sum_{i=1}^{n}\mathbf{z}_{i}\mathbf{z}_{i}^{\top} +\mathbf{C}\sum_{i=1}^{n}\left(\mathbf{\eta}_{0}v_{1i}^{(k)}\mathbf{z}_{i}^{\top}+v_{1i}^{ (k)}\mathbf{z}_{i}\mathbf{\eta}_{0}^{\top}\right)-\mathbf{C}\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{ \top}\sum_{i=1}^{n}v_{2i}^{(k)}=\mathbf{0}\] \[\Rightarrow(\mathbf{C}^{\top}\mathbf{C})^{-1}=\frac{1}{n}\sum_{i=1}^{n} \mathbf{z}_{i}\mathbf{z}_{i}^{\top}+\mathbf{\eta}_{0}\mathbf{\eta}_{0}^{\top}\left(\frac{1}{n }\sum_{i=1}^{n}v_{2i}^{(k)}\right)-\frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{\eta}_{ 0}v_{1i}^{(k)}\mathbf{z}_{i}^{\top}+v_{1i}^{(k)}\mathbf{z}_{i}\mathbf{\eta}_{0}^{\top} \right).\] Therefore, we update \[\mathbf{\Psi}^{(k+1)}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{z}_{i}\mathbf{z}_{i}^{\top}+\mathbf{ \eta}_{0}\mathbf{\eta}_{0}^{\top}\left(\frac{1}{n}\sum_{i=1}^{n}v_{2i}^{(k)}\right) -\frac{1}{n}\sum_{i=1}^{n}\left(\mathbf{\eta}_{0}v_{1i}^{(k)}\mathbf{z}_{i}^{\top}+v_{ 1i}^{(k)}\mathbf{z}_{i}\mathbf{\eta}_{0}^{\top}\right).\] We stop the algorithm when \(\{\ell(\mathbf{\Psi}^{(k+1)})/\ell(\mathbf{\Psi}^{(k)})-1\}\) is sufficiently close to \(0\). ### Tests Based on the \(\mathcal{SNTH}\) Distribution It is a well-known fact (Hallin and Ley, 2012) that the Fisher information matrix of the \(\mathcal{ASN}\) and the \(\mathcal{SN}\) distributions is singular when the skewness parameter, \(\mathbf{\alpha}\) or \(\mathbf{\eta}\), is set to zero. As a result, we cannot use the Wald type test or the likelihood ratio test (LRT) for testing the null hypothesis that the skewness parameter is zero based on the \(\mathcal{ASN}\) or the \(\mathcal{SN}\) distribution. Although the asymptotic distribution of the LRT statistic is \(\chi_{p}^{2}\) for the univariate \(\mathcal{ASN}\) or the univariate \(\mathcal{SN}\) distribution, i.e. for \(p=1\), the same is not true for \(p>1\); see Mondal et al. (2023). The explanation of why the asymptotic distribution of the LRT statistic is \(\chi_{1}^{2}\) for the univariate \(\mathcal{ASN}\) or the univariate \(\mathcal{SN}\) is still an open problem. For the skew-\(t\) distribution, this singularity of the Fisher information matrix does not occur when the skewness parameter is set to zero. Hence, we can perform the test of the null hypothesis that the skewness parameter is zero based on the skew-\(t\) distribution using the Wald type test or the LRT. Next, we show that the Fisher information matrix of the \(\mathcal{SNTH}_{2}\) distribution, when the skewness parameter is set to zero, remains nonsingular. 
**Proposition 11**.: _The Fisher information matrix for a bivariate random vector \(\mathbf{Y}\sim\mathcal{SNTH}_{2}(\mathbf{\xi},\mathbf{\omega},\bar{\mathbf{\Psi}},\mathbf{\eta},\mathbf{h})\) is nonsingular when \(\mathbf{\eta}=\mathbf{0}\)._

**Proof.** From Equation (10), the log-likelihood function for \(\mathbf{Y}=\mathbf{y}=(y_{1},y_{2})^{\top}\) is
\[\ell(\mathbf{\theta})=-\log(\pi)-\frac{1}{2}\log\{\det(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})\}-\frac{1}{2}\mathbf{g}(\mathbf{y})^{\top}(\bar{\mathbf{\Psi}}+\mathbf{\eta}\mathbf{\eta}^{\top})^{-1}\mathbf{g}(\mathbf{y})+\log\Bigg[\Phi\left\{\frac{\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{g}(\mathbf{y})}{\sqrt{1+\mathbf{\eta}^{\top}\bar{\mathbf{\Psi}}^{-1}\mathbf{\eta}}}\right\}\Bigg]\]
\[\quad+\sum_{i=1}^{2}\Bigg(-\log(\omega_{ii})+\frac{1}{2}W_{0}\left(h_{i}x_{i}^{2}\right)-\log\left[h_{i}x_{i}^{2}+\exp\left\{W_{0}\left(h_{i}x_{i}^{2}\right)\right\}\right]\Bigg),\]
where \(x_{i}=\left(\frac{y_{i}-\xi_{i}}{\omega_{ii}}\right)\), \(i=1,2\). The score functions of all the parameters are obtained by differentiating the log-likelihood with respect to the parameters. Assuming that \(\bar{\mathbf{\Psi}}=\left(\begin{smallmatrix}1&\rho\\ \rho&1\end{smallmatrix}\right)\), the score functions of all the parameters, when \(\mathbf{\eta}=\mathbf{0}\), are listed below for \(i=1,j=2\) or \(i=2,j=1\):
\[S_{\xi_{i}}=\frac{1}{\omega_{ii}}\left(\frac{x_{i}-\rho x_{j}\exp\left\{\frac{1}{2}W_{0}(h_{i}x_{i}^{2})-\frac{1}{2}W_{0}(h_{j}x_{j}^{2})\right\}}{(1-\rho^{2})[h_{i}x_{i}^{2}+\exp\left\{W_{0}(h_{i}x_{i}^{2})\right\}]}+\frac{h_{i}x_{i}[h_{i}x_{i}^{2}+3\exp\{W_{0}(h_{i}x_{i}^{2})\}]}{[h_{i}x_{i}^{2}+\exp\{W_{0}(h_{i}x_{i}^{2})\}]^{2}}\right),\]
\[S_{\omega_{ii}}=\frac{1}{\omega_{ii}}\left(\frac{x_{i}^{2}-\rho x_{i}x_{j}\exp\left\{\frac{1}{2}W_{0}(h_{i}x_{i}^{2})-\frac{1}{2}W_{0}(h_{j}x_{j}^{2})\right\}}{(1-\rho^{2})[h_{i}x_{i}^{2}+\exp\left\{W_{0}(h_{i}x_{i}^{2})\right\}]}+\frac{\exp\{W_{0}(h_{i}x_{i}^{2})\}[h_{i}x_{i}^{2}-\exp\{W_{0}(h_{i}x_{i}^{2})\}]}{[h_{i}x_{i}^{2}+\exp\{W_{0}(h_{i}x_{i}^{2})\}]^{2}}\right),\]
\[S_{\eta_{i}}=\sqrt{\frac{2}{\pi}}\left\{\frac{g_{i}(y_{i})-\rho g_{j}(y_{j})}{1-\rho^{2}}\right\},\]
\[S_{h_{i}}=\frac{1}{2}\left(\frac{x_{i}^{4}\exp\left\{-W_{0}(h_{i}x_{i}^{2})\right\}-\rho x_{i}^{3}x_{j}\exp\left\{-\frac{1}{2}W_{0}(h_{i}x_{i}^{2})-\frac{1}{2}W_{0}(h_{j}x_{j}^{2})\right\}}{(1-\rho^{2})[h_{i}x_{i}^{2}+\exp\left\{W_{0}(h_{i}x_{i}^{2})\right\}]}-\frac{h_{i}x_{i}^{4}+3x_{i}^{2}\exp\{W_{0}(h_{i}x_{i}^{2})\}}{[h_{i}x_{i}^{2}+\exp\{W_{0}(h_{i}x_{i}^{2})\}]^{2}}\right),\]
\[S_{\rho}=\frac{g_{1}(y_{1})g_{2}(y_{2})}{1-\rho^{2}}-\frac{\rho}{(1-\rho^{2})^{2}}\{g_{1}^{2}(y_{1})+g_{2}^{2}(y_{2})-2\rho g_{1}(y_{1})g_{2}(y_{2})\}+\frac{\rho}{1-\rho^{2}}.\]
From the form of the score functions we can observe that they are not linearly dependent when \(\mathbf{\eta}=\mathbf{0}\); hence the Fisher information matrix, which is the variance-covariance matrix of the score vector, is nonsingular when \(\mathbf{\eta}=\mathbf{0}\). \(\square\)

Proposition 11 demonstrates that the Fisher information matrix of the \(\mathcal{SNTH}\) distribution is nonsingular when \(\mathbf{\eta}=\mathbf{0}\) for \(p=2\). Our conjecture is that this statement remains true for \(p>2\).
We justify this by plotting, in Figure 5, the histogram of the LRT statistic for testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\) for \(p=2\), \(3\), and \(4\), based on samples of size \(5000\) and \(1000\) replicates. Along with the histograms, we also plot the \(\chi_{p}^{2}\) pdf. The plots indicate that the asymptotic distribution of the LRT statistic indeed follows \(\chi_{p}^{2}\), for \(p=2\), \(3\), and \(4\). This would not have been the case if the Fisher information matrix were singular for \(\mathbf{\eta}=\mathbf{0}\). Although we have justified the nonsingularity of the Fisher information matrix for the \(\mathcal{SNTH}\) distribution when \(\mathbf{\eta}=\mathbf{0}\), we do not have the mathematical form of the Fisher information matrix. As a result, we cannot use the Wald type test for testing \(\mathbf{\eta}=\mathbf{0}\). We have to rely on the LRT for that:

* _Testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\), given that \(\mathbf{h}\neq\mathbf{0}\)_: Since the Fisher information matrix of the \(\mathcal{SNTH}\) distribution when \(\mathbf{\eta}=\mathbf{0}\) is nonsingular, given that \(\mathbf{h}\neq\mathbf{0}\), we use the asymptotic distribution of the LRT statistic for conducting the test.
* _Testing \(H_{0}:\mathbf{h}=\mathbf{0}\) vs \(H_{1}:\mathbf{h}\neq\mathbf{0}\), given that \(\mathbf{\eta}\neq\mathbf{0}\)_: Under the null hypothesis the \(\mathcal{SNTH}\) distribution becomes the \(\mathcal{SN}\) distribution. The Fisher information matrix of the \(\mathcal{SN}\) distribution is nonsingular when \(\mathbf{\eta}\neq\mathbf{0}\). Hence, under the null hypothesis we can use the asymptotic distribution of the LRT statistic for conducting the test.
* _Testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) and \(\mathbf{h}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\) or \(\mathbf{h}\neq\mathbf{0}\)_: Under the null hypothesis, the Fisher information matrix is singular. Hence, we cannot use the LRT anymore for this testing problem. However, since the asymptotic distribution of the LRT statistic for testing \(\eta=0\) vs \(\eta\neq 0\) based on the univariate \(\mathcal{SN}\) is \(\chi_{1}^{2}\), we can use the LRT for testing \(H_{i0}:\eta_{i}=0,h_{i}=0\) vs \(H_{i1}:\eta_{i}\neq 0\) or \(h_{i}\neq 0\), \(i=1,\ldots,p\). We reject \(H_{0}\) if any of the \(H_{i0}\) gets rejected. Note here that the rejection region for testing \(H_{i0}\) vs \(H_{i1}\), \(i=1,\ldots,p\), has to be computed subject to Bonferroni's correction.

Figure 5: Histograms of the LRT statistic for testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\) for \(\mathcal{SNTH}_{p}\) when \(p=2\), \(3\), and \(4\) based on samples of size \(5000\) and \(1000\) replicates. The red curves indicate the pdf of the \(\chi_{p}^{2}\) distribution.

## 5 Simulation Study

We conduct two simulation studies in this section: one to demonstrate the effectiveness of the parameter estimation method described in Sections 4.1 and 4.2, and another to show in which scenarios the \(\mathcal{SNTH}\) distribution is more suitable compared to the skew-\(t\) distribution.

### \(\mathcal{SNTH}\) Parameter Estimation

We test the methodology for \(\mathcal{SNTH}\) parameter estimation in a simulation study.
We simulate observations of size \(n=50\), \(100\), \(200\), \(500\), and \(1000\) from a \(\mathcal{SNTH}_{3}(\boldsymbol{\xi},\boldsymbol{\omega},\bar{\boldsymbol{\Psi}},\boldsymbol{\eta},\boldsymbol{h})\), with \(\boldsymbol{\xi}=(0.8,-0.6,1.3)^{\top}\), \(\boldsymbol{\omega}=\text{diag}(3,5,2)\), \(\bar{\boldsymbol{\Psi}}=\left(\begin{smallmatrix}1&-0.5&0.3\\ -0.5&1&-0.2\\ 0.3&-0.2&1\end{smallmatrix}\right)\), \(\boldsymbol{\eta}=(-1.5,2,0.5)^{\top}\) and \(\boldsymbol{h}=(0.02,0.08,0.03)^{\top}\). Based on the simulated data, we estimate the parameters by the methodology described in Sections 4.1 and 4.2. We repeat the process \(100\) times and summarize the estimated parameters in boxplots in Figure 6. Alongside the estimates obtained from the methodology described in Section 4.1 (indicated as mMLE, short for marginal MLE, for \(\boldsymbol{\xi}\), \(\boldsymbol{\omega}\), \(\boldsymbol{\eta}\), \(\boldsymbol{h}\) and as EM for \(\bar{\boldsymbol{\Psi}}\) in Figure 6), we also report the MLEs of all the parameters. The boxplots indicate that the methodology works reasonably well for estimating the parameters of the \(\mathcal{SNTH}\) model. Moreover, as the sample size increases, the variance of the estimates decreases, as it should. Hence, we can say that the parameter estimation methodology described in Sections 4.1 and 4.2 is justified. The boxplots also show that the estimates of the parameters obtained from the EM algorithm are not very different from the MLEs, although they have more variability. The variability difference between the two estimation methods also decreases as the sample size increases. For problems with high dimensions where the computation of the exact MLEs is infeasible, one can use the methodology described in Sections 4.1 and 4.2 as an alternative. Moreover, these estimates are an excellent choice for the starting values of the parameters when optimizing the exact log-likelihood for computing the MLEs.

Figure 6: Boxplots of the parameter estimates (100 replicates) of a \(\mathcal{SNTH}_{3}\) distribution obtained from the methodology in Sections 4.1 and 4.2 for different sample sizes \(n\), given as mMLE (marginal MLE) for \(\boldsymbol{\xi}\), \(\boldsymbol{\omega}\), \(\boldsymbol{\eta}\), \(\boldsymbol{h}\) and as EM for \(\bar{\boldsymbol{\Psi}}\), along with the MLE boxplots. The red line in each plot indicates the true parameter value.

### Comparison Between the \(\mathcal{SNTH}\) and the Skew-\(t\) Distributions

In this simulation study we show that when there is a great disparity between the marginal kurtosis values in a multivariate dataset, the \(\mathcal{SNTH}\) distribution is more appropriate than the skew-\(t\) distribution. We generate \(500\) random samples from a three-dimensional vine copula to create a trivariate dataset on the \(\text{Uniform}(0,1)\) scale. In this vine copula model, variables \(1\) and \(2\) are related with a Gaussian copula with \(\rho=0.5\), variables \(1\) and \(3\) are related with a Clayton copula with parameter \(4.8\), and variables \(2\) and \(3\) given variable \(1\) are related with a Gumbel copula with parameter \(1.9\). On the trivariate simulated data, we transform the \(1^{\text{st}}\) component to the standard normal scale, the \(2^{\text{nd}}\) component to the Cauchy \(t_{1}\) scale, and the \(3^{\text{rd}}\) component to the Student's \(t_{10}\) scale. We fit both the \(\mathcal{SNTH}\) and the skew-\(t\) distribution to this simulated data.
The Akaike information criterion (AIC) computed for the \(\mathcal{SNTH}\) and the skew-\(t\) are \(4393\) and \(4848\), respectively, suggesting that the \(\mathcal{SNTH}\) distribution is more suitable for this simulated dataset than the skew-\(t\) distribution. We perform similar experiments where we generate \(500\) observations from a three-dimensional multiple-scaled generalized hyperbolic \((\mathcal{MSGH})\) distribution (Wraith and Forbes, 2015) and from a three-dimensional \(t\)-SAS distribution (Babic et al., 2019). For the \(\mathcal{MSGH}\) distribution we use the following parameters: \(\boldsymbol{\mu}=(0,0,0)^{\top}\), \(\boldsymbol{\Sigma}=\left(\begin{smallmatrix}1&0.3&-0.2\\ 0.3&1&-0.4\\ -0.2&-0.4&1\end{smallmatrix}\right)\), \(\boldsymbol{\beta}=(3,0.5,-0.2)^{\top}\), \(\boldsymbol{\lambda}=(2,1,4)^{\top}\), \(\boldsymbol{\gamma}=(\sqrt{3},\sqrt{0.2},\sqrt{0.25})^{\top}\), and \(\delta=1\). For the \(t\)-SAS distribution, we use a three-dimensional \(t\)-copula with correlation matrix \(\left(\begin{smallmatrix}1&0.3&-0.2\\ 0.3&1&-0.4\\ -0.2&-0.4&1\end{smallmatrix}\right)\) to generate observations on the uniform scale. For the Sinh-Arcsinh (SAS) transformation, we use \((-0.7,1)\), \((0.2,0.6)\), and \((0.5,0.8)\) as our \((g,h)\) (for skewness and tail thickness, as used in Babic et al. (2019)) parameters for the three marginals, respectively. Finally, we scale the marginals by \(1\), \(1.2\), and \(1.8\), respectively. When the \(\mathcal{SNTH}\) and the skew-\(t\) model are fitted to the \(\mathcal{MSGH}\) dataset the obtained AICs are \(9982\) and \(10110\), and for the \(t\)-SAS dataset, the AICs are \(6606\) and \(6634\). The AICs for both studies suggest that the \(\mathcal{SNTH}\) is a better fit to these two datasets compared to the skew-\(t\) model. We provide the contour plots of the bivariate marginal pdfs of the \(\mathcal{SNTH}\) and the skew-\(t\) distribution fitted to the three simulated datasets in Figure 7. The bivariate marginal pdfs for the \(\mathcal{SNTH}\) distribution are obtained based on the MLEs and also based on the estimates from the EM algorithm. The contours are plotted for the \(0.25\), \(0.5\), \(0.75\), and \(0.95\) approximate probability regions. The plots show that, as expected, the skew-\(t\) distribution cannot handle different tail-thickness for different marginals, and instead tries to find the best compromise with a single parameter, \(\nu\). In scenarios like this, the \(\mathcal{SNTH}\) distribution is more appropriate. Moreover, in the first row of Figure 7 we see from the contour plots that the difference between the bivariate marginal pdfs obtained based on the MLE and the EM algorithm is small for the vine copula dataset. However, in the second and third rows of Figure 7 the dissimilarity between the two \(\mathcal{SNTH}\) parameter estimation methods is more prominent, especially for the \((Y_{1},Y_{3})\) pair.

Figure 7: Bivariate contours of the marginal bivariate pdfs obtained from the fitted \(\mathcal{SNTH}\) using Sections 4.1 and 4.2 methodology (green), from the fitted \(\mathcal{SNTH}\) using MLEs (red) and from the fitted skew-\(t\) (blue) distributions to trivariate vine copula data (first row), \(\mathcal{MSGH}\) data (second row), and \(t\)-SAS data (third row). The contours correspond to \(0.25\), \(0.5\), \(0.75\), and \(0.95\) approximate probability regions.
Finally, it is clear from the plots that the marginal bivariate \(\mathcal{SNTH}\) pdf contours obtained from the MLEs are more suitable for all three datasets compared to the skew-\(t\) counterparts.

## 6 Data Applications

We use two data applications to illustrate the effectiveness of the \(\mathcal{SNTH}\) distribution over the skew-\(t\) in certain situations. The parameter estimates and standard errors for the two data applications, as well as log-likelihood and AIC values along with computing times, are given in Sections S3 and S4 of the supplementary material.

### Italian Wine Dataset

We consider a trivariate dataset consisting of the amount of chloride, glycerol and magnesium in a particular type of wine. The data were obtained from Forina et al. (1986) and originally consist of measurements on \(28\) chemicals from \(178\) samples of Italian wines. Among these \(178\) samples, \(48\) originated from the Barbera region, \(59\) from the Barolo region, and \(71\) from the Grignolino region. Here we use the variables chloride, glycerol and magnesium for the Grignolino region as previously analyzed by Azzalini and Capitanio (2014) with a skew-\(t\) distribution, hence \(p=3\) variables and \(n=71\) observations. The sample estimates of the marginal Pearson's measure of kurtosis for this dataset are \(7.7\), \(21.1\), and \(7.9\), which suggest that the \(\mathcal{SNTH}\) distribution might be more suitable for this dataset compared to the skew-\(t\) distribution. We fit both the \(\mathcal{SNTH}\) and the skew-\(t\) distribution to this dataset. The contour plots of the bivariate marginal pdfs obtained from the two fitted distributions are presented in Figure 8. For the \(\mathcal{SNTH}\) model we have produced the contours of the bivariate marginal pdfs using MLEs (in red) and the EM algorithm estimates (in green), along with the skew-\(t\) bivariate marginal pdfs (in blue).

Figure 8: Bivariate contours of the marginal bivariate pdfs obtained from the fitted \(\mathcal{SNTH}\) using Section 4.1 methodology (green), from the fitted \(\mathcal{SNTH}\) using MLE (red) and the skew-\(t\) (blue) distributions to the wine data. The contours correspond to \(0.25\), \(0.5\), \(0.75\), and \(0.95\) approximate probability regions.

One can see visually that the \(\mathcal{SNTH}\) distribution fits the data better than the skew-\(t\). Moreover, the contour plots indicate that there are some discrepancies between the two estimation methodologies based on the \(\mathcal{SNTH}\) distribution, specifically for the magnesium-chloride pair, but much less in the other two pairs. The difference is likely due to a relatively small sample size (\(n=71\)). The AICs corresponding to the \(\mathcal{SNTH}\) distribution and the skew-\(t\) distribution are \(1474\) and \(1492\), respectively. Hence, for this dataset, the \(\mathcal{SNTH}\) distribution is a better model than the skew-\(t\) distribution. Moreover, assuming that \(\mathbf{\eta}\neq\mathbf{0}\), the \(p\)-value for testing \(H_{0}:\mathbf{h}=\mathbf{0}\) vs \(H_{1}:\mathbf{h}\neq\mathbf{0}\) is \(2.53\times 10^{-14}\), using the LRT based on the \(\mathcal{SNTH}\) distribution. This suggests that \(\mathbf{h}\neq\mathbf{0}\) for this dataset. The \(p\)-value from the LRT for testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\), given that \(\mathbf{h}\neq\mathbf{0}\), is \(1.6\times 10^{-5}\), hence confirming the apparent skewness in the data.
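The LRT \(p\)-values quoted above all follow the same recipe: twice the gap between the maximized log-likelihoods of the alternative and the null model is compared to a \(\chi^{2}\) distribution with as many degrees of freedom as restricted parameters. A minimal sketch in Python (the authors' own code is in R; the function and variable names here are ours, and the two fitted log-likelihoods are assumed to be available):

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_alt, loglik_null, df):
    """p-value of a likelihood ratio test with `df` restricted parameters."""
    stat = 2.0 * (loglik_alt - loglik_null)  # LRT statistic
    return chi2.sf(stat, df)                 # upper tail of the chi^2_df pdf

# e.g. testing H0: h = 0 against H1: h != 0 in a trivariate SNTH fit (df = 3):
# p = lrt_pvalue(loglik_snth, loglik_sn, df=3)
```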
### Saudi Arabian Wind Speed Dataset

We analyze the dependence structure of the daily average, minimum, and maximum wind speed in the city of Sharurah in southern Saudi Arabia, at \(100\) meters in height (a typical hub height for wind turbines), in the year 2015. Understanding the dependence and distribution of these variables is important for setting up wind farms for harvesting wind energy. We remove a quadratic trend from all three variables and fit an AR(1) time series model to the detrended data marginally to obtain residuals. A Ljung-Box test shows that there is no significant serial correlation left in any of the three residuals. Hence, the residuals can be treated as a random sample of size \(n=365\) from a trivariate distribution. The sample estimates of the marginal Pearson's measure of kurtosis for the three variables are \(3.0\), \(7.5\), and \(4.4\), which means that the residuals corresponding to the average wind speed have a Gaussian-like tail and the other two residuals have heavier tails than the Gaussian distribution. This indicates that the \(\mathcal{SNTH}\) distribution may be more apt for this dataset compared to the skew-\(t\) distribution. We fit both the \(\mathcal{SNTH}\) and the skew-\(t\) distribution to the residuals. Similar to the previous contour plots, we have produced in Figure 9 the contours of the bivariate marginal pdfs using MLEs (in red) and the EM algorithm estimates (in green) along with the skew-\(t\) bivariate marginal pdfs (in blue). The plots indicate that the \(\mathcal{SNTH}\) distribution is more suitable here for capturing different tail-thickness for different marginals, compared to the skew-\(t\) distribution. This conclusion is further validated by the AIC, which is \(3274\) for the \(\mathcal{SNTH}\) distribution and \(3432\) for the skew-\(t\) distribution. Moreover, the contours obtained from the MLEs and from the EM algorithm estimates for the \(\mathcal{SNTH}\) distribution are very close to each other. Similar to the wine dataset, we can perform the following tests: assuming that \(\mathbf{\eta}\neq\mathbf{0}\), the \(p\)-value for testing \(H_{0}:\mathbf{h}=\mathbf{0}\) vs \(H_{1}:\mathbf{h}\neq\mathbf{0}\) is \(9.7\times 10^{-35}\), using the LRT based on the \(\mathcal{SNTH}\) distribution, which confirms that the data here are not from a skew-normal distribution; the \(p\)-value for testing \(H_{0}:\mathbf{\eta}=\mathbf{0}\) vs \(H_{1}:\mathbf{\eta}\neq\mathbf{0}\), given that \(\mathbf{h}\neq\mathbf{0}\), is \(2.56\times 10^{-13}\), which confirms the presence of skewness in the data.

Figure 9: Bivariate contours of the marginal bivariate pdfs obtained from the fitted \(\mathcal{SNTH}\) using the EM algorithm (green), from the fitted \(\mathcal{SNTH}\) using MLE (red) and the skew-\(t\) (blue) distributions to the wind speed residuals. The contours correspond to \(0.25\), \(0.5\), \(0.75\), and \(0.95\) approximate probability regions.

## 7 Discussion

In this article, we have introduced the multivariate \(\mathcal{SNTH}\) distribution, a new extension of the multivariate skew-normal distribution for modeling heavy-tailed data. We have compared our proposed distribution with the skew-\(t\) distribution, another extension of the skew-normal distribution for adapting tail-thickness. Unlike the skew-\(t\) distribution, our proposal is capable of handling data with different kurtosis for different marginals.
As a consequence, the \(\mathcal{SNTH}\) model can be used as a robust model, as suggested by Azzalini and Genton (2008) for the skew-\(t\), for modeling outliers. Moreover, the \(\mathcal{SNTH}\) distribution can capture outliers in some marginals while having Gaussian-like distributions in other marginals. We have discussed various appealing stochastic and inferential properties of the \(\mathcal{SNTH}\) distribution in detail. A methodology for parameter estimation of the \(\mathcal{SNTH}\) distribution was also provided. There are other proposals in the multivariate setup for modeling varying marginal tail-thickness, such as the \(\mathcal{MSGH}\) distribution by Wraith and Forbes (2015) and the \(t\)-SAS distribution by Babic et al. (2019). However, they lack appealing stochastic properties, such as a tractable conditional distribution and an explicit form of conditional mean and variance, unlike the \(\mathcal{SNTH}\) model. How the \(\mathcal{SNTH}\) model performs compared to these other multivariate models for modeling varying marginal tail-thickness is left as a future research direction. The \(\mathcal{SNTH}\) distribution can be further generalized by extending the idea of using transformation to induce tail-thickness in the distribution to the extended skew-normal (\(\mathcal{ESN}\)) family and the unified skew-normal (\(\mathcal{SUN}\)) family (Arellano-Valle and Azzalini, 2006). In Section 3.1, we have discussed how the \(\mathcal{SNTH}\) distribution induces tail-thickness in the \(\mathcal{SN}\) distribution by stretching the distribution along different axes, and this stretching can be different for different marginals. This idea could be further generalized where the stretching occurs along arbitrary directions. The EM algorithm in Section 4.2 discussed how we can estimate the scale matrix \(\boldsymbol{\Psi}\) of an \(\mathcal{SN}_{p}(\boldsymbol{0},\boldsymbol{\Psi},\boldsymbol{\eta}_{0})\) distribution, given that \(\boldsymbol{\eta}_{0}\) is fixed. However, we need this \(\boldsymbol{\Psi}\) to be a correlation matrix, not a covariance matrix. This is achieved by transforming the final estimate of \(\boldsymbol{\Psi}\) from covariance to a correlation matrix. The EM algorithm for the scenario when \(\boldsymbol{\Psi}\) is a correlation matrix is an open problem. The R-codes and real data for Sections 5 and 6 are available on a GitHub repository: [https://github.com/sagnikind/Skew-normal-Tukey-h](https://github.com/sagnikind/Skew-normal-Tukey-h).
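For reference, the Tukey-\(h\) transform \(\tau_{h}(x)=x\exp(hx^{2}/2)\) that underlies the \(\mathcal{SNTH}\) construction, and its inverse through the principal Lambert-W branch \(W_{0}\) (the same \(W_{0}\) terms that appear in the density and score functions above), can be sketched in a few lines of Python; this is our illustration, not code from the repository:

```python
import numpy as np
from scipy.special import lambertw

def tukey_h(x, h):
    """Tukey-h transform: tau_h(x) = x * exp(h * x**2 / 2), with h >= 0."""
    return x * np.exp(h * x**2 / 2.0)

def tukey_h_inverse(y, h):
    """Inverse transform via the principal Lambert-W branch W0:
    if y = x*exp(h*x^2/2), then h*x^2 = W0(h*y^2), so x = y*exp(-W0(h*y^2)/2)."""
    y = np.asarray(y, dtype=float)
    if h == 0:
        return y
    w = lambertw(h * y**2).real  # W0 branch
    return y * np.exp(-w / 2.0)

# round-trip check
x = np.linspace(-3, 3, 7)
assert np.allclose(tukey_h_inverse(tukey_h(x, 0.1), 0.1), x)
```

The closed-form inverse is exactly what makes the \(\mathcal{SNTH}\) density tractable despite the nonlinear tail-stretching transform.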
2307.12867
On the use of SRIM for calculating arc-dpa exposure
We propose two methods for evaluating athermal recombination corrected (arc) displacement damage parameters in ion irradiations employing the computer code SRIM (Stopping and Range of Ions in Matter). The first method consists of post-processing the detailed SRIM output for all simulated damage events and re-calculating according to the arc damage model. In the second method, an approximate empirical formula is devised which gives the average displacements in the arc damage model as a function of the corresponding quantity according to the standard Norgett-Robinson-Torrens model, which is readily obtained from SRIM.
E. Mitsi, K. Koutsomitis, G. Apostolopoulos
2023-07-24T15:05:15Z
http://arxiv.org/abs/2307.12867v2
# On the use of SRIM for calculating arc-dpa exposure

###### Abstract

We propose two methods for evaluating athermal recombination corrected (arc) displacement damage parameters in ion irradiations employing the computer code SRIM (Stopping and Range of Ions in Matter). The first method consists of post-processing the detailed SRIM output for all simulated damage events and re-calculating according to the arc damage model. In the second method, an approximate empirical formula is devised which gives the average displacements in the arc damage model as a function of the corresponding quantity according to the standard Norgett-Robinson-Torrens model, which is readily obtained from SRIM.

keywords: Displacements per atom (dpa), Athermal recombination corrected dpa, Ion irradiation, SRIM

## 1 Introduction

In studies of radiation effects in materials it is generally desirable to have a standardized parameter to quantify radiation damage exposure, that would provide a common basis for comparison of data obtained under different irradiation conditions in terms of impinging particle type and energy. Currently, the internationally accepted standard parameter for this purpose is the number of displacements per atom (dpa) calculated according to the Norgett-Robinson-Torrens (NRT) model [1]. In the case of ion irradiation, one of the most widely used software tools for estimating the NRT-dpa exposure is the Monte Carlo code SRIM (Stopping and Range of Ions in Matter) [2]. SRIM incorporates the NRT model and readily provides the NRT-dpa value for a given ion irradiation. Its popularity is based on the fact that it employs accurate ion stopping powers and on its user-friendly interface. Several authors [3; 4; 5; 6; 7; 8] have discussed the application of SRIM for accurate NRT-dpa calculations. Recently, a modification to the NRT model has been proposed, the athermal recombination corrected dpa (arc-dpa) [9; 10]. It addresses a well-known issue of NRT, namely, the overestimation of the number of stable defects generated by high energy displacement cascades. The arc-dpa model is based on evidence from experimental studies and computer simulations, which indicates that significant defect recombination takes place during the cascade cooldown phase leading to reduced numbers of remaining stable defects. Currently, there is no standardized way to compute arc-dpa exposure in ion irradiations as the model has not yet been implemented in any of the widely used software tools. In their original publication introducing the new model, Nordlund et al. [9] already proposed a method to indirectly estimate the arc-dpa parameter based on the output of standard SRIM simulations. Their implementation consisted of two main parts. First, a series of SRIM simulations were performed to evaluate the energy deposited by primary knock-on atom (PKA) recoils as target displacements. This is also called damage energy, \(T_{d}\), and must be obtained as a function of the initial PKA recoil energy, \(E_{R}\), for a given target material. In [9] this was done for Fe and an interpolating function was devised to obtain \(T_{d}\) as a function of \(E_{R}\) continuously for recoil energies up to 300 keV. In the second part of the calculation, the information obtained on \(T_{d}\) is used in post-processing of the SRIM output file "COLLISON.TXT" to finally obtain the arc-dpa values. In this paper, we propose two alternative methods to calculate arc-dpa exposure using SRIM.
The first one is also based on the COLLISON.txt file, similarly to the method in [9]. However, instead of separately computing \(T_{d}\) by interpolation, we use the damage energy values that are internally calculated in SRIM with the Lindhard-Scharff-Schiott (LSS) approximation [11]. Thus, the damage energy interpolation for different target materials is not required. The second method is based on an approximate formula that we propose, which can be employed to estimate directly the arc-dpa exposure based on the corresponding NRT-dpa value. Thus, the cumbersome handling of the COLLISON.txt file is avoided. The two methods are tested on all targets for which arc-dpa model parameters are available and for a range of projectile ions. ## 2 Radiation Damage Models The NRT model gives the number of stable displacements, \(\nu_{d}\), produced by a PKA recoil with damage energy \(T_{d}\) as: \[\nu_{d}(T_{d})=\begin{cases}0&\text{for }T_{d}\leq E_{d}\\ 1&\text{for }E_{d}<T_{d}\leq L\\ T_{d}/L&\text{for }T_{d}>L\end{cases} \tag{1}\] where \(E_{d}\) is the displacement threshold energy, i.e., the minimum energy required to displace an atom from its lattice position. \(L=2E_{d}/0.8\) denotes the cascade multiplication threshold above which more than one stable displacements are generated by the PKA. In the arc-dpa model, the 3\({}^{rd}\) branch of (1) is multiplied by an energy dependent efficiency factor, \(\xi\leq 1\). The model definition is summarized in the following two relations: \[\nu_{d,\text{arc}}(T_{d})=\begin{cases}0&\text{for }T_{d}\leq E_{d},\\ 1&\text{for }E_{d}<T_{d}\leq L,\\ \xi(T_{d}/L)\cdot T_{d}/L&\text{for }T_{d}>L,\end{cases} \tag{2}\] \[\xi(x)=(1-c)\,x^{b}+c,\quad\text{for }x\geq 1. \tag{3}\] The parameters \(b\) and \(c\) are material constants that have been determined for a number of target materials by Nordlund et al. [12]. Their values are given in table 2. We note that for damage energies above the displacement threshold, \(T_{d}>E_{d}\), \(\nu_{d,\text{arc}}(T_{d})\) can be compactly written as \[\nu_{d,\text{arc}}(T_{d})=\nu_{d}(T_{d})\cdot\xi\left[\nu_{d}(T_{d})\right]. \tag{4}\] This definition will be utilized in the following paragraphs. ## 3 SRIM simulation conditions and data handling All simulations were performed utilizing SRIM-2013 and employing the option "Ion distribution and Quick calculation of damage" (Q-C). Lattice and surface binding energies were set equal to zero according to the recommendation in [3]. A range of projectile ions were employed, with atomic numbers varying from \(Z=1\) (H) to 79 (Au) and energies ranging from \(E_{0}=1\) to 10 MeV, similarly to the work of Agarwal et al. [8]. The ions and corresponding energies are listed in Table 1. Table 2 shows all the targets that we tested, which are essentially all materials whose arc-dpa parameters were estimated in [12]. Target thickness was chosen appropriately in order to ensure that the impinging ions stop within the examined region. The target displacement energies, \(E_{d}\), are based on internationally recommended standard values and are also given in Table 2. In the case of Fe self-ion irradiation, an extra simulation with \(E_{0}=78.7\) keV was also performed in order to directly compare with results from [9]. For each ion/target combination 10,000 ion histories were run. Damage parameters were extracted from the SRIM output files, either VACANCY.txt or COLLISON.txt. 
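For concreteness, eqs. (1)-(4) amount to only a few lines of code. The sketch below (ours, in Python) evaluates \(\nu_{d}\) and \(\nu_{d,\text{arc}}\) for a given damage energy, with energies in eV and the \((b,c)\) parameters of Table 2:

```python
import numpy as np

def nu_nrt(T_d, E_d):
    """NRT displacements, eq. (1); T_d and E_d in eV."""
    L = 2.0 * E_d / 0.8  # cascade multiplication threshold
    T_d = np.asarray(T_d, dtype=float)
    return np.where(T_d <= E_d, 0.0, np.where(T_d <= L, 1.0, T_d / L))

def xi(x, b, c):
    """arc-dpa efficiency, eq. (3), for x = T_d/L >= 1."""
    return (1.0 - c) * x**b + c

def nu_arc(T_d, E_d, b, c):
    """arc-dpa displacements, eqs. (2)-(4): nu_d * xi(nu_d) above threshold."""
    nd = nu_nrt(T_d, E_d)
    return np.where(nd > 1.0, nd * xi(np.maximum(nd, 1.0), b, c), nd)

# Example: Fe (E_d = 40 eV, b = -0.568, c = 0.286) at T_d = 5 keV
print(nu_nrt(5.0e3, 40.0), nu_arc(5.0e3, 40.0, -0.568, 0.286))
```

For Fe at \(T_{d}=5\) keV this gives \(\nu_{d}=50\) but \(\nu_{d,\text{arc}}\approx 18\), illustrating the strong recombination correction at high damage energies.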
Table 3 lists all quantities of interest and the way they are calculated depending on the damage model and the output file used. The number of PKAs per ion, \(N_{\text{PKA}}\), is obtained by integrating the 2nd data column of VACANCY.txt ("vacancies by ions", \(\nu_{i}\)) or by dividing the number of data rows, \(N_{\text{rows}}\), in COLLISON.txt by the number of simulated ions, \(N_{\text{ions}}\). \(N_{\text{PKA}}\) is independent of the damage model. The NRT displacements per ion, \(N_{d}\), is obtained as follows. In the case of VACANCY.txt, \(N_{d}\) is found by summing the 2\({}^{nd}\) and 3\({}^{rd}\) column of the data table, i.e., "vacancies by ions", \(\nu_{i}\), and "vacancies by recoils", \(\nu_{r}\), respectively. Regarding the COLLISON.txt file, \(N_{d}\) is calculated by adding up the "Target vacancies", \(\nu_{d}\), of all PKAs and dividing by \(N_{\text{ions}}\). Finally, the average displacements per PKA, \(\langle\nu_{d}\rangle\), is equal to \(N_{d}/N_{\text{PKA}}\). The calculation of arc-dpa damage parameters is described in the next section. All evaluations and the parsing of SRIM output files were performed in the OCTAVE computing environment [13]. The open source python code PYSRIM [14] was employed to automate the SRIM calculations. All relevant data and code are available in [15]. ## 4 Methods and Results In this section, we present the two different methods to obtain arc-dpa damage parameters from SRIM output. ### Method 1 (M1) This method utilizes the COLLISON.txt output file. In SRIM Q-C mode, this file lists all simulated PKA scattering events and reports, among other data, the number of displacements, \(\nu_{d}\), generated per event. These \(\nu_{d}\) values, labelled "Target vacancies", are calculated according to the NRT model, eq. (1), with the damage energy, \(T_{d}\), obtained from the approximate LSS theory [16]. For the \(\nu_{d}\) values in COLLISON.txt that satisfy \(\nu_{d}>1\), we can easily recover the LSS damage energy by multiplying \(\nu_{d}\) with the cascade multiplication factor, \(L\) (cf. eq. (1)). Then, the obtained \(T_{d}\) can be used in eq. (2) to evaluate \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{Ion} & H & He & Al & Fe & Au \\ \hline \(E_{0}\) (MeV) & 1 & 1 & 3 & 5 & 10 \\ \hline \hline \end{tabular} \end{table} Table 1: Projectile ions and corresponding incident energies \(E_{0}\). \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{Target} & Fe & Ni & Cu & Pd & W & Pt \\ \hline \(E_{d}\) (eV) [9] & 40 & 40 & 29 & 41 & 90 & 44 \\ \(b\)[12] & -0.568 & -1.01 & -0.68 & -0.88 & -0.56 & -1.12 \\ \(c\)[12] & 0.286 & 0.23 & 0.16 & 0.15 & 0.12 & 0.11 \\ \hline \hline \end{tabular} \end{table} Table 2: Displacement threshold, \(E_{d}\), and arc-dpa model parameters, \((b,c)\), of simulated targets. the displacements according to the arc-dpa model. This is essentially what is done in M1, however, instead of actually evaluating \(T_{d}\) we employ \(\nu_{d}\) directly in the equivalent arc-dpa definition, eq. (4). Thus, the steps to calculate the arc-dpa parameters are as follows: 1. Run SRIM with the "Quick calculation of damage" (Q-C) option. 2. Parse the COLLISON.txt output file to obtain the NRT displacements per PKA event, \(\nu_{d}\). 3. Calculate the corresponding \(\nu_{d,\text{arc}}\) per PKA from eq. (4), \(\nu_{d,\text{arc}}=\nu_{d}\cdot\xi(\nu_{d})\). 4. 
Take the average of the \(\nu_{d,\text{arc}}\) values to obtain the mean displacements per PKA according to the arc-dpa model, \(\left\langle\nu_{d,\text{arc}}\right\rangle\) (cf. Table 3). 5. Multiply by the number of PKAs per ion, \(N_{PKA}\), to obtain the number of displacements per ion, \(N_{d,arc}=\left\langle\nu_{d,\text{arc}}\right\rangle\cdot N_{\text{PKA}}\) M1 is very similar to the method proposed by Nordlund et al. [9]. The main difference lies in the derivation of damage energy. In [9], \(T_{d}\) is obtained by separate SRIM simulations employing the "Detailed Calculation with Full Damage Cascades" (F-C) option. In this case, SRIM utilizes detailed stopping power calculations for all secondary recoils in the PKA cascade, thus, the value of \(T_{d}\) is potentially more accurate. Agarwal et al. [8] have made a detailed comparison of SRIM damage calculations in Q-C and F-C modes. They found that there is a difference of up to \(\pm 25\%\) in the amount of NRT vacancies predicted by the two modes, when vacancy production is estimated by the SRIM damage energy. The authors attributed the difference to the use of the LSS approximation in Q-C mode. It is expected that also in the present case, where the arc-dpa damage estimation in M1 is based on the Q-C damage energy, there will be similar differences with respect to the procedure described in [9], where the F-C mode was employed. To make a quantitative comparison of the two approaches, we repeated the simulation of 78.7 keV Fe ions incident on an Fe target that was employed in [9]. Table 4 shows the results from the two approaches. It is seen that there is only a small 2% difference in the NRT parameters, \(N_{d}\) and \(\left\langle\nu_{d}\right\rangle\), obtained with the present method in comparison to the values reported in [9], while the corresponding arc-dpa parameters almost coincide. We attribute the good agreement to the low damage energies occurring in this simulation. To have a more meaningful comparison, we simulated self-ion Fe irradiation with a much higher projectile energy, \(E_{0}=5\) MeV, and evaluated the results with both our proposed method M1 and the one by Nordlund et al. [9]. In the latter case, we used the data from their fig. 1.2 to extend the interpolation of \(T_{d}\) to target recoil energies up to 10 MeV. The results are also listed in Table 4. As seen from the table, there is a 10% difference between the NRT parameters obtained by our M1 and the evaluation according to [9]. This difference is comparable to the observations of [8] and thus can be attributed to the use of approximate LSS damage energy in the Q-C simulation mode. The corresponding arc-dpa parameters exhibit a similar but slightly lower difference of about 8%. This is due to the fact that the arc-dpa efficiency lowers the significance of high energy damage events, where the errors due to the LSS approximation are more pronounced. 
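In practice, steps 2-5 of M1 reduce to a few lines of post-processing. The sketch below (ours; the variable names are illustrative) assumes the "Target vacancies" column of COLLISON.txt has already been parsed into an array with one entry per PKA event:

```python
import numpy as np

def m1_arc_parameters(nu_d, n_ions, b, c):
    """Method 1: per-event NRT vacancies -> arc-dpa damage parameters.

    nu_d   -- array of 'Target vacancies', one entry per PKA event
    n_ions -- number of simulated ions
    """
    nu_d = np.asarray(nu_d, dtype=float)
    # eq. (4): nu_arc = nu_d * xi(nu_d), applied only where nu_d > 1
    xi = (1.0 - c) * np.maximum(nu_d, 1.0) ** b + c
    nu_arc = np.where(nu_d > 1.0, nu_d * xi, nu_d)
    n_pka = nu_d.size / n_ions               # PKAs per ion
    mean_arc = nu_arc.mean()                 # <nu_d,arc> per PKA
    return mean_arc, mean_arc * n_pka        # <nu_d,arc>, N_d,arc per ion
```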
### Method 2 (M2)

The objective of M2 is to provide a quick estimate of the arc-dpa damage parameters, without having to resort to the cumbersome processing of COLLISON.txt.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Quantity & Symbol & Method 1 (M1) & Method 2 (M2) \\
 & & COLLISON.txt & VACANCY.txt \\
\hline
PKAs per ion & \(N_{\text{PKA}}\) & \(N_{\text{rows}}/N_{\text{ions}}\) & \(\sum_{k}\left[\nu_{i}\right]_{k}\Delta x\)\({}^{\dagger}\) \\
\hline
\multicolumn{4}{c}{NRT-dpa model} \\
Displacements per ion & \(N_{d}\) & \(\langle\nu_{d}\rangle\cdot N_{\text{PKA}}\) & \(\sum_{k}\left[\nu_{i}+\nu_{r}\right]_{k}\Delta x\)\({}^{\dagger}\) \\
Mean displacements per PKA & \(\langle\nu_{d}\rangle\) & \(N_{\text{rows}}^{-1}\sum_{k}\left[\nu_{d}\right]_{k}\)\({}^{\ddagger}\) & \(N_{d}/N_{\text{PKA}}\) \\
\hline
\multicolumn{4}{c}{arc-dpa model} \\
Displacements per ion & \(N_{d,\text{arc}}\) & \multicolumn{2}{c}{\(\langle\nu_{d,\text{arc}}\rangle\cdot N_{\text{PKA}}\)} \\
Mean displacements per PKA & \(\langle\nu_{d,\text{arc}}\rangle\) & \(N_{\text{rows}}^{-1}\sum_{k}\left[\nu_{d}\right]_{k}\,\xi\left(\left[\nu_{d}\right]_{k}\right)\)\({}^{\ddagger}\) & eq. (7) with \(\langle\nu_{d}\rangle\) as above \\
\hline \hline
\multicolumn{4}{l}{\({}^{\dagger}\)\(\left[\nu_{i}\right]_{k}\) and \(\left[\nu_{r}\right]_{k}\) are the “vacancies by ions” and “vacancies by recoils”, respectively, in the \(k\)-th target depth bin, with \(\Delta x\) denoting the bin width.} \\
\multicolumn{4}{l}{\({}^{\ddagger}\)\(\left[\nu_{d}\right]_{k}\) denotes the number of vacancies estimated by SRIM for the \(k\)-th PKA event. The sum is over all events.} \\
\end{tabular}
\end{table}
Table 3: Calculation of damage parameters from SRIM output files

For this, we note that from eq. (4) the average arc-dpa can be written: \[\langle\nu_{d,\text{arc}}\rangle=(1-c)\langle\nu_{d}^{1+b}\rangle+c\cdot\langle\nu_{d}\rangle. \tag{5}\] Thus, to obtain \(\langle\nu_{d,\text{arc}}\rangle\) the value of \(\langle\nu_{d}^{1+b}\rangle\) is needed. We performed an approximate calculation of this quantity, employing a power-law cross-section for the ion-target atom interaction and ignoring the effect of ionization losses, i.e., setting \(T_{d}\approx T\). As shown in Appendix A, the following approximation \[\langle\nu_{d}^{1+b}\rangle\approx\langle\nu_{d}\rangle^{\lambda(1+b)}, \tag{6}\] where \(\lambda\approx 0.56\), gives adequate results for a wide range of incident ion energies and ion-target combinations. This can be seen in fig. 1, where \(\langle\nu_{d}^{1+b}\rangle\) is plotted as a function of \(\langle\nu_{d}\rangle^{1+b}\) for all the ion/target combinations simulated in the current work. The data shown in the figure have been obtained by taking the \(\nu_{d}\) values per PKA event listed in COLLISON.txt and evaluating the required averages. As seen from the figure, the data from all simulated targets lie within \(\pm 10\%\) of the approximate eq. (6), which is depicted by the dashed line. Utilizing the above approximation, the arc-dpa damage parameters can be obtained by the following prescription: 1. Run SRIM with the "Quick calculation of damage" (Q-C) option. 2. Calculate the NRT-\(\langle\nu_{d}\rangle\) from VACANCY.txt as described in Table 3. 3. Obtain \(\langle\nu_{d,\text{arc}}\rangle\) from eq. (5), substituting the approximate relation (6): \[\langle\nu_{d,\text{arc}}\rangle\approx(1-c)\langle\nu_{d}\rangle^{0.56(1+b)}+c\cdot\langle\nu_{d}\rangle \tag{7}\]
4. The number of displacements per ion is \(\langle\nu_{d,\text{arc}}\rangle\cdot N_{\text{PKA}}\).

Fig. 2 depicts the ratio of \(\langle\nu_{d,\text{arc}}\rangle\) calculated by the two methods, M2 and M1, respectively, for all simulated ion/target combinations. It is seen that the results of the approximate method M2 deviate by at most 3% from those of M1. A similar small deviation can be observed in Table 4 between the arc-dpa damage parameters obtained by methods M1 and M2 in the two simulated Fe self-ion irradiations. In the low energy case the arc-dpa parameters obtained by M2 are 4% lower than those of M1, while in the high energy example the two methods produce essentially equivalent results. Thus, the method M2 can be employed for a quick, approximate evaluation of arc-dpa damage, introducing an error of not more than a few percent compared to the more detailed method M1.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \(E_{0}\) & \(N_{\text{PKA}}\) & \(N_{d}\) & \(\langle\nu_{d}\rangle\) & \(N_{d,\text{arc}}\) & \(\langle\nu_{d,\text{arc}}\rangle\) \\
\hline
Nordlund et al. [9] & & & 539 & 12.2\({}^{\dagger}\) & 217 & 4.93\({}^{\dagger}\) \\
This study - Method 1 & 78.7 keV & 44.1 & 530 & 12.0 & 217 & 4.92 \\
This study - Method 2 & & & 530 & 12.0 & 209 & 4.74 \\
\hline
Method of Nordlund et al. [9] & & & 8800 & 20.0 & 3150 & 7.14 \\
This study - Method 1 & 5 MeV & 442 & 7870 & 17.9 & 2900 & 6.56 \\
This study - Method 2 & & & 7870 & 17.9 & 2890 & 6.54 \\
\hline \hline
\end{tabular}
\({}^{\dagger}\) Mean values are calculated by dividing \(N_{d}\) and \(N_{d,\text{arc}}\) from [9] by \(N_{\text{PKA}}\) as obtained in the present study.
\end{table}
Table 4: Damage parameters obtained by different methods for the irradiation of an Fe target with Fe ions of energy \(E_{0}\).

Figure 1: \(\langle\nu_{d}^{1+b}\rangle\) as a function of \(\langle\nu_{d}\rangle^{1+b}\), where \(\nu_{d}\) denotes the NRT displacements and \(b\) is the arc-dpa model parameter of the corresponding target material. Both quantities were obtained by post-processing the output of SRIM simulations and averaging over all PKA events. Results for the different target materials are depicted with different symbols and colors. The dashed line corresponds to the approximate relation (6).

## 5 Conclusions

In this work, we present two methods for evaluating arc-dpa damage parameters in ion irradiations employing the SRIM simulation code with the option "Quick calculation of damage" (Q-C). The first method is based on SRIM's COLLISON.txt output file, which lists the NRT displacements, \(\nu_{d}\), produced in each simulated primary knock-on atom (PKA) recoil event. The \(\nu_{d}\) values are converted to the corresponding arc-dpa model prediction, \(\nu_{d,\text{arc}}\), by means of eq. (4) and then averaged to obtain the total damage parameters. This procedure is similar to the one proposed by Nordlund et al. [9] only in our case the damage energy is essentially obtained by the LSS approximation employed in SRIM's Q-C mode, whereas in [9] the damage energy was interpolated from the results of separate detailed SRIM simulations. Thus, our method gains in simplicity but can lead to errors due to the approximation in the damage energy calculation. The errors in the estimated damage could be up to \(\sim 30\%\) [8]. In the second method, we devise an approximate relation, which gives \(\langle\nu_{d,\text{arc}}\rangle\) directly as a function of \(\langle\nu_{d}\rangle\).
Thus, the cumbersome processing of the COLLISON.txt file is not needed since the NRT damage parameter \(\langle\nu_{d}\rangle\) can be easily obtained from VACANCY.txt. We found that the arc-dpa parameters obtained by this approximate method differ by not more than a few percent from those calculated by the first method.

## Acknowledgement

This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.

## Appendix A Approximation of \(\langle\nu_{d}^{1+b}\rangle\)

The general expression for the average \(\langle\nu_{d}^{n}\rangle\) is given by \[\langle\nu_{d}^{n}\rangle=\frac{\int_{E_{d}}^{T_{m}}[\nu_{d}(T_{d})]^{n}\:d\sigma(E,T)}{\int_{E_{d}}^{T_{m}}d\sigma(E,T)}, \tag{10}\] where \(d\sigma(E,T)\) denotes the cross-section for scattering of an ion with initial energy \(E\) producing a PKA with recoil energy \(T\). \(T_{m}\) is the maximum PKA recoil energy. Making the following assumptions:
1. a power-law cross-section, \(d\sigma(E,T)\propto dT/T^{1+p}\), where \(p\) ranges from 0.5 (heavy ions) to 1 (light ions) [17],
2. ionization losses can be ignored (\(T_{d}\approx T\)),
and performing the integrations in eq. (10) we obtain the following analytical expression: \[\langle\nu_{d}^{n}\rangle=\frac{(L/E_{d})^{p}-1+\frac{p}{p-n}\left[1-(L/T_{m})^{p-n}\right]}{(L/E_{d})^{p}-(L/T_{m})^{p}}\;, \tag{11}\] which is valid for \(T_{m}\geq L\) and \(n\neq p\). In the special case \(n=p\) it becomes \[\langle\nu_{d}^{n}\rangle=\frac{(L/E_{d})^{n}-1-n\log(L/T_{m})}{(L/E_{d})^{n}-(L/T_{m})^{n}}\;. \tag{12}\] Based on eqs. (11)-(12) we calculate \(\langle\nu_{d}^{1+b}\rangle\) for several representative \((b,p)\) combinations and for \(T_{m}\) values in the range \(L<T_{m}<10^{4}L\). This corresponds to a maximum \(T_{m}\) of \(\sim 10^{6}\) eV in Fe and similar values for other metals. The results are shown in fig. 12 as a function of \(\langle\nu_{d}\rangle^{1+b}\), where \(\langle\nu_{d}\rangle\) is also obtained from (11). It is apparent from the figure that all curves follow roughly a central line. Fitting a power law of the form: \[\langle\nu_{d}^{1+b}\rangle\approx A\left\langle\nu_{d}\right\rangle^{\lambda\left(1+b\right)}, \tag{13}\] to the data, with \(A\) and \(\lambda\) as adjustable parameters, we obtain the values \(\lambda\approx 0.56\) and \(A\approx 1.0\). This is denoted by the dashed line in fig. 12. The deviation of the analytically calculated \(\langle\nu_{d}^{1+b}\rangle\) from the fitted power law is within \(\pm 20\%\), which corresponds to the shaded area in fig. 12.
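The quality of the fit (13) is easy to reproduce numerically: the sketch below (ours) evaluates the analytical averages of eq. (11) and compares \(\langle\nu_{d}^{1+b}\rangle\) with \(\langle\nu_{d}\rangle^{0.56(1+b)}\) over a range of \(T_{m}/L\), using the fact that \(L/E_{d}=2/0.8=2.5\) independently of the target:

```python
import numpy as np

def moment(n, p, t):
    """<nu_d^n> from eq. (11); t = L/T_m, valid for n != p and T_m >= L."""
    a = 2.5  # L/E_d = (2*E_d/0.8)/E_d, independent of the target
    return (a**p - 1.0 + p / (p - n) * (1.0 - t**(p - n))) / (a**p - t**p)

b, p = -0.568, 0.5  # Fe-like b, heavy-ion-like p
for tm_over_L in [1.0e1, 1.0e2, 1.0e3, 1.0e4]:
    t = 1.0 / tm_over_L
    exact = moment(1.0 + b, p, t)                      # <nu_d^(1+b)>
    approx = moment(1.0, p, t) ** (0.56 * (1.0 + b))   # eq. (13) with A = 1
    print(f"T_m/L = {tm_over_L:.0e}: exact = {exact:.3f}, fit = {approx:.3f}")
```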
2310.09136
DocCert: Nostrification, Document Verification and Authenticity Blockchain Solution
Many institutions and organizations require nostrification and verification of qualification as a prerequisite for hiring. The idea is to recognize the authenticity of a copy or digital document issued by an institution in a foreign country and detect forgeries. Certificates, financial records, health records, official papers and others are often required to be attested from multiple entities in distinct locations. However, in this digital era where most applications happen online, and document copies are uploaded, the traditional signature and seal methods are obsolete. In a matter of minutes and with a simple photo editor, a certificate or document copy may be plagiarized or forged. Blockchain technology offers a decentralized approach to record and verify transactions without the need for huge infrastructure investment. In this paper, we propose a blockchain based nostrification system, where awarding institutions generate a digital certificate, store in a public but permissioned blockchain, where students and other stakeholders may verify. We present a thorough discussion and formal evaluation of the proposed system.
Monther Aldwairi, Mohamad Badra, Rouba Borghol
2023-10-13T14:23:58Z
http://arxiv.org/abs/2310.09136v1
# DocCert: Nostrification, Document Verification and Authenticity Blockchain Solution

###### Abstract

Many institutions and organizations require nostrification and verification of qualification as a prerequisite for hiring. The idea is to recognize the authenticity of a copy or digital document issued by an institution in a foreign country and detect forgeries. Certificates, financial records, health records, official papers and others are often required to be attested from multiple entities in distinct locations. However, in this digital era where most applications happen online, and document copies are uploaded, the traditional signature and seal methods are obsolete. In a matter of minutes and with a simple photo editor, a certificate or document copy may be plagiarized or forged. Blockchain technology offers a decentralized approach to record and verify transactions without the need for huge infrastructure investment. In this paper, we propose a blockchain based nostrification system, where awarding institutions generate a digital certificate and store it in a public but permissioned blockchain, where students and other stakeholders may verify it. We present a thorough discussion and formal evaluation of the proposed system.

_nostrification, antiforgery, plagiarism, document authentication, blockchain._

## I Introduction

Oftentimes, job applicants are required to certify their documents, attest certificates, or equalize a degree or course. The attestation process is lengthy, time consuming, costly and cumbersome, especially when the candidate graduated a long time ago or graduated from a foreign country to which he no longer has access. The equivalency process requires those attested certificates and transcripts and awards an equivalent and recognized degree. The same process applies for international trade agreements, customs' forms, birth certificates, etc. The process may involve universities, schools, notaries, embassies, departments, ministries, education boards, etc. The process takes several months and may be costly if you live abroad. However, after all of that trouble, the final attested or sealed documents may be easily forged digitally. Therefore, not much verification is achieved after this lengthy and costly process [1]. Below is a sample of foreign degree equivalency process requirements.

1. Certified copy of degree (and/or all previous degrees) in English or a translated version from an official service. Copies must be attested by (a) the ministry of education in the issuing country, (b) the ministry of exterior in the issuing country, (c) the embassy of the country seeking equivalency, (d) the ministry of exterior in the country of equivalency, and (e) the ministry of education in the country of equivalency.
2. Copy of transcript or diploma indicating dates of admission and completion.
3. Copy of passport with visa, entry and exit stamps.
4. Equivalency fees.
5. All original documents.

In most of the above documents, notaries' services may be required. Public and private notaries are authorized by the judicial system to attest documents and certify their originality. Online notaries have been using cameras to verify identity and attest documents. However, the admissibility in court of digital signatures continues to be challenged. Courts often accept digitally signed documents only when a copy of the originally signed document is presented! [2]. We believe blockchain is a game changer and would present a perfect solution for the online nostrification issue [3]. Blockchain has been made popular with the widespread adoption of Bitcoin.
Bitcoin is one of the earliest and most popular cryptocurrencies. It is a digital currency that can be exchanged between people without the need for a central bank or authority. It benefits from peer-to-peer networks and blockchain technologies to keep an anonymous record of all Bitcoins [4]. Blockchain is a shared immutable ledger for securely recording transaction history. A blockchain could be public or private. As indicated by the name, the blocks are chained, with each block storing one or more transactions. Transactions are kept indefinitely and the blockchain may be queried to verify any transaction, which makes it ideal for nostrification. There have been a few attempts to use blockchain for nostrification. EduCTX was one of the first attempts to use blockchain as a higher education credit platform. It is supposed to serve as a centralized repository of all students' records and completed courses [5]. In this paper we propose to use blockchain in a novel manner to implement a secure, shared, authenticated and public repository of student records. Students may access their records, and so can universities and any other participant who wishes to verify a record. Security and privacy are of the utmost importance and therefore access to records is authenticated. The rest of the paper is organized as follows. Section II explains blockchain in more detail. Section III surveys the literature, covers the related work and points out advantages and disadvantages of existing approaches. Section IV discusses the proposed approach and Section V presents the formal evaluation.

## II Blockchain

Blockchain maintains a shared record including full details of every single transaction over a network. Blockchain is based on peer to peer networks, making it distributed and not controlled by any third party [4]. A transaction is any exchange of assets between participants and is represented by a block. Each block tracks and stores data and those blocks are chained together chronologically. Blocks are not editable, which means once a transaction is committed to the blockchain it can no longer be modified. A transaction is reversed by creating a new block, which maintains a timeline of events and changes to the data. Each block contains the transaction data, timestamp, unique hash and the hash of the previous block. The latter maintains the chain and the timestamp ensures timeliness. Unlike databases that are files stored on a single system, blockchain is decentralized and identical copies of the shared ledger are distributed across all participating nodes. This distributed nature of the ledger reduces the chances of data tampering. If a party chooses to add a block to his copy of the ledger, it will be inconsistent with all other blockchain participants [6]. Before any block is added to the chain, a consensus of the majority of the endorsing nodes must be reached. Consensus may be through solving a cryptographic puzzle called "proof-of-work", which is the case in many cryptocurrencies. Proof-of-stake, on the other hand, requires validators to hold a cryptocurrency in escrow as a stake. Proof-of-elapsed-time randomizes block waiting times within a trusted execution environment. Solo-NoOps requires no consensus and the validator applies transactions, which may lead to divergent chains or ledgers. Finally, Byzantine-Fault-Tolerance (BFT) achieves consensus in a peer to peer network even when some nodes are malicious or faulty [7]. In blockchain for business, we have a shared ledger, where every participant has his own copy.
Those ledgers are permissioned and proper credentials are required to access the ledger. The ledger is immutable, in that no participant may tamper with a transaction after it was agreed upon. Transactions cannot be altered, deleted, or inserted back in time. Smart contracts are sets of business rules in chaincode format; when a smart contract is executed, a block/transaction is created. The shared ledger has the final say on an asset's ownership and provenance. Contrary to cryptocurrencies that emphasize anonymity, blockchain for business is a private, permissioned network that values identity and permissions over anonymity [8]. Turkanovic et al. explained that higher education institutions (HEI) keep their students' completed courses' records in databases that are structured and only available to the institution's staff [4], leaving students with limited access where they can only view or print their document. Moreover, these student documents are stored in different standards, which contributes to the problem of transferring student documents to another HEI [9]. Correspondingly, if a student wants to apply for a job in a foreign country, he/she has to translate and nostrificate their academic certificate, which is complex and time consuming. In addition, if a student loses his/her academic certificate, he/she has to visit their HEI and ask for a new copy. Andrejs Rauhvargers discussed qualifications frameworks for recognizing qualifications in European higher education. The paper details the recognition of foreign higher education qualifications [9]. Blockchain is ideal for keeping records of a student's diploma certificates, transcripts, courses, grades, achievements, skills and research experience. All of these may be securely registered in a shared ledger that can be accessed by many institutions or stakeholders. This will help reduce fraud, forgery and false claims [10]. All of the above data can be logged in the form of timely transactions into the shared ledger. The data in the blockchain is permissioned, associated with a student ID and an organization ID, and stored securely in the blockchain. Using blockchain means performance may be sacrificed for secure recordkeeping of transactions. Nonetheless, the blockchain would be much more efficient as opposed to the manual attestation process described earlier [11]. Using blockchain in education might be a new concept, but it surely is very beneficial. It will make it much easier for students to have all of their completed courses certified, verified and in one place. Not only will this facilitate attestation and verification of qualification but it will also help in the cases of credit transfer between institutions. It will be very easy for any workplace anywhere in the world to subscribe to the blockchain and verify graduate credentials, of course with the applicant's consent [12].

## III Related Work

There are a few research papers concerned with blockchain for nostrification. In this section we summarize each paper and present a critical analysis. Wibke et al. used blockchain technology to store and handle educational data [13]. They offer the possibility to store different types of immutable educational information on blockchain technology. A total of 58.1 % of the education technologies were based on Ethereum, 3.2 % on Bitcoin, 9.7 % on EOS, and 1.6 % on NEM; 1.6 % used a private blockchain, 4.8 % could use more than one blockchain and 6.5 % used other blockchain technologies.
Their results provide a deeper understanding of blockchain technology in education and serve as a signal to educational stakeholders by underlining the importance of blockchain technology in education. EduCTX is a blockchain-based higher education credit platform from the University of Maribor [5]. The EduCTX platform is anticipated to use ECTX tokens as academic credits. It rests on peer-to-peer networks where the peers of the network are HEIs and the users of the platform are students and various other organizations. These ECTX tokens represent a student's credit amount for completed courses. Every student will have an EduCTX wallet for the collection of ECTX tokens that will be transferred by his/her HEI. The transfer information is stored in the blockchain along with the sender's identity (the HEI's official name), the recipient (the student, presented anonymously), the token (course credit value) and the course identification. Therefore, students can access and present their completed courses by directly providing their blockchain address. EduCTX is still a prototype based on the Ark blockchain platform, and its real-world perception cannot yet be evaluated. EduCTX enables organizations and students to check the academic records of a student (potential employee) in a transparent way. Moreover, since the system is based on a blockchain platform, it maintains the possibility of fraud detection and prevention. On the other hand, in the case of a student losing his/her private key, they have to visit their home HEI and request a new blockchain address, which is time consuming and almost similar to the current approach for certification. Moreover, it is expected that users and organizations have to protect and back up their private keys, signatures and stamps to be secure, because this platform is yet to have an additional level of protection against impersonation. Gresch et al. from the University of Zurich proposed a blockchain-based architecture for transparent certificate handling [14]. The work used a questionnaire to shed light on how widespread fake diplomas are, and how ineffective the current accreditation system is. The system identifies three stakeholders: the certificate issuer, companies and institutes wanting to verify diplomas, and the graduates or applicants who submitted the diploma. The system has two stages. First, the issuing organization has to create the digital diploma with a one-way hash function, and the hash is stored in a smart contract. Second, the verifier company verifies the authenticity of the document without contacting the university. A prototype was built using an Ethereum blockchain and deployed on the University of Zurich BlockChain (UZHBC). They concluded that granting an organization the ability to issue certificates is one of the most critical aspects of the blockchain. In addition, they only stored the diploma hash on the blockchain for privacy reasons, as opposed to storing an encrypted diploma and risking losing the data forever if the key is lost. Azael Capetllo proposed a blockchain education longstanding model for academic institutions [15]. The paper described the technology of storing student records, which can be shared openly with third parties, offering a safe and lasting record. The technology is strong against data damage or loss, and those third parties can verify student records directly by accessing the university blockchain.
Two applications of blockchain in education are mentioned in the paper. The first is smart contracts, used to form an autonomous learning experience by drawing an analogy from the financial applications of blockchain. The second is the use of blockchain to offset the cost of learning using peer-to-peer networks, offering financial rewards for students offering services to the university. Mike Sharples and John Domingue proposed blockchain and Kudos, a distributed system for educational records, reputation and reward [16]; the University of Nicosia was the first higher education institution to issue academic certificates whose authenticity can be verified through the Bitcoin blockchain. They proposed to use Bitcoin payments as rewards for academic achievements such as peer review or assessment tasks. They then proposed an "educational reputation currency" called Kudos. Each recognized educational institution, innovative organization, and intellectual worker is given an initial award of the educational reputation currency; the initial award might be based on some existing metric: the Times Higher Education World Reputation Rankings for universities, the H-index for academics, the Amazon author rank for published authors, etc. An institution could allocate some of its initial fund of Kudos to staff whose reputation it wishes to promote. Each person and institution stores its fund of reputation in a virtual wallet on a universal educational blockchain. They used Ethereum smart contracts to implement OpenLearn badges on a private blockchain, where students enroll in courses and the institution awards them badges. Wolfgang et al. considered blockchain in the context of education and certification [17]. The blockchain technology supports counterfeit protection of certificates, easy verification of certificates even if the certification authority no longer exists, and automation of monitoring processes for certificates with a time-limited validity. It ensures higher efficiency and improved security for certification authorities through digitization of current processes, issuing and registering of certificates in a blockchain, as well as automatic monitoring of certificates. Their system comprises a blockchain including smart contracts, a public storage holding profile information of certification authorities, a document management system managing the actual payload of certificates tracked by the blockchain, and the parties involved in the system, namely accreditation and certification authorities, certifiers, learners and employers. John Rooksby and Kristiyan Dimitrov from the University of Glasgow implemented an Ethereum-based blockchain system for permanent and tamper-proof grading [18]. The system was able to store student information on courses enrolled, grades and the final degree. It supported a university-specific cryptocurrency called Kelvin Coin; payments of the cryptocurrency can be made by smart contract to the top-performing student in a course. However, implementing the system revealed some drawbacks. Scenario-based and focus-group evaluation methods were used to assess the advantages and disadvantages of the system. Because universities rely on trust and confidentiality, the blockchain system was not found to be trustworthy enough. A blockchain system is global in scope; however, universities tend to set their own boundaries, at least at the institutional level.
Moreover, using smart contracts to store grades in the blockchain was problematic due to the fact that there is no formal algorithm for calculating grades. Unfortunately, the Ethereum blockchain system required changing the way the administrative systems of the university work. Finally, the prototype of the blockchain system was found to prioritize transparency over efficiency. Cheng et al. proposed a system that uses Ethereum to generate digital certificates and confirm the eligibility of graduation certificates [19]. The system functions as follows. The HEI enters the student's certificate and academic records into the system. The system verifies all the data. The student receives a quick response (QR) code, an inquiry number and an electronic file of their certificate. Whenever the student wants to apply for a job or for higher education, he/she only has to send the e-certificate alongside the QR code to the respective organization. The organization can retrieve the student's certificate and academic records once the credentials are verified. Moreover, the QR code is used to assess whether the certificate has been tampered with or forged. There have been several industry projects concerned with student records and online digital badges. Many projects capitalized on the opportunity of digital diplomas as a countermeasure to fake diplomas. Those projects offered technologies to manage the complete educational past of students by gathering all digital badges awarded by different academic organizations. Sony Global Education, for example, has announced the development of a new blockchain for storing academic records [20]. Their platform allows secure sharing of exam results and academic proficiency levels with third-party evaluating organizations. Mozilla Foundation Open Badges are a digital record of different accomplishments encoded into an image, with an associated infrastructure for verification [21]. The IMS Global Learning Consortium was managing this central open-source repository of badges, with over 1500 participating organizations, until 2017. More recently, Mozilla migrated all users to Badgr as a replacement for Open Badges as the standard for verifying credentials [22]. Finally, Acclaim and IBM offer digital badges as a form of organizations recognizing individuals' skills and competencies [23]. Contrary to all of the above industry efforts based on central repositories of badges, BCDiploma is using blockchain to provide security, immutability and ease of use for certifying diplomas and other achievements [24]. All of the above research agrees that counterfeit certificates, credentials and documents are a major problem that can be solved with blockchain. Recording all of a student's academic history, from completed courses to skills and qualifications, in one trusted and secure blockchain is a perfect solution [25]. Yet, all of these works tried to tweak existing cryptocurrency blockchains to store certificates and award badges (reputation), which resulted in low usability. Cryptocurrency blockchains and smart contracts are not customized to student records. We propose a permissioned and custom blockchain for business, designed specifically for storing student records or any other document for that matter. ## IV Proposed Approach In this section, we propose an efficient solution that is based on a Merkle tree to provide nostrification and verification of qualifications and to guarantee data integrity of the certificates through non-repudiation.
A Merkle hash tree [26] is a data structure used to efficiently verify data integrity and authenticity. As illustrated in Fig. 1, each non-leaf node in the tree, from the bottom up to the root node, holds the hash of the concatenated hashes of its sub-nodes; for example, \(h_{12}=h(h_{1}\mid h_{2})\). The hash held by the root is the root hash, which can be shared in a trusted way for verification purposes; for example, \(h_{\text{root}}=h(h_{12}\mid h_{34})\). In [27], a hash calendar is proposed to include the generated root hashes to verify the integrity of the contents of large data structures. In our proposed solution, we form a Merkle tree where the leaves are documents. The first objective is to provide a periodic publication of the root hash in the blockchain to enhance transparency and protection against any modification of the hashed content, and to provide proof of existence of the contents. Each document is certified by its issuer, so we include in the blockchain either the hash of that document, or the root hash of a set of documents issued by the same issuer when multiple documents are to be included. The included hash is authenticated by the issuer's digital signature. Interested parties can authenticate the existence of any document, and the verification is legally acceptable. The verification process relies on the authentication path of a given node in the tree to validate the hashed content held by the node, without Merkle tree traversal [28]. The authentication path of a node consists of the set of siblings on the path from that node to the root. The content of a document can thus be authenticated using the hashed content held by the root node together with the corresponding authentication path. For example, and with reference to Fig. 1, to verify \(h_{1}\), the verifier needs the value of \(h_{\text{root}}\) and the authentication path \(h_{34}\) and \(h_{2}\). Hence, the verifier computes \(h_{12}^{*}=h(h_{1}^{*}\mid h_{2})\) and \(h_{\text{root}}^{*}=h(h_{12}^{*}\mid h_{34})\), and then compares \(h_{\text{root}}^{*}\) with \(h_{\text{root}}\) for equality. The choice of the hash function or algorithm [29] depends on many factors such as speed, digest length, number of rounds, collision resistance and ease of implementation in both hardware and software [30]. Fig. 1: Example of a Merkle tree with 4 leaves (depth = 2).
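As a concrete illustration of this construction, the following minimal Python sketch (our own hedged illustration, not the paper's implementation; all function names are ours, and for simplicity the number of documents is assumed to be a power of two) builds the tree bottom-up, extracts the authentication path of a leaf, and performs the verifier's recomputation described above:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash primitive; the paper leaves the choice of algorithm open [29, 30]."""
    return hashlib.sha256(data).digest()

def build_levels(docs):
    """Build the Merkle tree bottom-up; returns a list of levels, leaves first."""
    level = [h(d) for d in docs]          # leaf hashes h_1 .. h_n
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                          # levels[-1][0] is h_root

def auth_path(levels, index):
    """Siblings on the path from leaf `index` to the root, with their side."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1                    # sibling of node i is i XOR 1
        path.append((level[sib], "left" if sib < index else "right"))
        index //= 2
    return path

def verify(doc, path, root):
    """Recompute the root from a document and its authentication path."""
    digest = h(doc)
    for sibling, side in path:
        digest = h(sibling + digest) if side == "left" else h(digest + sibling)
    return digest == root

docs = [b"diploma", b"transcript", b"course list", b"award"]
levels = build_levels(docs)
root = levels[-1][0]
proof = auth_path(levels, 0)               # authentication path for document 1
assert verify(docs[0], proof, root)        # verifier checks h_root* == h_root
assert not verify(b"forged diploma", proof, root)
```

The XOR trick relies on siblings occupying adjacent even/odd positions at each level, mirroring the pairing \(h(h_{1}\mid h_{2})\) of Fig. 1; verification touches only \(\log_{2}n\) hashes, which is why no full tree traversal is needed [28].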
### _Transaction Structure_ When a transaction is generated by our entities, it includes the root hash and a set of hash values, where each hash is the digest of a document belonging to the user: \[\mathsf{h}_{\text{root}}\quad\text{Set of hash values (i.e., $h_{1}$, $h_{2}$,...)}\] ### _Nostrification: Generation of a Document's Qualification_ The proposed system is very versatile and can be applied to any document and authentication process. In this section, we describe our solution using three different scenarios. In the first one, the user has several documents issued by the same entity; in the second, the user has several documents issued by different entities. The third scenario is concerned with the case of one document that is certified by a hierarchy of different institutions. The proposed nostrification solution thus supports three operating scenarios; because of space limitations we present cases 1 and 3. _Case 1_ In this case, the issuer (entity) of the documents forms a Merkle tree where each leaf is the hash of a document (Fig. 2). Next, the entity generates the authentication path for each intermediate node in the tree, the digest of each document, and the root hash of the tree. The entity then signs the root hash and publishes it, along with the hash value of each document, in the blockchain. The entity next issues the documents to the user after stamping each document; the stamp consists of adding to each document the identifier of the transaction already stored in the blockchain. _Case 3_ This case is similar to the two previous cases; however, each tree is dedicated to one document only, and each layer of the tree is associated with an organization that will certify the document. Each organization has a private and a public key. When we want to nostrify a document, we start by selecting all organizations that will certify it. Then, we sign the hash of the document with the private key of the first organization. Finally, we create the right node of the layer with the pair constituted by the signature and the location of the public key of the organization, required to verify the signature. The parent node of the layer is created by hashing the result of the concatenation of the hash from the left node and the signature from the right node. The process is then repeated for each organization. When a layer has been created for each organization, we calculate the last hash, which becomes the root hash, and we have the Merkle tree (Fig. 3). As everyone is able to obtain the public keys of the organizations from the locations included in the tree, it is very simple to verify the authenticity of a document certified by any number of organizations. ### _Nostrification: Verification of a Document's Qualification_ A third party willing to verify a document presented by the user shall first query the blockchain to extract the root hash and the set of hash values, using the transaction identifier stored on the document presented by the user. The third party then generates the digest of the presented document and compares it for equality with one of the hash values stored in the extracted set. Next, the third party regenerates the root hash and compares the result for equality with the root hash downloaded from the blockchain. Finally, it verifies the signature on the root hash generated by the document's issuer, and if the signature is valid, the third party approves the document. In case a third party is willing to verify several documents that were nostrified by more than one entity, the user should send the transaction identifier of the most recently nostrified document, which includes a root hash and a set of hash values. This latter set includes the hash of the most recently nostrified document and the hash of any other document belonging to the user that was nostrified earlier.
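This verification flow can be sketched end to end in a few lines of Python. This is again a hedged illustration under our own naming: `ledger` is a plain dictionary standing in for the blockchain, the per-set root is computed with a simple linear fold rather than the full Merkle tree of the previous sketch, and the `cryptography` package (an assumed third-party dependency) supplies the issuer's Ed25519 signature:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

ledger = {}                                    # stand-in for the blockchain

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def fold_root(hashes):
    """Toy stand-in for the Merkle root of a set of document hashes."""
    root = hashes[0]
    for x in hashes[1:]:
        root = h(root + x)
    return root

def issue(issuer_key, docs):
    """Issuer publishes the signed root hash plus the per-document hashes
    (the transaction structure above) and stamps the documents with the
    returned transaction identifier."""
    hashes = [h(d) for d in docs]
    root = fold_root(hashes)
    tx_id = h(root).hex()[:16]
    ledger[tx_id] = {"root": root, "hashes": hashes,
                     "sig": issuer_key.sign(root),
                     "pub": issuer_key.public_key()}
    return tx_id

def third_party_verify(tx_id, presented_doc):
    """Steps of the verification subsection: fetch the transaction, compare
    the digest, recompute the root, and validate the issuer's signature."""
    tx = ledger[tx_id]
    if h(presented_doc) not in tx["hashes"]:   # digest must match a stored hash
        return False
    if fold_root(tx["hashes"]) != tx["root"]:  # recomputed root must agree
        return False
    try:
        tx["pub"].verify(tx["sig"], tx["root"])  # issuer's signature check
    except InvalidSignature:
        return False
    return True

issuer = Ed25519PrivateKey.generate()
tx = issue(issuer, [b"diploma", b"transcript"])
assert third_party_verify(tx, b"diploma")
assert not third_party_verify(tx, b"tampered diploma")
```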
## V Implementation and Security Analysis In this section, we present a detailed analysis of the proposed approach's security and demonstrate its effectiveness in providing long-term integrity protection, proof of existence, authenticity, non-repudiation and privacy. Additionally, we evaluate its efficiency in terms of processor and time usage. We start with one of the most popular attacks on data integrity, False Data Injection (FDI). In FDI, the attackers intentionally change the data in such a way that the receiver is unable to detect the forged data. Blockchain by design is secured against tampering and revision, which makes it very difficult for an adversary to inject or add a forged or malformed document into the blockchain. In addition, the issuer's signature over the document makes it much more difficult, even impossible, to inject malformed data into the blockchain. In addition to long-term data integrity and proof of existence, our solution ensures authenticity and non-repudiation of origin, because the root hash is signed by the last document's issuer when we have several documents from different entities, and by the documents' issuer when those documents were issued by the same issuer. It is worth noting that the issued documents remain valid even if the issuer's certificate expires or is revoked. In fact, our solution leverages blockchain properties to provide long-term integrity of documents. Privacy concerns usually arise in many applications, particularly in a public distributed database like a blockchain. The privacy concerns are mostly related to the publication of the user's documents. In our solution, privacy is preserved since the digests of the documents are stored in the blockchain, but not the documents themselves. Hence, an adversary would need to invert the digest in order to recover the original document. Our approach maintains the above security services while reducing the computation overhead. In fact, instead of generating a signature for each document issued by the same entity to the user, the entity only needs to sign the root hash. Our solution does introduce a very limited computation overhead related to the hash function applied to generate the hash of each document, which is required to compute the root hash. But compared to the computational overhead of asymmetric encryption, that of the hash function is usually negligible. The system proof of concept was implemented using Python v3 and the Flask web server, and Ganache was used to create the blockchain test server. To evaluate the efficiency of the system we measured the CPU usage, memory consumption, and time for adding a document and for the nostrification process. The _PyCharm_ IDE was used for the measurements, averaged over five runs. In Table I, we can observe that adding a user takes relatively more time because of the deployment of the contract on the blockchain. Our proof of concept allows everyone to verify the authenticity of a document after accessing both the authentication path and the root hash, which are stored in a smart contract that is publicly available on a blockchain. The smart contracts we use are based on the same cryptographic technology used by cryptocurrencies; therefore, they have the same level of security. All details of deployments or updates of contracts are written into transactions to help find the data at any time. In particular, this allows storing data like the username and the user's Merkle tree. When the issuing institution adds a user along with their documents to the blockchain, a contract is deployed, and a transaction initializes it with the data. Fig. 2: Nostrification of several documents issued by the same entity. Fig. 3: Nostrification of a single document by different entities.
At any time, the issuing institution can add several documents to the user's profile, in which case the user's Merkle tree is updated and a new transaction is needed to update the data stored on the blockchain. ## VI Conclusions In this paper, we proposed a blockchain-based nostrification system, where awarding institutions generate a digital certificate and store it in a public but permissioned blockchain, where students and other stakeholders may verify it. We presented a thorough discussion and formal evaluation of the proposed system. In addition, we implemented a prototype of the solution supporting three use cases. The formal analysis shows resistance to all sorts of common attacks, with excellent performance in terms of CPU and memory usage as well as negligible blockchain programming and query times. ## Acknowledgment This project was supported in part by Zayed University Research incentive grant #R22018.
2310.06205
Fair Classifiers that Abstain without Harm
In critical applications, it is vital for classifiers to defer decision-making to humans. We propose a post-hoc method that makes existing classifiers selectively abstain from predicting certain samples. Our abstaining classifier is incentivized to maintain the original accuracy for each sub-population (i.e. no harm) while achieving a set of group fairness definitions to a user specified degree. To this end, we design an Integer Programming (IP) procedure that assigns abstention decisions for each training sample to satisfy a set of constraints. To generalize the abstaining decisions to test samples, we then train a surrogate model to learn the abstaining decisions based on the IP solutions in an end-to-end manner. We analyze the feasibility of the IP procedure to determine the possible abstention rate for different levels of unfairness tolerance and accuracy constraint for achieving no harm. To the best of our knowledge, this work is the first to identify the theoretical relationships between the constraint parameters and the required abstention rate. Our theoretical results are important since a high abstention rate is often infeasible in practice due to a lack of human resources. Our framework outperforms existing methods in terms of fairness disparity without sacrificing accuracy at similar abstention rates.
Tongxin Yin, Jean-François Ton, Ruocheng Guo, Yuanshun Yao, Mingyan Liu, Yang Liu
2023-10-09T23:07:28Z
http://arxiv.org/abs/2310.06205v1
# Fair Classifiers that Abstain without Harm ###### Abstract In critical applications, it is vital for classifiers to defer decision-making to humans. We propose a post-hoc method that makes existing classifiers selectively abstain from predicting certain samples. Our abstaining classifier is incentivized to maintain the original accuracy for each sub-population (i.e. no harm) while achieving a set of group fairness definitions to a user specified degree. To this end, we design an Integer Programming (IP) procedure that assigns abstention decisions for each training sample to satisfy a set of constraints. To generalize the abstaining decisions to test samples, we then train a surrogate model to learn the abstaining decisions based on the IP solutions in an end-to-end manner. We analyze the feasibility of the IP procedure to determine the possible abstention rate for different levels of unfairness tolerance and accuracy constraint for achieving no harm. To the best of our knowledge, this work is the first to identify the theoretical relationships between the constraint parameters and the required abstention rate. Our theoretical results are important since a high abstention rate is often infeasible in practice due to a lack of human resources. Our framework outperforms existing methods in terms of fairness disparity without sacrificing accuracy at similar abstention rates. ## 1 Introduction Enabling machine learning (ML) systems to abstain from decision-making is essential in high-stakes scenarios. The development of classifiers with appropriate abstention mechanisms has recently attracted significant research attention and found various applications [1, 2, 3, 4, 5]. In this paper, we demonstrate that allowing a classifier to abstain judiciously enhances fairness guarantees in model outputs while maintaining, or even improving, the accuracy for each sub-group in the data. Our work is primarily anchored in addressing the persistent dilemma of the fairness-accuracy tradeoff - a prevalent constraint suggesting that the incorporation of fairness into an optimization problem invariably compromises achievable accuracy [6]. To circumvent this problem, we propose to use classifiers with abstentions. Conventionally, the fairness-accuracy tradeoff arises due to the invariance of _data distribution_ and the rigid _model hypothesis space_. Intuitively, by facilitating abstentions within our model, we introduce a framework that permits the relaxation of both limiting factors (distribution & model space). This transformation occurs as the abstention mechanism inherently changes the distributions upon which the classifier's accuracy and fairness are computed. In addition, since the final model output is a combination of the abstention decision and the original model prediction, the model hypothesis space expands. This adaptability paves the way for our approach to breaking the fairness-accuracy curse. There exist several works that explore abstaining classifiers to achieve better fairness [4, 7]. In contrast, we aim to achieve the following four requirements simultaneously, a rather ambitious goal that separates our work from the prior ones: * **Feasibility of Abstention**: We need to determine if achieving fairness with no harm is feasible or not at a given abstention rate. * **Compatible with Multiple Common Fairness Definitions**: We seek a flexible solution that can adapt to different fairness definitions. 
* **Fairness Guarantee**: We aim for a solution that provides a strong guarantee for fairness violations, i.e., imposes a hard constraint on disparity. * **No Harm**: We desire a solution that provably guarantees that each group's accuracy is no worse than that of the original (i.e. abstaining-free) classifier. We propose a post-hoc solution that abstains from a given classifier to achieve the above requirements. Our solution has two stages. In Stage I, we use an integer programming (IP) procedure that decides whether it is feasible to satisfy all our requirements with a specific abstention rate. If feasible, Stage I will return the optimal abstention decisions for each training sample. However, a solution that satisfies all our constraints might not exist. To expand the feasible space of the solution, we also selectively flip the model prediction. Stage I informs us how to abstain on the training samples; to extend the abstention (and flipping) decisions to unseen data, Stage II trains a surrogate model to encode and generalize the optimal abstention and flipping patterns in an end-to-end manner. We name our solution \(\underline{\mathsf{F}}\)air \(\underline{\mathsf{A}}\)bstention classifier with No harm (\(\mathsf{FAN}\)). Compared to prior works, our solution guarantees the four desired properties mentioned before, as shown in Table 1. To the best of our knowledge, our method is the first to develop an abstention framework that incorporates a variety of constraints, including _feasible abstention rates_, _compatibility with different fairness definitions_, _fairness guarantees_ and _no harm_. We theoretically analyze the conditions under which the problem is feasible - our work is the first to characterize the feasibility region for an abstaining mechanism to achieve some of the above-listed constraints. We have carried out extensive experiments to demonstrate the benefits of our solution compared to strong existing baselines. ### Related Work **Fair Classification.** Our work relates broadly to the fairness in machine learning literature [8, 9, 10, 11, 12]. It is particularly inspired by the reported fairness-utility tradeoff [6, 11]. One way to resolve the problem is to decouple the training of classifiers to guarantee that each group receives a model that is no worse than the baseline, but this line of work often requires knowing the sensitive attribute at test time and is less flexible in incorporating different fairness definitions [12]. There is a wide range of approaches available to achieve fairness, including pre-processing methods [13, 14, 15, 3], in-processing techniques [16, 17, 18], and post-processing methods [12, 19, 20]. Our work specifically focuses on post-processing techniques. **Abstain Classifier.** The existing literature provides an expansive exploration of abstention or selective classifiers [21, 22, 23, 1, 2]. Typically, selective classification predicts outcomes for high-certainty samples and abstains on lower-certainty ones, where the softmax outputs of the classifier are employed [24, 25]. Interestingly, [26] highlights a potential pitfall, suggesting that selective classifiers can inadvertently exacerbate fairness problems if not used judiciously. This finding underscores the importance of careful application and has inspired various fair selective methodologies [4, 7, 27, 28, 29]. However, these methodologies primarily focus on regression or incorporate fairness constraints in their optimization objectives to create selective classifiers.
For instance, LTD [7] introduces a penalty term to address high abstention rates and unfairness; however, it lacks robust mechanisms for controlling both abstention rates and fairness. On the other hand, FSCS [4] presents an abstention framework specifically designed to reduce precision disparities among different groups, but it does not accommodate other fairness definitions. Additionally, neither of these approaches offers a way to monitor or control the accuracy reduction for each group. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Related Works** & **Abstention Rate Control** & **Multiple Fairness** & **Fairness Guarantee** & **No Harm** \\ \hline \hline LTD [7] & & \(\checkmark\) & & \\ \hline FSCS [4] & \(\checkmark\) & & & \\ \hline FAN (Our work) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of key properties of our work and closely related works. Our paper proposes a novel approach that first utilizes exact integer programming to establish the optimal classifier that satisfies all the aforementioned constraints, and then trains a surrogate model on the output of said IP. The most relevant work to ours is [5], which applies Mixed-Integer Programming (MIP) to the selective classification problem, but differs in terms of the fairness, no-harm, and feasibility guarantees. Furthermore, we introduce distinctive strategies that can be deployed without requiring knowledge of the true label. These strategies involve training a model based on the IP's output, which not only enhances efficiency but also substantially reduces computational requirements, especially in large-scale problems. In contrast, the MIP design proposed in [5] is limited to moderately sized problems and relies on the availability of true labels at inference time. ## 2 Preliminaries and Overview Let \(\mathcal{D}\) be a data distribution defined for a set of random variables \((X,Z,Y)\), representing the features (e.g. the application profile in a loan application), the protected attribute (e.g. gender or race), and the label (e.g. qualified or not in a loan application), respectively. Consider a discrete \(Z\in\mathcal{Z}\) and a binary classification problem where \(Y=1\) denotes the positive (i.e. favorable) outcome, \(Y=0\) denotes the negative (i.e. unfavorable) one, and \(X\in\mathcal{X}\). We assume a training dataset sampled i.i.d. from \(\mathcal{D}\) with \(N\) samples: \((x_{1},z_{1},y_{1}),(x_{2},z_{2},y_{2}),\cdots,(x_{N},z_{N},y_{N})\). We aim to develop a post-hoc framework, named FAN, that takes in a trained classifier \(h:\mathcal{X}\rightarrow[0,1]\), i.e. the baseline classifier, and outputs its abstaining decisions. Denote by \(S=h(X)\in[0,1]\) the _confidence score_ for individual \(X\), and let the predicted label be \(\hat{Y}_{b}=1[h(X)\geq t_{0}]\) for a user-specified threshold \(t_{0}\). Figure 1 shows the overview of FAN. We use two modules, the Abstention Block (AB) and the Flip Block (FB), to determine which samples to abstain on. The goal of AB is to decide which samples to abstain on in order to satisfy our set of constraints; the goal of FB is to expand the feasibility region of the final decision outcomes, enabling a larger feasibility region of the formulated problem (see Section 3.1 for the explanation). AB, i.e.
\(h_{A}:[\mathcal{X},h(\mathcal{X})]\rightarrow\{0,1\}\), takes the feature \(X\) of the given individual and the corresponding confidence score \(S\) predicted by the baseline model as inputs, and decides whether to abstain from the prediction. \(h_{A}\left(X,h(X)\right)=0\) indicates that the prediction should be abstained. Samples that are not abstained by AB are forwarded to FB. FB, i.e. \(h_{F}:[\mathcal{X},h(\mathcal{X})]\rightarrow\{0,1\}\), decides whether to flip the prediction of \(h\) or not, which gives the final decision of FAN: \[\hat{Y}=\left\{\begin{array}{cl}1-\hat{Y}_{b}&\text{if }h_{F}(X,h(X))=1\\ \hat{Y}_{b}&\text{otherwise}\end{array}\right. \tag{1}\] ## 3 Method We explain how we formulate the problem to achieve our goal and present our two-stage algorithm. Figure 1: Overview of FAN. We first get the baseline classifier \(h\)'s confidence score on the data, i.e. \(s=h(x)\). We then forward the confidence scores to the Abstention Block AB, where a model \(h_{A}\) either tells us to abstain (i.e. \(h_{A}(x,s)=0\)) or to pass the sample to the Flip Block FB (i.e. \(h_{A}(x,s)=1\)). If abstained, this is the final outcome. Otherwise, FB decides for the unabstained samples whether their predicted labels \(\hat{y}_{b}=1[s\geq t_{0}]\) should be flipped or not. \(\hat{y}\) is the final outcome. ### Problem Formulation In general, improving fairness often results in decreased accuracy [6]. In our case, though, we enable abstention, which allows us to prioritize fairness while still maintaining accuracy. Furthermore, we desire a formulation that imposes hard constraints for both fairness and accuracy, as compared to prior works that only incorporate a soft penalty term into the objective function [4, 7]. Specifically, we use the following optimization problem to obtain \(h_{A},h_{F}\) in our AB and FB: \[\min_{h_{A},h_{F}} \mathbb{E}_{\mathcal{D}}\Big{[}\Big{(}h_{F}(X,h(X))\big{(}1-\hat{ Y}_{b}\big{)}+\big{(}1-h_{F}(X,h(X))\big{)}\hat{Y}_{b}\Big{)}\neq Y\mid h_{A} \left(X,h(X)\right)=1\Big{]}\] ( **Error Rate** ) s.t. \[\mathscr{D}(h_{A},h_{F},z,z^{\prime})\leq\mathscr{E},\forall z,z^{ \prime}\in\mathcal{Z}\] ( **Disparity** ) \[\mathbb{E}_{\mathcal{D}}\Big{[}h_{A}\left(X,h(X)\right)\mid Z=z \Big{]}\geq 1-\delta_{z}\] ( **Abstention Rate** ) \[\mathbb{E}_{\mathcal{D}}\Big{[}\Big{(}h_{F}(X,h(X))\big{(}1-\hat{ Y}_{b}\big{)}+\big{(}1-h_{F}(X,h(X))\big{)}\hat{Y}_{b}\Big{)}\neq Y\mid h_{A} \left(X,h(X)\right)=1,Z=z\Big{]}\leq e^{\prime}_{z},\] ( **No Harm** ) where \(e^{\prime}_{z}=(1+\eta_{z})e_{z}\) and \(e_{z}=\mathbb{E}_{\mathcal{D}}\left[h(X)\neq Y\mid Z=z\right]\) is the error rate of the baseline optimal classifier \(h\). \(\eta_{z}\) is a "slack" we allow for the no-harm constraint, chosen such that \(0\leq(1+\eta_{z})e_{z}\leq 1\). **Error Rate.** Our main objective is to minimize the 0-1 loss over all samples that are not abstained. **Disparity.** We enforce a fairness constraint between every pair of groups \(z,z^{\prime}\in\mathcal{Z}\) by bounding the disparity \(\mathscr{D}\) by the predefined hyperparameter \(\mathscr{E}\). Several fairness definitions can be applied. In this paper, we utilize three specific fairness notions: Demographic Parity (DP) [8], Equal Opportunity (EOp) [9], and Equalized Odds (EOd) [9]. Details are shown in Table 3. **Abstention Rate.** Although abstention can lead to better model performance, a high abstention rate can be impractical due to a lack of human resources. Therefore, it is crucial to limit the abstention rate.
To address this issue, we set a maximum threshold for the proportion of instances on which the system can abstain in each group: the abstention rate must not exceed a user-specified threshold \(\delta_{z}\) for each group \(z\). Intuitively, this means that we cannot simply decide to forgo giving predictions on the majority of the samples (or the majority from a certain group), because even though this would satisfy all the other constraints, it would not be practically useful. Note that to introduce more flexibility, we enable independent control of the abstention rate for each group. **No Harm.** We ensure the classifier does not compromise the accuracy of the groups. The extent of relaxation is determined by a user-specified \(\eta_{z}\), which establishes the maximum allowable reduction in accuracy. When \(\eta_{z}>0\), the IP permits a certain degree of relaxation of the error rate bound for each group. Conversely, when \(\eta_{z}<0\), a lower group error rate is mandated. **Why We Need FB.** The fairness and no-harm constraints specified in **Disparity** and **No Harm** jointly impose challenging constraints for the decision rule to satisfy. For instance, the no-harm constraint only allows certain predictions to be abstained, as this constraint essentially requires us to abstain more from wrongly predicted samples. When a classifier is relatively accurate, and when the abstention rate is constrained, we are left with only a small feasibility region. The FB block opens up more design space for the abstention policy: if we flip properly, the disparity and no-harm conditions can become easier to satisfy. Note that flipping model predictions is a popular post-hoc way of expanding the model decision space to improve fairness [11]. We illustrate it using the following example: **Example 3.1**.: Consider Demographic Parity (DP) as the fairness measure, and imagine a system with two groups, where the allowed abstention rate is \(\delta_{1}=\delta_{2}=0.1\). If we set \(\varepsilon=0.1\) as the permissible disparity in demographic parity (DP), and according to the baseline classifier the acceptance rates for groups 1 and 2 are 0.3 and 0.7, respectively, then even if we abstain only on positive samples in group 2, the adjusted acceptance rates would be 0.3 and 0.6, respectively, while the resulting disparity (0.3) is still greater than \(\varepsilon\). However, if flipping is allowed, we can further flip a 0.2 fraction of the positive samples of group 2 to negative, resulting in final adjusted acceptance rates of 0.3 and 0.4. Footnote *: To keep the example simple, we do not consider accuracy here; in our formulation, **Error Rate** and **No Harm** together ensure that flipping would not cause harm but rather incentivize improvements in accuracy. Directly solving the optimization problem in Section 3.1 is challenging because it would require joint training of \(h_{A}\) and \(h_{F}\). In addition, the analysis of its feasibility would rely heavily on the hypothesis space for learning \(h_{A}\) and \(h_{F}\). Lastly, the composition of multiple sets of constraints adds to the difficulty of solving and analyzing it. To address these challenges, we propose a _two-stage approach_ to train \(h_{A}\) and \(h_{F}\): instead of solving the inflexible and costly optimization problem on the fly, it learns the optimal abstention patterns end to end. **Stage I: Integer Programming.** We approximate \(h_{A}(X,h(X))\) and \(h_{F}\left(X,h(X)\right)\) by binary parameters.
Specifically, for a dataset with \(N\) individuals, \(\omega=\{\omega_{n}\}_{N}\), where \(\omega_{n}=h_{A}(x_{n},h(x_{n}))\in\{0,1\}\), and \(f=\{f_{n}\}_{N}\), where \(f_{n}=h_{F}(x_{n},h(x_{n}))\in\{0,1\}\). This yields the following Integer Programming problem **IP-Main**, an empirically solvable version of the optimization in Section 3.1: \[\min_{\omega,f} \sum_{n=1}^{N}\omega_{n}\cdot 1[\hat{y}_{n}\neq y_{n}]:=\sum_{n=1}^{N }\omega_{n}\cdot 1\left[(\hat{y}_{b,n}(1-f_{n})+(1-\hat{y}_{b,n})f_{n})\neq y_{n}\right]\] ( **IP-Main** ) s.t. \[\widehat{\mathscr{D}}\leq\widehat{\mathscr{E}},\forall z,z^{ \prime}\in\mathcal{Z}\] ( **Disparity** ) \[\frac{\sum_{n=1}^{N}\omega_{n}\cdot 1[z_{n}=z]}{\sum_{n=1}^{N}1[z_{n }=z]}\geq(1-\delta_{z}),\forall z\in\mathcal{Z}\] ( **Abstention Rate** ) \[\sum_{n=1}^{N}\omega_{n}\cdot 1[\hat{y}_{n}\neq y_{n},z_{n}=z] \leq\left(\sum_{n=1}^{N}\omega_{n}\cdot 1[z_{n}=z]\right)\cdot(1+\eta_{z})e_{z}, \forall z\in\mathcal{Z}\] ( **No Harm** ) \[\omega_{n}\in\{0,1\},f_{n}\in\{0,1\},\forall n.\] Solving it gives us the abstention (i.e. \(\omega\)) and flipping (i.e. \(f\)) decisions for each training sample. The empirical versions of the (**Disparity**) constraints can be found in Table 4 in the Appendix. **Stage II: Learning to Abstain.** Although the IP results offer an optimal solution for the training data, they are not applicable at inference time, for two main reasons. First, the ground-truth label \(y\), a necessary input to the IP, is inaccessible during inference. Second, solving the IP is too time-consuming to perform during inference. To solve this problem, we train surrogate models to learn the abstaining and flipping patterns in an end-to-end manner (i.e. from features to abstention and flipping decisions). We use the IP solutions (on the training samples) as the surrogate models' training data, and we want the surrogate models to generalize the patterns to the unseen test samples. Figure 2 illustrates our design, and we describe the details in Appendix B. Note that we only need to solve **IP-Main** and train the surrogate models during the training process; when we deploy FAN, we only need to run inference on the trained AB and FB, and therefore the inference overhead is small. Figure 2: Illustration of the two-stage design.
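To make Stage I concrete, below is a minimal sketch (ours, not the authors' released code) of **IP-Main** on synthetic data using the PuLP solver, an assumed dependency. Products of binaries such as \(\omega_{n}f_{n}\) are linearized with auxiliary variables; as a simplification that the paper does not make, the denominators of the demographic-parity constraint are fixed at their lower bounds \((1-\delta_{z})N_{z}\) so that the constraint becomes linear (the exact empirical forms are in the paper's Table 4), and the hyperparameter values are illustrative:

```python
import numpy as np
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

rng = np.random.default_rng(0)
N = 200
z = rng.integers(0, 2, N)                      # group membership z_n
y = rng.integers(0, 2, N)                      # true labels y_n
yb = np.where(rng.random(N) < 0.85, y, 1 - y)  # baseline predictions (~85% acc.)
eps, delta, eta = 0.05, 0.2, 0.0               # illustrative epsilon, delta_z, eta_z

prob = LpProblem("IP_Main", LpMinimize)
w = [LpVariable(f"w_{n}", cat="Binary") for n in range(N)]  # 1 = not abstained
f = [LpVariable(f"f_{n}", cat="Binary") for n in range(N)]  # 1 = flip prediction
u = [LpVariable(f"u_{n}", cat="Binary") for n in range(N)]  # u_n = w_n * mismatch_n
a = [LpVariable(f"a_{n}", cat="Binary") for n in range(N)]  # a_n = w_n * yhat_n

# mismatch_n and yhat_n are affine in f_n: flipping a correct prediction creates
# an error, flipping a wrong one repairs it; yhat_n is the post-flip prediction.
mis = [f[n] if yb[n] == y[n] else 1 - f[n] for n in range(N)]
yhat = [1 - f[n] if yb[n] == 1 else f[n] for n in range(N)]
for n in range(N):                             # standard product linearization
    prob += u[n] <= w[n]
    prob += u[n] <= mis[n]
    prob += u[n] >= w[n] + mis[n] - 1
    prob += a[n] <= w[n]
    prob += a[n] <= yhat[n]
    prob += a[n] >= w[n] + yhat[n] - 1

prob += lpSum(u)                               # objective: errors on kept samples

idx = {g: [n for n in range(N) if z[n] == g] for g in (0, 1)}
D = {g: (1 - delta) * len(idx[g]) for g in (0, 1)}
for g in (0, 1):
    e_g = float(np.mean(yb[idx[g]] != y[idx[g]]))           # baseline group error
    prob += lpSum(w[n] for n in idx[g]) >= D[g]             # (Abstention Rate)
    prob += lpSum(u[n] for n in idx[g]) <= (1 + eta) * e_g * lpSum(
        w[n] for n in idx[g])                               # (No Harm)
# (Disparity) under DP, with denominators fixed at (1 - delta_z) * N_z:
gap = D[1] * lpSum(a[n] for n in idx[0]) - D[0] * lpSum(a[n] for n in idx[1])
prob += gap <= eps * D[0] * D[1]
prob += gap >= -eps * D[0] * D[1]

prob.solve(PULP_CBC_CMD(msg=False))
print("kept:", sum(v.value() for v in w), "flipped:", sum(v.value() for v in f))
```

At this scale the CBC solver finishes in seconds; Stage II then fits the surrogate models on the resulting \(\omega\) and \(f\).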
## 4 Theoretical Analysis: Feasibility and Fairness in Abstention The selection of the hyperparameters (including \(\delta_{z},\eta_{z},\varepsilon\)) plays a crucial role in training AB and FB, and therefore in the overall performance of FAN. A higher level of disparity restriction can intuitively result in a higher rate of data samples being abstained from classification, while a more stringent accuracy requirement can also increase the abstention rate and make the problem infeasible. In this section, we focus on the theoretical analysis of Stage I. Specifically, we answer the following research questions: **Under what conditions will Problem IP-Main become feasible for each fairness notion? What is the relationship between the hyperparameters?** We derive the feasibility condition for the IP formulation (**IP-Main**) in Stage I. The task of theoretically bounding the performance gap between predictions (surrogate models in Stage II) and ground truth (IP solution in Stage I) is generally challenging as the models are neural networks; we therefore study it empirically in Section 5. We summarize the key parameters used in this section: \begin{tabular}{c c c c c} \hline \hline \(\varepsilon\) & \(\delta_{z}\) & \(e_{z}\) & \(\eta_{z}\) & \(\tau_{z}\) (defined in Sec. 4.1) \\ \hline \hline Fairness disparity & Abstention rate allowed for \(z\) & Error rate for \(z\) by \(h\) & Error rate slack or restrictiveness compared to baseline & Qualification rate of group \(z\) \\ \hline \hline \end{tabular} ### Feasibility Define \(\tau_{z}=\frac{\sum_{n}1[z_{n}=z]y_{n}}{\sum_{n}1[z_{n}=z]}\) as the proportion of qualified individuals of group \(z\), i.e., the qualification rate of group \(z\). We prove the following results for demographic parity: **Theorem 4.1**.: _(Feasibility of Demographic Parity (DP)) (**IP-Main**) is feasible under DP if and only if \(\forall\bar{z},\underline{z}\in\mathcal{Z}\) such that \(\tau_{\bar{z}}\geq\tau_{\underline{z}}\),_ \[\delta_{\bar{z}}\geq 1-\frac{1+\varepsilon+(1+\eta_{\underline{z}})e_{\underline{z}}- \tau_{\bar{z}}+\tau_{\underline{z}}}{1-(1+\eta_{\bar{z}})e_{\bar{z}}}. \tag{2}\] Theorem 4.1 establishes the feasibility of the IP for achieving Demographic Parity. Specifically, the theorem gives the minimum value of \(\delta_{\bar{z}}\) that is required, subject to upper bounds on disparity and a relaxation parameter for the error rate. This highlights the importance of abstention by the more qualified group (higher qualification rate) for achieving a fair model without compromising accuracy, while the less qualified group need not abstain. Later, in Section 4.2, we provide further treatment to remedy the concern over imbalanced abstention rates. Specifically, for the two-group scenario (\(\mathcal{Z}=\{\bar{z},\underline{z}\}\)), our results demonstrate that increasing the values of \(\eta_{\underline{z}}\) and \(\eta_{\bar{z}}\) leads to smaller values of \(\delta_{\bar{z}}\), indicating that a relaxation of the error rate allows the more qualified group to abstain on fewer samples. Additionally, a looser bound on disparity also enables the more qualified group to abstain on fewer samples. In practice, determining an appropriate value of \(\delta_{\bar{z}}\) is of paramount importance. To this end, we present the following illustrative example. **Example 4.2**.: a) If \(\tau_{\bar{z}}=\tau_{\underline{z}}\), i.e., the dataset is balanced, and \((1+\eta_{\bar{z}})e_{\bar{z}}<1\), we have that \(1-\frac{1+\varepsilon+(1+\eta_{\underline{z}})e_{\underline{z}}-\tau_{\bar{z}}+\tau_{ \underline{z}}}{1-(1+\eta_{\bar{z}})e_{\bar{z}}}<0\), and therefore the problem is always feasible. b) If \(\tau_{\bar{z}}-\tau_{\underline{z}}=0.3\), \(e_{\bar{z}}=e_{\underline{z}}=0.1\), and \(\eta_{\bar{z}}=\eta_{\underline{z}}=0\), then for \(\varepsilon=0.05\) we need \(\delta_{\bar{z}}\geq 0.056\), while for \(\varepsilon=0.1\), \(\delta_{\bar{z}}\) has no restriction. Further, for Equal Opportunity and Equalized Odds we have the following results: **Theorem 4.3**.: _(Feasibility of Equal Opportunity (EOp)) **IP-Main** is always feasible under EOp._ **Theorem 4.4**.: _(Feasibility of Equalized Odds (EOd)) **IP-Main** is always feasible under EOd._ Theorems 4.3 and 4.4 demonstrate the feasibility of the IP under Equal Opportunity and Equalized Odds. Specifically, regardless of the hyperparameter values, our results indicate that a feasible solution to the IP problem always exists. Notably, our results imply that even when the abstention rate is 0, the IP can adjust the flip decisions \(f_{n}\) alone to satisfy the constraints on disparate impact, abstention rate, and no harm. More discussion on this can be found in Appendix C.
### Equal Abstention Rate An objection may arise if the model abstains excessively on a particular group while rarely doing so on others. Moreover, if such abstention occurs solely on data samples with positive or negative labels, further concerns may be raised. In this section, we delve into a scenario where the differences in abstention rates across groups and labels are constrained. We show that under equal abstention rate constraints, the performance of the IP becomes worse compared to Problem **IP-Main**. \[\min_{\omega,f} \textbf{IP-Main}\] (3) s.t. \[\left|\frac{\sum_{n=1}^{N}\omega_{n}1[z_{n}=z,y_{n}=y]}{\sum_{n=1 }^{N}1[z_{n}=z,y_{n}=y]}-\frac{\sum_{n=1}^{N}\omega_{n}1[z_{n}=z^{\prime},y_{n }=y]}{\sum_{n=1}^{N}1[z_{n}=z^{\prime},y_{n}=y]}\right|\leq\sigma_{y},\forall z,z^{\prime}\in\mathcal{Z},y\in\{0,1\}\] **Theorem 4.5**.: _(Feasibility of Demographic Parity with Constrained Disparity of Abstention Rates) A sufficient condition for Problem 3 to be feasible is \(\forall\bar{z},\underline{z}\in\mathcal{Z}\) such that \(\tau_{\bar{z}}\geq\tau_{\underline{z}}\),_ \[\delta_{\underline{z}}\leq 2\tau_{\underline{z}}\sigma_{1},\quad\delta_{\bar{z}} \geq 1-\frac{1+\varepsilon+(1+\eta_{\underline{z}})e_{\underline{z}}-\tau_{ \bar{z}}+\tau_{\underline{z}}}{1-(1+\eta_{\bar{z}})e_{\bar{z}}}. \tag{4}\] We similarly show that for Equal Opportunity and Equalized Odds the problem remains feasible even under equal abstention rate constraints. We defer these details to the Appendix. ## 5 Experiments In this section, we evaluate FAN using various real-world datasets. Our goal is to compare it against current state-of-the-art methods and to better understand its components. We start by explaining our experimental settings and then move on to how FAN performs in comparison to other methods. We also take a deep dive into the separate components of FAN to get a clearer picture of how each contributes to the overall performance. Additionally, we compare our trained models, specifically AB and FB, with the integer programming (IP) solutions. This gives us further insights into the effectiveness, robustness, and real-world potential of FAN†. Footnote †: We’ll release the code once the paper is accepted. In our study, we primarily focus on a setting involving only two distinct groups. For experiments that extend to multiple groups, we direct the reader to Appendix E. Throughout this section, we set \(\delta_{z}=\delta\) across all groups, meaning that each group is constrained by the same upper limit on the permissible rate of abstention. We rigorously evaluate our proposed method FAN against two established baselines: LTD [7] and FSCS [4], as summarized in Table 1. For the LTD baseline, we employ the learning-to-reject framework, specifically referring to Equation 4 in [7]‡. We draw upon three real-world datasets for our experiments: Adult [30], Compas [31], and Law [31]. During the training phase, we adhere to the Equalized Odds fairness criterion, incorporating two separate constraints. To facilitate a straightforward interpretation of our findings, we compute the average disparity in both the true positive and true negative rates. Due to space constraints, the details of data preprocessing and model settings can be found in Appendix E. Footnote ‡: It should be noted that the learning-to-defer schema requires an additional Decision Maker, which does not apply to our focal scenario.
**Baseline Optimal.** For FAN, we use an optimal classifier trained solely to minimize the loss as the baseline \(h\), naming it the "baseline optimal". We use a Multi-Layer Perceptron (MLP) to train the baseline optimal, AB, and FB. Details can be found in Appendix E. Table 7 in the Appendix shows the performance of the baseline optimal model on both the training and test datasets (including overall accuracy and group accuracy, along with disparities measured under DP, EOp, and EOd). Specifically, the overall training accuracies on Adult, Compas and Law are 92.08%, 72.33%, and 82.86%, respectively. Figure 3: Comparison of FAN with baseline algorithms on training data. The first row shows the disparity reduction (compared to baseline optimal) of each algorithm, while the second row shows the minimum group accuracy increase compared to baseline optimal. (a) Evaluation on Adult under Equalized Odds. (b) Evaluation on Compas under Equal Opportunity. (c) Analysis on Law under Demographic Parity. For FAN, \(\eta_{z}\) is set to \(0\), i.e., no tolerance for reducing accuracy. 5 individual runs are performed. **Overall Performance.** Figure 3 illustrates how FAN compares to LTD and FSCS across different datasets and abstention rates when trained on the same data. In the LTD method, abstention is introduced as a penalty term in the objective function, making it difficult to precisely control the abstention rate. To work around this, we adjust the penalty term's coefficient and chart the resulting actual abstention rate. The first row of the figure highlights the disparity reduction each algorithm achieves compared to the baseline optimal \(h\). The second row shows the _minimum increase in group accuracy_ over all groups. Generally, FAN yields the most significant reduction in disparity without sacrificing much accuracy, unlike FSCS and LTD, which focus more on fairness at the cost of accuracy. Using the **No Harm** constraint, FAN often matches or even surpasses the baseline optimal classifier in terms of accuracy. Nevertheless, there are a few instances where accuracy slightly drops, which we discuss further below. **Stage II Analysis: Performance of the Surrogate Models.** The no-harm constraint is imposed to encourage FAN to maintain or even improve group-level accuracy when compared to the baseline. The integer programming formulation in Equation **IP-Main** is designed to strictly satisfy this constraint. However, FAN may not strictly meet it due to the surrogate model training of AB and FB in what we refer to as Stage II. As seen in the second row of Figure 3, there are instances where accuracy slightly decreases. Table 2 provides insights into this by illustrating the training accuracy of AB and FB; it suggests that the surrogate models are effective at learning from the IP outcomes. Figure 20 also shows the loss of AB and FB on Adult under Demographic Parity, as an example. **Comparison to the Baseline Optimal Classifier.** Our objective is to rigorously examine the specific impact of FAN on both disparity and accuracy. To that end, we conduct a comprehensive set of experiments. Figure 4 depicts the performance metrics when applying Equal Opportunity on the training and test data for the Adult dataset. These results offer a comparative benchmark against the baseline optimal classifier \(h\), facilitating a precise assessment of the degree to which FAN either enhances or compromises model performance along both the fairness and accuracy dimensions.
For a more expansive view, additional results concerning multiple datasets and fairness criteria are provided in Appendix E. The figure elucidates that FAN successfully optimizes for a more equitable model without incurring a loss in accuracy. Notably, as the permissible abstention rate (\(\delta\)) increases, both demographic groups experience a significant improvement in accuracy, while the overall disparity is simultaneously reduced. These findings indicate that FAN has the ability to train models that are both fairer and more effective, particularly when higher levels of abstention are allowed. **IP Solution.** Figure 5, corresponding to Figure 4, displays the IP solution. Remarkably, the figure demonstrates that each constraint in Problem **IP-Main** is strictly satisfied. When examining the influence of an increasing \(\varepsilon\), we note a diminishing effect on disparity reduction. This phenomenon can be attributed to the IP formulation, which inherently allows for a greater degree of disparity, thereby alleviating the necessity for stringent disparity control measures. Intriguingly, as \(\delta\) increases, the gain in accuracy for both demographic groups remains relatively stable across different configurations. This stability, however, is mainly because the accuracy is already approaching an optimal level in this specific experimental setup, leaving minimal scope for substantial further improvements. \begin{table} \begin{tabular}{|l l|c c c|c c c|c c c|} \hline \multicolumn{2}{|c|}{Accuracy (\%)} & \multicolumn{3}{c|}{Adult} & \multicolumn{3}{c|}{Compas} & \multicolumn{3}{c|}{Law} \\ & & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{EOp} & \multicolumn{2}{c}{EOd} & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{EOp} & \multicolumn{2}{c}{EOd} & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{EOp} & \multicolumn{2}{c}{EOd} \\ \hline \hline \multirow{2}{*}{\(\delta=0.1\)} & AB & 94.23 & 93.29 & 93.55 & 90.26 & 90.32 & 95.22 & 96.09 & 91.42 & 92.03 \\ & FB & 94.26 & 94.01 & 94.54 & 79.87 & 79.57 & 76.15 & 88.12 & 91.38 & 90.14 \\ \hline \multirow{2}{*}{\(\delta=0.2\)} & AB & 92.20 & 89.93 & 88.48 & 82.94 & 90.32 & 91.44 & 96.12 & 95.75 & 92.11 \\ & FB & 97.79 & 95.33 & 95.86 & 86.07 & 79.63 & 77.03 & 87.90 & 87.90 & 90.29 \\ \hline \multirow{2}{*}{\(\delta=0.3\)} & AB & 89.94 & 87.42 & 87.43 & 80.28 & 79.99 & 82.82 & 86.50 & 86.39 & 94.82 \\ & FB & 97.18 & 96.31 & 96.33 & 87.72 & 88.55 & 85.53 & 93.00 & 93.92 & 88.17 \\ \hline \end{tabular} \end{table} Table 2: Performance Evaluation of Surrogate Model Training. We use an MLP as the network structure for both AB and FB. Each cell displays the average training accuracy (over 5 individual runs) of AB (first row) and FB (second row) for the specific \(\delta\), fairness notion, and dataset employed. Generally, both AB and FB demonstrate a strong ability to learn the IP outcomes effectively and achieve high accuracy, underscoring the success of Stage II. It is worth mentioning that, under some settings, the accuracy on Compas is low; for example, the training accuracy of FB under EOd on Compas with abstention rate \(0.1\) is 76.15%. However, the issue lies not with our Stage II design but rather with the limitations of the MLP. As demonstrated in Table 7, the performance of the baseline optimal classifier (training accuracy 72.33%) on Compas is also low.
A side-by-side comparison of Figure 5 and Figure 4 reveals a strong alignment between the results derived from the training data and those obtained from the IP formulation. This concordance underscores the successful learning of the IP solution by the AB and FB models. ## 6 Conclusion In this work, we develop an algorithm for training classifiers that abstain in order to obtain a favorable fairness guarantee. Simultaneously, we show that our abstaining process incurs much less harm to each individual group's baseline accuracy compared to existing algorithms. We theoretically analyzed the feasibility of our goal and related multiple system design parameters to the required abstention rates. We empirically verified the benefits of our proposal. Figure 4: Disparity reduction and increased accuracy for each group on Adult, compared to the baseline optimal classifier. The first row shows the performance on the training data while the second row is on the test data. (a) demonstrates the disparity reduction in terms of Equal Opportunity, while (b) and (c) showcase the increases in accuracy for groups \(1\) and \(0\), separately. The x-axis represents the maximum permissible abstention rate, while the y-axis represents the maximum allowable disparity. Figure 5: This figure illustrates the result of the IP solution, under the same setting as Figure 4.
2302.09327
Transformadores: Fundamentos teoricos y Aplicaciones
Transformers are a neural network architecture originally designed for natural language processing that is now a mainstream tool for solving a wide variety of problems, including natural language processing, sound, image, reinforcement learning, and other problems with heterogeneous input data. Its distinctive feature is its self-attention system, based on attention to one's own sequence, which derives from the previously introduced attention system. This article provides the reader with the necessary context to understand the most recent research articles and presents the mathematical and algorithmic foundations of the elements that make up this type of network. The different components that make up this architecture and the variations that may exist are also studied, as well as some applications of the transformer models. This article is in Spanish to bring this scientific knowledge to the Spanish-speaking community.
Jordi de la Torre
2023-02-18T13:30:32Z
http://arxiv.org/abs/2302.09327v1
## Transformers: Theoretical Foundations and Applications #### Abstract Transformers are a neural network architecture originally designed for natural language processing that is now a mainstream tool for solving a wide variety of problems, including natural language processing, sound, image, reinforcement learning, and other problems with heterogeneous input data. Their distinctive feature is the self-attention mechanism, based on attention over the model's own input sequence, which derives from the attention mechanisms introduced in earlier publications. This article provides the reader with the necessary context to understand the most recent research articles and presents the mathematical and algorithmic foundations of the elements that make up this type of network. The different components of this architecture and their possible variations are also studied, as well as some applications of transformer models. The original article is in Spanish to bring this scientific knowledge to the Spanish-speaking community.
## 1 Introduction

Fully connected neural networks have been regarded for decades as universal approximators. In 1989, Hornik et al. showed that a network with a single hidden layer and a sufficient number of nodes can approximate any continuous function to the desired accuracy [1]. Despite this theoretical fact, in practice it can be difficult to find the right number of nodes and the weight values needed to achieve the desired approximation. To ease the optimization process, structural priors have been used that restrict the optimization space to a more limited subset of the expected values. For example, convolutional networks exploit the statistical regularities that tend to occur between nearby locations in an image, while recurrent networks specialize in processing sequential data, taking advantage of temporal regularities. These architectures are therefore designed to exploit the spatial and temporal regularities present in the sensory signals of our world [2].

However, these computational optimizations can also be a disadvantage in some cases. Recurrent neural networks can struggle to resolve long-range dependencies, due to limitations of their internal memory and to difficulties in propagating gradients through a long sequence, which can lead to unsatisfactory performance. Models based on attention mechanisms emerged as a solution to these limitations, making it possible to adjust the relevance assigned to each feature.

## 2 Historical evolution of attention mechanisms
This section does not aim to give an exhaustive account of all existing attention mechanisms, but rather to introduce the concept and present the evolution that led to the "self-attention" system on which the design of transformers is based. This architecture is of great importance and is comparable, as a proposal, to earlier inventions such as convolutional networks and LSTM recurrent networks. It is a general-purpose architecture that makes it possible to process and elegantly integrate data of different types into a prediction.

The most important milestone of the class of neural networks based on attention mechanisms has been the elimination of the need for recurrence when processing sequential data, replacing the representation of dependencies in an internal machine state with a method that attends in parallel to the relevant parts of the whole sequence; essentially, converting a temporal dependency into a spatial one by encoding time as an additional part of the input.

The historical evolution of the ideas that led to the current state of the art in attention begins around 2014. Interest in equipping predictive models with explicit elements led to the design of attention-based systems as a way to identify which objects present in images mattered most for the neural network's predictions. In 2015, the use of these mechanisms was proposed for a natural language processing (NLP) application, specifically text translation [3]. Until then, encoder-decoder architectures had been used in which the first part of the network encoded the input sentence in the source language into an internal vector, and the decoder then translated it into the target language. The authors of this new proposal conjectured, and demonstrated experimentally, that using information from the encoder's internal states, and not only from the final internal vector, improved performance.

That first proposal spawned a new line of research which led, in 2017, to a paper that definitively changed the design paradigm toward a new architecture. In "Attention is all you need" [4], a radical proposal is presented in which recurrence is removed entirely, in favor of a network built from a combination of _self-attention_ modules, normalization, and fully connected layers, called a _transformer_ network by the authors. It achieved state-of-the-art results while eliminating the complexities associated with training recurrent neural networks.

Since their introduction in 2017, transformer networks have remained the subject of intense research and development in a wide variety of applications, including natural language processing, computer vision, systems control, and more. This approach has revolutionized not only natural language processing, historically one of the most challenging fields for deep learning, but also the processing of other signals, such as images, which until then had been the exclusive domain of convolutional neural networks.
## 3 Challenges posed by the recurrent neural network architecture

Recurrent networks emerged to deal with sequential data, feeding the input in sequence order and adapting to its characteristics. These networks can maintain an internal state that reflects short-, medium-, or long-range dependencies in the input. The goal is for the network to remember and establish relationships among these dependencies in order to predict future outputs. Networks of this type are used as modules within models that address problems of the following kinds:

1. **Vector-to-sequence models:** From a fixed-size input vector, a sequence of arbitrary length is generated. An example is image captioning, where the input is an image and the output an explanatory text.
2. **Sequence-to-vector models:** The input has arbitrary length and the output a fixed size. An example is sentiment analysis, where a variable-length input sequence produces a binary value indicating whether the sentiment of that sequence is positive or negative.
3. **Sequence-to-sequence models:** Both input and output are variable-length sequences. An example is the translation of a text from one language to another.

Practice has shown that architectures based on recurrent neural networks, designed to solve the problems listed above, not only require more training epochs than convolutional networks but also exhibit several drawbacks. These are related to the great depth of the unrolled networks, which brings problems in resolving long-range dependencies, exploding and vanishing gradients, and difficulties in parallelizing the computation. Although architectures such as the LSTM [5] were a very significant advance for incorporating long-range dependencies, they still present some problems with very long-range dependencies. The unrolling of the network that must be performed during training tends to increase the probability of exploding and vanishing gradients, with the corresponding effects on optimization. When input sequences are long, the network must be unrolled to its full extent in order to compute the gradients associated with processing the chain. This affects not only the previous point (gradients) but also the computational resources (time and space) required for the processing.

## 4 Transformer architecture

The transformer architecture was created to overcome the challenges of recurrent networks mentioned above, eliminating recurrence and processing all elements of the sequence simultaneously. Thanks to the attention mechanisms, distant elements can attend to each other directly, which minimizes the problems with gradients. In addition, transformers address the network-depth problem using the same mechanisms introduced in [6], since they too are residual networks. Moreover, in practice training turns out to require fewer epochs and, within each epoch, parallel computation is much easier to implement.

Figure 1: Basic diagram of a recurrent network with LSTM cells.
### The self-attention mechanism

The self-attention mechanism is the fundamental and distinctive core of the transformer architecture. Understanding how it works is key to delving into the specifics of the architecture.

#### 4.1.1 Formal definition

Let \(\mathbf{X}\) be an input sequence of \(\ell_X\) elements, each of dimension \(d_X\), \(\mathbf{X}\in\mathbb{R}^{d_X\times\ell_X}\). Let \(\mathbf{Z}\) be a so-called context sequence of \(\ell_Z\) elements, each of dimension \(d_Z\), \(\mathbf{Z}\in\mathbb{R}^{d_Z\times\ell_Z}\). Let \((\mathbf{W_q}\in\mathbb{R}^{d_{attn}\times d_X},\ \mathbf{b_q}\in\mathbb{R}^{d_{attn}})\), \((\mathbf{W_k}\in\mathbb{R}^{d_{attn}\times d_Z},\ \mathbf{b_k}\in\mathbb{R}^{d_{attn}})\), and \((\mathbf{W_v}\in\mathbb{R}^{d_{out}\times d_Z},\ \mathbf{b_v}\in\mathbb{R}^{d_{out}})\) be three matrix-vector pairs representing three linear transformations applicable to the input sequence \(\mathbf{X}\) and to the context sequence \(\mathbf{Z}\). Let \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) be the matrices obtained after applying these linear transformations, where \(\mathbf{Q}=\mathbf{W_q}\mathbf{X}+\mathbf{b_q}\mathbf{1}^T\in\mathbb{R}^{d_{attn}\times\ell_X}\), \(\mathbf{K}=\mathbf{W_k}\mathbf{Z}+\mathbf{b_k}\mathbf{1}^T\in\mathbb{R}^{d_{attn}\times\ell_Z}\), and \(\mathbf{V}=\mathbf{W_v}\mathbf{Z}+\mathbf{b_v}\mathbf{1}^T\in\mathbb{R}^{d_{out}\times\ell_Z}\). Let \(\mathbf{S}\) be the _similarity_ matrix, defined in this case as \(\mathbf{S}=\frac{1}{\sqrt{d_{attn}}}\mathbf{K}^T\mathbf{Q}\in\mathbb{R}^{\ell_Z\times\ell_X}\). Then the equation for the attention of the input sequence \(\mathbf{X}\) over the context sequence \(\mathbf{Z}\) can be written as:

\[\tilde{\mathbf{V}}=\mathbf{V}\cdot\text{softmax}(\mathbf{S})\quad\text{where}\quad\tilde{\mathbf{V}}\in\mathbb{R}^{d_{out}\times\ell_X} \tag{1}\]

Here \(\text{softmax}\) refers to applying the softmax normalization to each column of the matrix; each element of the result therefore depends on all the other elements belonging to the same column. When the input and context sequences coincide, that is, \(\mathbf{X}=\mathbf{Z}\), we speak of attention over the sequence itself or, more briefly, self-attention. Algorithm 1 shows in algorithmic form how single-query attention is computed.

```
Input: e ∈ R^{d_in}, vector representation of the current input token
Input: e_t ∈ R^{d_in}, vector representations of the context tokens, t ∈ [T]
Output: ṽ ∈ R^{d_out}, vector representation of the combination of token and context
Parameters: W_q, W_k ∈ R^{d_attn×d_in}, b_q, b_k ∈ R^{d_attn}, query and key linear projection parameters
Parameters: W_v ∈ R^{d_out×d_in}, b_v ∈ R^{d_out}, value linear projection parameters
q ← W_q e + b_q
for t ∈ [T]: k_t ← W_k e_t + b_k
for t ∈ [T]: v_t ← W_v e_t + b_v
for t ∈ [T]: α_t ← exp(q^T k_t / √d_attn) / Σ_u exp(q^T k_u / √d_attn)
return ṽ = Σ_t α_t v_t
```
**Algorithm 1** Basic single-query attention scheme

Informally, the transformer takes a series of features of the input sequence and performs a weighted combination of them based on their similarity. The weights assigned to each feature are computed from the similarity between the pairs of input features. This process is repeated several times on the data of the previous layer: in the first layer pairs of features are compared, in the following layers pairs of pairs, and so on.
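As a concrete illustration of equation 1, the following minimal NumPy sketch (ours, not from the paper; all names and dimensions are illustrative) computes \(\tilde{\mathbf{V}}=\mathbf{V}\cdot\text{softmax}(\mathbf{S})\) with the column-per-token convention used above:

```python
import numpy as np

def softmax_cols(S):
    # Column-wise softmax: each column becomes a distribution over the l_Z keys.
    S = S - S.max(axis=0, keepdims=True)      # shift for numerical stability
    E = np.exp(S)
    return E / E.sum(axis=0, keepdims=True)

def attention(X, Z, Wq, bq, Wk, bk, Wv, bv):
    """X: (d_X, l_X) input sequence, Z: (d_Z, l_Z) context; columns are tokens."""
    d_attn = Wq.shape[0]
    Q = Wq @ X + bq[:, None]                  # (d_attn, l_X)
    K = Wk @ Z + bk[:, None]                  # (d_attn, l_Z)
    V = Wv @ Z + bv[:, None]                  # (d_out, l_Z)
    S = (K.T @ Q) / np.sqrt(d_attn)           # (l_Z, l_X) similarity scores
    return V @ softmax_cols(S)                # (d_out, l_X), eq. (1)

# Self-attention is the special case Z = X:
rng = np.random.default_rng(0)
d, l, d_attn, d_out = 8, 5, 4, 8
X = rng.normal(size=(d, l))
Wq, Wk = rng.normal(size=(d_attn, d)), rng.normal(size=(d_attn, d))
bq, bk = rng.normal(size=d_attn), rng.normal(size=d_attn)
Wv, bv = rng.normal(size=(d_out, d)), rng.normal(size=d_out)
print(attention(X, X, Wq, bq, Wk, bk, Wv, bv).shape)  # (8, 5)
```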
As we go deeper into the network, the number of combined features grows exponentially, which allows the final layers to obtain different ways of combining all the features of the sequence.

#### 4.1.2 Analogy with databases

The original names used for the linear transformations involved in the self-attention computation refer to a conceptual analogy with the queries typical of relational databases. In a database we have queries (Q), keys (K), and values (V). When a query Q is made against a set of keys \(K_1\), \(K_2\), ..., \(K_N\), the database reports as a result a series of values \(V_1\), \(V_2\), ..., \(V_N\). The attention mechanism of transformers amounts to a probabilistic version of this process. A similarity function compares the query Q with each of the keys K; the result is a vector that can be interpreted as the similarity of the query Q to each of the keys K.

Figure 2: Block diagram of the matrix operations involved in the attention computations.

This value is later used to compute a weight, which is then used to compute the final value as a linear combination of the input values. The similarity function can be defined in different ways (see equation 2); choosing one is a design decision. As we have seen in the equations and algorithms presented in the previous section, the transformers we will study use the scaled dot product as their similarity function.

\[s_i=f(Q,K_i)=\begin{cases}Q^TK_i&\text{dot product}\\ \frac{Q^TK_i}{\sqrt{d}}&\text{scaled dot product}\\ Q^TWK_i&\text{general dot product}\\ W_Q^TQ+W_K^TK_i&\text{additive similarity}\\ \text{others}&\text{kernels, etc.}\end{cases} \tag{2}\]

Once the similarity has been computed we obtain, for a given query Q, one value per key, that is, a vector whose dimension equals the number of keys. High similarity values indicate a high degree of match. Since the intention is to create a probabilistic lookup function, this value is used to compute the matching probability by applying a softmax function (see equation 3), obtaining as a result a probability for the value at position i for a given query-key pair.

\[\omega_i=\frac{\exp(s_i)}{\sum_j\exp(s_j)} \tag{3}\]

Finally, a set of values is obtained as a linear combination of the source values and the weight of each of them in the query-key comparison (see equation 4).

\[A=\sum_i\omega_iV_i \tag{4}\]
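A small dispatch over the similarity variants of equation 2 helps fix the notation. This is our illustrative sketch; in particular, the additive form follows the \(W_Q^TQ+W_K^TK_i\) reading above, with \(W_Q\) and \(W_K\) taken as learned weight vectors so that the score is a scalar:

```python
import numpy as np

def similarity(Q, K_i, kind="scaled_dot", W=None, W_Q=None, W_K=None):
    """Score a single query Q against a single key K_i (1-D arrays), eq. (2)."""
    if kind == "dot":
        return Q @ K_i
    if kind == "scaled_dot":
        return Q @ K_i / np.sqrt(len(Q))
    if kind == "general":            # bilinear form with a learned matrix W
        return Q @ W @ K_i
    if kind == "additive":           # W_Q, W_K are learned weight vectors here
        return W_Q @ Q + W_K @ K_i
    raise ValueError(f"unknown similarity: {kind}")

def attend(Q, keys, values, **kw):
    s = np.array([similarity(Q, K_i, **kw) for K_i in keys])
    w = np.exp(s - s.max()); w /= w.sum()                   # eq. (3)
    return sum(w_i * V_i for w_i, V_i in zip(w, values))    # eq. (4)
```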
#### 4.1.3 Multi-head self-attention

The mechanism described in the previous sections performs one transformation, using the superset (\(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\)), from the initial space \(\mathbb{R}^{n\times d}\) to a result lying in the space \(\mathbb{R}^{n\times d_v}\). One way to extend the capabilities of the system is to apply several transformations of the same type in parallel, but with distinct supersets (\(\mathbf{Q}\), \(\mathbf{K}\), \(\mathbf{V}\)), each of them reporting a different value in the space \(\mathbb{R}^{n\times d_v}\). Equation 5 shows the equation used to compute multi-head attention, where \(\mathbf{D}_i\) denotes the attention computed by head \(i\).

\[\mathbf{M}=\text{Concat}_{i=1}^{h}\left[\mathbf{D}_i\left(\mathbf{Q}_i,\mathbf{K}_i,\mathbf{V}_i\right)\right]\mathbf{W_O} \tag{5}\]

If the number of parallel attention blocks \(h\) is chosen such that \(h=\frac{d}{d_v}\), then the dimensions of the output space equal those of the input space \(\mathbb{R}^{n\times d}\). \(\mathbf{W_O}\) is a matrix that performs a linear transformation after the concatenation and would even allow the dimensions of the output space to be modified, should that be of interest in a given design. The use of multi-head attention makes it possible to apply different transformations simultaneously to the pairs of input attributes, increasing both the diversity and the complexity of the comparisons. It also preserves the dimension of the working space, which can be useful in architectures that stack blocks of the same kind on top of one another. Algorithm 3 shows the algorithmic version of the process.
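A minimal sketch of equation 5 (ours; it reuses the `attention` function from the earlier sketch, and the head parameters and `W_O` are illustrative):

```python
import numpy as np

def multi_head_attention(X, head_params, W_O):
    """X: (d, n) sequence (one column per token).
    head_params: list of h tuples (Wq, bq, Wk, bk, Wv, bv), one per head.
    W_O: (d_out, h*d_v) output mixing matrix applied after concatenation."""
    heads = [attention(X, X, *p) for p in head_params]  # each (d_v, n), eq. (1)
    M = np.concatenate(heads, axis=0)                   # (h*d_v, n): the Concat of eq. (5)
    return W_O @ M                                      # (d_out, n)
```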
### Transformers

At this point we are in a position to present the architecture of the _transformer_. As already indicated, this new neural network design paradigm was introduced with the publication of [4] and was proposed for application in machine translation systems.

Figure 3: Transformer architecture introduced in "Attention is all you need" [4].

Figure 4.2 shows the diagram of the blocks making up the transformer network as first published in [4]. Each element of the input sequence is converted into a numerical representation (_input embedding_) and then combined (summed, in this case) with a function that encodes the element's position within the sequence (_positional encoding_). This value serves as input to a stack of N blocks called _encoders_. The attributes resulting from the encoding are one of the inputs feeding the network of _decoders_. The goal of this network is to predict the next word; to do so, it uses not only the preceding information from the input chain, via the encoder, but also all the elements of the output chain produced so far. In the following sections we describe in detail each of the constituent parts of the architecture, grouping them into their higher-level elements.

#### 4.2.1 Notation

Let \(V\) be a finite set called the _vocabulary_, denoted \([N_V]:=\{1,...,N_V\}\). It may consist of letters or whole words, although it is typically made up of word fragments called _tokens_. Let \(\mathbf{x}\equiv x[1:\ell]\equiv x[1]x[2]...x[\ell]\in V^*\) be a sequence of _tokens_, for example a sentence, a paragraph, or a document. Following mathematical notation, and contrary to what is established in some programming languages such as C or Python, the first index of the matrices and vectors used in this module is one; \(x[1:\ell]\), for example, refers to the string running from the first element up to element \(\ell\), both included. For a matrix \(M\in\mathbb{R}^{d\times d'}\) we write \(M[i,:]\in\mathbb{R}^{d'}\) for row \(i\) and \(M[:,j]\in\mathbb{R}^{d}\) for column \(j\). We use the matrix \(\times\) column convention, more common in mathematics, instead of the row \(\times\) matrix convention more typical of most of the transformer literature; that is, our matrices are transposed with respect to that literature.

#### 4.2.2 Input processing

Figure 4.2.2 shows the input-processing pipeline of the transformer. The text is fed to a _tokenizer_, whose function is to split the sequence into its constituent elements. Each element is represented by a one-hot vector with a 1 at the dictionary position identifying the element and zeros elsewhere. Next, a linear transformation is applied to compress the initial representation into a dense vector of lower dimension. Finally, through an operator, usually addition (although nothing prevents it from being, for example, concatenation or something else), the information about the position the element occupies within the sequence is added. The numerical representation of the sequence is then ready to be fed to the following processing blocks. The input layer is thus responsible for preprocessing the data into a format interpretable by the transformer.

Figure 4: Typical scheme of the operations needed to convert a text input into the format required for processing by the transformer.

Transformers were initially designed as an alternative way of handling sequential data. The first step in processing this type of data is splitting the sequence into its constituent elements, a process usually known as _tokenization_. This splitting can be done in many ways: at the level of words, parts of words, or characters. In current transformer designs the most common choice is to split at the level of word parts. Many words are made up of several morphemes (minimal units of meaning), and word-level splitting has the drawback that many of these elementary units of meaning are not separated, which tends to hinder interpretive language models. One might at first conjecture that character-level splitting would let the model reconstruct the minimal units of meaning; however, the published results so far explicitly indicate that character-level splitting is too aggressive, with results significantly worse than those obtained using subword-level splitting. Among the most typical tokenizers used with transformers are _BPE (Byte Pair Encoding)_ [7], _Unigram_ [8], and _WordPiece_ [9]; the difference between them lies in the strategy followed to choose the character pairs to concatenate when selecting each of the subwords that form the dictionary. Each pre-trained model is designed to be used with a specific dictionary derived from the chosen tokenizer. For transformers for image processing, the existing proposal so far consists of splitting the image into small non-overlapping patches and feeding them in sequentially. Although the input preparations described are the currently established ways of doing it, there is no reason why it could not be done otherwise, so alternative ways of encoding the inputs may well emerge in the future.
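A minimal sketch of the input pipeline just described (ours; a toy whitespace tokenizer stands in for a subword tokenizer such as BPE, and all names and sizes are illustrative):

```python
import numpy as np

vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3}   # toy dictionary
d_e, l_max = 4, 16
rng = np.random.default_rng(0)
W_e = rng.normal(size=(d_e, len(vocab)))   # token embedding matrix (Algorithm 4)
W_p = rng.normal(size=(d_e, l_max))        # learned position embedding matrix (Algorithm 5)

def tokenize(text):
    # Stand-in for a real subword tokenizer (BPE / Unigram / WordPiece).
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

def embed(token_ids):
    # Dense token embedding plus positional embedding, summed per position.
    cols = [W_e[:, v] + W_p[:, t] for t, v in enumerate(token_ids)]
    return np.stack(cols, axis=1)          # (d_e, l): ready for the encoder stack

X = embed(tokenize("the cat sat"))
print(X.shape)  # (4, 3)
```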
```
Input: v ∈ V ≅ [N_V], token identifier (e.g. one-hot)
Output: e ∈ R^{d_e}, the vector representation of a token
Parameters: W_e ∈ R^{d_e×N_V}, the embedding matrix
return e = W_e[:, v]
```
**Algorithm 4** Token embedding

```
Input: ℓ ∈ [ℓ_max], position of the token within the sequence
Output: e_p ∈ R^{d_e}, the vector representation of the position
Parameters: W_p ∈ R^{d_e×ℓ_max}, the embedding matrix
return e_p = W_p[:, ℓ]
```
**Algorithm 5** Position embedding

#### 4.2.3 Encoders

The encoder consists of a multi-head self-attention block followed by a fully connected network. Both layers are part of a residual network and, therefore, each of these two layers is followed by the addition of its input and a normalization. Figure 4.2.3 shows the block diagram of the elements making up an encoder.

#### 4.2.4 Decoders

The decoder is a block consisting of a multi-head self-attention head that receives as inputs the elements of the transformer's output chain, shifted by one position. To ensure that the prediction is made using only the preceding information, and not what follows, it incorporates a masking matrix that cancels the effect of the later elements of the output chain. After this first block there is a second one that receives as input the output of the first, together with the output coming from the encoding of the input chain. The output of this block feeds a fully connected layer. All these stages are also residual layers and are followed by their corresponding addition with the input and normalization. The decoding network stacks N elements of this type in series. Figure 4.2.4 shows the block diagram of the elements making up a decoder.

#### 4.2.5 Output processing

In the output layer a conversion must be made from the internal representation space to the external one (see figure 4.2.5). It is the inverse of the _embedding_ operation, where we went from the vector indexing the dictionary to an internal representation. There are two ways to carry out the conversion: the first is to treat the layer as independent, learning the conversion weights; the second is to use the inverse of the _embedding_ matrix, without independent learning. In [4] the strategy defined in [10] is used, tying the transformation of the input layer to that of the output layer. One of the initial steps in preparing the inputs is converting the _one-hot_ representation of a token's dictionary position into a "dense" representation with the model's dimensions; this is done through a linear transformation that reduces the dimension from the dictionary size to the model's internal working size. The idea presented in [10], and later used in [4], is to use the same transformation matrix for both the inputs and the outputs. This being so, the output classifier becomes a linear transformation with the same weights as the input one. Algorithm 6 shows the algorithmic version of the process.
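Putting section 4.2.3 together with the earlier attention sketches, one post-norm encoder block might look as follows (our sketch; `multi_head_attention` is the function sketched above, and the inline `layer_norm` anticipates section 4.2.6):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Per-token (per-column) normalization to zero mean / unit variance.
    mu = x.mean(axis=0, keepdims=True)
    sd = x.std(axis=0, keepdims=True)
    return gamma[:, None] * (x - mu) / (sd + eps) + beta[:, None]

def encoder_block(X, attn_params, W1, b1, W2, b2, g1, B1, g2, B2):
    """One post-norm encoder block: self-attention and a ReLU MLP, each
    wrapped in a residual connection followed by layer normalization."""
    X = layer_norm(X + multi_head_attention(X, *attn_params), g1, B1)
    H = W2 @ np.maximum(W1 @ X + b1[:, None], 0.0) + b2[:, None]  # ReLU MLP
    return layer_norm(X + H, g2, B2)
```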
Figure 5: Typical scheme of the elements making up an encoder. The encoding network chains several of these elements in series; the output of one serves as input to the next.

#### 4.2.6 Layer normalization

Normalization layers are present as a building block in both the encoders and the decoders. Their purpose is the explicit control of the mean and the variance of each neuron's activation.

Figure 6: Typical scheme of the elements making up a decoder. The decoding network chains several of these elements in series; the output of one serves as input to the next, and all of them are fed the output of the encoder network.
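A minimal sketch of this normalization for a single activation vector, with a learnable scale \(\gamma\) and offset \(\beta\) per feature (ours; names illustrative):

```python
import numpy as np

def layer_norm(e, gamma, beta, eps=1e-5):
    """Normalize one activation vector e to zero mean and unit variance,
    then restore learnable statistics via gamma (scale) and beta (offset)."""
    m, v = e.mean(), e.var()
    return gamma * (e - m) / np.sqrt(v + eps) + beta

e = np.array([1.0, 2.0, 4.0])
print(layer_norm(e, gamma=np.ones(3), beta=np.zeros(3)))
```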
## 5 Types of transformers

The previous section introduced the transformer architecture by showing the original diagram from [4]. As already indicated, the initial proposal was made as an alternative means of solving sequence-to-sequence translation problems and represents the first of the possible types of transformer models. In this section we present modifications of the original architecture that emerged later and that have given rise to subfamilies of transformers specialized in solving different problems. At present we can distinguish three types of models or architectures: autoregressive, autoencoding, and sequence-to-sequence.

### Sequence-to-sequence transformers

Sequence-to-sequence models use two basic design units called encoders and decoders. Their purpose is the conversion of an input sequence into a different one, with main applications in translation, summarization, and question answering. The original model is an example of this type of application; **T5** [11] is a later example of the same architecture used to solve other, more specific kinds of tasks. Algorithm 8 collects the implementation details of what a sequence-to-sequence transformer would be.

Figure 7: Typical scheme of the operations needed to convert the transformer's last hidden layer into an output.

```
/* Encoder/decoder or sequence-to-sequence transformer */
Input: z, x ∈ V*, two sequences of token identifiers
Output: P ∈ (0,1)^{N_V × length(x)}, where column t of P represents P̂_θ(x[t+1] | x[1:t], z)
Hyperparameters: ℓ_max, L_enc, L_dec, H, d_e, d_mlp, d_f ∈ N
Parameters θ, which include:
    W_e ∈ R^{d_e×N_V}, W_p ∈ R^{d_e×ℓ_max}, token and position embedding matrices
    For l ∈ [L_enc]:
        W_qkv,l^enc, self-attention parameters of layer l
        γ_l^1, β_l^1, γ_l^2, β_l^2 ∈ R^{d_e}, two sets of layer-normalization parameters
        W_mlp1^l ∈ R^{d_mlp×d_e}, b_mlp1^l ∈ R^{d_mlp}, W_mlp2^l ∈ R^{d_e×d_mlp}, b_mlp2^l ∈ R^{d_e}
    For l ∈ [L_dec]:
        W_qkv,l^dec, W_qkv,l^e/d, self-attention and cross-attention parameters of layer l
        γ_l^3, β_l^3, γ_l^4, β_l^4, γ_l^5, β_l^5 ∈ R^{d_e}, three sets of layer-normalization parameters
        W_mlp3^l ∈ R^{d_mlp×d_e}, b_mlp3^l ∈ R^{d_mlp}, W_mlp4^l ∈ R^{d_e×d_mlp}, b_mlp4^l ∈ R^{d_e}
    W_u ∈ R^{N_V×d_e}, unembedding matrix
/* Encoding of the context sequence */
ℓ_z ← length(z)
for t ∈ [ℓ_z]: e_t ← W_e[:, z[t]] + W_p[:, t]
Z ← [e_1, e_2, ..., e_{ℓ_z}]
for l = 1, 2, ..., L_enc do
    Z ← Z + multi_head_attention(Z | W_qkv,l^enc, Mask ≡ 1)
    for t ∈ [ℓ_z]: Z[:, t] ← layer_norm(Z[:, t] | γ_l^1, β_l^1)
    Z ← Z + W_mlp2^l ReLU(W_mlp1^l Z + b_mlp1^l 1^T) + b_mlp2^l 1^T
    for t ∈ [ℓ_z]: Z[:, t] ← layer_norm(Z[:, t] | γ_l^2, β_l^2)
end for
/* Decoding of the output sequence conditioned on the context */
ℓ_x ← length(x)
for t ∈ [ℓ_x]: e_t ← W_e[:, x[t]] + W_p[:, t]
X ← [e_1, e_2, ..., e_{ℓ_x}]
for l = 1, 2, ..., L_dec do
    X ← X + multi_head_attention(X | W_qkv,l^dec, Mask[t, t'] = [[t ≤ t']])
    for t ∈ [ℓ_x]: X[:, t] ← layer_norm(X[:, t] | γ_l^3, β_l^3)
    X ← X + multi_head_attention(X, Z | W_qkv,l^e/d, Mask ≡ 1)
    for t ∈ [ℓ_x]: X[:, t] ← layer_norm(X[:, t] | γ_l^4, β_l^4)
    X ← X + W_mlp4^l ReLU(W_mlp3^l X + b_mlp3^l 1^T) + b_mlp4^l 1^T
    for t ∈ [ℓ_x]: X[:, t] ← layer_norm(X[:, t] | γ_l^5, β_l^5)
end for
return P = softmax(W_u X)
```
**Algorithm 8** P ← transformadorCD(z, x | θ)

Figure 5.1 shows the typical functional diagram representative of this architecture. It is equivalent to figure 4.2, showing the constituent elements at the highest level, namely: the blocks that process the input sequence, the encoders, the decoders and, finally, the classification and output layer.

Figure 8: Typical architecture of the sequence-to-sequence transformer model.

Figure 5.1 also shows an example of a typical input sequence and a typical output sequence. The sequences shown would be an example of a translation from an initial sequence in classical Greek to another sequence in Spanish; the model would be fed the first and would produce the second as output.

Figure 9: Example of a common input/output pair for a sequence-to-sequence model; in this case, a model translating classical Greek into Spanish.

Algorithm 9 shows what the code needed to optimize the model using an input dataset would look like.

### Bidirectional or autoencoding transformers

Autoencoding models are those pre-trained to correct artificially corrupted inputs. This is done to train the model to reconstruct the original text. These models correspond to the encoder part of the original architecture, in the sense that they do not use masks to cancel the effect of the following text. They build a bidirectional representation of the sentence as a whole and can later be trained to solve specific tasks such as text generation, although their main application is sentence classification. A typical example of these models is **BERT** [9].
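A minimal sketch of the input corruption used for this kind of pre-training (ours; a simplified version of the masked-language-modeling recipe of BERT [9], whose full recipe also sometimes keeps or randomizes the selected tokens; `MASK_ID` and the `-100` ignore-index are illustrative conventions):

```python
import numpy as np

MASK_ID = 103   # illustrative id for a [MASK] token in the vocabulary

def corrupt(token_ids, p=0.15, rng=np.random.default_rng(0)):
    """Replace a random fraction p of tokens with [MASK]; the model is then
    trained to reconstruct the original ids at the corrupted positions."""
    ids = np.asarray(token_ids)
    mask = rng.random(len(ids)) < p
    corrupted = np.where(mask, MASK_ID, ids)
    targets = np.where(mask, ids, -100)   # -100: position ignored by the loss
    return corrupted, targets

print(corrupt([5, 12, 7, 3, 9, 21, 4, 8]))
```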
Algorithm 10 collects the implementation details of what a bidirectional transformer, that is, a transformer made up only of encoders, would be.

```
/* Encoder-only, bidirectional or autoencoding transformer */
Input: x ∈ V*, a sequence of token identifiers
Output: P ∈ (0,1)^{N_V × length(x)}, where each column of P is a distribution over the whole vocabulary
Hyperparameters: ℓ_max, L, H, d_e, d_mlp, d_f ∈ N
Parameters θ, which include:
    W_e ∈ R^{d_e×N_V}, W_p ∈ R^{d_e×ℓ_max}, token and position embedding matrices
    For l ∈ [L]:
        W_qkv,l, self-attention parameters of layer l
        γ_l^1, β_l^1, γ_l^2, β_l^2 ∈ R^{d_e}, two sets of layer-normalization parameters
        W_mlp1^l ∈ R^{d_mlp×d_e}, b_mlp1^l ∈ R^{d_mlp}, W_mlp2^l ∈ R^{d_e×d_mlp}, b_mlp2^l ∈ R^{d_e}
    W_f ∈ R^{d_f×d_e}, b_f ∈ R^{d_f}, γ, β ∈ R^{d_f}, final linear projection and layer normalization
    W_u ∈ R^{N_V×d_f}, unembedding matrix
ℓ ← length(x)
for t ∈ [ℓ]: e_t ← W_e[:, x[t]] + W_p[:, t]
X ← [e_1, e_2, ..., e_ℓ]
for l = 1, 2, ..., L do
    X ← X + multi_head_attention(X | W_qkv,l, Mask ≡ 1)
    for t ∈ [ℓ]: X[:, t] ← layer_norm(X[:, t] | γ_l^1, β_l^1)
    X ← X + W_mlp2^l GELU(W_mlp1^l X + b_mlp1^l 1^T) + b_mlp2^l 1^T
    for t ∈ [ℓ]: X[:, t] ← layer_norm(X[:, t] | γ_l^2, β_l^2)
end for
X ← GELU(W_f X + b_f 1^T)
for t ∈ [ℓ]: X[:, t] ← layer_norm(X[:, t] | γ, β)
return P = softmax(W_u X)
```
**Algorithm 10** Encoder-only (autoencoding) transformer

Figure 10: Typical architecture of the autoencoding transformer model.

Figure 11: Example of typical inputs and outputs of autoencoding models. In this case three possible inputs are shown with their expected output values.

### Autoregressive transformers

Autoregressive models are those that, from an initial input sequence, generate new outputs which are in turn appended to the initial sequence as input to generate further outputs. Their main application is text generation, although it is possible to adjust them with subsequent training to adapt them to solving specific problems. An example of this type of model is the **GPT** family.
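The key difference from the bidirectional encoder is the causal mask: query position t may attend only to key positions t' ≤ t. A minimal sketch (ours), following the keys-on-rows, queries-on-columns layout of the earlier attention sketch:

```python
import numpy as np

def masked_attention_scores(S):
    """S: (l, l) score matrix with keys on rows and queries on columns.
    Forbidden entries (key position t' later than query position t) are
    set to -inf before the column-wise softmax, so they get zero weight."""
    l = S.shape[0]
    allowed = np.triu(np.ones((l, l), dtype=bool))   # row t' <= column t
    S = np.where(allowed, S, -np.inf)
    S = S - S.max(axis=0, keepdims=True)
    E = np.exp(S)
    return E / E.sum(axis=0, keepdims=True)

S = np.arange(9.0).reshape(3, 3)
print(masked_attention_scores(S).round(2))  # column 0 puts all weight on key 0
```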
Algorithm 12 collects the implementation details of what an autoregressive transformer, that is, a transformer made up only of decoders, would be.

```
/* Decoder-only or autoregressive transformer */
Input: x ∈ V*, a sequence of token identifiers
Output: P ∈ (0,1)^{N_V × length(x)}, where column t of P represents P̂_θ(x[t+1] | x[1:t])
Hyperparameters: ℓ_max, L, H, d_mlp, d_f ∈ N
Parameters θ, which include:
    W_e ∈ R^{d_e×N_V}, W_p ∈ R^{d_e×ℓ_max}, token and position embedding matrices
    For l ∈ [L]:
        W_qkv,l, self-attention parameters of layer l
        γ_l^1, β_l^1, γ_l^2, β_l^2 ∈ R^{d_e}, two sets of layer-normalization parameters
        W_mlp1^l ∈ R^{d_mlp×d_e}, b_mlp1^l ∈ R^{d_mlp}, W_mlp2^l ∈ R^{d_e×d_mlp}, b_mlp2^l ∈ R^{d_e}
    W_u ∈ R^{N_V×d_e}, unembedding matrix
ℓ ← length(x)
for t ∈ [ℓ]: e_t ← W_e[:, x[t]] + W_p[:, t]
X ← [e_1, e_2, ..., e_ℓ]
for l = 1, 2, ..., L do
    X ← X + multi_head_attention(X | W_qkv,l, Mask[t, t'] = [[t ≤ t']])
    for t ∈ [ℓ]: X[:, t] ← layer_norm(X[:, t] | γ_l^1, β_l^1)
    X ← X + W_mlp2^l ReLU(W_mlp1^l X + b_mlp1^l 1^T) + b_mlp2^l 1^T
    for t ∈ [ℓ]: X[:, t] ← layer_norm(X[:, t] | γ_l^2, β_l^2)
end for
return P = softmax(W_u X)
```
**Algorithm 12** Decoder-only (autoregressive) transformer

Figure 13: Example of typical inputs and outputs of autoregressive models. In this case three possible inputs are shown with their respective expected outputs.

| | **GPT-1** | **GPT-2** | **GPT-3** |
|---|---|---|---|
| **Architecture** | | | |
| \(n_{param}\) | \(117\times10^6\) | \(1.5\times10^9\) | \(175\times10^9\) |
| \(T_{param}\) | 468 MB | 6 GB | 700 GB |
| \(\ell_{max}\) | 512 | 1024 | 2048 |
| \(L_{dec}\) | 12 | 48 | 96 |
| \(H\) | 12 | 12 | 96 |
| \(d_{attn}\) | 64 | | 128 |
| \(d_{mlp}\) | 3072 | | |
| \(N_V\) | 40478 | 50257 | 50257 |
| input encoding | BPE | BPE | BPE |
| \(d_e\) | 768 | 1600 | 12288 |
| **Training** | | | |
| \(T_{data}\) | 1 GB | 40 GB | 2 TB |
| \(n_{tokens}\) | \(25\times10^7\) | \(10\times10^9\) | \(499\times10^9\) |
| compression ratio | 2.13 | 6.66 | 2.85 |
| Optimizer | Adam | Adam | Adam |
| BS | 64 | 512 | \(3.2\times10^6\) |

Table 2: Characteristics of the autoregressive models of the GPT family.
**Algorithm 13** θ̂ ← entrenamientoD(x_{1:N_data}, θ)

```
/* Prediction using a trained model */
Input: θ̂, trained parameters
Input: x ∈ V*, an input sequence
Output: y ∈ V*, prediction of the continuation of the input sequence
Hyperparameters: ℓ_gen ∈ N, τ ∈ (0, ∞)
ℓ ← length(x)
for i = 1, 2, ..., ℓ_gen do
    P ← transformador_de_decodificadores(x | θ̂)
    p ← P[:, ℓ + i − 1]
    sample a token y from q ∝ p^{1/τ}
    x ← [x, y]
end for
return y = x[ℓ+1 : ℓ+ℓ_gen]
```
**Algorithm 14** y ← inferencia_transformadorD(x, θ̂)

```
/* Prediction using a sequence-to-sequence model */
Input: θ̂, trained parameters
Input: z ∈ V*, an input sequence
Output: x̂ ∈ V*, prediction of the output sequence
Hyperparameters: τ ∈ (0, ∞)
x̂ ← [bos_token]
y ← 0
while y ≠ eos_token do
    P ← transformadorCD(z, x̂ | θ̂)
    p ← P[:, length(x̂)]
    sample a token y from q ∝ p^{1/τ}
    x̂ ← [x̂, y]
end while
return x̂
```
**Algorithm 15** x̂ ← inferencia_transformadorCD(z, θ̂)
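A minimal sketch of the sampling step shared by Algorithms 14 and 15 (ours): the temperature τ reshapes the next-token distribution before sampling, with τ → 0 approaching greedy decoding and large τ producing more diverse text.

```python
import numpy as np

def sample_next(p, tau=1.0, rng=np.random.default_rng(0)):
    """Sample a token id from q ∝ p^(1/τ). Low τ sharpens the distribution
    (close to argmax); high τ flattens it."""
    logq = np.log(p) / tau
    q = np.exp(logq - logq.max())
    q /= q.sum()
    return rng.choice(len(q), p=q)

p = np.array([0.1, 0.6, 0.3])
print(sample_next(p, tau=0.5), sample_next(p, tau=2.0))
```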
## 6 Transformers for natural language processing (NLP)

### Prior developments: embeddings

Before introducing transformer-based algorithms it is important to mention a set of algorithms created in the preceding years, namely the algorithms for building _embeddings_. In the context of NLP, an _embedding_ is a mathematical representation of the meaning of words. This transformation rests on the assumption that it is possible to represent knowledge in a multi-dimensional vector space, so that any word of the language has an image in that space; the vectors representing the words are called _embeddings_. In the years before the introduction of transformers, a multitude of publications defined different methods for performing this transformation; among the best known are **Word2Vec** [12], **GloVe** [13], and **FastText** [14]. These algorithms carry out the transformation using a shallow neural network. Among their most notable properties are the following:

* **Similarity between synonymous or related words:** In the transformed space, words with similar meanings converge to similar locations. Thus, for example, words such as 'car', 'vehicle', or 'van' lie relatively closer to one another than to words such as 'moon', 'space', or 'tree', understanding proximity as a similarity computed, for example, as a Euclidean distance or a cosine similarity.
* **Encoding of linguistic relations between words:** A feature of these transformations that is surprising, to say the least, is that some relations between words are encoded as linear transformations. For example, the linear transformation between 'man' and 'woman' is similar to the one needed between 'king' and 'queen', 'uncle' and 'aunt', or 'actor' and 'actress', thereby generalizing that transformation as a gender transformation. This enables operations with approximate results such as \(\overrightarrow{\text{king}}-\overrightarrow{\text{man}}+\overrightarrow{\text{woman}}\approx\overrightarrow{\text{queen}}\) or \(\overrightarrow{\text{Paris}}-\overrightarrow{\text{France}}+\overrightarrow{\text{Germany}}\approx\overrightarrow{\text{Berlin}}\).

The limitations of these methods stem from the very nature of their design. Being trained to encode the meaning of isolated words, they do not encode context. One possibility for integrating context, in the neural network setting, was to use the embeddings as transformations applied before the input of recurrent neural networks, which were then in charge of encoding the context as a means of solving specific natural language processing problems. Transformers go a step further: their aim is end-to-end learning. The embedding transformations become part of the network itself and are learned at training time, ending up integrated into the global network as the self-attention parameters of the first layer.
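A minimal sketch of the embedding arithmetic just described (ours; the four 3-D vectors are toy values chosen so that the analogy holds, not real Word2Vec embeddings):

```python
import numpy as np

emb = {  # toy 3-D "embeddings"; real models use hundreds of dimensions
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

v = emb["king"] - emb["man"] + emb["woman"]      # gender direction applied to 'king'
best = max(emb, key=lambda w: cosine(v, emb[w]))
print(best)  # 'queen'
```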
### Beginnings of transformers for NLP

The initial publications that shaped the later evolution of transformers belong to the field of natural language processing. We present below, in chronological order, a brief summary of those we consider most relevant to the definition of this design paradigm, together with the most significant contributions of each of these publications.

Introduction of the architecture [4].

**ULMFiT** (Universal Language Model Fine-tuning for Text Classification): The authors present a solution for solving any NLP task through transfer learning. It introduced the idea of general-domain unsupervised learning followed by fine-tuning, as well as other techniques widely used in later work, such as slanted triangular learning rate schedules (a.k.a. warm-up) [15].

**GPT** [16] and **GPT-2** [17] (Generative Pre-Trained Transformers): An architecture made up only of decoders is presented. The greatest contribution of these publications is that they demonstrated the value of training models of enormous size.

**BERT** [9] (Bidirectional Encoder Representations from Transformers): The authors use only encoders to pre-train bidirectional representations of unlabeled text. These models are subsequently fine-tuned with a second training stage, adding only one additional output layer, to solve specific applications. This model marked a turning point in the resolution of NLP problems.

**RoBERTa** [18] (Robustly Optimized BERT Approach): The authors propose an architecture identical to BERT while introducing a series of improvements to the training procedure; basically, it is a BERT with improved training.

**T5** [11] (Text-to-Text Transfer Transformer): The main contributions of this work are, on the one hand, the reformulation of how tasks are posed, replacing the numerical approach with a textual one and, on the other, a systematic study of the contribution of the objective function, architectures, datasets, transfer approaches, and other factors to the results obtained on a set of canonical NLP tasks.

### Types of representations learned by transformers

We can classify the different types of representations that can be obtained depending on the type of transformer that is trained:

1. Contextual versus non-contextual representations: **Word2Vec** or **FastText** are non-contextual, whereas all those produced by transformers are contextual.
2. Unidirectional versus bidirectional representations.

### Scaling of transformer capabilities with size

As transformer models have been scaled up, qualitatively new capabilities have been reported to emerge that are absent in smaller models.

## 7 Transformers for Vision (ViT)

In the previous sections we studied transformers in their reference application, natural language processing. After the promising results obtained in that field, some authors began to consider the possibility of using this architecture in other fields such as vision. Since their introduction, the performance of ViT models has been shown to be comparable to that of specialized architectures such as convolutional networks for typical computer vision applications such as image classification [19], object detection [20], semantic segmentation [21], image colorization [22], low-level vision [23], and video compression [24], to name a few. Moreover, recent research indicates that the prediction errors of ViTs are more consistent with those of humans than those of convolutional networks [25].

The operation of a ViT is similar to that of a conventional transformer. The only essential difference lies in the nature of the input signal which, instead of being a sequence of words, is an image. The issue is therefore how to process the input so that it is manageable by the model; that is, how to adapt the _tokenizer_ to work with images. One of the first proposals in the literature, with very good results, consists of splitting the image into smaller disjoint parts and feeding them to the network sequentially. Each part has the same dimensions (16x16, for example, in [19]) and acts as the equivalent of the encoding of one element of a sequence. To this vector an _embedding_ is added to encode the position it occupies in the image, in the style of what is done with text sequences to locate each element. An exhaustive compendium of all the references existing so far on transformers for vision can be found in [27].

Although the main goal of the transformer architecture is to achieve a general-purpose treatment of heterogeneous data, not all existing publications go in that direction. Thus we can find articles that modify the attention systems to achieve locality properties similar to those of convolutional networks through the use of local or hierarchical attention, concepts we describe in the chapter devoted to multi-modal transformers.
Others use, as transformer input, image features coming from pre-trained convolutional networks. Although presenting the myriad of published variations that stray from the generalist spirit of the architecture is beyond the scope of this module, we give a general overview of the proposals made so far. It should be kept in mind that transformers in general, and for vision in particular, are a young and therefore immature architecture, which will surely be subject to great future innovations. Regardless of the interest a particular application may raise, the aim of this module is to explain the more generalist architectures that work well across a wide range of circumstances. In the following sections we try to present the architectures that fulfil that role for vision applications in the fields of classification, detection, and segmentation; for more specific applications the extensive bibliography can be consulted.

### Classification

So far, one can distinguish up to six different ways of approaching the contribution of transformers to solving classification problems, which we enumerate below:

1. _Direct adaptation of the original architecture_ [19]. The original image is laid out as a set of smaller non-overlapping parts (_patches_) that are treated as the elements of a sequence (see the sketch after this list).
2. _Transformer over features from a convolutional network_ [28]. Features extracted from an intermediate layer of a pre-trained CNN are fed to a transformer to improve its representation capabilities. The advantage of this approach is that the elements of the initial sequence already separate and encode the information, which is then combined and refined by the transformer; the structural priors of the CNN are thus exploited to ease the subsequent work.
3. _Knowledge distillation from a pre-trained convolutional network_ [29]. A CNN acts as teacher of a transformer-based network. The learning process transfers the inductive biases to the student network through knowledge distillation [30]. Surprisingly, the transformer-based student network ends up achieving better predictive results than the convolutional network it derives from.
4. _Transformers with localized attention_ [27] and _hierarchical transformers_. These are transformers whose attention modules are adapted to operate locally. Starting from a partition of the input image into smaller elements than in the original ViT [19] (4x4 instead of 16x16), these are fed to self-attention matrices that combine only the closest partitions. Once the attention is computed, it is recombined with the surrounding attentions through a fusion layer. The following layers apply the same localized attention model while shifting the combined partitions, thereby combining the information with the adjacent zones. In [31], [32] similar approaches are used, based on a hierarchical treatment in which the attention modules are modified to integrate a progressive reduction of the computation space.

Figure 14: Diagram of the process of partitioning, sequencing, and embedding an image for processing in the vision transformer (adapted from [26]).
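A minimal sketch of the patch-embedding step from item 1 (ours; sizes are illustrative, following the 16x16 patches of [19]):

```python
import numpy as np

def patchify(img, p=16):
    """Split an (H, W, C) image into non-overlapping p x p patches and
    flatten each patch into one 'token' vector of dimension p*p*C."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)      # (n_patches, p*p*C)

img = np.zeros((224, 224, 3))
tokens = patchify(img)
print(tokens.shape)          # (196, 768): 14*14 patches, each a 768-dim vector
W_E = np.zeros((768, 768))   # learned linear projection to the model dimension d_e
X = (tokens @ W_E).T         # (d_e, 196), ready for the position embeddings
```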
Figure 15 shows the temporal evolution of the state of the art for image classification (ImageNet benchmark). It can be seen that, as of mid-2022, the best-performing models are based on transformers (**ViT-G/14**, 1843M parameters, 91.0% [33]), followed very closely by one based on convolutional networks (**EfficientNet-L2**, 480M parameters, 90.2% [34]). To make the results comparable we look for a transformer with a similar number of parameters: **DaViT-H**, with 362M parameters, also achieves 90.2% [33]. From these results it can be concluded that the transformer architecture, despite having no structural priors, achieves classification results comparable to, and even better than, those of the most competitive CNNs, with a similar computational complexity. On the other hand, it should be noted that achieving these results requires subjecting transformer networks to longer optimisation processes with large datasets. An alternative when not enough labelled data are available is to use a prior semi-supervised learning stage [35].

Figure 15: State of the art of image-classification models [36]

### Detection

When referring to the use of transformers for detection in images, we must first distinguish between the applications that use an end-to-end transformer approach and those that use it only as a _backbone_ for feature extraction but employ a conventional final detection block (for example **RetinaNet** [37] or **Mask R-CNN** [38]). The aforementioned _backbones_ are transformers pre-trained on classification tasks that are used as feature extractors to feed a detection head. For a compendium of the models using this type of approach, the bibliography can be consulted [27]. As regards end-to-end transformers, at the time of writing this chapter there are two distinct paradigms for tackling the detection problem: one is the family of models derived from the **DETR** architecture [39], and the other is based on models derived from **Pix2Seq** [40]. We study below the defining characteristics of each of them.

#### 7.2.1 DETR: DEtection with TRansformer

Figure 16 shows schematically the **DETR** detection architecture. The input image is partitioned, sequenced and fed to a sequence-to-sequence transformer. The _tokenizer_ consists of a convolutional network that extracts the features of the original image before they enter the transformer encoder. The originality of this proposal lies in the decoding part. The decoder receives, on the one hand, the data coming from the encoder and, on the other, an input sequence which represents the proposal of objects to detect. On its way through the decoding network, this sequence is combined with the image information, yielding the output sequence. If the network has fulfilled its function, the output sequence contains the internal representation of the detection of the objects present in the image. The output sequence has the same cardinality as the input one, and each of its elements feeds an independent neural network from which the class and the bounding box of the detected elements are derived.
The size of the decoder sequence determines the maximum number of elements that can be detected. In the original article, the maximum number of objects is 100. The model provides a special class to indicate the absence of an object at each position. In **DETR** the input sequence consists of random numbers that serve as a prior for the detection of the objects. Despite the random nature of the **DETR** prior, the results obtained are very good. Later works have tried to improve the predictive capabilities of the network by refining the priors of the input sequence (**SMCA** [41], **Conditional DETR** [42], **Anchor DETR** [43], **DAB-DETR** [44], **Efficient DETR** [45], **Dynamic DETR** [46]).

A key point of the design of this architecture is a loss function that allows the efficient optimisation of the network during supervised learning, that is, the comparison of the predicted values with the true ones. To understand the difficulty of the problem, consider the example shown in Table 3: the predicted and true classes will not necessarily be found at the same position. Moreover, the ordering of the predictions does not guarantee the correspondence either, since it will often happen that the model is unable to predict all the elements, or predicts some of them wrongly. This problem is solved by computing the matching cost for all possible pairings between true and predicted values, and choosing the combination with the smallest cost [39, 47]. The optimal assignment obtained by bipartite matching is the one shown in Equation 6:

\[\hat{\sigma}=\underset{\sigma\in\mathfrak{S}_{N}}{\arg\min}\sum_{i}^{N}\mathcal{L}_{match}(y_{i},\hat{y}_{\sigma(i)}),\qquad\mathcal{L}_{match}(y_{i},\hat{y}_{\sigma(i)})=-\mathbb{1}_{\{c_{i}\neq\emptyset\}}\,\hat{p}_{\sigma(i)}(c_{i})+\mathbb{1}_{\{c_{i}\neq\emptyset\}}\,\mathcal{L}_{box}(b_{i},\hat{b}_{\sigma(i)}) \tag{6}\]

Once the optimal matching has been obtained, the network is optimised using the Hungarian loss function (Eq. 7); a minimal sketch of the matching step is given after the figure.

\[\mathcal{L}_{H}(y,\hat{y})=\sum_{i=1}^{N}\Big{[}-\log\hat{p}_{\hat{\sigma}(i)}(c_{i})+\mathbb{1}_{\{c_{i}\neq\emptyset\}}\,\mathcal{L}_{box}(b_{i},\hat{b}_{\hat{\sigma}(i)})\Big{]} \tag{7}\]

Figure 16: Architecture of the **DETR** transformer specialised in detection [39]
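The bipartite matching of Equation 6 can be illustrated with `scipy.optimize.linear_sum_assignment`, an optimal assignment solver. The sketch below uses a simplified stand-in for \(\mathcal{L}_{match}\) (a class-probability term plus an L1 box term); it is not the DETR implementation, and the toy sizes and names are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cost(pred_prob, pred_box, gt_cls, gt_box, w_box=1.0):
    """Pairwise cost between every prediction and every ground-truth object."""
    cls_cost = -pred_prob[:, gt_cls]                          # (N_pred, N_gt)
    box_cost = np.abs(pred_box[:, None, :] - gt_box[None, :, :]).sum(-1)
    return cls_cost + w_box * box_cost

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(4), size=5)   # 5 predictions, 3 classes + "no object"
boxes = rng.random((5, 4))                  # predicted boxes (x_min, y_min, x_max, y_max)
gt_cls = np.array([0, 2])                   # two ground-truth objects
gt_box = rng.random((2, 4))

C = match_cost(probs, boxes, gt_cls, gt_box)
rows, cols = linear_sum_assignment(C)       # optimal assignment of Eq. 6
# Hungarian-style loss of Eq. 7 for the matched pairs: class NLL plus box term;
# in DETR, unmatched predictions are trained towards the special "no object" class.
hungarian = (-np.log(probs[rows, gt_cls[cols]])
             + np.abs(boxes[rows] - gt_box[cols]).sum(-1)).sum()
```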
#### 7.2.2 Pix2Seq

Figure 17 shows a scheme of the operation of **Pix2Seq**. It uses a new paradigm consisting of a direct mapping between the image to be processed and a text-sequence output in which the class and bounding box of all the detected elements are encoded. It is an elegant solution with performance results similar to those obtained by **DETR**, and in some circumstances even better (for example in the detection of small objects). It uses a transformer network made up of encoders and decoders that solves the task auto-regressively. The encoding part can be either a transformer or a convolutional network. The encoder extracts the features of interest from the image and feeds them to the decoder network, which converts them, together with the commands coming from the input sequence, into a textual output. One element is generated at a time, conditioned on the features supplied by the encoder and on the elements of the sequence generated so far.

The textual nature of the output simplifies the computation of the loss function with respect to **DETR**. The expected output format is a set of elements of the form [x_min], [y_min], [x_max], [y_max], [class], ... [EOS], where EOS is the end-of-sequence marker that appears once all the bounding boxes present in the image have been emitted. During training, the ordering of the elements in the prediction sequence is randomised. The model is optimised with the typical log-likelihood loss used by auto-regressive NLP transformers (Eq. 8; a minimal sketch is given after the figure):

\[\max\sum_{j=1}^{L}\mathbf{\omega}_{j}\log P(\tilde{\mathbf{y}}_{j}\mid\mathbf{x},\mathbf{y}_{1:j-1}) \tag{8}\]

where \(\mathbf{x}\) is a given image, \(\mathbf{y}\) and \(\tilde{\mathbf{y}}\) are the input and target sequences associated with \(\mathbf{x}\), respectively, and \(L\) is the sequence length. \(\mathbf{\omega}_{j}\) is a pre-set weight for element \(j\) of the sequence; in the case of **Pix2Seq**, \(\mathbf{\omega}_{j}=1,\forall j\).

Figure 17: Input/output typology of the **Pix2Seq** transformer specialised in detection [40]
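As referenced above, here is a minimal sketch of the weighted log-likelihood objective of Equation 8 for a single toy sequence under teacher forcing (the quantity to minimise is the negative of the sum in Eq. 8). The vocabulary size, the random logits and the weights are illustrative assumptions.

```python
import numpy as np

def weighted_nll(logits, target, w):
    """Negative of Eq. 8: -sum_j w_j * log P(y~_j | x, y_{1:j-1})."""
    z = logits - logits.max(-1, keepdims=True)            # stable log-softmax
    logp = z - np.log(np.exp(z).sum(-1, keepdims=True))
    tok_logp = logp[np.arange(len(target)), target]       # log-prob of each target token
    return -(w * tok_logp).sum()

L, V = 6, 10                                 # sequence length, vocabulary size
rng = np.random.default_rng(3)
logits = rng.standard_normal((L, V))         # decoder output, one row per step
target = rng.integers(0, V, size=L)          # [x_min][y_min][x_max][y_max][class][EOS]
loss = weighted_nll(logits, target, w=np.ones(L))   # Pix2Seq uses w_j = 1 for all j
```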
### Segmentation

Segmentation is the generic term used to refer to what are actually three different problems: _semantic segmentation, instance segmentation and panoptic segmentation_. _Semantic segmentation_ is the problem of assigning a class to every pixel of an image. This type of segmentation is the simplest, since it does not require separating different objects belonging to the same class. In _panoptic segmentation_ and _instance segmentation_ the goal is more complex, since they aim to separate not only classes but also objects. The difference between the two is that panoptic segmentation identifies each pixel with a single class, whereas instance segmentation allows a pixel to be associated with more than one class, taking the overlap of objects into account.

The proposals for solving segmentation problems using transformers have undergone, and are still undergoing, a profound evolution. We can distinguish two approaches: on the one hand, direct segmentation from the pixel information of the image and, on the other, the construction of internal object-based representations from which the segmentation masks are subsequently inferred. The proposals based on the direct method are an extension of ViT to the solution of segmentation tasks (see **SETR** [21]). This first approach served to demonstrate the viability of the architecture for this type of problem, although with high computational costs. Later, proposals based on the internal representation of objects were presented, with architectures derived from the **DETR** detection architecture presented in the previous section. The first of these proposals required class, bounding box and mask; later ones are able to derive mask and class directly, without the need for a bounding box. In this section we present the **Mask2Former** architecture [48] as an example of a mature architecture that solves the three segmentation tasks (semantic, instance and panoptic) with a single architecture.

Figure 18 shows the representative scheme of the **MaskFormer** segmentation transformer (also valid for **Mask2Former**). It consists of a _backbone_, which can be either a convolutional network or a ViT, from which the representative features of the image are extracted. These are fed to a pixel decoder and to a transformer decoder. The pixel decoder is an FPN (Feature Pyramid Network) [50], but it could be any other architecture capable of transforming features into an image (an alternative example could be a U-NET [51]). The transformer decoder, in the style of **DETR**, generates an output sequence from an input sequence, using the internal representation of the objects present in the image coming from the _backbone_. From it, an MLP generates the class prediction for each object as well as an internal representation of the mask which, combined with the information coming from the output of the pixel decoder, yields the mask predictions for each of the objects. As with **DETR**, there is a special class to identify those positions in which no object has been detected and for which, therefore, the reported segmentation should not be taken into account. The maximum number of masks the network is able to detect is a design value of the architecture and is limited by the size of the object sequence.

Figure 18: MaskFormer transformer, an architecture specialised in image segmentation [49]

## 8 Transformers for Audio

Transformers are also becoming the de facto standard for the processing of audio signals, a space occupied until now by convolutional networks and, before that, by recurrent networks. The possibilities in this field are very broad and diverse. The architecture used is the same as in the applications presented above for other types of signal, the distinctive element being the tokenizer used to transform the input signal into an internal representation manageable by the network.

### Audio signals, processing and tokenization

Text and image inputs are more intuitive types of data to grasp; in the case of sound, its numerical expression seems less obvious. We will therefore explain the nature of audio signals in order to better understand how they are prepared for use in automatic prediction applications. A signal is the temporal evolution of a physical quantity that can be measured; in the case of audio, we are referring to variations of the air pressure at a given point. To process these signals digitally it is first necessary to store them. This is done by capturing the temporal information through sampling, that is, by recording the value of the physical quantity at a pre-established frequency. This sampling frequency can vary, but a common value for sound is, for example, 44.1 kHz, that is, 44,100 samples per second. Once this information has been captured, the digital representation of the audio signal is available and it can be processed digitally.

From mathematical analysis we know Fourier's theorem, which states that any periodic signal defined on a finite time interval, regardless of its shape, can be approximated by a sum of sinusoidal signals. Each of these constituent signals is defined by only two variables: amplitude and frequency.
These variables can be displayed graphically in what is called a frequency spectrum. It is thus possible to represent the same signal in two different ways: in the time domain or in the frequency domain. The transformation between one domain and the other is called the Fourier transform [52]. The FFT (Fast Fourier Transform) algorithm [53] makes it possible to carry out this transformation efficiently in a digital environment.

What has been presented so far applies to periodic signals, but speech and music are not periodic. What happens, then, in these cases? The solution is to define a window length within which the signal is treated as periodic, and for which its spectrum is computed. By carrying out this computation for consecutive overlapping windows, we obtain a frequency-domain representation of the non-periodic signal. This plot is called a spectrogram. The spectrogram plots frequency vs. time; each of its points represents the amplitude, in decibels, as a colour. Decibels express the amplitude on a logarithmic scale; this scale is used because human perception of changes in sound is more closely aligned with that order of scale.

As with amplitude, humans do not perceive frequencies linearly. Our ear is better adapted to perceiving differences at low frequencies (bass sounds). A healthy person can easily tell the difference between a 500 Hz signal and a 1000 Hz one, but will find it much harder to do so between one at 10,000 Hz and one at 10,500 Hz, even though the difference between them is the same. In 1937 the MEL scale [54] was proposed, which aims to linearise the perception of frequency differences by introducing a new unit called the MEL pitch. The plot of the amplitude in decibels on a two-dimensional pitch-vs-time graph is more linearly representative of the perception of human hearing. This plot is called the MEL spectrogram. It is usually the most common spectrogram in machine-learning models: its linearity with respect to human auditory perception emphasises the relevant information and eases the subsequent processing of the signal.

At this point we can understand why one of the most common representations of audio signals for machine learning is the MEL spectrogram (a minimal sketch of its computation is given below). Since it is a two-dimensional matrix that is usually larger than what the transformer can handle directly, a common tokenizer (others are also possible) consists of applying convolutions that reduce its size to the internal space of the architecture.
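As referenced above, the following sketch computes a MEL spectrogram in decibels with the `librosa` library. The file name and the hyper-parameters (FFT size, hop length, number of MEL bands) are illustrative assumptions, not values prescribed by any of the models cited in this module.

```python
import numpy as np
import librosa

# assumes a mono audio file "speech.wav" is available
y, sr = librosa.load("speech.wav", sr=44100)          # waveform sampled at 44.1 kHz
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                   hop_length=512, n_mels=80)
S_db = librosa.power_to_db(S, ref=np.max)             # amplitude on a dB (log) scale

# S_db has shape (n_mels, n_frames): a 2-D "image" of the sound that a
# tokenizer typically reduces with strided convolutions to the model dimension.
```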
### Examples of audio transformers

Below is a sample of the many audio applications that use transformers:

* **Whisper** [55] is a multi-language sequence-to-sequence transformer that, from an audio source containing speech, returns the associated text sequence.
* **Wav2Vec 2.0** [56] is a transformer with a purpose similar to that of BERT, but which works directly with audio sequences instead of text sequences. It uses a convolutional network as a feature encoder, whose outputs are then fed to the transformer. It performs unsupervised learning to obtain internal representations of audio sequences. The model can subsequently be fine-tuned with relatively little data to solve specific problems.
* **SepFormer** [57] is a transformer network specialised in separating voices from an audio source.

## 9 Multi-modal transformers

So far we have seen the potential of transformers for machine learning using data of a single nature. Given their generalist capabilities for solving problems in hitherto unconnected areas, it is logical to hypothesise that this architecture may be able to handle problems requiring inputs of different natures at the same time. Clear examples of agents of this kind are animals and people, who integrate information coming from up to five different senses. Multi-modal architectures are those capable of handling inputs of different natures, for example image and text.

From the point of view of topological geometry, the space generated by the transformer architecture is equivalent to that of a fully connected graph [58]. This is not the case in other architectures, where the interconnections of the graph are limited to a more restricted space. Transformers therefore have a more general and flexible modelling space [59]. This is an advantage when inputs of different natures need to be combined: the structural priors that may be useful for easing the optimisation for a given type of data can be a drawback when they are used together with other inputs of a different nature. The distinctive features of a multi-modal architecture with respect to the traditional one are described below.

### Multi-modal inputs

The first function a transformer performs is the conversion of the input data into a tensor of a defined internal dimension, which remains constant as it advances through the successive layers. _Tokenization_ transforms the input elements into an intermediate vector, to which a mathematical transformation is then applied that normally reduces its dimensions in order to reach the internal processing ones. For textual inputs it is common to use a _tokenizer_ that converts each element of the input sequence into an orthogonal (one-hot) vector whose dimension equals that of the dictionary. This operation is followed by a linear transformation in order to reach the internal working dimension. In the case of images, one strategy consists of _tokenizing_ by partitioning the image into non-overlapping parts of predefined size, followed by a linear transformation to reach the working dimension [19]. Other tokenization strategies consist of using specialised networks to derive features that are fed to the transformer [60].

In the case of video there are several strategies: in [61] tokenization consists of selecting clips of predetermined duration, at pre-established fps, which are then passed through a three-dimensional convolutional network to extract features that are subsequently adapted to reach the internal working dimensions. Other published strategies include tokenization by selecting three-dimensional points of interest in the clips followed by a linear transformation [62], and tokenization by partitioning the constituent images, applied to the clip as a whole, followed by a linear projection [63], among others.
In the case of audio, different strategies can also be used, depending in part on the goal to be achieved. One of them consists, as described earlier, of using the MEL spectrogram (or derivatives thereof) followed by a linear projection or by the computation of features through a convolutional network [64], [62], [63]. In the bibliography [59] one can find examples of preparatory modules for other, more specific types of data, such as SQL database statements, three-dimensional point clouds, tabular data, human poses, electronic health records and the set of possible commands to which a robot responds, among many others.

### Self-attention variants in a multi-modal context

Once the internal representation of the different input signals is available, the question arises of how to combine them most effectively in the self-attention layer to obtain the best results. Even though the representations may have the same dimensions, their different nature may make it necessary to adapt the transformations to each of the modes. The most common strategies for computing self-attention in this first stage are early summation, early concatenation, hierarchical attention and cross-attention. _Early summation_ (Figure 19) consists of summing the vectors coming from the different sources as a step prior to a single, common self-attention mechanism. _Early concatenation_ (Figure 20) generates a vector as the concatenation of the different modes, which is fed to the self-attention module. _Hierarchical self-attention_ (Figures 21 and 22) is carried out in two steps: in one, different transformations are applied to each signal and, in the other, a single transformation combines them. _Cross self-attention_ (Figure 23) uses independent attention channels in which the Query matrix of each channel acts on the other channel rather than on its own. _Cross self-attention followed by concatenation_ (Figure 24) works in two phases: first a cross-attention is applied and then a conventional self-attention is applied over the concatenation of the vectors coming from the previous operation.

### Multi-modal architectures

The aim of these strategies is to obtain an internal representation that integrates, in the most suitable way possible, the relevant information coming from each of the input modes. The chosen self-attention mechanism determines the characteristics of the architecture. We thus speak of _single-channel_, _multi-channel_ or _hybrid_ architectures. _Single-channel_ architectures are those derived from the use of the _early summation and concatenation_ self-attention mechanisms and have a single data-processing channel. _Multi-channel_ architectures are those derived from the use of _cross_ self-attention mechanisms and give rise to multiple channels with differentiated data processing. Finally, _hybrid_ architectures, derived from the _hierarchical_ and _cross-with-concatenation_ self-attention mechanisms, combine parts of the two previous kinds. A minimal sketch of the cross-attention variant follows.
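As referenced above, here is a minimal NumPy sketch of the cross-attention variant: queries come from one modality while keys and values come from the other. The shapes and the single shared set of projection matrices are illustrative assumptions; real models use separate learned projections per layer and per head.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def cross_attention(Xa, Xb, Wq, Wk, Wv):
    """Modality A queries modality B: Q from Xa, K and V from Xb."""
    Q, K, V = Xa @ Wq, Xb @ Wk, Xb @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (len_A, len_B) attention map
    return A @ V

rng = np.random.default_rng(4)
d = 64
text = rng.standard_normal((12, d))    # 12 text tokens
image = rng.standard_normal((49, d))   # 49 image patches
Wq, Wk, Wv = (0.05 * rng.standard_normal((d, d)) for _ in range(3))
text_to_image = cross_attention(text, image, Wq, Wk, Wv)   # (12, d)
image_to_text = cross_attention(image, text, Wq, Wk, Wv)   # (49, d)
```

In the cross-followed-by-concatenation variant, the two outputs would then be concatenated and passed through a conventional self-attention layer.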
## 10 Applications of multi-modal transformers

In this section we present some of the works published so far that use multi-modal transformers:

* **MMBT** [76] is a multi-modal architecture that integrates image and text to obtain an internal representation that can be used to perform classification tasks.
* **VideoBERT** [61] is a multi-modal architecture with a purpose similar to that of BERT, but using video as the data source.
* **Med-BERT** [77] is a multi-modal architecture that uses structured health records to represent the health of patients and, from this representation, to predict diseases.
* **Gato** [78] is a multi-modal architecture with a generalist objective. From data coming from diverse sources (image, text and a robot's own actions), it is able to solve up to 604 different tasks, among which are playing a multitude of Atari games, acting as a chatbot, commanding an articulated arm and labelling the content of an image, among many others; all of this using a single transformer-based architecture. It is the first generalist application that integrates vision, natural language processing, reinforcement learning and robot control in a single agent.

Figure 19: Early-summation attention. The information coming from two different channels is summed before being processed by the self-attention channel.

Figure 20: Concatenation attention. The information coming from two different channels is processed in the same self-attention layer.

Figure 21: Hierarchical multi-channel-to-single-channel attention. The information coming from different channels is processed in a first stage in independent attention channels, to be later processed jointly in the final stage.

Figure 22: Hierarchical single-channel-to-multi-channel attention. The information coming from different channels is processed jointly in a first stage, to be later processed independently in the final stage.

Figure 23: Cross-attention. The information coming from two different channels is processed in the same self-attention layer.

Figure 24: Cross-attention followed by concatenation. The information coming from two different channels is processed in the same self-attention layer and is later combined into a single channel.

## 11 Summary

In this module we have presented the Transformer neural-network architecture, which has the capacity to replace specialised architectures such as convolutional, recurrent and graph networks. This architecture not only works in the specific field of natural language processing, for which it was originally designed, but has also been shown to perform at the highest level in the processing of other specific types of signal, as well as in the solution of problems that combine data from different sources, which turns it into a generalist agent for problem solving. The Transformer architecture has been shown to work very well in areas such as computer vision, natural language processing, audio-signal processing, time-series prediction and reinforcement learning, among others.

The module has given a rigorous description of the algorithm of the architecture and has shown applications in different areas, from its original application in natural language processing, through image and sound processing, to its use in multi-modal applications. Knowing the implementation details of the Transformer architecture is of great importance, since it can be a general solution for most data-science problems.

### Notation

The mathematical notation of this module is the same as that used in [79]. For practical purposes we reproduce it below.
\begin{tabular}{l l l} \hline **Symbol** & **Type** & **Explanation** \\ \hline \([N]\) & \(:=\{1,...,N\}\) & set of integers \(1,2,...,N-1,N\) \\ \(i,j\) & \(\in\mathbb{N}\) & generic integer indices \\ \(V\) & \(\cong[N_{V}]\) & vocabulary \\ \(N_{V}\) & \(\in\mathbb{N}\) & vocabulary size \\ \(V^{*}\) & \(=\bigcup_{\ell=0}^{\infty}V^{\ell}\) & set of token sequences; e.g. words and documents \\ \(\ell_{max}\) & \(\in\mathbb{N}\) & maximum token-sequence length \\ \(\ell\) & \(\in[\ell_{max}]\) & token-sequence length \\ \(t\) & \(\in[\ell]\) & index of a token within a sequence \\ \(d_{...}\) & \(\in\mathbb{N}\) & dimension of various vectors \\ \(\mathbf{x}\) & \(\equiv x[1:\ell]\) & \(=x[1]x[2]...x[\ell]\in V^{\ell}\) primary token sequence \\ \(\mathbf{z}\) & \(\equiv z[1:\ell]\) & \(=z[1]z[2]...z[\ell]\in V^{\ell}\) context token sequence \\ \(M[i,j]\) & \(\in\mathbb{R}\) & entry \(M_{i,j}\) of the matrix \(M\in\mathbb{R}^{d\times d^{\prime}}\) \\ \(M[i,:]\equiv M[i]\) & \(\in\mathbb{R}^{d^{\prime}}\) & row \(i\) of the matrix \(M\in\mathbb{R}^{d\times d^{\prime}}\) \\ \(M[:,j]\) & \(\in\mathbb{R}^{d}\) & column \(j\) of the matrix \(M\in\mathbb{R}^{d\times d^{\prime}}\) \\ \(\mathbf{e}\) & \(\mathbb{R}^{d_{e}}\) & vector representation (embedding) of a token \\ \(\mathbf{X}\) & \(\mathbb{R}^{d_{e}\times\ell_{X}}\) & encoding of the primary token sequence \\ \(\mathbf{Z}\) & \(\mathbb{R}^{d_{e}\times\ell_{Z}}\) & encoding of the context token sequence \\ \(\mathbf{Mask}\) & \(\mathbb{R}^{\ell_{Z}\times\ell_{X}}\) & masking matrix; determines the attention context of each token \\ \(L,L_{enc},L_{dec}\) & \(\mathbb{N}\) & number of network layers (encoder and decoder) \\ \(l\) & \(\in[L]\) & network-layer index \\ \(H\) & \(\mathbb{N}\) & number of attention heads \\ \(h\) & \(\in[H]\) & attention-head index \\ \(N_{data}\) & \(\in\mathbb{N}\) & (i.i.d.) sample size \\ \(n\) & \(\in[N_{data}]\) & sample-sequence index \\ \(\eta\) & \(\in(0,\infty)\) & learning rate \\ \(\tau\) & \(\in(0,\infty)\) & temperature; controls diversity at inference time \\ \hline **Symbol** & **Type** & **Explanation** \\ \hline \(\mathbf{W_{e}}\) & \(\in\mathbb{R}^{d_{e}\times N_{V}}\) & token embedding matrix \\ \(\mathbf{W_{p}}\) & \(\in\mathbb{R}^{d_{e}\times\ell_{max}}\) & positional embedding matrix \\ \(\mathbf{W_{u}}\) & \(\in\mathbb{R}^{N_{V}\times d_{e}}\) & embedding-to-token (unembedding) matrix \\ \(\mathbf{W_{q}}\) & \(\in\mathbb{R}^{d_{attn}\times d_{X}}\) & query matrix parameters \\ \(\mathbf{b_{q}}\) & \(\in\mathbb{R}^{d_{attn}}\) & query bias \\ \(\mathbf{W_{k}}\) & \(\in\mathbb{R}^{d_{attn}\times d_{Z}}\) & key matrix parameters \\ \(\mathbf{b_{k}}\) & \(\in\mathbb{R}^{d_{attn}}\) & key bias \\ \(\mathbf{W_{v}}\) & \(\in\mathbb{R}^{d_{out}\times d_{Z}}\) & value matrix parameters \\ \(\mathbf{b_{v}}\) & \(\in\mathbb{R}^{d_{out}}\) & value bias \\ \(\mathbf{W_{qkv}}\) & & collection of parameters of an attention layer \\ \(\mathbf{W_{o}}\) & \(\in\mathbb{R}^{d_{out}\times H\cdot d_{mid}}\) & output matrix parameters \\ \(\mathbf{b_{o}}\) & \(\in\mathbb{R}^{d_{out}}\) & output bias \\ \(\mathbf{W}\) & & collection of parameters of a multi-head attention layer \\ \(\mathbf{W_{mlp}}\) & \(\in\mathbb{R}^{d_{1}\times d_{2}}\) & parameters of a transformer MLP \\ \(\mathbf{b_{mlp}}\) & \(\in\mathbb{R}^{d_{1}}\) & corresponding bias of a transformer MLP \\ \(\gamma\) & \(\in\mathbb{R}^{d_{e}}\) & scale parameter of the normalisation layer \\ \(\beta\) & \(\in\mathbb{R}^{d_{e}}\) & offset parameter of the normalisation layer \\ \(\theta,\hat{\theta}\) & \(\in\mathbb{R}^{d}\) & collection of all transformer parameters \\ \hline \end{tabular}
2302.11568
Galactic Model Parameters and Space Density of Cataclysmic Variables in Gaia Era: New Constraints to Population Models
The spatial distribution, Galactic model parameters and luminosity function of cataclysmic variables (CVs) are established using re-estimated trigonometric parallaxes of {\it Gaia} DR3. The data sample of 1,587 CVs in this study is claimed to be suitable for Galactic model parameter estimation as the distances are based on trigonometric parallaxes and the {\it Gaia} DR3 photometric completeness limits were taken into account when the sample was created. According to the analysis, the scale height of All CVs increases from 248$\pm$2 to 430$\pm$4 pc towards shorter periods near the lower limit of the period gap and suddenly drops to 300$\pm$2 pc for the shortest orbital period CVs. The exponential scale heights of All CVs and magnetic systems are found to be 375$\pm$2 and 281$\pm$3 pc, respectively, considerably larger than those suggested in previous observational studies. The local space density of All CVs and magnetic systems in the sample are $6.8^{+1.3}_{-1.1}\times$10$^{-6}$ and $2.1^{+0.5}_{-0.4}\times10^{-6}$ pc$^{-3}$, respectively. Our measurements strengthen the 1-2 order of magnitude discrepancy between CV space densities predicted by population synthesis models and observations. It is likely that this discrepancy is due to objects undetected by CV surveys, such as the systems with very low $\dot{M}$ and the ones in the period gap. The comparisons of the luminosity function of white dwarfs with the luminosity function of All CVs in this study show that 500 times the luminosity function of CVs fits very well to the luminosity function of white dwarfs. We conclude that the estimations and data sample in this study can be confidently used in further analysis of CVs.
R. Canbay, S. Bilir, A. Özdönmez, T. Ak
2023-02-22T18:59:59Z
http://arxiv.org/abs/2302.11568v1
# Galactic Model Parameters and Space Density of Cataclysmic Variables in Gaia Era: New Constraints to Population Models

###### Abstract

The spatial distribution, Galactic model parameters and luminosity function of cataclysmic variables (CVs) are established using re-estimated trigonometric parallaxes of _Gaia_ DR3. The data sample of 1,587 CVs in this study is claimed to be suitable for Galactic model parameter estimation as the distances are based on trigonometric parallaxes and the _Gaia_ DR3 photometric completeness limits were taken into account when the sample was created. According to the analysis, the scale height of All CVs increases from 248\(\pm\)2 to 430\(\pm\)4 pc towards shorter periods near the lower limit of the period gap and suddenly drops to 300\(\pm\)2 pc for the shortest orbital period CVs. The exponential scale heights of All CVs and magnetic systems are found to be 375\(\pm\)2 and 281\(\pm\)3 pc, respectively, considerably larger than those suggested in previous observational studies. The local space density of All CVs and magnetic systems in the sample are 6.8\({}^{+1.3}_{-1.1}\)\(\times\)10\({}^{-6}\) and 2.1\({}^{+0.5}_{-0.4}\)\(\times\)10\({}^{-6}\) pc\({}^{-3}\), respectively. Our measurements strengthen the 1-2 order of magnitude discrepancy between CV space densities predicted by population synthesis models and observations. It is likely that this discrepancy is due to objects undetected by CV surveys, such as the systems with very low \(\dot{M}\) and the ones in the period gap. The comparisons of the luminosity function of white dwarfs with the luminosity function of All CVs in this study show that 500 times the luminosity function of CVs fits very well to the luminosity function of white dwarfs. We conclude that the estimations and data sample in this study can be confidently used in further analysis of CVs.

Cataclysmic Variables - solar neighbourhood

## 1 Introduction

Cataclysmic variables (CVs) are short-period semi-detached binary stars. A cataclysmic variable's primary component is a white dwarf which is accreting matter from a Roche-lobe filling low-mass main-sequence star, the secondary component, via a gas stream. Since the matter stream has a high angular momentum and the primary star is small, an accretion disc surrounding the white dwarf is created. A bright spot is also formed where the matter stream impacts the disc. Magnetised white dwarfs in CVs have accretion columns instead of discs to transfer matter from the secondary component (Warner, 1995; Hellier, 2001; Knigge, Baraffe & Patterson, 2011).

The standard formation and evolution scenario developed for CVs is concentrated on the explanation of the features seen in the orbital period distribution of these systems, since the most precisely determined parameter of a CV is its orbital period. The sharp cut-off at about 80 min (Willems et al., 2005; Gansicke et al., 2009), the period minimum, and the period gap between roughly 2 and 3 h (King, 1988; Knigge, Baraffe & Patterson, 2011) are the most striking features of the CV period distribution. The standard theory successfully explains these features as its main predictions are supported by observations.
However, there are still some observational properties to be explained, for example: (1) Predicted and observed fractions of CVs above and below the period gap (de Kool, 1992; Kolb, 1993; Howell, Nelson & Rappaport, 2001; Gansicke et al., 2009; McAllister et al., 2019) are not in agreement. Standard CV population studies predict that more than 90% of CVs must be located below the period gap, while observations imply almost equal numbers of CVs below and above the gap. Although sky surveys like the Sloan Digital Sky Survey (SDSS; Szkody et al., 2002, 2003, 2004, 2005, 2006, 2007, 2011) revealed a larger population of short-period CVs, the disagreement still remains. (2) The fraction of observed post-period minimum CVs, period bouncers, is much smaller than that predicted by the population models based on the standard theory (Patterson et al., 2005; Unda-Sanzana et al., 2008; Littlefair et al., 2008; Patterson, 2011; Kato et al., 2015, 2016; McAllister et al., 2017; Neustroev et al., 2017; Pala et al., 2018). McAllister et al. (2019) found that 30% of donor stars in a sample of 225 CVs are likely to be brown dwarfs in period bouncers. However, only 5% of the volume-limited sample of CVs in Pala et al. (2020) included period bouncers. (3) Although the standard theory predicts an orbital period minimum of about 65-70 min (see Kalomeni et al., 2016, and references therein), observed values are about 76-82 min (Knigge, Baraffe & Patterson, 2011; McAllister et al., 2019). (4) The observed white dwarf masses in CVs have been significantly larger than those of single white dwarfs (see Zorotovic & Schreiber, 2020, and references therein). In addition, white dwarf mass in CVs does not change with orbital period (McAllister et al., 2019). (5) Population studies based on the standard evolutionary model predict space densities 1-2 orders of magnitude larger than observed values (Zorotovic & Schreiber, 2020).

Although additional angular momentum loss mechanisms and models (Patterson, 1998; Knigge, Baraffe & Patterson, 2011; Schreiber, Zorotovic & Wijnen, 2016; Pala et al., 2017; Zorotovic & Schreiber, 2017, 2020; Belloni et al., 2018; Liu & Li, 2019; Metzger et al., 2021; Sarkar & Tout, 2022) were suggested to solve the disagreements, the only published, self-consistent simulations of CV evolution were performed by Hillman et al. (2020), whose multi-Gyr models of novae take into account every nova eruption's thermonuclear runaway, mass and angular momentum losses, feedback due to irradiation and variable mass transfer rate (\(\dot{M}\)), and orbital size and period changes. Hillman et al. (2020) reproduced the observed range of mass transfer rates at a given orbital period, with large and cyclic Kyr-Myr timescale changes. It should be noted that the magnetic systems may have different evolutionary scenarios from non-magnetic CVs (see references in Belloni et al., 2020).

Depending on the completeness of the samples, reliable observational constraints can be obtained from stellar statistics (Ak et al., 2008; Ozdonmez, Ak & Bilir, 2015), and a proposed evolutionary scheme must also be in agreement with the data obtained from stellar statistics (Duerbeck, 1984). In this respect, the Galactic model parameters and the space density of a group of objects are key parameters to constrain and test population models based on evolutionary schemes.
A wide range of observational results is remarkable, while the predicted space densities are systematically 1-2 orders of magnitude larger than those derived from observations. For example, previous CV population synthesis models predicted space densities of \(10^{-5}\)-\(10^{-4}\) pc\({}^{-3}\) (Ritter & Burkert, 1986; de Kool, 1992; Kolb, 1993; Politano, 1996; Willems et al., 2005, 2007; Goliasch & Nelson, 2015; Belloni et al., 2018), while observations indicated \(10^{-7}\)-\(10^{-4}\) pc\({}^{-3}\) (Warner, 1974; Patterson, 1984, 1998; Thomas & Beuermann, 1998; Ringwald, 1993; Schwope et al., 2002; Araujo-Betancor et al., 2005; Pretorius et al., 2007; Pretorius, Knigge & Kolb, 2007; Ak et al., 2008; Revnivtsev et al., 2008; Pretorius & Knigge, 2012; Pretorius, Knigge & Schwope, 2013; Schwope, 2018). In a recent study, Pala et al. (2020) measured very precise space densities of \(4.8^{+0.6}_{-0.9}\times 10^{-6}\) and \(1.2^{+0.4}_{-0.5}\times 10^{-6}\) pc\({}^{-3}\) for All CVs and magnetic CVs (mCVs), respectively, using the European Space Agency's (ESA) _Gaia_ Data Release 2 (_Gaia_ DR2; Gaia Collaboration, 2018). They assumed a scale height of 280 pc for their analysis and the sample was composed of only 42 objects within 150 pc of the Sun. They assumed that this restriction reduces the uncertainties in the derived space densities related to the unknown age and scale height of the CV population, as well as the uncertainties from the astrometric solutions. Belloni et al. (2020) claimed that these are the most reliable observational space density estimates ever found. They also concluded that the space densities given in Pala et al. (2020) are in very good agreement with their predicted values if potential period bouncers are excluded from the space density estimation. However, it should be noted that Belloni & Schreiber (2020) performed binary population models using an up-to-date version of the BSE code (Hurley, Tout & Pols, 2002) and found that their model fails to explain some observational properties of magnetic CVs. Thus, the agreement between the space density measurements of Pala et al. (2020) and the predictions by Belloni et al. (2020) does not mean that the previous population synthesis models based on the standard evolution and formation theory are wrong. It is likely that the space densities of \(10^{-5}\)-\(10^{-4}\) pc\({}^{-3}\) proposed by previous CV population synthesis models are correct and that the current observational measurements suffer from incomplete sky surveys, as the surveys are probably missing most of the very low \(\dot{M}\) systems, the CVs in the orbital period gap, whose lifetime is predicted to be longer (Hillman et al., 2020), and most of the period bouncers.

Besides these studies, observational Galactic model parameters, luminosity functions and space densities of CVs, i.e. intrinsic properties of the Galactic CV population, should still be determined from suitable data by using the methods of observational Galactic structure studies (Karaali, Bilir & Hamzaoglu, 2004; Bilir et al., 2006a,b,c; Bilir, Karaali & Gilmore, 2006; Bilir et al., 2008; Karaali et al., 2007; Cabrera-Lavers et al., 2007), as the number of systems with reliable distance estimates is high enough to use these methods. Sky surveys such as _Gaia_ (Gaia Collaboration, 2016, 2018, 2021, 2022) and SDSS (Szkody et al., 2002, 2003, 2004, 2005, 2006, 2007, 2011) detected faint systems and provided reliable distances of CVs.
Using these data, it is possible to decrease the selection effects that may be strong for faint systems. These data and methods allow us to determine observational constraints for the evolutionary models of CVs and mCVs. In this study, we use ESA's _Gaia_ Data Release 3 (_Gaia_ DR3; Gaia Collaboration, 2022) and Bailer-Jones et al. (2021) to obtain reliable distances for a CV sample. We analyse the spatial distribution of CVs and discuss the completeness of the CV sample. Then, we use this sample to estimate the Galactic model parameters and space densities of All CVs and mCVs with scale heights obtained from exponential and sech\({}^{2}\) functions fitted to \(z\)-histograms.

Footnote 1: www.aavso.org/vsx/

Footnote 2: [https://simbad.u-strasbg.fr/simbad/sim-fid](https://simbad.u-strasbg.fr/simbad/sim-fid)

## 2 Data

In order to construct our CV sample, we collected CVs from AAVSO's International Variable Star Index1 database. Their equatorial and Galactic coordinates were taken from the SIMBAD2 database. We also included CVs found in previous studies (e.g. Hofmann et al., 2018; Halpern et al., 2018; Szkody et al., 2018; Bernardini et al., 2019; Kato, 2019; Yu et al., 2019; Belloni et al., 2020; Kato et al., 2020; Schwope et al., 2020) in this sample. Orbital periods were mainly taken from Ritter & Kolb (2003) and Downes et al. (2001). Superhump periods of objects whose orbital periods are unknown were assumed to be their orbital periods (Kato et al., 2020, and references therein). The number of systems is 10,852 in this very rough sample. We ignored the objects classified as non-CVs and removed objects for which trigonometric parallax measurements are not present in _Gaia_ DR3. All the objects in the preliminary sample were checked with respect to their equatorial coordinates in order to avoid duplication. In order to prevent the misidentification of CVs due to adjacent objects, we also checked each object's _Gaia_ position individually using Aladin3 and removed misidentified objects from the database. To ensure a robust match, we crossmatched each object in the catalogue to a 15\({}^{{}^{\prime\prime}}\) radius in _Gaia_ DR3, propagated the subsets of _Gaia_ DR3 15\({}^{{}^{\prime\prime}}\) crossmatches to the J2000 epoch using proper motion, and made a final crossmatch at J2000 with a radius of 2\({}^{{}^{\prime\prime}}\), given the precision of the catalogue. The 132 CVs in the directions of globular clusters were excluded from the statistics to avoid mismatching with the data in the _Gaia_ catalogue. As a result of this selection process, _Gaia_ photometric and astrometric data of 5,621 CVs were obtained. We show the matching procedure for nine objects as an example: the results of matching these objects at different \(G\) apparent magnitudes from the _Gaia_ DR3 catalogue with Pan-STARRS \(g\) images on Aladin4 are shown in Figure 1.

Footnote 3: [https://aladin.u-strasbg.fr/](https://aladin.u-strasbg.fr/)

Footnote 4: [http://aladin.cds.unistra.fr/aladin.gml](http://aladin.cds.unistra.fr/aladin.gml)

As we need a precise data sample to extract the Galactic model parameters of CVs, we also made strict cuts on quality flags, even though these cuts remove numerous systems from the sample. We retain matches with phot_g_mean_flux_over_error (\(f_{\rm G}/\delta f_{\rm G}\)) \(>8\), more than eight "good" astrometric observations as characterized by _Gaia_ DR3, astrometric_excess_noise \(<2\), and \(\varpi>0.1\) mas. The remaining sample includes 4,149 CVs.
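A minimal sketch of the epoch propagation and final 2″ crossmatch described above, using `astropy` coordinates. The linear small-angle propagation, the toy coordinates and the helper name are illustrative assumptions rather than the exact pipeline of this work; the Gaia DR3 positions are assumed to refer to epoch J2016.0.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def propagate_to_j2000(ra, dec, pmra_cosdec, pmdec, epoch=2016.0):
    """Shift positions from the Gaia DR3 epoch to J2000 with proper motion.
    ra/dec in degrees, proper motions in mas/yr; small-angle approximation."""
    dt = 2000.0 - epoch                                    # years (negative here)
    ra2000 = ra + pmra_cosdec * dt / 3.6e6 / np.cos(np.radians(dec))
    dec2000 = dec + pmdec * dt / 3.6e6
    return ra2000, dec2000

# toy catalogues: CV positions at J2000 vs. propagated Gaia DR3 sources
cv = SkyCoord(ra=[150.1, 210.5] * u.deg, dec=[-12.3, 33.7] * u.deg)
gra, gdec = propagate_to_j2000(np.array([150.10010, 210.49995]),
                               np.array([-12.30005, 33.70002]),
                               np.array([20.0, -15.0]),     # pmra*cos(dec), mas/yr
                               np.array([-8.0, 5.0]))       # pmdec, mas/yr
gaia = SkyCoord(ra=gra * u.deg, dec=gdec * u.deg)

idx, sep, _ = cv.match_to_catalog_sky(gaia)  # nearest Gaia source for each CV
matched = sep < 2 * u.arcsec                 # final crossmatch radius of 2"
```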
In the sample, magnetic CVs classified as DQ Her (intermediate polars) or AM Her type (polars) objects are indicated as mCV, and the remaining systems as CV. The number of magnetic CVs is only 205 in this sample. Although the sample includes 4,149 objects classified as CV, we know orbital periods for only 1,187 of them.

We used _Gaia_ DR3 data (Gaia Collaboration, 2022) to obtain the distances of the CVs in our catalogue. In order to do this, we matched our catalogue with the _Gaia_ DR3 catalogue and found the _Gaia_ ID for each CV. It is possible to estimate the distances of CVs by simply inverting their trigonometric parallaxes taken from the _Gaia_ DR3 catalogue. However, Bailer-Jones et al. (2018, 2021) indicated that the nonlinearity of the transformation and the asymmetry of the resulting probability distribution must be taken into account, and they re-estimated the _Gaia_ EDR3 (Gaia Collaboration, 2021) parallaxes. In such a re-estimation, we do not expect substantially different distance estimates from the _Gaia_ DR3 data and the approximation of Bailer-Jones et al. (2021) for a given system. Thus, we matched our catalogue with the catalogue of Bailer-Jones et al. (2021) using _Gaia_ DR3 IDs to obtain precise distances and distance errors of CVs, and compared the distances of CVs estimated from _Gaia_ DR3 and Bailer-Jones et al. (2021) in Figure 2, where different \(G\) apparent magnitude intervals are represented by coloured symbols. This comparison shows that there is a considerable scatter especially for fainter systems, while the scatter is much smaller for CVs with \(G\leq 18.5\) mag. Thus, we limited our CV sample to the systems with \(G\leq 18.5\) mag, which is set as the faint limit of the sample. We set the bright limit at \(G=9\) mag, since there is no brighter object in our sample. The final sample comprises CVs with \(9\leq G\leq 18.5\) mag and includes 1,714 CVs, 767 of them with known orbital periods. The relative distance error of the sample is less than 1.66 and the median value is 0.06.

Figure 1: Locations of nine CVs with different \(G\) apparent magnitudes selected from the _Gaia_ DR3 catalogue, shown on Pan-STARRS \(g\) images using Aladin. Target objects are located in the centres of the images.

Although the number of CVs in the sample is the largest ever used in similar analyses, it is still questionable whether this sample is sufficiently large to be representative of the entire CV population and whether it is subject to magnitude-related selection effects. Since the standard theory predicts that most CVs should be intrinsically faint objects, the apparent magnitude limits of surveys are one of the strongest selection effects. Therefore, the completeness limits of the data must be taken into account in a study based on stellar statistics. In order to set completeness limits for the sample, we first obtained the interstellar absorption in the \(V\)-band, \(A_{\rm V}\), for CVs in the sample by using the MWDUST code, which produces two- and three-dimensional Galactic dust maps (Bovy et al., 2016) based on the dust maps of Schlegel, Finkbeiner & Davis (1998) as re-calibrated by Schlafly & Finkbeiner (2011). The MWDUST code can provide two- and three-dimensional data according to Galactic coordinates, from the Sun in all directions out to the edges of our Galaxy, as contributed by the Galactic dust.
Since the distances of the systems are well known from trigonometric parallaxes, we preferred to use the two-dimensional data, and denote the rough absorption value in the \(V\)-band in the direction of an object with Galactic coordinates (\(l\), \(b\)) by \(A_{\infty}(V)\), which practically means up to infinity, in effect up to the edge of the Galaxy.

Figure 2: Comparison of CV distances obtained from _Gaia_ DR3 and Bailer-Jones et al. (2021) (\(d_{\rm BJ}\)). Different \(G\) apparent magnitude intervals are shown in panels (a), (b), (c) and (d). The red line represents the one-to-one line and the blue dashed lines 500 and 1000 pc distances from the red line.

The total absorption in the \(V\)-band for the distance \(d\) to the star is calculated as follows (Bahcall & Soneira, 1980):

\[A_{\rm d}(V)=A_{\infty}(V)\Bigg{[}1-\exp\Bigg{(}\frac{-\mid d\times\sin b\mid}{H}\Bigg{)}\Bigg{]}, \tag{1}\]

where \(H\) is the scale height of the interstellar dust, adopted as 125 pc (Marshall et al., 2006). As the distance of the system (\(d_{\rm BJ}\), hereafter also denoted as \(d\)) is known from Bailer-Jones et al. (2021), the total absorption \(A_{\rm d}(V)\) for the system can be estimated. We obtained the colour excess \(E_{\rm d}(B-V)\) for each CV in the sample using \(E_{\rm d}(B-V)=A_{\rm d}(V)/3.1\). The total absorptions in the \(G\), \(G_{\rm BP}\), and \(G_{\rm RP}\) bands were obtained by using the following relations:

\[\begin{split} A(G)&=0.83627\times 3.1E_{\rm d}(B-V),\\ A(G_{\rm BP})&=1.08337\times 3.1E_{\rm d}(B-V),\\ A(G_{\rm RP})&=0.63439\times 3.1E_{\rm d}(B-V).\end{split} \tag{2}\]

The selective absorption coefficients in Equation (2) were taken from Cardelli, Clayton & Mathis (1989). Once the total absorption values are known, the de-reddened apparent magnitudes in the \(G\), \(BP\) and \(RP\) bands, \(G_{0}\), \(G_{\rm BP,0}\) and \(G_{\rm RP,0}\), respectively, can be calculated. The absolute magnitudes \(M_{\rm G}\) of CVs were calculated using the distance modulus formula \(G_{0}-M_{\rm G}=5\times\log(d_{\rm BJ})-5\), where \(d_{\rm BJ}\) is the distance obtained from Bailer-Jones et al. (2021).

The absolute magnitudes \(M_{\rm G}\) of CVs in the preliminary sample are plotted against their distances in Figure 3. Red dashed lines in Figure 3 show distances estimated from the bright and faint brightness limits (\(9\leq G\leq 18.5\) mag) for absolute magnitude intervals of 1 mag. The boxes limited by the red dashed lines define the completeness limits of the data for the corresponding absolute magnitude intervals. We removed systems beyond these limiting magnitudes (outside the boxes defined by the red dashed lines in Figure 3) from the CV sample in order to obtain a complete catalogue in a certain volume with the Sun at its centre. We found that the majority of systems beyond 4 kpc in Figure 3 were discovered by SDSS. The final sample includes 1,587 CVs, 704 of them with known orbital periods. There are only 124 mCVs in this sample, 117 of them with a known orbital period. The analyses in this paper were performed using this final sample.

Figure 3: The absolute magnitudes \(M_{\rm G}\) of CVs in the preliminary sample against their distances \(d_{\rm BJ}\) obtained from Bailer-Jones et al. (2021). Red dashed lines show distances estimated from the bright (\(G=9\) mag) and faint (\(G=18.5\) mag) limiting apparent magnitudes for absolute magnitude intervals of 1 mag. Blue solid lines correspond to the bright and faint apparent magnitude limits.
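A minimal Python sketch of Equations (1) and (2) and the distance modulus step described above; the function names and the example numbers are illustrative assumptions.

```python
import numpy as np

H_DUST = 125.0  # scale height of the interstellar dust in pc (Marshall et al. 2006)

def a_v_at_distance(a_v_inf, d_pc, b_deg):
    """Eq. (1): total V-band absorption up to distance d (Bahcall & Soneira 1980)."""
    return a_v_inf * (1.0 - np.exp(-abs(d_pc * np.sin(np.radians(b_deg))) / H_DUST))

def gaia_absolute_mag(G, a_v_inf, d_pc, b_deg):
    """De-redden G with Eq. (2) and apply the distance modulus."""
    a_d_v = a_v_at_distance(a_v_inf, d_pc, b_deg)
    ebv = a_d_v / 3.1                       # colour excess E_d(B-V)
    A_G = 0.83627 * 3.1 * ebv               # total absorption in the G band
    G0 = G - A_G                            # de-reddened apparent magnitude
    return G0 - 5.0 * np.log10(d_pc) + 5.0  # absolute magnitude M_G

# e.g. a CV with G = 16 mag at d = 500 pc, b = 20 deg and A_inf(V) = 0.4 mag
M_G = gaia_absolute_mag(16.0, 0.4, 500.0, 20.0)
```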
The final sample is given in Table 1, including equatorial coordinates \((\alpha,\delta)_{\rm J2000}\), object groups (magnetic (mCVs) or non-magnetic (non-mCVs)), orbital periods (\(P_{\rm orb}\)), _Gaia_ DR3 trigonometric parallaxes (\(\varpi\)) and relative parallax errors (\(\sigma_{\varpi}/\varpi\)), proper motions (\(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) and \(G\)-band apparent magnitudes from _Gaia_ DR3. The distances \(d_{\rm BJ}\) (also denoted as \(d\)) in Table 1 were taken from Bailer-Jones et al. (2021). This sample is the largest ever used in similar analyses of CVs. Besides, it includes the most reliable distance information for these systems. Objects that are not classified as magnetic systems in the literature are denoted as CV in our catalogue.

In Figure 4a we show the distance histogram of CVs in the final sample, which is limited using the bright and faint limiting magnitudes in the \(G\)-band, 9 and 18.5 mag, respectively. The cumulative distribution of CV distances is presented in Figure 4b. Relative distance errors \(\sigma_{d_{\rm BJ}}/d_{\rm BJ}\) obtained from Bailer-Jones et al. (2021) are also shown in Figure 4c. As expected, the relative errors increase with distance. Relative distance errors are good indicators of the precision of the distance measurements, which is very important in our study. While the relative errors for 66% of the systems in the final sample are \(\sigma_{d_{\rm BJ}}/d_{\rm BJ}\leq\) 0.10, 88% of them have relative distance errors of \(\sigma_{d_{\rm BJ}}/d_{\rm BJ}\leq\) 0.25. These values show that reliable constraints on the population models of CVs can be obtained from the sample in this study.

The precision of the distance estimates can also be seen in the HR diagram of the CVs for which the orbital periods are known. Figure 5 shows the HR diagram of CVs below and above the orbital period gap using _Gaia_ photometry. The separating point of the orbital period gap is taken as 2.6 h (Ak et al., 2010). CVs above the gap are generally brighter than those located below it, as expected. This discrimination is very clear now owing to the precise distance measurements. It is clear that the colour range of systems with \(P_{\rm orb}\leq\) 2.6 h is narrower compared to that of systems with \(P_{\rm orb}>\) 2.6 h. Figure 5 shows that there are short-period systems brighter than \(M_{\rm G}\approx\) 5 mag. These are probably SU UMa type dwarf novae which were in the superoutburst phase when they were observed. The blue and grey shaded parts in the HR diagram represent the regions where the white dwarfs and the main-sequence and giant stars are located, respectively. The data for these shaded regions were taken from Abril et al. (2020).

Table 1: The final CV sample: ID, star name, equatorial coordinates (\(\alpha\), \(\delta\)), object group, orbital period \(P_{\rm orb}\), parallax \(\varpi\) and relative parallax error \(\sigma_{\varpi}/\varpi\), proper-motion components (\(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)), \(G\) magnitude, distance \(d_{\rm BJ}\) and reference.

## 3 The analysis and comparisons

### Spatial distribution

The distribution of CVs in the final sample according to equatorial and Galactic coordinates is plotted in Figure 6, upper and lower panels, respectively. The panels in Figure 6 indicate that the CVs are in general distributed symmetrically about the Galactic plane. The densest regions in both panels correspond to the Solar vicinity.
In order to inspect the Galactic distribution of the CVs in the Solar neighbourhood, we also calculated the Sun-centred rectangular Galactic coordinates of the CVs in the sample (\(X\) towards the Galactic centre, \(Y\) in the direction of Galactic rotation, \(Z\) towards the north Galactic pole) and display their projected positions on the Galactic plane (\(X-Y\) plane) and on a plane perpendicular to it (\(X-Z\) plane) in Figure 7. The median heliocentric rectangular Galactic coordinates (\(X\), \(Y\), \(Z\)) are 80, 93 and -18 pc for all systems in the sample, and -51, -4 and 24 pc for the magnetic systems. These values are summarized in Table 2.

\begin{table}
\begin{tabular}{l c c c c c}
\hline
Group & \(N\) & \(\widetilde{d}\) & \(\widetilde{X}\) & \(\widetilde{Y}\) & \(\widetilde{Z}\) \\
 & & (pc) & (pc) & (pc) & (pc) \\
\hline
All CVs & 1587 & 989 & 80 & 93 & -18 \\
mCVs & 124 & 559 & -51 & -4 & 24 \\
\hline
\end{tabular}
\end{table}
Table 2: The median distances (\(d\)) and heliocentric rectangular Galactic coordinates (\(X\), \(Y\), \(Z\)) of the CVs in the sample. Values are separately listed for All CVs and magnetic (mCVs) systems. \(N\) denotes the number of objects.

Figure 4: Distance histogram of the CVs in the sample, which is limited by the bright and faint limiting magnitudes in the \(G\)-band. Relative distance errors \(\sigma_{d_{\rm BJ}}/d_{\rm BJ}\) are also shown in the lowest panel.

Figure 5: _Gaia_ DR3 HR diagram of the CVs in the final sample with known orbital periods \(P_{\rm orb}\). The separating point of the orbital period gap is taken as 2.6 h (Ak et al., 2010). White dwarf, main-sequence and giant star regions are shaded in blue and grey, respectively (Abril et al., 2020).

Figure 6: The distribution of CVs in equatorial (\(\alpha\), \(\delta\)) and Galactic (\(l\), \(b\)) coordinates. Magnetic and non-magnetic systems are shown in blue and black, respectively. Red dashed lines represent the Galactic plane (upper panel) and the celestial equator (lower panel). The green star in the upper panel indicates the location of the Galactic centre.

Figures 6 and 7 show that there is no considerable bias in the spatial distribution of the CVs in our study. It should be noted that, since magnetic CVs are in general faint objects, only those near the Sun can be detected. The median distances of the CVs in the final sample are 989 and 559 pc for All CVs and the magnetic systems, respectively, while the corresponding median distances in the CV sample of Ozdonmez et al. (2015) were 423 and 385 pc. A comparison of the median distances in the two studies, and of Figure 5 in this study with Figure 6 of Ozdonmez et al. (2015), reveals that much more distant objects are included in our sample, thanks to the _Gaia_ mission. In addition, the number of CVs with distance estimates in this study is higher, and the distances are more accurate, than in Ozdonmez et al. (2015).

Figure 7: The spatial distribution of the CVs in the final sample with respect to the Sun. \(X\), \(Y\) and \(Z\) are the Sun-centred rectangular Galactic coordinates. Magnetic systems (mCVs) are shown with blue symbols. Number histograms according to the coordinates are shown to the right of and above the figure.

### Galactic model parameters

Using the number of objects per unit volume, it is possible to obtain information on their Galactic population. The Galactic positions of the objects must be known in order to estimate their Galactic model parameters.
This information allows us to estimate the scale length and scale height of the objects in question. Based on deep sky surveys, the scale length of thin-disc stars is expected to be larger than 2.6 kpc (Bilir et al., 2006; Juric et al., 2008). Besides All CVs being members of the thin-disc population of the Galaxy according to their Galactic kinematics (Ak et al., 2015), most of the systems in our sample are located at distances smaller than 2 kpc, as shown in Figure 7, and the median distance of All CVs in the final sample is 989 pc, much less than 2.6 kpc. Thus, our sample was not separated into population types and no scale length estimation was performed.

In order to find the Galactic model parameters of CVs, \(z\)-histograms that describe the vertical distribution of the objects in the Galaxy must be studied. Although Galactic model parameters are usually derived using exponential functions, Bilir et al. (2006a) showed that the observed vertical distribution in the Solar neighbourhood is smoother and is well approximated by a hyperbolic secant squared function (sech\({}^{2}\)). Thus, the number of stars at a distance \(z\) from the Galactic plane is described in our study using both the exponential and the sech\({}^{2}\) function,

\[n(z)=n_{0}\exp\Bigg{(}-\frac{\mid z\mid}{H}\Bigg{)} \tag{3}\]

and

\[n(z)=n_{0}\ \mathrm{sech}^{2}\Bigg{(}\frac{\mid z\mid}{H_{z}}\Bigg{)}, \tag{4}\]

respectively. \(n(z)\) based on sech\({}^{2}\) can also be expressed as

\[n(z)=n_{0}\ \Bigg{(}\frac{4}{\exp\left(-2z/H_{z}\right)+\exp\left(2z/H_{z}\right)+2}\Bigg{)} \tag{5}\]

(Bilir et al., 2006a). Here, \(z\) is the distance of the object from the Galactic plane and \(n_{0}\) is the number of stars at \(z=0\) pc. \(H\) and \(H_{z}\) are the exponential and sech\({}^{2}\) scale heights, respectively. \(z\) is calculated as \(z=z_{0}+d\sin(b)\), where \(b\) is the Galactic latitude of the star, \(d\) the distance of the object and \(z_{0}\) the distance of the Sun from the Galactic plane (24 pc; Juric et al., 2008). The relation between the exponential scale height \(H\) and the sech\({}^{2}\) scale height \(H_{z}\) is \(H=1.08504\times H_{z}\) (Bilir et al., 2006a).

To sample the posteriors, we used the Markov Chain Monte Carlo (MCMC) _emcee_ package implementing the affine-invariant ensemble sampler (Goodman & Weare, 2010), kindly provided by Foreman-Mackey et al. (2013). To obtain initial model parameters, we used a nonlinear least-squares algorithm (the Python library LMFIT). Using these priors, we ran the MCMC with 128 walkers for 15,000 steps per chain. Thus, we obtained the most plausible model parameters and their errors by minimising the chi-square (\(\chi^{2}_{\mathrm{min}}\)). The best fits to the \(z\)-histograms of All CVs and the magnetic CVs in the sample are shown in Figure 8, together with the 2-D posterior probability distributions of the model parameters sampled by the MCMC. The scale height \(H\) and the number of stars in the Solar neighbourhood \(n_{0}\) obtained from the analyses are listed in Table 3. It is remarkable that the exponential and sech\({}^{2}\) functions give very similar scale heights for the two groups in Table 3. Overall, the exponential functions represent the \(z\)-histograms of All CVs and the magnetic CVs in Figure 8 well.
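For concreteness, the following Python sketch (our own minimal illustration, not the authors' code) fits Equation (3) to a toy \(z\)-histogram with _emcee_, using 128 walkers and 15,000 steps as described above; the histogram binning, the chi-square likelihood, and the simulated sample are assumptions:

```python
import numpy as np
import emcee
from scipy.optimize import curve_fit

def exp_profile(z, n0, H):
    """Exponential vertical density profile, Equation (3)."""
    return n0 * np.exp(-np.abs(z) / H)

def sech2_profile(z, n0, Hz):
    """sech^2 vertical density profile, Equation (4); H = 1.08504 * Hz."""
    return n0 / np.cosh(np.abs(z) / Hz) ** 2

def log_prob(theta, z_centers, counts):
    """Chi-square log-posterior with flat positivity priors (an assumption)."""
    n0, H = theta
    if n0 <= 0 or H <= 0:
        return -np.inf
    model = exp_profile(z_centers, n0, H)
    sigma2 = np.maximum(counts, 1.0)  # Poisson-like weights
    return -0.5 * np.sum((counts - model) ** 2 / sigma2)

# toy sample drawn from an exponential profile with H = 375 pc
rng = np.random.default_rng(42)
z = rng.laplace(0.0, 375.0, size=1587)
counts, edges = np.histogram(np.abs(z), bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])

# nonlinear least-squares initial guess (the paper uses LMFIT; curve_fit plays the same role)
popt, _ = curve_fit(exp_profile, centers, counts, p0=[counts.max(), 300.0])

# MCMC sampling: 128 walkers, 15,000 steps, as in the text
nwalkers, ndim = 128, 2
start = popt + 1e-3 * rng.standard_normal((nwalkers, ndim)) * popt
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(centers, counts))
sampler.run_mcmc(start, 15_000, progress=False)

flat = sampler.get_chain(discard=2_000, flat=True)
n0_fit, H_fit = np.median(flat, axis=0)
print(f"n0 = {n0_fit:.0f}, H = {H_fit:.0f} pc")
```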
From Table 3, we find a considerable difference between the scale heights obtained for All CVs and for the magnetic systems, 375\(\pm\)2 and 281\(\pm\)3 pc, respectively. This can be expected, as the median distances of these two groups of systems are 989 and 559 pc, respectively. Thus, we conclude that the magnetic CVs differ from All CVs in the sample with respect to the scale height. In addition, as a consequence of reduced magnetic braking in the presence of strong magnetic fields (Li et al., 1994), magnetic CVs evolve more slowly than non-magnetic systems (Araujo-Betancor et al., 2005), so we expect their Galactic parameters to differ. Ak et al. (2013) estimated the contribution of thick-disc CVs in the Solar neighbourhood to the Galactic model parameters from Monte Carlo simulations and found that only about 6 per cent of the CVs in the Solar neighbourhood are members of the thick-disc population of the Galaxy. Therefore, the effect of thick-disc systems on the scale heights in Table 3 must be negligible.

Another interesting finding is that, if the \(z\)-histogram of CVs is taken to be exponential, about 25 CVs must be missing within a sphere of radius about 100 pc centred on the Sun. This is reminiscent of the finding that the number of period bouncers discovered in sky surveys is smaller than expected from population models based on the standard theory: McAllister et al. (2019) found that 30% of the donor stars in their sample are likely to be brown dwarfs in period bouncers, while only 5% of the CVs located within 150 pc of the Sun in Pala et al. (2020) are period bouncers.

Although we selected the magnetic systems (AM Her or DQ Her type systems) according to their classification in the literature, there may in fact be unclassified magnetic systems in the sample; in a sense, the sample could be contaminated by them. As a result, the scale heights estimated for All CVs in Table 3 might not be reliable. In order to find the effect of magnetic-system contamination on the scale height of All CVs, we decided to perform Monte Carlo simulations.

\begin{table}
\begin{tabular}{c c c c c}
\hline
Group & \(N\) & Function & \(n_{0}\) & \(H\)(pc) \\
\hline
All CVs & 1587 & exp & 229\(\pm\)1 & 375\(\pm\)2 \\
 & & sech\({}^{2}\) & 196\(\pm\)1 & 370\(\pm\)1 \\
\hline
mCVs & 124 & exp & 41\(\pm\)1 & 281\(\pm\)3 \\
 & & sech\({}^{2}\) & 35\(\pm\)1 & 279\(\pm\)3 \\
\hline
\end{tabular}
\end{table}
Table 3: The Galactic model parameters for All CVs and mCVs in the sample. Model functions are given in the third column; here, exp denotes the exponential function and sech\({}^{2}\) the hyperbolic secant squared function. \(N\) is the number of systems in the object group, \(n_{0}\) the number of stars in the Solar neighbourhood, and \(H\) the scale height for the model function.

Figure 8: The \(z\)-histograms for CVs. The upper panel shows the \(z\)-histogram for All CVs in the sample and the lower panel that of the magnetic systems (mCVs). The blue dashed line represents the sech\({}^{2}\) function and the red solid line the exponential function. The 2-D posterior probability distributions of the model parameters sampled by MCMC are shown to the right of the \(z\)-histograms.

An inspection shows that about 30% of the CVs in the catalogue of Ritter & Kolb (2003) are classified as either AM Her or DQ Her systems. Thus, we assumed that 30% of the systems in our sample are magnetic. We know that 124 CVs in the sample, about 8% of the objects in this study, are classified as magnetic.
Keeping the known magnetic systems in the sample as they are, 22% of the remaining sample were assumed to be magnetic by random selection. We calculated \(n_{0}\) and \(H\) in each of the 15,000 trials of the Monte Carlo simulations. Figure 9 shows \(n_{0}\) versus \(H\) for the exponential and \(\mathrm{sech}^{2}\) functions. After the 15,000 Monte Carlo trials performed on All CVs in the sample, the most probable values of the scale heights for the exponential and \(\mathrm{sech}^{2}\) functions were found to be 398\(\pm\)31 and 406\(\pm\)18 pc, respectively, where the errors are 1\(\sigma\) errors. A comparison with the values in Table 3 shows that these values agree, within the errors, with those given for All CVs in the sample, so the effect of magnetic-system contamination on the scale height of all systems can be neglected for our sample. Note that the most probable \(n_{0}\) values obtained from the Monte Carlo simulations are 158\(\pm\)7 and 132\(\pm\)5 for the exponential and \(\mathrm{sech}^{2}\) functions, respectively. The differences of these values from those in Table 3 are expected, as the numbers of systems at different \(z\) distances were changed during the simulations.

A comparison of the scale heights found in this study with those in Ozdonmez et al. (2015) reveals that their results are very different from the values listed in Table 3. They found exponential scale heights of 213\({}^{+11}_{-10}\) and 173\({}^{+18}_{-15}\) pc for All CVs and the magnetic systems in their sample, respectively, which clearly differ from the 375\(\pm\)2 and 281\(\pm\)3 pc in Table 3. The scale height of about 375 pc found for All CVs in this study is also considerably larger than those suggested by Patterson (1984, 190\(\pm\)30 pc), van Paradijs, Augusteijn & Stehle (1996, 160-230 pc) and Ak et al. (2008, 158\(\pm\)14 pc). Note that Pala et al. (2020) assumed a scale height of 280 pc in their analysis. The scale height differences between this study and the previous estimates likely result from the number of systems in the analysis and the accuracy of the distance estimates. As our data sample relies on reliable distances derived from precise trigonometric parallax measurements, and as it includes the largest number of systems used in a similar analysis in the literature, we believe that the results in Table 3 can be confidently used in population studies and further analyses of CVs.

Figure 9: The number of stars at \(z=0\) pc (\(n_{0}\)) versus the scale height (\(H\)), estimated from 15,000 Monte Carlo trials. The simulations were performed for All CVs in the sample, keeping the known magnetic systems as they are and assuming 22% of the remaining sample to be magnetic by random selection. The left and right panels are plotted for the exponential and \(\mathrm{sech}^{2}\) scale heights, respectively. Red straight lines show the most probable values; 1\(\sigma\) ranges are indicated with blue straight lines.

We also derived the scale height of the CVs as a function of the orbital period, to see whether this Galactic model parameter changes with the orbital period. In order to compare our results with those in Ozdonmez et al. (2015), the CVs in our sample were divided into the four period intervals defined in their study.
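Before turning to the period-resolved analysis, we sketch one plausible implementation of a single trial of the contamination experiment described above (our own reconstruction in Python; the function name, the binning, and the exact treatment of the randomly flagged systems are assumptions, as the original procedure may differ in detail):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(z, n0, H):
    # exponential vertical density profile, Equation (3)
    return n0 * np.exp(-np.abs(z) / H)

def contamination_trial(z_all, is_magnetic, frac_extra=0.22, rng=None):
    """One hypothetical Monte Carlo trial: keep the known mCVs flagged,
    randomly flag 22% of the remaining CVs as magnetic, and refit
    (n0, H) to the z-histogram of the systems that stay in the sample."""
    rng = rng or np.random.default_rng()
    nonmag = np.flatnonzero(~is_magnetic)
    extra = rng.choice(nonmag, size=int(frac_extra * nonmag.size), replace=False)
    keep = np.ones(z_all.size, dtype=bool)
    keep[extra] = False                     # treat flagged systems as magnetic
    counts, edges = np.histogram(np.abs(z_all[keep]), bins=30)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (n0, H), _ = curve_fit(exp_profile, centers, counts, p0=[counts.max(), 350.0])
    return n0, H

# e.g.: results = [contamination_trial(z, mag_mask) for _ in range(15_000)]
```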
Thus, there are 203 systems in the period interval \(1.37\leq P_{\rm orb}({\rm h})<2.25\), 157 systems in \(2.25\leq P_{\rm orb}({\rm h})<3.7\), 105 systems in \(3.7\leq P_{\rm orb}({\rm h})<4.6\) and 171 in \(4.6\leq P_{\rm orb}({\rm h})<12\). The best fits to the \(z\)-histograms of the CVs grouped according to these period intervals are shown in Figure 10. The scale height \(H\) and the number of stars in the Solar neighbourhood \(n_{0}\) obtained from the minimum \(\chi^{2}\) analysis are listed in Table 4 for the period intervals given above. Since the number of magnetic CVs in these period ranges is too small, we list the Galactic model parameters only for All CVs in the sample. The \(z\)-histograms in Figure 10 are in general well represented by exponential functions. A comparison of the scale heights in Table 4 with those given in Ozdonmez et al. (2015) shows the same trend, although the values in this study are more reliable. The scale height increases monotonically from 248\(\pm\)2 to 430\(\pm\)4 pc as the orbital period decreases from 12 to 2.25 h; however, it drops to 300\(\pm\)2 pc for the shortest orbital period CVs with \(P_{\rm orb}<2.25\) h. We found a similar trend for the \({\rm sech}^{2}\) function. The scale height of the systems in the interval \(2.25\leq P_{\rm orb}({\rm h})<3.70\) is the highest among all CVs. Note that only \(\sim 14\%\) of the systems in this interval are classified as mCVs in our sample.

Figure 10: The \(z\)-histograms for All CVs in the sample in terms of the orbital period. The orbital period (\(P_{\rm orb}({\rm h})\)) intervals are shown in brackets. The best fits to the \(z\)-histograms are also presented; the blue dashed line represents the \({\rm sech}^{2}\) function and the red solid line the exponential function.

### Space density

Space density is an important parameter for population synthesis studies based on theoretical evolutionary models of a selected object type. The space density of a group of stars is derived by dividing the number of stars between consecutive distances from the Sun by the corresponding partial spherical volumes, \(D=N/\Delta V_{i,i+1}\) (Bilir et al., 2006a,b,c). Here, \(D\) is the space density and \(N\) denotes the number of stars in the partial spherical volume \(\Delta V_{i,i+1}\) defined by the consecutive distances \(d_{i}\) and \(d_{i+1}\) from the Sun. To compare with results in the literature, the logarithmic space density \(D^{*}=\log D+10\) is preferred. The logarithmic density functions of All CVs and the magnetic systems in the Solar neighbourhood are shown in Figure 11, where \(r^{*}\) denotes the centroid distance of the partial spherical volume, defined as \(r^{*}=[(d_{i}^{3}+d_{i+1}^{3})/2]^{1/3}\). The local space density \(D_{0}\) is the space density at \(r^{*}=0\) pc estimated from the exponential fits shown in Figure 11. The logarithmic and local space densities of the CV groups in the Solar neighbourhood are listed in Table 5.

Table 5 shows that the local space density of All CVs in the sample is \(6.8^{+1.3}_{-1.1}\times 10^{-6}\) pc\({}^{-3}\). The space density of the magnetic CVs, \(2.1^{+0.5}_{-0.4}\times 10^{-6}\) pc\({}^{-3}\), is about three times smaller than that found for all systems. The local space density estimation in this study takes into account CVs located even farther than 6 kpc, corresponding to a large Galactic volume.
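A minimal Python sketch of this shell-based estimator (our own illustration; the shell edges are free inputs and the function name is hypothetical) reads:

```python
import numpy as np

def space_density(d_pc, shell_edges_pc):
    """Logarithmic space density D* = log10(D) + 10 in spherical shells,
    with centroid distance r* = ((d_i^3 + d_{i+1}^3) / 2)**(1/3)."""
    edges = np.asarray(shell_edges_pc, dtype=float)
    d1, d2 = edges[:-1], edges[1:]
    N, _ = np.histogram(d_pc, bins=edges)
    dV = 4.0 / 3.0 * np.pi * (d2**3 - d1**3)        # partial spherical volumes
    r_star = ((d1**3 + d2**3) / 2.0) ** (1.0 / 3.0)  # centroid distances
    with np.errstate(divide="ignore"):
        D_star = np.log10(N / dV) + 10.0
    return r_star, D_star

# D_0 then follows from an exponential fit of D*(r*) extrapolated to r* = 0.
```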
Note that the median distance of the objects in the sample is 989 pc and the local space density estimation is based on the objects within the completeness limits. Therefore, we believe that the space density values obtained from the CV sample in this study are reliable. CV population synthesis models based on the standard formation and evolution scenario predict space densities of \(10^{-5}\)-\(10^{-4}\) pc\({}^{-3}\) (Ritter & Burkert, 1986; de Kool, 1992; Kolb, 1993; Politano, 1996; Willems et al., 2005, 2007; Goliasch & Nelson, 2015; Belloni et al., 2018), while the space densities found in observational studies are of the order of \(10^{-7}\)-\(10^{-4}\) pc\({}^{-3}\) (Warner, 1974; Patterson, 1984, 1998; Thomas & Beuermann, 1998; Ringwald, 1993; Schwope et al., 2002; Araujo-Betancor et al., 2005; Pretorius et al., 2007a,b; Ak et al., 2008; Revnivtsev et al., 2008; Pretorius & Knigge, 2012; Pretorius et al., 2013; Schwope, 2018). In a recent study, Pala et al. (2020) measured space densities of \(4.8^{+0.6}_{-0.9}\times 10^{-6}\) and \(1.2^{+0.4}_{-0.5}\times 10^{-6}\) pc\({}^{-3}\) for All CVs and mCVs, respectively, from _Gaia_ DR2 (Gaia Collaboration, 2018). They assumed a scale height of 280 pc in their analysis, and their data sample included 42 objects within 150 pc of the Sun. Note that we found a scale height of \(H=375\) pc for All CVs (see Table 3).

\begin{table}
\begin{tabular}{l c c c c}
\hline
\(P_{\rm orb}\)(h) & \(N\) & Function & \(n_{0}\) & \(H\) (pc) \\
\hline
[1.37, 2.25) & 203 & exp & 86\(\pm\)1 & 300\(\pm\)2 \\
 & & \({\rm sech}^{2}\) & 74\(\pm\)1 & 286\(\pm\)1 \\
\hline
[2.25, 3.70) & 157 & exp & 43\(\pm\)1 & 430\(\pm\)4 \\
 & & \({\rm sech}^{2}\) & 37\(\pm\)1 & 424\(\pm\)3 \\
\hline
[3.70, 4.60) & 105 & exp & 41\(\pm\)1 & 269\(\pm\)4 \\
 & & \({\rm sech}^{2}\) & 34\(\pm\)1 & 278\(\pm\)3 \\
\hline
[4.60, 12.00) & 171 & exp & 69\(\pm\)1 & 248\(\pm\)2 \\
 & & \({\rm sech}^{2}\) & 57\(\pm\)1 & 258\(\pm\)2 \\
\hline
\end{tabular}
\end{table}
Table 4: The Galactic model parameters for All CVs in the sample in terms of the orbital period \(P_{\rm orb}\)(h). \(n_{0}\) and \(H\) are as defined in Table 3.

\begin{table}
\begin{tabular}{l c c c}
\hline
Group & \(N\) & \(D^{*}\) & \(D_{0}\) \\
 & & & (\(\times 10^{-6}\) pc\({}^{-3}\)) \\
\hline
All CVs & 1587 & 4.83\(\pm\)0.07 & 6.8\({}^{+1.3}_{-1.1}\) \\
mCVs & 124 & 4.33\(\pm\)0.09 & 2.1\({}^{+0.5}_{-0.4}\) \\
\hline
\end{tabular}
\end{table}
Table 5: The logarithmic and local space densities of CVs. Symbols for the subgroups are as in Table 3. \(N\) denotes the number of stars in the subgroup, \(D_{0}\) the local space density and \(D^{*}\) the logarithmic space density.

The local space density estimated for All CVs in our sample (\(6.8^{+1.3}_{-1.1}\times 10^{-6}\) pc\({}^{-3}\)) is 2-20 times smaller than those predicted by population synthesis studies based on the standard evolution scenario. However, it is, within the errors, very similar to the observational space density of \(4.8^{+0.6}_{-0.9}\times 10^{-6}\) pc\({}^{-3}\) found by Pala et al. (2020) from _Gaia_ DR2 (Gaia Collaboration, 2018); the local space densities obtained in this study are only about 1.5 times larger than theirs. Note that the fraction of mCVs in the whole sample used in this study is about 8%.

### Luminosity function

The luminosity function is defined as the space density of objects in a certain absolute magnitude interval (Karaali et al., 2003, 2004, 2009; Ak et al., 2007).
We estimated the logarithmic luminosity functions \(\phi\) of All CVs and the magnetic systems and present them in Table 6, where \(\Delta V_{i,i+1}\) is the partial spherical volume containing the objects located between the distances \(d_{i}\) and \(d_{i+1}\). These distance limits correspond to the bright and faint limiting apparent magnitudes in the \(G\)-band, \(9\leq G\leq 18.5\) mag, for the absolute magnitude \(M_{\rm G}\) interval in question. The logarithmic luminosity functions of all systems and of the magnetic systems are plotted in the lower panel of Figure 12. As expected, the luminosity function of All CVs differs considerably from that estimated for the magnetic CVs. The luminosity function of All CVs in the data sample is plotted in the upper panel of Figure 12, together with the white dwarf luminosity function, which traces the collective evolution of white dwarfs (Gaia Collaboration, 2021). As can be seen from Table 6 and Figure 12, the magnetic systems not only have a smaller luminosity function than All CVs, they also span a narrower absolute magnitude interval. The comparison of the luminosity functions of the white dwarfs and of All CVs in our sample reveals that the trends of the two luminosity functions are almost the same, and that 500 times the luminosity function of All CVs corresponds to the continuation of the white dwarf luminosity function towards brighter absolute magnitudes. A similar comparison was made by Ozdonmez et al. (2015) for the CVs in their data sample and the white dwarfs in the Anglo-Australian Telescope survey (Boyle, 1989) and the Palomar Green survey (Fleming, Liebert and Green, 1986).

Figure 11: The logarithmic density functions of All CVs (panel a) and mCVs (panel b) in the sample. Dashed lines represent the exponential fits applied to the data.

Single white dwarf masses are significantly smaller than those of the white dwarfs in CVs (see Zorotovic and Schreiber, 2020, and references therein), and it is claimed that the white dwarf mass does not depend on the orbital period (McAllister et al., 2019; Pala et al., 2022). In addition, the mass of CV white dwarfs does not increase monotonically during the evolution of the system, and the contribution of the primary component to the total radiation of a CV is dominant, or at least much higher, in the UV rather than in the optical (Gansicke, 2000). Simulations performed by Hillman et al. (2020) predict that the white dwarf masses in CVs decrease monotonically, by only a few per cent, throughout the evolution of a cataclysmic variable. Therefore, it is unlikely that the trend of the CV luminosity function in the upper panel of Figure 12 reflects the evolution of the white dwarf components of these systems. In any case, this comparison shows that we find one CV for about 500 white dwarfs in the Solar neighbourhood.
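The construction of the distance limits \(d_{1}\), \(d_{2}\) and the partial volumes entering Table 6 can be reproduced with a few lines of Python. The sketch below is our own illustration under the assumption that \(d_{1}\) pairs the bright limit with the bright edge of the absolute magnitude bin and \(d_{2}\) pairs the faint limit with the faint edge, which reproduces the \(d_{1}\), \(d_{2}\), \(\Delta V\) and \(r^{*}\) columns of Table 6 (e.g. \(d_{1}=100\) pc and \(d_{2}=5012\) pc for the [4, 5) mag bin):

```python
import numpy as np

G_BRIGHT, G_FAINT = 9.0, 18.5  # completeness limits in the G-band (mag)

def lum_function_bin(M_G, M_lo, M_hi):
    """Logarithmic luminosity function phi = log10(N / dV) for one absolute
    magnitude bin [M_lo, M_hi), following the construction of Table 6.
    M_G is the array of absolute magnitudes of the (completeness-limited) sample."""
    # distance limits from the distance modulus and the G-band limits
    d1 = 10.0 ** ((G_BRIGHT - M_lo + 5.0) / 5.0)      # pc, bright limit
    d2 = 10.0 ** ((G_FAINT - M_hi + 5.0) / 5.0)       # pc, faint limit
    dV = 4.0 / 3.0 * np.pi * (d2**3 - d1**3)          # partial spherical volume, pc^3
    r_star = ((d1**3 + d2**3) / 2.0) ** (1.0 / 3.0)   # centroid distance, pc
    N = int(np.count_nonzero((M_G >= M_lo) & (M_G < M_hi)))
    phi = np.log10(N / dV) if N > 0 else -np.inf
    return d1, d2, dV, r_star, N, phi
```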
\begin{table}
\begin{tabular}{c c c c c c c c c c c}
\hline
 & & & & & \multicolumn{3}{c}{All CVs} & \multicolumn{3}{c}{mCVs} \\
\cline{6-11}
\(M_{\rm G}\) & \(d_{1}\) & \(d_{2}\) & \(\Delta V_{1,2}\) & \(r^{*}\) & \(N\) & \(\rho\) & \(\phi\) & \(N\) & \(\rho\) & \(\phi\) \\
(mag) & (pc) & (pc) & (pc\({}^{3}\)) & (kpc) & & (pc\({}^{-3}\)) & & & (pc\({}^{-3}\)) & \\
\hline
[-3,-2) & 2512 & 125893 & 8.4\(\times 10^{15}\) & 99.92 & 11 & 1.3\(\times 10^{-15}\) & -14.9 & 1 & 1.2\(\times 10^{-16}\) & -15.9 \\
[-2,-1) & 1585 & 79433 & 2.1\(\times 10^{15}\) & 63.05 & 8 & 3.8\(\times 10^{-15}\) & -14.4 & 1 & 4.8\(\times 10^{-16}\) & -15.3 \\
[-1, 0) & 1000 & 50119 & 5.3\(\times 10^{14}\) & 39.78 & 25 & 4.7\(\times 10^{-14}\) & -13.3 & 1 & 1.9\(\times 10^{-15}\) & -14.7 \\
[ 0, 1) & 631 & 31623 & 1.3\(\times 10^{14}\) & 25.10 & 22 & 1.7\(\times 10^{-13}\) & -12.8 & 5 & 3.8\(\times 10^{-14}\) & -13.4 \\
[ 1, 2) & 398 & 19953 & 3.3\(\times 10^{13}\) & 15.84 & 28 & 8.4\(\times 10^{-13}\) & -12.1 & 3 & 9.0\(\times 10^{-14}\) & -13.0 \\
[ 2, 3) & 251 & 12589 & 8.4\(\times 10^{12}\) & 9.99 & 87 & 1.0\(\times 10^{-11}\) & -11.0 & 5 & 6.0\(\times 10^{-13}\) & -12.2 \\
[ 3, 4) & 158 & 7943 & 2.1\(\times 10^{12}\) & 6.30 & 164 & 7.8\(\times 10^{-11}\) & -10.1 & 10 & 4.8\(\times 10^{-12}\) & -11.3 \\
[ 4, 5) & 100 & 5012 & 5.3\(\times 10^{11}\) & 3.98 & 229 & 4.3\(\times 10^{-10}\) & -9.4 & 14 & 2.7\(\times 10^{-11}\) & -10.6 \\
[ 5, 6) & 63 & 3162 & 1.3\(\times 10^{11}\) & 2.51 & 275 & 2.1\(\times 10^{-09}\) & -8.7 & 12 & 9.1\(\times 10^{-11}\) & -10.0 \\
[ 6, 7) & 40 & 1995 & 3.3\(\times 10^{10}\) & 1.58 & 210 & 6.3\(\times 10^{-09}\) & -8.2 & 8 & 2.4\(\times 10^{-10}\) & -9.6 \\
[ 7, 8) & 25 & 1259 & 8.4\(\times 10^{09}\) & 1.00 & 159 & 1.9\(\times 10^{-08}\) & -7.7 & 13 & 1.6\(\times 10^{-09}\) & -8.8 \\
[ 8, 9) & 16 & 794 & 2.1\(\times 10^{09}\) & 0.63 & 159 & 7.6\(\times 10^{-08}\) & -7.1 & 24 & 1.1\(\times 10^{-08}\) & -7.9 \\
[ 9,10) & 10 & 501 & 5.3\(\times 10^{08}\) & 0.40 & 99 & 1.9\(\times 10^{-07}\) & -6.7 & 10 & 1.9\(\times 10^{-08}\) & -7.7 \\
[10,11) & 6 & 316 & 1.3\(\times 10^{08}\) & 0.25 & 65 & 4.9\(\times 10^{-07}\) & -6.3 & 10 & 7.6\(\times 10^{-08}\) & -7.1 \\
[11,12) & 4 & 200 & 3.4\(\times 10^{07}\) & 0.16 & 28 & 8.4\(\times 10^{-07}\) & -6.1 & 6 & 1.8\(\times 10^{-07}\) & -6.7 \\
[12,13) & 3 & 126 & 8.4\(\times 10^{06}\) & 0.10 & 7 & 8.4\(\times 10^{-07}\) & -6.1 & 1 & 1.2\(\times 10^{-07}\) & -6.9 \\
\hline
\end{tabular}
\end{table}
Table 6: The logarithmic luminosity functions \(\phi\) of the CVs in the sample with \(9\leq G\leq 18.5\) mag. \(N\) is the number of stars in the \(M_{\rm G}\) absolute magnitude interval given in the first column. \(\Delta V_{i,i+1}\) is the partial spherical volume that contains the objects between the distances \(d_{i}\) and \(d_{i+1}\) corresponding to the bright and faint limits in the \(G\)-band for the given absolute magnitude interval. \(\rho\) denotes the density and \(r^{*}\) the centroid distance of the partial spherical volume.

## 4 Conclusions

The spatial distribution, Galactic model parameters and luminosity function of cataclysmic variables were derived precisely using distances from Bailer-Jones et al. (2021), who estimated distances from the trigonometric parallaxes of ESA's _Gaia_ DR3 (Gaia Collaboration, 2021). We compared the distances obtained from Bailer-Jones et al. (2021) and from the _Gaia_ DR3 data (Gaia Collaboration, 2022) and found that the scatter is too large for systems with \(G\geq 18.5\) mag. Thus, the data sample in this study includes CVs with \(9\leq G\leq 18.5\) mag.
The number of CVs in the sample decreased from 10,852 to 1,587, 124 of them magnetic systems, after preventing the misidentification of CVs due to adjacent objects, checking for duplication, and taking into account the quality flags of the parallax measurements and the completeness limits of the data.

Figure 12: Logarithmic luminosity functions of CVs. The lower panel shows the luminosity functions for All CVs and mCVs. The upper panel shows the logarithmic luminosity function for All CVs in the sample; the red line represents 500 times the luminosity function of All CVs and the blue line the logarithmic luminosity function of white dwarfs taken from Gaia Collaboration (2021).

The projected positions of the CVs on the Galactic plane (\(X-Y\) plane) and on a plane perpendicular to it (\(X-Z\) plane) demonstrate that the systems in the sample are, in general, distributed symmetrically about the Galactic plane. We therefore conclude that there is no considerable bias in the spatial distribution of the CVs in our study. The median distances of the objects in the sample are 989 and 559 pc for All CVs and the magnetic systems, respectively.

The exponential scale heights were found to be 375\(\pm\)2 and 281\(\pm\)3 pc for All CVs and the mCVs in the sample, respectively. Thus, we conclude that a scale height of 375 pc can be used in CV studies in general. This value is considerably larger than those previously suggested in observational studies (Patterson, 1984; van Paradijs et al., 1996; Ak et al., 2008) and is also significantly larger than that estimated by Ozdonmez et al. (2015). Monte Carlo simulations showed that the effect of magnetic-system contamination on the scale height of all systems is negligible for the sample. It seems that about 25 CVs must be missing within a sphere of radius about 100 pc centred on the Sun. This is reminiscent of the finding that the number of period bouncers discovered in sky surveys is smaller than expected from population models based on the standard theory; note that McAllister et al. (2019) found that 30% of the donor stars in their sample are likely to be brown dwarfs in period bouncers.

The exponential scale heights of All CVs derived as a function of the orbital period show that the scale height increases from 248\(\pm\)2 to 430\(\pm\)4 pc as the orbital period decreases from 12 to 2.25 h, and then drops rather abruptly to 300\(\pm\)2 pc for the shortest orbital period CVs with \(P_{\rm orb}<2.25\) h. A similar trend was also found in Ozdonmez et al. (2015). Note that Pretorius et al. (2007b) modelled the Galactic population of CVs adopting scale heights of 120, 260 and 450 pc for long-period systems, normal short-period systems and period bouncers, respectively.

The local space densities of All CVs and the magnetic systems in the sample were estimated to be 6.8\({}^{+1.3}_{-1.1}\)\(\times\)10\({}^{-6}\) and 2.1\({}^{+0.5}_{-0.4}\)\(\times\)10\({}^{-6}\) pc\({}^{-3}\), respectively. These values agree, within the errors, with those calculated by Pala et al. (2020), who used 42 CVs within 150 pc of the Sun with data obtained from _Gaia_ DR2 (Gaia Collaboration, 2018) and measured space densities of 4.8\({}^{+0.6}_{-0.9}\)\(\times\)10\({}^{-6}\) and 1.2\({}^{+0.4}_{-0.5}\)\(\times\)10\({}^{-6}\) pc\({}^{-3}\) for all CVs and magnetic CVs, respectively. We consider the space density values in our study to be the most reliable estimates obtained so far.
The measurements in this study reinforce the discrepancy between the CV space densities obtained from observations and those predicted by population synthesis models based on the standard formation and evolution theory of these systems. If the population synthesis models are correct, this disagreement between theory and observations means that the current CV surveys are incomplete, as they miss almost all very low \(\dot{M}\) systems, CVs in the period gap, through which the lifetime of a CV is predicted to be longer (Hillman et al., 2020), and period bouncers.

The logarithmic luminosity functions derived for the CVs in the sample are in agreement with those shown in Ozdonmez et al. (2015). The trends of the logarithmic luminosity functions of CVs and white dwarfs are very similar. Although 500 times the luminosity function of CVs looks like the extension of the white dwarf luminosity function towards brighter absolute magnitudes, it is unlikely that this similarity reflects the evolution of the white dwarf companions of CVs, as the mass of CV white dwarfs does not increase monotonically during the evolution of the system and the contribution of the primary component to the total radiation of a CV is dominant in the UV (Gansicke, 2000; Pala et al., 2022).

To conclude, the results in this study can be used in population studies and analyses of cataclysmic variables. We believe that further _Gaia_ observations of cataclysmic variables, together with surveys focused on low \(\dot{M}\) CVs and fainter systems, will lead not only to larger datasets but also to more precise distance measurements for these systems. Such observational results will allow us to obtain more detailed and reliable observational Galactic model parameters to test population synthesis models.

## 5 Acknowledgments

We would like to thank Michael Shara, the referee, for his useful and constructive comments concerning the manuscript. This work has been supported in part by the Scientific and Technological Research Council of Turkey (TUBITAK) under project 119F072, and in part by Istanbul University under project number NAP-33768. This study is a part of the PhD thesis of Remzije Canbay. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/](https://www.cosmos.esa.int/web/gaia/dpac/) consortium). Funding for DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of NASA's (National Aeronautics and Space Administration) Astrophysics Data System Bibliographic Services, the SIMBAD Astronomical Database, operated at CDS, Strasbourg, France, and the NASA/IPAC Infrared Science Archive and Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
2310.19325
A Geometric Algorithm for the Factorization of Spinor Polynomials
We present a new algorithm to decompose generic spinor polynomials into linear factors. Spinor polynomials are certain polynomials with coefficients in the geometric algebra of dimension three that parametrize rational conformal motions. The factorization algorithm is based on the "kinematics at infinity" of the underlying rational motion. Factorizations exist generically but not generally and are typically not unique. We prove that generic multiples of non-factorizable spinor polynomials admit factorizations and we demonstrate at hand of an example how our ideas can be used to tackle the hitherto unsolved problem of "factorizing" algebraic motions.
Zijia Li, Hans-Peter Schröcker, Johannes Siegele
2023-10-30T07:55:10Z
http://arxiv.org/abs/2310.19325v2
# A geometric algorithm for the factorization of spinor polynomials

###### Abstract.

We present a new algorithm to decompose generic spinor polynomials into linear factors. Spinor polynomials are certain polynomials with coefficients in the geometric algebra of dimension three that parametrize rational conformal motions. The factorization algorithm is based on the "kinematics at infinity" of the underlying rational motion. Factorizations exist generically but not generally and are typically not unique. We prove that generic multiples of non-factorizable spinor polynomials admit factorizations and we demonstrate at hand of an example how our ideas can be used to tackle the hitherto unsolved problem of "factorizing" algebraic motions.

Key words and phrases: conformal motion, elementary motion, null displacement, Study variety, null quadric, four quaternion representation

2010 Mathematics Subject Classification: 15A66, 15A67, 20G20, 51B10, 51F15

## 1. Introduction

The article [12] presented an algorithm to decompose a rational rigid body motion into a sequence of rotations or translations, coupled by the same rational motion parameter. In the Study model [32] of space kinematics, this decomposition corresponds to the factorization of special polynomials over the ring of dual quaternions ("motion polynomials") into linear factors. The algorithm to do so builds upon older ideas for factorizing quaternion polynomials [11, 28]. In contrast to the quaternion case, the factorization of motion polynomials has to fail in some non-generic cases because not all motion polynomials admit factorizations with linear factors.

The rotations or translations parametrized by linear factors can be realized by mechanical joints. Since factorizations of motion polynomials are generically non-unique, it is possible to combine different factorizations into closed-loop linkages. The resulting movable mechanical structures are of importance in theoretical mechanism science [6, 22] and have some potential for engineering applications [25, 13, 26]. Note that our notion of non-uniqueness is weaker than what is commonly found in some algebra communities, as discussed in the survey papers [1, 10, 31].

As has been observed on several occasions, the algebraic factorization algorithm of [12] generalizes, under certain conditions, to polynomials with coefficients in other algebras. It is worth pointing out that the characterization [7, 8] of zeros of slice functions [9] also sheds some light on the factorization of motion polynomials over the dual quaternions. In the articles [14, 17, 18, 19], the authors have provided an in-depth analysis of the principles and applications of the general conformal geometric algebra (CGA). An important example [21] is the "projectivized" spin group of a Clifford algebra [4, 27], which leads to the factorization of what we call "spinor polynomials". The geometric foundations underlying this construction for CGA of dimension three are the topic of [15]. In this article, we present a new factorization algorithm for spinor polynomials in this algebra that has a pronounced geometric flavor. Since CGA contains the dual quaternions (and other important algebras) as sub-algebras, the new algorithm can also be used to factorize motion polynomials.
We see at least two important advantages of this new algorithm:

* Firstly, it provides a deep geometric insight into the factorization process, allowing us to extend results of [29, 22] and prove the existence of factorizable real polynomial multiples for arbitrary spinor polynomials.
* Secondly, some aspects of this new algorithm are independent of the rational parametrization and only depend on the geometric curve. This opens possibilities to extend factorization theory to algebraic motions.

The latter point touches upon an important generalization of motion polynomial factorization. In fact, rational motions are rare in mechanism science, where mechanically constrained movable structures typically produce configuration varieties that are algebraic but not of genus zero. While we believe that this article is interesting in its own right, it can also be viewed as an important step towards establishing a factorization theory for algebraic motions. In fact, we will even discuss a basic non-rational example as a proof of concept at the end of this text, in Section 5.

Dual quaternions \(\mathbb{DH}\), a sub-algebra of CGA, would be sufficient to describe rigid body kinematics, but our description and derivation will profit a lot from the richer algebraic and geometric structure of the more general algebra CGA. In Section 2 we collect necessary concepts and notation. In Section 3 we describe the new geometric factorization procedure and prove its correctness. The two following sections are dedicated to a result on the unconditional factorizability of suitable real polynomial multiples of spinor polynomials (Theorem 3 in Section 4) and to the exemplary factorization of an algebraic spherical four-bar motion (Section 5). Open questions for a more general factorization theory for algebraic motions will be discussed at the end of this article. The technical proof of Lemma 3 is deferred to an appendix.

## 2. Preliminaries

In this article, we study polynomials with coefficients in the conformal geometric algebra (CGA) in dimension three. We introduce it following the conventions of [2, Chapter 8]. Let us take an orthonormal basis \(e_{1},e_{2},e_{3},e_{+},e_{-}\) of the quadratic space \(\mathbb{R}^{4,1}\) and consider a multiplication of vectors which satisfies

\[e_{1}^{2}=e_{2}^{2}=e_{3}^{2}=e_{+}^{2}=-e_{-}^{2}=1\]

as well as anti-commutativity of the basis elements:

\[e_{i}e_{j}=-e_{j}e_{i}\quad\text{for distinct $i$, $j\in\{1,2,3,+,-\}$.}\]

This multiplication extends in a unique way to a real associative algebra of dimension \(32\), the Clifford algebra \(\mathcal{C}\ell(4,1)\). Conformal geometric algebra CGA is obtained by the simple change of basis that replaces \(e_{-}\) and \(e_{+}\) with

\[e_{o}=\frac{1}{2}(e_{-}-e_{+})\quad\text{and}\quad e_{\infty}=e_{-}+e_{+},\]

respectively. Its basis elements are the products of up to five elements from the set \(\{e_{1},e_{2},e_{3},e_{\infty},e_{o}\}\) and we denote them by multiple subscripts, e.g. we write \(e_{ij}\) for \(e_{i}e_{j}\). The reverse \(\widetilde{e}_{r}\) of a basis element \(e_{r}\) for \(r=r_{1},r_{2},\ldots,r_{n}\) is obtained by simply reversing the order of multiplication, i.e. \(\widetilde{e}_{r}=e_{r_{n}\ldots r_{2}r_{1}}\). The grade of \(e_{r}\) is the cardinality of the set \(\{r_{1},r_{2},\ldots,r_{n}\}\). The dot product and wedge product of CGA will be denoted by the symbols "\(\cdot\)" and "\(\wedge\)", respectively. Any element \(q\in\mathrm{CGA}\) can be written as a unique linear combination of the basis elements.
The reverse of \(q\) is obtained by reversing each basis element of this linear combination. Vectors in \(\mathbb{R}^{4,1}\) are naturally embedded in CGA as grade one elements

\[a=a_{o}e_{o}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{\infty}e_{\infty}. \tag{1}\]

The main purpose of vectors in CGA is to represent spheres of three-dimensional conformal space. The vector (1) represents the sphere with center \((a_{1},a_{2},a_{3})/a_{o}\) and squared radius \(a^{2}\). Negative values of \(a^{2}\) are possible and lead to spheres with purely imaginary radius. Planes can be viewed as spheres which contain the point at infinity, i.e. vectors satisfying \(a\cdot e_{\infty}=0\). Vectors with the property \(a\widetilde{a}=a^{2}=0\) represent points of conformal three-space, i.e. a point \((a_{1},a_{2},a_{3})/a_{o}\in\mathbb{R}^{3}\), provided \(a_{o}\neq 0\), or the point at infinity \(e_{\infty}\) otherwise.

### Spinors and Conformal Displacements

If \(x\) and \(s\) are spheres and \(s\) is not a point, the reflection (inversion) \(y\) of \(x\) in \(s\) is given by the formula

\[y=sx\widetilde{s}. \tag{2}\]

The group of conformal displacements is generated by reflections in spheres or planes. Denote the even sub-algebra of CGA by \(\mathrm{CGA}_{+}\); it consists of all linear combinations of basis elements of even grade, and for the spinors defined below the "sandwich product" (2) maps grade one elements to grade one elements. The group of conformal displacements is isomorphic to the special orthogonal group \(\mathrm{SO}(4,1)\) which, in turn, is doubly covered by the spin group

\[\{q\in\mathrm{CGA}_{+}\colon q\widetilde{q}=\pm 1,\ qa\widetilde{q}\in\mathbb{R}^{4,1}\text{ for all }a\in\mathbb{R}^{4,1}\}.\]

It is well-known that the spin group of \(\mathcal{C}\ell(4,1)\) coincides with the group of even-graded versors, that is, products of an even number of vectors. A spinor \(q\) acts on a vector \(a\) via the sandwich product \(qa\widetilde{q}\), whence \(q\) and \(-q\) represent the same displacement. To avoid this representation ambiguity, we will consider \(\mathrm{CGA}_{+}\) modulo the real multiplicative group \(\mathbb{R}^{\times}\). In this way, elements of \(\mathrm{CGA}_{+}/\mathbb{R}^{\times}\) can be viewed as points of the projective space \(\mathbb{P}(\mathrm{CGA}_{+})=\mathbb{P}^{15}(\mathbb{R})\). It provides the scenery for our geometric factorization algorithm. Whenever we wish to emphasize that elements of \(\mathrm{CGA}_{+}/\mathbb{R}^{\times}\) should be considered as projective points, we use square brackets to denote equivalence classes, that is, \([q]=[2q]=[-q]\). A point \([q]\in\mathbb{P}(\mathrm{CGA}_{+})\) is represented by a spinor \(q\) if

\[q\widetilde{q}=\widetilde{q}q\in\mathbb{R}^{\times}. \tag{3}\]

If \(q\widetilde{q}=\widetilde{q}q\), we call this element the norm of \(q\). The group \(\mathrm{SO}(4,1)\) as a point-set is embedded into the projective space \(\mathbb{P}(\mathrm{CGA}_{+})\) as a projective variety \(\mathcal{S}\) minus a quadric \(\mathcal{N}\). The variety \(\mathcal{S}\) is defined by the condition \(q\widetilde{q}=\widetilde{q}q\in\mathbb{R}\) and was called the _Study variety_ \(\mathcal{S}\) in [15]. It generalizes the well-known _Study quadric_ of rigid body kinematics [30, Chapter 11]. The _null quadric_ \(\mathcal{N}\) is given by the condition that the grade zero part of \(q\widetilde{q}\) vanishes. This is a quadratic condition, so that \(\mathcal{N}\) is a quadric in the classical sense of projective geometry over vector spaces.
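To make the representation of points by null vectors used above explicit, here is a standard CGA computation (our own illustration, consistent with the conventions of this section). A point \(p=(p_{1},p_{2},p_{3})\in\mathbb{R}^{3}\) is represented by the grade one element

\[P=e_{o}+p_{1}e_{1}+p_{2}e_{2}+p_{3}e_{3}+\tfrac{1}{2}\lVert p\rVert^{2}e_{\infty}.\]

From the definitions of \(e_{o}\) and \(e_{\infty}\) one computes \(e_{o}^{2}=e_{\infty}^{2}=0\) and \(e_{o}\cdot e_{\infty}=-1\), whence

\[P^{2}=\lVert p\rVert^{2}+2\cdot\tfrac{1}{2}\lVert p\rVert^{2}\,(e_{o}\cdot e_{\infty})=0,\]

so that \(P\) is indeed a null vector with \(a_{o}=1\) and center \((p_{1},p_{2},p_{3})\), in accordance with the description of points given after Equation (1).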
It is often necessary for our purposes to also consider the complex extension of the Study variety \(\mathcal{S}\), the null quadric \(\mathcal{N}\), and their ambient projective space \(\mathbb{P}(\mathrm{CGA}_{+})\). We will therefore tacitly allow CGA elements with complex coefficients. This does not change much of the algebraic properties of \(\mathrm{CGA}_{+}\) itself but strongly affects important sub-algebras such as the quaternions \(\mathbb{H}\) or the dual quaternions \(\mathbb{DH}\).

### The Four Quaternion Representation

Quaternions \(\mathbb{H}\) and dual quaternions \(\mathbb{DH}\) are embedded in \(\mathrm{CGA}_{+}\) via

\[\mathbf{i}\mapsto-e_{23},\quad\mathbf{j}\mapsto e_{13},\quad\mathbf{k}\mapsto-e_{12},\quad\varepsilon\mapsto e_{123\infty}.\]

In [15] we showed that any even-graded element \(q\) can be written in a four-quaternion representation

\[q=q_{0}+\varepsilon_{1}q_{1}+\varepsilon_{2}q_{2}+\varepsilon_{3}q_{3},\]

with \(q_{0}\), \(q_{1}\), \(q_{2}\), \(q_{3}\in\mathbb{H}\) and \(\varepsilon_{1}=\varepsilon=e_{123\infty}\), \(\varepsilon_{2}=e_{123o}\), \(\varepsilon_{3}=e_{\infty o}+1=e_{+}e_{-}\). The elements \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\varepsilon_{3}\) commute with the quaternion units; additional multiplication rules are given in Table 1. The four quaternion representation groups the sixteen coordinates of \(\mathrm{CGA}_{+}\) into four quadruples and makes manual computations more tractable. We will use it in examples and in the technical proof of Lemma 3 in the appendix.

### Spinor Polynomials

A central object of interest in this article are polynomials \(C=\sum_{i=0}^{n}t^{i}q_{i}\) in the indeterminate \(t\) and with coefficients \(q_{i}\) in the even sub-algebra \(\mathrm{CGA}_{+}\). For polynomials over a non-commutative ring, there are different notions of multiplication and evaluation. Since we use polynomials to parametrize rational curves in the Study variety, it is natural to treat the indeterminate \(t\) as a real (or complex) parameter and postulate that it commutes with all coefficients of the polynomial. The caveat of this convention is that the evaluation of a polynomial at an element not in the center of \(\mathrm{CGA}_{+}\) requires an additional convention. We define the _left evaluation of \(C\) at \(h\in\mathrm{CGA}_{+}\)_ as \(C(h)\coloneqq\sum_{i=0}^{n}h^{i}q_{i}\). A corresponding _right evaluation_ \(\sum_{i=0}^{n}q_{i}h^{i}\) exists and leads to a symmetric theory. We will not describe the "right" theory explicitly but occasionally hint at minor adaptations that are needed for it to work. Notation to distinguish between left and right evaluation will not be required.

The reverse of a polynomial is obtained by reversing all of its coefficients, i.e. \(\widetilde{C}=\sum_{i=0}^{n}t^{i}\widetilde{q_{i}}\). Left and right norm polynomials are defined as \(C\widetilde{C}\) and \(\widetilde{C}C\), respectively. A polynomial in \(\mathrm{CGA}_{+}[t]\) is called a spinor polynomial if \(C\widetilde{C}=\widetilde{C}C\in\mathbb{R}[t]\setminus\{0\}\). In this case, we call \(C\widetilde{C}\) the _norm polynomial of \(C\)_. Because of \(C\widetilde{C}=\widetilde{C}C\in\mathbb{R}[t]\setminus\{0\}\), a spinor polynomial \(C\) parametrizes a rational curve in the Study variety \(\mathcal{S}\) that is not entirely contained in the null quadric \(\mathcal{N}\).
We denote by \([C]\) the set of curve points over the complex numbers \(\mathbb{C}\), that is, the set \(\{[C(t)]\mid t\in\mathbb{C}\cup\{\infty\}\}\), with the usual understanding that \(C(\infty)\) equals the leading coefficient of \(C\).

## 3. A Geometric Factorization Algorithm

The factorization into linear factors of univariate polynomials with coefficients in the quaternions, dual quaternions, or split quaternions was the topic of previous research [11, 12, 23, 28, 29]. It leads to decompositions of motions in diverse transformation groups (\(\mathrm{SO}(3)\), \(\mathrm{SE}(3)\) and transformations of the hyperbolic plane) into coupled "elementary motions" (rotations and, in the case of \(\mathrm{SE}(3)\), also translations). As all of these algebras are contained in \(\mathrm{CGA}_{+}\), it seems natural to study factorizations with linear factors of polynomials in \(\mathrm{CGA}_{+}[t]\). They correspond to decompositions of conformal motions into the elementary motions that have been described in [5] (conformal scaling, translation, and rotation).

As has been observed in [15], a standard factorization algorithm for the mentioned quaternion algebras also works for spinor polynomials \(C\in\mathrm{CGA}_{+}[t]\), at least generically. It is based on the algebra involution (reversion), a factorization of the real norm polynomial, and division with remainder for polynomials over non-commutative rings. We briefly present the basic steps but omit proofs:

**Lemma 1** (Polynomial Division; [21, Theorem 1]).: _Let \(C\), \(P\in\mathrm{CGA}_{+}[t]\) be two polynomials and assume that the leading coefficient of \(P\) is invertible. Then there exist unique polynomials \(Q\) and \(R\in\mathrm{CGA}_{+}[t]\) such that \(C=QP+R\) and \(\deg(R)<\deg(P)\). Further, for \(h\in\mathrm{CGA}_{+}\) with \(P(h)=0\), it holds that \(C(h)=R(h)\)._

A proof of the first part of this (well-known) lemma is given in [21, Theorem 1]. The second statement is not trivial because left evaluation is not a ring homomorphism, but extending the proof of [12, Lemma 1] from dual quaternion polynomials to polynomials in \(\mathrm{CGA}_{+}[t]\) is straightforward.

_Remark 1_.: We will often use Lemma 1 for polynomials \(P\in\mathbb{R}[t]\). In this case, we have \(C=QP+R=PQ+R\). Also, the second statement of Lemma 1 becomes rather trivial.

**Lemma 2** (Zeros and Left Factors).: _Let \(C\in\mathrm{CGA}_{+}[t]\) and \(h\in\mathrm{CGA}_{+}\). Then \(t-h\) is a left factor of \(C\) if and only if \(C(h)=0\)._

The "right" version of this lemma (in the sense of right factors and right evaluation) is [21, Theorem 2]. Using the above results, we immediately obtain the following proposition, which gives a method for the computation of linear left factors.

**Proposition 1**.: _Let \(C\in\mathrm{CGA}_{+}[t]\) be a spinor polynomial._

* _If_ \(M\) _is a quadratic, monic, real factor of_ \(C\widetilde{C}\)_,_ \(R\) _is the (linear) remainder of polynomial division of_ \(C\) _by_ \(M\)_, and_ \(h\) _is a common zero of_ \(R\) _and_ \(M\)_, then_ \(t-h\) _is a left factor of_ \(C\)_._
* _If_ \(t-h\) _is a left factor of_ \(C\)_, then_ \(R(h)=0\) _where_ \(R\) _is the (linear) remainder of polynomial division of_ \(C\) _by_ \(M=(t-h)(t-\widetilde{h})\)_._

By Proposition 1, we can find all linear left factors of a spinor polynomial \(C\) by computing all quadratic, monic, real factors \(M\) of \(C\widetilde{C}\) and all common zeros of \(M\) and the linear remainder \(R\) when dividing \(C\) by \(M\). Finding all zeros of \(R\) is a linear problem.
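A small worked example over the quaternion sub-algebra \(\mathbb{H}\subset\mathrm{CGA}_{+}\) may help to see Proposition 1 in action (this is our own illustration, not taken from the cited references). Consider \(C=t^{2}-t(\mathbf{i}+\mathbf{j})+\mathbf{k}\) with norm polynomial \(C\widetilde{C}=(t^{2}+1)^{2}\). Choosing the quadratic factor \(M=t^{2}+1\), polynomial division yields

\[C=1\cdot M+R,\qquad R=-t(\mathbf{i}+\mathbf{j})+(\mathbf{k}-1).\]

The common zero of \(R\) and \(M\) is

\[h=-(\mathbf{k}-1)\bigl{(}-(\mathbf{i}+\mathbf{j})\bigr{)}^{-1}=-\tfrac{1}{2}(\mathbf{k}-1)(\mathbf{i}+\mathbf{j})=\mathbf{i};\]

indeed \(M(\mathbf{i})=\mathbf{i}^{2}+1=0\) and, by left evaluation, \(C(\mathbf{i})=\mathbf{i}^{2}-\mathbf{i}(\mathbf{i}+\mathbf{j})+\mathbf{k}=0\). Hence \(t-\mathbf{i}\) is a left factor and, in fact, \(C=(t-\mathbf{i})(t-\mathbf{j})\). In this example, the linear problem \(R(h)=0\) is uniquely solvable.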
Generically, it has a unique solution, but zero or infinitely many solutions are possible. A zero \(h\) of \(R\) is valid if it satisfies the quadratic constraint \(M(h)=0\). This also ensures that \(t-h\) really is a spinor polynomial because it has the real norm polynomial \(M\).

_Remark 2_.: If the remainder polynomial \(R=tr_{1}+r_{0}\) has an invertible leading coefficient (the generic case), its zero is uniquely given by \(h\coloneqq-r_{0}r_{1}^{-1}\). Thus we can write \(R=(t-h)r_{1}\). As \(M\) is a factor of \(C\widetilde{C}\), it follows from

\[C\widetilde{C}=M^{2}Q\widetilde{Q}+M(Q\widetilde{R}+R\widetilde{Q})+R\widetilde{R}\]

that \(M\) is also a factor of \(R\widetilde{R}=r_{1}\widetilde{r_{1}}(t-h)(t-\widetilde{h})\). Since \(r_{1}\) is invertible, \(h\) is also a zero of \(M\), and thus the quadratic constraint is automatically fulfilled.

By recursively constructing linear factors in this way, we can compute all decompositions of a spinor polynomial \(C\) into linear factors. A few remarks are in order:

* In each step of the outlined factorization procedure, the norm polynomial of the constructed left factor depends on the chosen quadratic real factor \(M\). As multiplication is non-commutative, we will in general obtain different factorizations into linear factors, depending on the chosen order of the quadratic factors of \(C\widetilde{C}\).
* Existence of a factorization into linear factors is by no means guaranteed. It might happen that the remainder polynomial \(R\) is constant or that \(M\) and \(R\) have no common zeros.
* Likewise, it is possible that \(M\) and \(R\) have infinitely many common zeros. This leads to spinor polynomials that admit infinitely many factorizations.

There is a fairly complete and recent a priori characterization of the dual quaternion polynomials which admit a factorization [24]. A similar criterion for the factorizability of split quaternion polynomials is not available and currently out of reach. For this reason, we will largely focus on generic spinor polynomials in this article.

The first step in computing factorizations of \(C\) according to the described algebraic procedure consists of computing all monic, quadratic, real factors of \(C\widetilde{C}\). This, essentially, means computing its zeros over \(\mathbb{C}\) and combining pairs of real or complex conjugate roots. In geometric terms, computing the complex roots of \(C\widetilde{C}\) amounts to computing the parameter values of the intersection points of \([C]\) with the null quadric \(\mathcal{N}\). In the remainder of this section, we will show how to construct linear factors directly from these points and their parameter values.

### Null Displacements

We continue by exploring algebraic and kinematic properties of points in the intersection of the Study variety \(\mathcal{S}\) and the null quadric \(\mathcal{N}\). In doing so, we freely use the scalar extension of \(\mathrm{CGA}_{+}\) by complex numbers. This does not change essential properties of \(\mathrm{CGA}_{+}\) over \(\mathbb{R}\) but affects sub-algebras such as the quaternions or the dual quaternions. We are still interested in real factorizations, but complex algebra elements need to be considered in order to obtain all real factorizations. We call any point \([n]\in\mathcal{S}\cap\mathcal{N}\) a _null displacement_. The following is an important lemma with a rather technical proof that we defer to the appendix.
**Lemma 3**.: _For a null displacement \([n]\in\mathcal{S}\cap\mathcal{N}\) there exists a vector \(x\neq 0\) such that \(xn=0\)._

If no confusion with more general algebra elements is to be expected, we refer to a vector \(x\) as in Lemma 3 as a _left annihilator_. By Remark 3 below, it is even a point, whence we also call it a _left annihilating point_. Generically, it is unique up to scalar multiples. For the following proposition, denote by \(\mathbb{S}\) the set of spheres (including points and planes) in conformal three-space.

**Proposition 2**.: _For a null displacement \([n]\in\mathcal{S}\cap\mathcal{N}\), the map \(\mathbb{S}\dashrightarrow\mathbb{S}\), \([x]\mapsto[nx\widetilde{n}]\) is constant wherever it is well-defined._

Proof.: Let \(x\) be an arbitrary vector and set \(y\coloneqq nx\widetilde{n}\). By Lemma 3, there exists a vector \(a\) such that \(an=0\). Thus \(ay=anx\widetilde{n}=0\), which shows that either \(y\) and \(a\) are linearly dependent, i.e. \([y]=[a]\), or \(y=0\).

_Remark 3_.: The left annihilator \(a\) of \(n\) in Lemma 3 satisfies \(\widetilde{a}a=0\), as we have \(\widetilde{a}an=0\) and \(n\neq 0\). Thus, it is a point, possibly with complex coordinates.

_Remark 4_.: For \([n]\in\mathcal{S}\cap\mathcal{N}\) there exists a linear form \(l(x)\) such that \(nx\widetilde{n}=l(x)a\). Take, for example, \(n=\varepsilon_{1}\), whence \(nx\widetilde{n}=l(x)a\) with \(l(x)=x_{o}\) and \(a=e_{\infty}\). In non-generic cases, it is possible that \(l\) vanishes. Thus, it is possible that the map mentioned in Proposition 2 is nowhere defined. One example of this is \(n=\varepsilon_{2}(1+\mathrm{i}\mathbf{i})\).1

Footnote 1: The complex unit \(\mathrm{i}\in\mathbb{C}\) is not to be confused with the quaternion unit \(\mathbf{i}\).

The proof of Proposition 2 shows:

**Corollary 1**.: _For a null displacement \([n]\in\mathcal{S}\cap\mathcal{N}\), the map \([x]\mapsto[nx\widetilde{n}]\) is undefined on \(\mathbb{S}\) if and only if \(n\) has two independent left annihilating points._

We see that there exist two types of null displacements: Generically, the left annihilating point is unique up to scalar multiples and the map of Proposition 2 is well-defined for generic spheres. Special null displacements have two linearly independent left annihilating points, and for them the map \([x]\mapsto[nx\widetilde{n}]\) is nowhere defined.

### A Geometric Factorization Algorithm

Using the concept of null displacements and their annihilators, it is possible to devise a predominantly geometric construction of left factors of a spinor polynomial \(C\) (and, of course, a symmetric construction of right factors). The polynomial \(C\) parametrizes a curve on the Study variety. This curve \([C]\) intersects the null quadric in a finite number of points. Let us take two distinct intersection points \(n_{1}\) and \(n_{2}\) of this curve\({}^{2}\) with the null quadric, corresponding to two (possibly complex conjugate) parameter values \(z_{1}\) and \(z_{2}\). Because of Lemma 1, the remainder polynomial obtained from division of \(C\) by the real polynomial \((t-z_{1})(t-z_{2})\) is precisely the linear polynomial interpolating \(n_{1}\) at the parameter value \(z_{1}\) and \(n_{2}\) at \(z_{2}\), respectively. This suggests that all the information about a left factor of \(C\) (provided it exists) is encoded in \(n_{1}\), \(n_{2}\), \(z_{1}\) and \(z_{2}\). Indeed, the following theorem shows how to construct a left factor from these entities alone. It uses the concept of "non-orthogonal points" \(a_{1}\), \(a_{2}\).
This means that their Clifford dot product \(a_{1}\cdot a_{2}\) does not vanish. Footnote 2: The case of only a single intersection point of (necessarily) high multiplicity is dealt with in Section 3.3. **Theorem 1**.: _Let \(C\) be a monic spinor polynomial and \(M\coloneqq(t-z_{1})(t-z_{2})\in\mathbb{R}[t]\) a monic, quadratic factor of \(C\widetilde{C}\) with \(z_{1}\), \(z_{2}\in\mathbb{C}\). If \(C(z_{1})\) and \(C(z_{2})\) have non-orthogonal points \(a_{1}\), \(a_{2}\) as respective left annihilators, then \(C\) has a left factor \(t-h\) where_ \[h\coloneqq z_{1}-\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}a_{1}a_{2}=\frac{z_{1}+z_{2}}{2}-\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}a_{1}\wedge a_{2} \tag{4}\] _and \((t-h)(t-\widetilde{h})=M\)._ Proof.: Let \(a_{1}\) and \(a_{2}\) be non-orthogonal points such that \(a_{1}C(z_{1})=a_{2}C(z_{2})=0\). Such points exist by Lemma 3. Define \(h\in\operatorname{CGA}_{+}\) by (4). For \(z_{1}\), \(z_{2}\in\mathbb{R}\), this \(h\) obviously is a real element of \(\operatorname{CGA}_{+}\). For complex roots of \(M\), we have \(z_{2}=\overline{z_{1}}\). This implies \(C(z_{2})=\overline{C(z_{1})}\) and we can choose \(a_{2}=\overline{a}_{1}\). Writing \(a_{1}=a_{R}+\mathrm{i}a_{I}\), where \(a_{R}\) is the real part and \(a_{I}\) the imaginary part of \(a_{1}\), we then have \[a_{1}\overline{a}_{1} =a_{R}^{2}-a_{I}^{2}-2\mathrm{i}(a_{R}\wedge a_{I}),\] \[\overline{a}_{1}a_{1} =a_{R}^{2}-a_{I}^{2}+2\mathrm{i}(a_{R}\wedge a_{I}).\] Therefore, \(2a_{1}\wedge\overline{a}_{1}=a_{1}\overline{a}_{1}-\overline{a}_{1}a_{1}=-4\mathrm{i}(a_{R}\wedge a_{I})\). Since \(z_{1}-\overline{z_{1}}\in\mathrm{i}\mathbb{R}\), we see that \(h\) is real in this case as well. We will show that \(h\) is a left zero of \(C\). Lemma 2 then implies that \(t-h\) is a left factor of \(C\). We use polynomial division to write \(C\) as \(C=QM+R\) with polynomials \(Q\), \(R\in\operatorname{CGA}_{+}[t]\) and \(\deg R\leq 1\). By Lemma 1, it is sufficient to show that \(h\) is a left zero of both \(R\) and \(M=(t-z_{1})(t-z_{2})\). As \(C(z_{1})=R(z_{1})\) and \(C(z_{2})=R(z_{2})\), \(R\) is the unique linear polynomial interpolating \(n_{1}\coloneqq C(z_{1})\) and \(n_{2}\coloneqq C(z_{2})\) at parameter values \(z_{1}\) and \(z_{2}\), respectively, i.e. \[R=\frac{(t-z_{1})n_{2}-(t-z_{2})n_{1}}{z_{2}-z_{1}}.\] Moreover, we have \[h-z_{1} =-\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}a_{1}a_{2},\] \[h-z_{2} =\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}(2a_{1}\cdot a_{2}-a_{1}a_{2})=\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}a_{2}a_{1}.\] Since \(a_{1}n_{1}=0\) and \(a_{2}n_{2}=0\), it holds that \((h-z_{1})n_{2}=0\) and \((h-z_{2})n_{1}=0\). Thus \(h\) is a left zero of \(R\). Furthermore, \[M(h)=(h-z_{1})(h-z_{2})=-\Big{(}\frac{z_{1}-z_{2}}{2a_{1}\cdot a_{2}}\Big{)}^{2}a_{1}a_{2}a_{2}a_{1}=0.\qed\] _Remark 5_.: By Equation (4), the "vector part" \(a_{1}\wedge a_{2}\) of \(h\) is, up to scalar multiples, determined by the intersection points \([n_{1}]\) and \([n_{2}]\) alone and does not require knowledge of their respective parameter values \(z_{1}\), \(z_{2}\). This shows that the elementary motion \(t-h\) is determined, up to an affine re-parametrization \(t\mapsto\alpha t+\beta\) with \(\alpha\), \(\beta\in\mathbb{R}\), by \([n_{1}]\) and \([n_{2}]\) alone. This observation is crucial for extensions of the factorization algorithm to algebraic motions that come without a given parametrization.
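Formula (4) is purely linear-algebraic once the annihilating points are known. The following Python sketch evaluates it numerically, assuming real parameter values \(z_{1}\), \(z_{2}\) and real annihilating points given as coordinate vectors with respect to a diagonal metric; the element \(h\) is represented by its scalar part together with the coefficient matrix of the bivector \(a_{1}\wedge a_{2}\). This is only an illustration of the formula, not an implementation of full \(\operatorname{CGA}_{+}\) arithmetic, and all function names are ours.

```python
import numpy as np

def dot(a, b, metric):
    """Clifford dot product a . b of two vectors under a diagonal metric."""
    return float(np.sum(metric * a * b))

def wedge(a, b):
    """Coefficient matrix of the bivector a ^ b (antisymmetrized outer product)."""
    return 0.5 * (np.outer(a, b) - np.outer(b, a))

def left_factor(z1, z2, a1, a2, metric):
    """Scalar part and bivector part of h as in formula (4)."""
    d = dot(a1, a2, metric)
    if abs(d) < 1e-12:
        raise ValueError("a1 and a2 are orthogonal; Theorem 1 does not apply")
    scalar = 0.5 * (z1 + z2)
    bivector = -(z1 - z2) / (2.0 * d) * wedge(a1, a2)
    return scalar, bivector

# toy data: two non-orthogonal vectors in a space of signature (+,+,+,+,-)
metric = np.array([1.0, 1.0, 1.0, 1.0, -1.0])
a1 = np.array([1.0, 0.0, 0.0, 1.0, 1.0])
a2 = np.array([0.0, 1.0, 0.0, 1.0, -1.0])
print(left_factor(-1.0, 2.0, a1, a2, metric))
```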
_Remark 6_.: The left annihilating points \(a_{1}\) and \(a_{2}\) can be computed by solving the systems of homogeneous linear equations arising from \(a_{1}n_{1}=a_{2}n_{2}=0\). Proposition 2 suggests a more straightforward method: Pick a random vector \(x\) and set \(a_{1}=n_{1}x\widetilde{n}_{1}\), \(a_{2}=n_{2}x\widetilde{n}_{2}\). It works, however, only in generic cases and for generic choices of \(x\). _Example 1_.: Let us consider the spinor polynomial \(C=t^{2}+1+\varepsilon_{1}(bt\mathbf{i}+a\mathbf{j})\), where \(0<b\leq a\). It is actually a motion polynomial in the sense of [12], and it is known that it admits factorizations (over the dual quaternions) if and only if \(a=b\) [20, Proposition 6]. Here, we investigate factorizability over \(\operatorname{CGA}_{+}\) using Theorem 1. There are only two intersection points of the curve \([C]\) with \(\mathcal{N}\), \[n_{1}=\varepsilon_{1}(a\mathbf{j}+b\mathrm{i}\mathbf{i})\quad\text{and}\quad n_{2}=\varepsilon_{1}(a\mathbf{j}-b\mathrm{i}\mathbf{i}).\] If \(a\neq b\), the unique left annihilating point for both \(n_{1}\) and \(n_{2}\) is \(a_{1}=a_{2}=e_{\infty}\). Hence, the necessary condition of Theorem 1 is not fulfilled. If, however, \(a=b\), we have infinitely many respective left annihilating points, \[a_{1}=\mu_{1}\mathrm{i}e_{1}+\mu_{1}e_{2}+\lambda_{1}e_{\infty}\quad\text{and}\quad a_{2}=-\mu_{2}\mathrm{i}e_{1}+\mu_{2}e_{2}+\lambda_{2}e_{\infty}\] with \(\mu_{1}\), \(\mu_{2}\), \(\lambda_{1}\), \(\lambda_{2}\in\mathbb{C}\). Via Theorem 1, they give rise to infinitely many left factors \(t-h_{1}\) and factorizations \(C=(t-h_{1})(t-h_{2})\) where \[h_{1} =\frac{1}{2\mu_{1}\mu_{2}}(-2\mu_{1}\mu_{2}\mathbf{k}+\varepsilon_{1}((\lambda_{1}\mu_{2}+\lambda_{2}\mu_{1})\mathbf{i}+(\lambda_{1}\mu_{2}-\lambda_{2}\mu_{1})\mathrm{i}\mathbf{j})),\] \[h_{2} =\frac{-1}{2\mu_{1}\mu_{2}}(-2\mu_{1}\mu_{2}\mathbf{k}+\varepsilon_{1}((2a\mu_{1}\mu_{2}+\lambda_{1}\mu_{2}+\lambda_{2}\mu_{1})\mathbf{i}+(\lambda_{1}\mu_{2}-\lambda_{2}\mu_{1})\mathrm{i}\mathbf{j})).\] The quaternions \(h_{1}\) and \(h_{2}\) both have real coefficients if and only if \(\mu_{1}\), \(\mu_{2}\) and \(\lambda_{1}\), \(\lambda_{2}\), respectively, are complex conjugates. The factorizations we found are precisely those of [20, Proposition 15]. In other words, for this example, extending the algebra from dual quaternions to \(\mathrm{CGA}_{+}\) does not yield more factorizations. The construction of the left factor in the proof of Theorem 1 uses two non-orthogonal points \(a_{1}\) and \(a_{2}\) obtained from a quadratic factor of the norm polynomial with _distinct_ roots. This raises the question of whether it is also possible to use a quadratic factor \((t-z)^{2}\) of the norm polynomial which has one root \(z\) of multiplicity two. This factor, however, corresponds to only one intersection point \([n]=[C(z)]\) of the curve parametrized by a spinor polynomial \(C\) with the null quadric \(\mathcal{N}\). Generically, \([n]\) will only have a unique annihilating point \([a]\) and we cannot use the construction above. If the left annihilating point of \([n]\) is not unique, there exist two distinct vectors \(a_{1}\) and \(a_{2}\) such that \(a_{1}n=a_{2}n=0\). This implies, however, \((a_{1}a_{2}+a_{2}a_{1})n=0\), and since \(2a_{1}\cdot a_{2}=(a_{1}a_{2}+a_{2}a_{1})\in\mathbb{R}\) lies in the center of \(\mathrm{CGA}\), we have \(a_{1}\cdot a_{2}=0\).
Thus, all possible choices of two different left annihilating points of the same null displacement are orthogonal and cannot be used in the construction above. Therefore, it is necessary to investigate the case of quadratic factors of the norm polynomial separately. ### Norm Polynomials with Quadratic Factors Theorem 1 only allows the construction of a left factor from two _distinct_ intersection points of the curve \([C]\) with the null quadric. It is, however, also possible, at least generically, to obtain a left factor from a single intersection point \(n\), provided it corresponds to a zero \(z\) of \(C\widetilde{C}\) of multiplicity two or higher. For this special case we are able to provide a simple sufficient criterion for a left factor to exist that is also _necessary_. **Theorem 2**.: _Let \(C\) be a spinor polynomial such that \((t-z)^{2}\in\mathbb{R}[t]\) is a factor of \(C\widetilde{C}\). Then there exists a left factor \(t-h\) of \(C\) with \((t-h)(t-\widetilde{h})=(t-z)^{2}\) if and only if \(\widetilde{C^{\prime}(z)}C(z)\neq 0\)._ _Remark 7_.: Note that Theorem 2 only talks about real zeros \(z\) of \(C\widetilde{C}\). Including complex zeros would be possible but would result in a left factor \(t-h\) where \(h\) has complex coefficients - something we generally wish to avoid. Also note that a complex zero can always be paired with its complex conjugate to provide suitable input data for the factorization according to Theorem 1. We will prove Theorem 2 by reducing the statement to the special case where \(e_{\infty}C(z)=0\). This can always be done via a conformal transformation: From Lemma 3 we know there exists a vector \(a\) such that \(aC(z)=0\). If \(a\neq e_{\infty}\), the vector \(a+e_{\infty}\) does not square to zero and hence represents an invertible transformation. We can thus study the polynomial \((a+e_{\infty})C(a+e_{\infty})\) instead of \(C\). Indeed, because of \(aC(z)=0\) we have \[e_{\infty}(a+e_{\infty})C(z)(a+e_{\infty})=e_{\infty}aC(z)(a+e_{\infty})=0,\] so that the polynomial \((a+e_{\infty})C(a+e_{\infty})\) fulfills the desired property at \(z\). This allows us to make simplifying assumptions on \(C(z)\) by the following lemma. **Lemma 4**.: _Let \(n\in\mathrm{CGA}_{+}\) be such that \(e_{\infty}n=0\). Then \(n=\varepsilon_{1}(h_{1}+\varepsilon_{2}h_{2})\) for two quaternions \(h_{1}\), \(h_{2}\in\mathbb{H}\)._ We omit the proof of Lemma 4 as it consists of a straightforward computation. Proof of Theorem 2.: We assume that \(z=0\) and \(e_{\infty}C(0)=0\). Neither of these assumptions is a loss of generality. The former can be achieved by a simple reparametrization, the latter by the considerations preceding Lemma 4. Let us define \(c_{0}\coloneqq C(0)\) and \(c_{1}\coloneqq C^{\prime}(0)\) so that the remainder polynomial when dividing \(C\) by \(t^{2}\) equals \(R=c_{1}t+c_{0}\). In order to find a left factor \(t-h\) of \(C\), we need to find \(h\) such that \(R(h)=0\) and \((t-h)(t-\widetilde{h})=t^{2}\). From [5], we know that there are only two types of elementary motions which have a norm polynomial with a root of multiplicity two: translations and transversions. Both are given by \(t-h\) where \(h=ab\) for a point \(a\) and a _plane_ \(b\). Assuming \(t-h\) is a left factor of \(C\), we further get \(ac_{0}=0\). But by our initial assumption, we also have \(e_{\infty}c_{0}=0\), which then implies \((ae_{\infty}+e_{\infty}a)c_{0}=0\) and therefore \(a\cdot e_{\infty}=0\). But the only real point fulfilling this identity is \(e_{\infty}\) itself and therefore we obtain \(a=e_{\infty}\).
Thus, we need to find \(h=e_{\infty}b\) for a plane \(b=b_{1}e_{1}+b_{2}e_{2}+b_{3}e_{3}+b_{\infty}e_{\infty}\) such that \(R(h)=0\). By assumption we have \(e_{\infty}C(0)=e_{\infty}c_{0}=0\). Lemma 4 shows that there exist quaternions \(q_{1}\), \(q_{2}\in\mathbb{H}\) such that \(c_{0}=\varepsilon_{1}(q_{1}+\varepsilon_{2}q_{2})\). Since \(C\) is a spinor polynomial, \(c_{0}=C(0)\) fulfills the Study conditions, which, in this case, simplify to the single condition \(S(q_{1},q_{2})=0\). With this, the condition \(R(h)=0\) becomes \[\begin{split} 0&=e_{\infty}bc_{1}+\varepsilon_{1}(q_{1}+\varepsilon_{2}q_{2})\\ &=e_{\infty}(bc_{1}-e_{123}(q_{1}+\varepsilon_{2}q_{2}))\\ &=e_{\infty}e_{123}(-e_{123}bc_{1}-(q_{1}+\varepsilon_{2}q_{2}))\\ &=-\varepsilon_{1}((b_{1}\mathbf{i}+b_{2}\mathbf{j}+b_{3}\mathbf{k})c_{1}-(q_{1}+\varepsilon_{2}q_{2})).\end{split} \tag{5}\] Let us denote the vectorial quaternion \(b_{1}\mathbf{i}+b_{2}\mathbf{j}+b_{3}\mathbf{k}\) by \(B\). By Lemma 4, Equation (5) is fulfilled if and only if there exist quaternions \(h_{1}\), \(h_{2}\in\mathbb{H}\) such that \[Bc_{1}-(q_{1}+\varepsilon_{2}q_{2})=\varepsilon_{1}(h_{1}+\varepsilon_{2}h_{2}).\] This is the case if the sum of the coefficients of \(1\) and \(\varepsilon_{3}\) as well as the coefficient of \(\varepsilon_{2}\) vanish in the four-quaternion representation of \(Bc_{1}-(q_{1}+\varepsilon_{2}q_{2})\). Using the four-quaternion representation \(c_{1}=r_{0}+\varepsilon_{1}r_{1}+\varepsilon_{2}r_{2}+\varepsilon_{3}r_{3}\), this is equivalent to the two quaternionic equations \[Br_{0}-q_{1}=0,\qquad Br_{2}-q_{2}=0 \tag{6}\] for the unknown \(B\). Either of these equations has a unique solution for \(B\) provided both \(r_{0}\) and \(r_{2}\) are different from \(0\), i.e. \(B_{1}=q_{1}r_{0}^{-1}\) and \(B_{2}=q_{2}r_{2}^{-1}\). We have to show that both solutions are non-trivial, coincide, and are vectorial quaternions. To see this, we use the multiple root condition \((\widetilde{C}C)^{\prime}(0)=\widetilde{c}_{0}c_{1}+\widetilde{c}_{1}c_{0}=0\). Again invoking the four-quaternion representations of \(c_{0}\) and \(c_{1}\), this yields four quaternionic conditions: \[\widetilde{q_{2}}r_{0}+\widetilde{r_{2}}q_{1} =0, \tag{7}\] \[\widetilde{q_{1}}r_{2}+\widetilde{r_{0}}q_{2} =0, \tag{8}\] \[S(q_{1},r_{0}) =0, \tag{9}\] \[S(q_{2},r_{2}) =0. \tag{10}\] Equations (9) and (10) ensure that both \(B_{1}\) and \(B_{2}\) are vectorial quaternions. To see that they are the same, let us take \(B_{1}\) and plug it into the second equation of (6). Multiplying away the denominator \(r_{0}\widetilde{r}_{0}\) of \(B_{1}\) and using Eq. (8) and Eq. (9), this yields \[q_{1}\widetilde{r_{0}}r_{2}-r_{0}\widetilde{r_{0}}q_{2}=-r_{0}\widetilde{q_{1}}r_{2}+r_{0}\widetilde{q_{1}}r_{2}=0.\] Thus, \(B_{1}\) and \(B_{2}\) are equal. Further, this solution is non-trivial: we assumed \(r_{0}\) and \(r_{2}\) to be non-zero, and \(q_{1}\) and \(q_{2}\) cannot vanish simultaneously as otherwise \(c_{0}\) would be zero and, consequently, \(C\) would not be reduced. Now let us consider the case where either \(r_{0}\) or \(r_{2}\) is zero, but they do not vanish simultaneously. Then Eq. (7) ensures that one of the equations in (6) is already fulfilled and we can take the solution of the other one. Finally, from \(r_{0}=r_{2}=0\) it follows that \(c_{1}=\varepsilon_{1}(r_{1}+\varepsilon_{2}r_{3})\), which is a contradiction to our assumption \(\widetilde{c_{1}}c_{0}\neq 0\).
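The division-and-remainder step at the heart of the algebraic factorization procedure (Lemma 2 together with Remark 2) is easy to prototype. The following Python sketch performs one splitting step over the ordinary quaternions \(\mathbb{H}\), a much simpler setting than \(\operatorname{CGA}_{+}\) since every nonzero quaternion is invertible and the quadratic constraint \(M(h)=0\) is then automatic, but the mechanics of computing the remainder \(R=r_{1}t+r_{0}\) modulo a real quadratic factor and of reading off \(h=-r_{0}r_{1}^{-1}\) are the same. All function names are ours, chosen for illustration.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as arrays (w, x, y, z)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def qinv(a):
    return qconj(a) / float(np.dot(a, a))

def remainder_mod_quadratic(C, m1, m0):
    """Remainder r1*t + r0 of C (quaternion coefficients, lowest degree
    first) after division by the real monic quadratic t^2 + m1*t + m0."""
    c = [np.array(q, dtype=float) for q in C]
    for k in range(len(c) - 1, 1, -1):   # reduce via t^2 = -m1*t - m0
        c[k - 1] = c[k - 1] - m1 * c[k]
        c[k - 2] = c[k - 2] - m0 * c[k]
    return c[1], c[0]

# C = (t - i)(t - j) = t^2 - (i + j) t + k, stored lowest degree first.
one, i, j, k = np.eye(4)
C = [k, -(i + j), one]

# Its norm polynomial is (t^2 + 1)^2; take the real factor M = t^2 + 1.
r1, r0 = remainder_mod_quadratic(C, 0.0, 1.0)
h = -qmul(r0, qinv(r1))   # Remark 2: R = (t - h) r1
print(h)                  # -> [0, 1, 0, 0], i.e. h = i and t - i is a left factor
```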
_Remark 8_.: For the case of _split quaternions_, a sub-algebra of CGA\({}_{+}\), the article [29] gives a necessary and sufficient criterion for the existence of a linear left factor that corresponds, via Theorem 1, to two intersection points \([C(z_{1})]\), \([C(z_{2})]\) of \([C]\) and \(\mathcal{N}\). It reads \[\widetilde{C(z_{1})}C(z_{2})\neq 0 \tag{11}\] and we may view the condition of Theorem 2 as a limiting version of (11). This suggests that (11) may be a sufficient and _necessary_ condition also in the case of spinor polynomials. Unfortunately, this is not true, as can be seen in the following example. _Example 2_.: In Example 1, we considered the spinor polynomial \(C=t^{2}+1+\varepsilon_{1}(bt\mathbf{i}+a\mathbf{j})\), which admits (infinitely many) factorizations if and only if \(a=b\). The two intersection points of \([C]\) with \(\mathcal{N}\) are \([n_{1}]\) and \([n_{2}]\) where \[n_{1}=\varepsilon_{1}(a\mathbf{j}+b\mathrm{i}\mathbf{i})\quad\text{and}\quad n_{2}=\varepsilon_{1}(a\mathbf{j}-b\mathrm{i}\mathbf{i}).\] As \(\varepsilon_{1}^{2}=0\), it holds that \(\widetilde{n_{1}}n_{2}=0\), regardless of the existence of a factorization. ## 4. A Multiplication Technique for Factorizing Spinor Polynomials As remarked in Section 3, not every spinor polynomial \(C\) admits a factorization. For the sub-algebras of dual and split quaternions of CGA\({}_{+}\), there exist multiplication techniques which allow one to obtain a factorization of \(CR\) where \(R\in\mathbb{R}[t]\) is a suitable real polynomial. For applications in kinematics [6, 16, 22], this is important as \(C\) and \(CR\) represent the same rational motion. While the real co-factor \(R\) is quite tricky to compute in the case of dual quaternions [23], a generic real polynomial of suitable degree will do in the case of split quaternions [29]. The same is true for spinor polynomials and a proof will be given in this section. **Theorem 3**.: _Let \(P\) be a spinor polynomial which does not admit a left or a right factor. Then there exists a spinor polynomial \(H\coloneqq t-h\) such that \(C\coloneqq PH\) admits both a left and a right factor._ Proof.: Denote the zeros of \(H\widetilde{H}\) by \(z_{1}\) and \(z_{2}\). Further, let us first assume that the norm polynomial \(P\widetilde{P}\) has two distinct roots \(t_{1}\) and \(t_{2}\). Let \(b_{1}\) and \(b_{2}\) be right annihilators of \(P(t_{1})\) and \(P(t_{2})\), respectively. To prove that the polynomial \(C\) admits a left and a right factor, we need to show that \(C(t_{1})\) and \(C(t_{2})\) have non-orthogonal right annihilators, and that \(C(z_{1})\) and \(C(z_{2})\) have non-orthogonal left annihilators, for an appropriate choice of \(H\). In addition, \(H\) has to fulfill the Study conditions. We choose two non-orthogonal vectors \(e\) and \(f\) and define \(H\) as the interpolation polynomial of \(ef\) and \(-fe\), i.e. \(H=t+e\wedge f\). We have six essential degrees of freedom to choose \(e\) and \(f\) but need to avoid orthogonality, i.e. one quadratic condition. For further arguments, we additionally need to ensure that the roots of \(H\widetilde{H}\) are different from any roots of \(P\widetilde{P}\), which gives a finite number of additional algebraic conditions to avoid. As \(P(t_{i})b_{i}=0\) and \(H(t_{i})\widetilde{H(t_{i})}\in\mathbb{C}\setminus\{0\}\), it holds that \[C(t_{i})\widetilde{H(t_{i})}b_{i}H(t_{i})=P(t_{i})H(t_{i})\widetilde{H(t_{i})}b_{i}H(t_{i})=0,\] for \(i\in\{1,2\}\).
Thus, we have found suitable right annihilators \(r_{1}=\widetilde{H(t_{1})}b_{1}H(t_{1})\) and \(r_{2}=\widetilde{H(t_{2})}b_{2}H(t_{2})\) of \(C(t_{1})\) and \(C(t_{2})\), respectively. Similar arguments show that \(l_{1}\coloneqq P(z_{1})e\widetilde{P(z_{1})}\) is a left annihilator of \(C(z_{1})\) and \(l_{2}\coloneqq P(z_{2})f\widetilde{P(z_{2})}\) is a left annihilator of \(C(z_{2})\). As we want to avoid orthogonality of these left and right annihilators, we need to fulfill one condition of degree four and one of degree two on the coefficients of \(H\). In summary, we need to avoid a finite number of algebraic sets of dimension at most five, which is certainly possible. We still need to consider the case where \(P\widetilde{P}\) has only one root \(z\in\mathbb{R}\) of multiplicity at least two. Let \(b_{1}\) be a left annihilating point of \(P(z)\) and \(b_{2}\) a right annihilating point of \(P(z)\). Further, let us define \(a_{1}=P(z_{1})e\widetilde{P(z_{1})}\) for some vector \(e\) and \(a_{2}=\widetilde{H(z)}b_{2}H(z)\). It is straightforward to see that \(a_{1}\) is a left annihilator of \(C(z_{1})\) and \(b_{1}\) a left annihilator of \(C(z)\). Under the condition that they are not orthogonal, we can use Theorem 1 to construct a left factor of \(C\). Similarly, \(a_{2}\) is a right annihilator of \(C(z)\), and \(e\) is a right annihilator of \(C(z_{2})\). Thus we can use Theorem 1 to find a right factor of \(C\). Again, this is possible if we avoid a finite number of low dimensional algebraic sets. **Corollary 2**.: _For any spinor polynomial \(P\in\operatorname{CGA}_{+}[t]\) there exists a spinor polynomial \(H\in\operatorname{CGA}_{+}[t]\) such that \(PH\) admits a factorization with linear factors. Moreover, there exists a real polynomial \(R\in\mathbb{R}[t]\) such that \(PR\) admits a factorization with linear factors._ Proof.: The first statement follows by induction on the degree of \(P\) from Theorem 3. By Theorem 3, the co-factor \(H\) admits a factorization with linear factors. Thus, the second statement follows from the first with \(R=H\widetilde{H}\). _Example 3_.: The polynomial \(t^{2}+\varepsilon_{3}\) has the norm polynomial \((t^{2}+1)(t^{2}-1)\). After division by either of its two real quadratic factors, the respective remainder polynomials, \(\varepsilon_{3}+1\) and \(\varepsilon_{3}-1\), are constant and have no zero. Thus, \(t^{2}+\varepsilon_{3}\) does not admit a factorization into two linear factors by Lemma 2. However, with the vectors \(e=e_{1}+e_{o}\) and \(f=e_{2}+e_{\infty}\), we have \(e\cdot f=-1\neq 0\), \(H=t-e\wedge f=t+\mathbf{k}-\mathbf{i}\varepsilon_{1}+\mathbf{j}\varepsilon_{2}+\mathbf{k}\varepsilon_{3}\) and \((t^{2}+\varepsilon_{3})H=(t-h_{1})(t-h_{2})(t-h_{3})\) where \[h_{1} =-\mathbf{k}-\mathbf{i}\varepsilon_{1}+\mathbf{j}\varepsilon_{2}-\varepsilon_{3},\] \[h_{2} =\phantom{-}\mathbf{k}+(\mathbf{i}+\tfrac{1}{2}\mathbf{j})\varepsilon_{1}-\mathbf{j}\varepsilon_{2}+\varepsilon_{3},\] \[h_{3} =-\mathbf{k}+(\mathbf{i}-\tfrac{1}{2}\mathbf{j})\varepsilon_{1}-\mathbf{j}\varepsilon_{2}-\varepsilon_{3}.\] There is nothing particular about the vectors \(e\) and \(f\). Any generic choice will do. ## 5. Factorization of an Algebraic Four-Bar Motion This section demonstrates, by means of a single example, that the ideas of our geometric factorization algorithm generalize to algebraic motions.
Its purpose is twofold: It serves as a motivation for introducing an alternative factorization algorithm for spinor polynomials and it provides an outlook to future research. The reader is kindly asked to view the contents of this section as a proof of concept and accept that some steps in the computation only come with a rather vague justification. The actual factorization theory of algebraic motions is yet to be worked out. Let us consider the algebraic curve \(C\) in the projective space \(\mathbb{P}(\mathbb{H})=\mathbb{P}^{3}(\mathbb{R})\) over the quaternions \(x=x_{0}+x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3}\mathbf{k}\) that is given by the ideal \[\langle x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}-4(x_{0}x_{2}+x_{1}x_{3}),\] \[\qquad 13x_{0}^{2}-3x_{1}^{2}+13x_{2}^{2}-3x_{3}^{2}+16(x_{0}x_{1}+x_{2}x_{3})-12(x_{0}+x_{1})(x_{2}-x_{3})\rangle. \tag{12}\] It is of genus one. The two generating polynomials were obtained as "circle constraint equations" in the sense of [3, Section 3.1]. They encode the condition that two unit vectors \(m_{1}\), \(m_{2}\) in the moving coordinate frame are mapped, via the rotations described by (12), to respective circles with normalized unit axes \(f_{1}\), \(f_{2}\) in the fixed system. The motion given by \(C\) can be mechanically realized as the coupler motion of a spherical four-bar linkage. It is our aim to compute the fixed axes \(f_{1}\), \(f_{2}\) and the moving axes \(m_{1}\), \(m_{2}\) using our geometric factorization algorithm. By abuse of notation, we denote by \(\mathcal{N}\) the intersection of the null quadric with the projective space \(\mathbb{P}(\mathbb{H})=\mathbb{P}^{3}(\mathbb{R})\) of quaternions and we refer to this intersection as the "null quadric" as well. It is a regular and doubly ruled quadric. In a first step, we intersect the curve \(C\) and the null quadric \(\mathcal{N}\). Since the curve is of degree four, we expect eight intersection points. Indeed, we find \[\begin{split}[n_{1}]&=[2-2\mathrm{i}\mathbf{i}+\sqrt{2}((1+\mathrm{i})\mathbf{j}-(1-\mathrm{i})\mathbf{k})],\\ [n_{2}]&=[2-2\mathrm{i}\mathbf{i}-\sqrt{2}((1+\mathrm{i})\mathbf{j}-(1-\mathrm{i})\mathbf{k})],\\ [n_{3}]&=[5-(4+3\mathrm{i})\mathbf{i}+(3-4\mathrm{i})\mathbf{j}+5\mathrm{i}\mathbf{k}],\\ [n_{4}]&=[5+(4+3\mathrm{i})\mathbf{i}-(3-4\mathrm{i})\mathbf{j}+5\mathrm{i}\mathbf{k}],\end{split} \tag{13}\] and the respective complex conjugates \([\overline{n}_{1}]\), \([\overline{n}_{2}]\), \([\overline{n}_{3}]\), \([\overline{n}_{4}]\). It will later be important that the point pairs \[(n_{1},\overline{n}_{2}),\quad(n_{2},\overline{n}_{1}),\quad(n_{3},\overline{n}_{4}),\quad(n_{4},\overline{n}_{3})\] lie on rulings of the first kind on \(\mathcal{N}\), while the point pairs \[(n_{1},n_{2}),\quad(\overline{n}_{1},\overline{n}_{2}),\quad(n_{3},n_{4}),\quad(\overline{n}_{3},\overline{n}_{4}) \tag{14}\] lie on rulings of the second kind. Left annihilating points to \(n_{1}\), \(n_{2}\), \(n_{3}\) and \(n_{4}\) are \[a_{1}=2e_{1}+\sqrt{2}\mathrm{i}(e_{2}-e_{3}),\quad a_{2}=2e_{1}-\sqrt{2}\mathrm{i}(e_{2}-e_{3}),\quad a_{3}=e_{1}-\mathrm{i}e_{2},\quad a_{4}=e_{1}+\mathrm{i}e_{2},\] respectively. They come in complex conjugate pairs, even though they did not arise from complex conjugate null displacements. This is no coincidence, as we only expect two real left factors. In fact, points on the same ruling of the second kind yield the same left annihilators.
The two revolute axes in the fixed frame are given by the wedge products \[a_{1}\wedge\overline{a}_{1}=-a_{2}\wedge\overline{a}_{2}=4\sqrt{2}\mathrm{i}(\mathbf{j}+\mathbf{k}),\quad a_{3}\wedge\overline{a}_{3}=-a_{4}\wedge\overline{a}_{4}=-2\mathrm{i}\mathbf{k}.\] After normalization, we find the axis directions \(f_{1}=\frac{1}{\sqrt{2}}(\mathbf{j}+\mathbf{k})\) and \(f_{2}=\mathbf{k}\), respectively. The right factors will give rise to the moving axes. Their computation is based on the null points (13) as well but, in view of (14), we compute right annihilating points of \(n_{1}\), \(\overline{n}_{2}\), \(n_{3}\), and \(\overline{n}_{4}\): \[b_{1}=e_{2}+\mathrm{i}e_{3},\quad\overline{b}_{2}=e_{2}-\mathrm{i}e_{3},\quad b_{3}=4e_{1}-3e_{2}+5\mathrm{i}e_{3},\quad\overline{b}_{4}=4e_{1}-3e_{2}-5\mathrm{i}e_{3}.\] Here, points on the same ruling of the first kind give identical annihilators. The moving axis directions are found as \[b_{1}\wedge\overline{b}_{1}=b_{2}\wedge\overline{b}_{2}=2\mathrm{i}\mathbf{i},\quad b_{3}\wedge\overline{b}_{3}=b_{4}\wedge\overline{b}_{4}=-10\mathrm{i}(3\mathbf{i}+4\mathbf{j}).\] Unit direction vectors of the two moving axes in the moving frame are \(m_{1}=-\frac{1}{5}(3\mathbf{i}+4\mathbf{j})\) and \(m_{2}=-\mathbf{i}\), respectively (the choice of sign is irrelevant but helps to make the visualization of the mechanism more compact). Mapping these two points via the transformations in \(C\) produces the spherical four-bar motion. The underlying mechanical structure is illustrated in Figure 1. The seeming ease of these computations is deceiving. In the case of four-bar linkages with axes only in the fixed and in the moving frame, everything is straightforward indeed. If further moving axes are involved, additional considerations are required: * A proper mathematical definition of left and right factors is needed. * It seems already clear that, in contrast to rational motions, left and right factors of algebraic motions generically don't exist. Formulation of necessary and maybe even sufficient criteria for their existence would be useful. * For algebraic motions with three or more factors, it will be necessary to "split off" a left or right factor by a non-obvious algebraic version of polynomial division. * One needs to ensure that all axes are computed with respect to a particular configuration of the linkage and also with respect to the same coordinate frame. The latter point is exemplified by the above example: The axes \(f_{1}\) and \(f_{2}\) are fixed in the fixed frame, the axes \(m_{1}\), \(m_{2}\) are fixed in the moving frame. In spite of the labeling, Figure 1 does not display \(m_{1}\) and \(m_{2}\) directly but their images in the fixed frame for particular configurations of the linkage. Working out a general factorization theory for algebraic motions will be left to future publications. A further line of future research concerns generalizations of our results to conformal algebras of dimension larger than three or even to more general Clifford algebras. In fact, with the exception of Lemma 3, none of the arguments depends crucially on the dimension of the underlying conformal space. The formulation of Lemma 3 makes sense in other Clifford algebras as well, but its rather technical proof in the appendix relies on arguments that only hold for conformal geometric algebra in dimension three. ## Acknowledgment Johannes Siegele was supported by Austrian Science Fund (FWF) P 33397-N (Rotor Polynomials: Algebra and Geometry of Conformal Motions).
Figure 1: A spherical four-bar linkage. ## 6. Appendix Proof of Lemma 3.: Let us write \(n\) in the four-quaternion representation, i.e. \(n=q_{0}+q_{1}\varepsilon_{1}+q_{2}\varepsilon_{2}+q_{3}\varepsilon_{3}\), and \(x=x_{0}e_{0}+Xe_{123}+x_{\infty}e_{\infty}\), where \(X=x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3}\mathbf{k}\in\mathbb{H}\) is a vectorial quaternion. Using the multiplication Table 2 we get \[\begin{split} xn&=(x_{0}e_{0}+Xe_{123}+x_{\infty}e_{\infty})(q_{0}+q_{1}\varepsilon_{1}+q_{2}\varepsilon_{2}+q_{3}\varepsilon_{3})\\ &=e_{0}(x_{0}(q_{0}-q_{3})-Xq_{2})\\ &\quad+e_{\infty}(x_{\infty}(q_{0}+q_{3})-Xq_{1})\\ &\quad+e_{123}(X(q_{0}-q_{3})+2x_{\infty}q_{2})\\ &\quad+e_{0123\infty}(x_{0}q_{1}-x_{\infty}q_{2}+Xq_{3}).\end{split} \tag{15}\] For this product to equal zero, we need the coefficients of \(e_{0}\), \(e_{\infty}\), \(e_{123}\) and \(e_{0123\infty}\) in Equation (15) to vanish. This gives rise to four equations over quaternions: (i) \[x_{0}(q_{0}-q_{3})-Xq_{2}=0\] (ii) \[x_{\infty}(q_{0}+q_{3})-Xq_{1}=0\] (iii) \[X(q_{0}-q_{3})+2x_{\infty}q_{2}=0\] (iv′) \[x_{0}q_{1}-x_{\infty}q_{2}+Xq_{3}=0\] For convenience, let us replace (iv′) by 2(iv′)+(iii), which results in the new fourth equation (iv) \[X(q_{0}+q_{3})+2x_{0}q_{1}=0.\] The aim is to find a non-zero solution for this system of quaternion equations. We will do so successively by explicitly giving a solution and subsequently discussing the cases where this solution is zero. Case 1: We give two explicit solutions. They span a subspace of the vector space of all solutions. For a slightly more compact notation, let us define \(\|x\|\coloneqq x\widetilde{x}\). Solution 1.1: The first solution is given by \(X=q_{2}\widetilde{(q_{0}-q_{3})}=-(q_{0}-q_{3})\widetilde{q_{2}}\). Plugging this into Equations (i)-(iv) we obtain \[(x_{0}+\|q_{2}\|)(q_{0}-q_{3}) =0,\] \[x_{\infty}(q_{0}+q_{3})+(q_{0}-q_{3})\widetilde{q_{2}}q_{1} =0,\] \[(\|q_{0}-q_{3}\|+2x_{\infty})q_{2} =0,\] \[q_{2}\widetilde{(q_{0}-q_{3})}(q_{0}+q_{3})+2x_{0}q_{1} =0.\] The choice \(x_{0}=-\|q_{2}\|\) solves the first equation and \(x_{\infty}=-\|q_{0}-q_{3}\|/2\) solves the third equation. To see that this is a solution for the whole system, let us first recall the following: The point \([n]\) lies on the null quadric, whence \(\|q_{0}\|-\|q_{3}\|-S(q_{1},q_{2})=0\). Using this and the Study condition \(\operatorname{v}(q_{0}\widetilde{q_{3}})=\operatorname{v}(q_{1}\widetilde{q_{2}})\), we obtain \[\begin{split}(q_{0}+q_{3})\widetilde{(q_{0}-q_{3})}&=\|q_{0}\|-\|q_{3}\|-2\operatorname{v}(q_{0}\widetilde{q_{3}})\\ &=S(q_{1},q_{2})-2\operatorname{v}(q_{1}\widetilde{q_{2}})\\ &=2q_{2}\widetilde{q_{1}}.\end{split} \tag{16}\] Similarly, we get \[\widetilde{(q_{0}+q_{3})}(q_{0}-q_{3})=2\widetilde{q_{1}}q_{2}. \tag{17}\] Using Equations (16) and (17) it is easy to see that we have indeed found a solution for our system. Solution 1.2: Let us set \(X=q_{1}\widetilde{(q_{0}+q_{3})}=-(q_{0}+q_{3})\widetilde{q_{1}}\). Again using Equations (16) and (17), the system of equations reads \[(x_{0}+\tfrac{1}{2}\|q_{0}+q_{3}\|)(q_{0}-q_{3}) =0,\] \[(2\|q_{1}\|+2x_{\infty})q_{2} =0,\] \[(\|q_{0}+q_{3}\|+2x_{0})q_{1} =0,\] which obviously admits the solution \(x_{0}=-\|q_{0}+q_{3}\|/2\), \(x_{\infty}=-\|q_{1}\|\).3 Footnote 3: It can be shown that Solutions 1.1 and 1.2 are linearly dependent. The purpose of stating them separately is that in our further discussion we can assume that both of them vanish, which gives us more conditions to work with.
Case 2: It might still be the case that every linear combination of the solutions given above is zero, i.e. \((q_{0}-q_{3})\widetilde{q_{2}}=(q_{0}+q_{3})\widetilde{q_{1}}=0\) and \(\|q_{0}+q_{3}\|=\|q_{0}-q_{3}\|=\|q_{1}\|=\|q_{2}\|=0\). From these assumptions, we immediately get \(S(q_{0},q_{3})=0\) and \(\|q_{0}\|=-\|q_{3}\|\), which simplifies the null quadric condition to \(2\|q_{0}\|=S(q_{1},q_{2})\). Again we will give two solutions for this case. For this, we need the following property of zero divisors in the algebra of quaternions: For two quaternions \(x\), \(y\in\mathbb{H}\) with \(\|x\|=0\) it holds that \[x\widetilde{y}x=(x\widetilde{y}+y\widetilde{x})x=S(x,y)x. \tag{18}\] Solution 2.1: Let us choose \(X=q_{2}\widetilde{(q_{0}+q_{3})}=-(q_{0}+q_{3})\widetilde{q_{2}}\). Equations (i) and (iv) suggest \(x_{0}=0\) as a solution. Putting \(x_{\infty}=-2\|q_{0}\|\) and using Equations (16) and (17), Equation (ii) reads \[x_{\infty}(q_{0}+q_{3})+(q_{0}+q_{3})\widetilde{q_{2}}q_{1} =\big{(}x_{\infty}+\tfrac{1}{2}S(q_{0}+q_{3},q_{0}-q_{3})\big{)}(q_{0}+q_{3})\] \[=\big{(}x_{\infty}+S(q_{1},q_{2})\big{)}(q_{0}+q_{3})\] \[=(x_{\infty}+2\|q_{0}\|)(q_{0}+q_{3})\] \[=0,\] and Equation (iii) reads \[q_{2}\widetilde{(q_{0}+q_{3})}(q_{0}-q_{3})+2x_{\infty}q_{2} =2q_{2}\widetilde{q_{1}}q_{2}+2x_{\infty}q_{2}\] \[=2(S(q_{1},q_{2})+x_{\infty})q_{2}\] \[=2(2\|q_{0}\|+x_{\infty})q_{2}\] \[=0.\] Thus, we have indeed found a solution for this case. Solution 2.2: Let us choose \(X=q_{1}\widetilde{(q_{0}-q_{3})}=-(q_{0}-q_{3})\widetilde{q_{1}}\). From Eqs. (ii) and (iii) we obtain \(x_{\infty}=0\). Similar to the solution above, we can show that \(x_{0}=-2\|q_{0}\|\) gives a solution for our system of equations. Case 3: In this case, the construction of the solution is a bit more intricate. For now, we will assume that \(q_{1}\neq 0\), \(q_{2}\neq 0\), \(q_{0}+q_{3}\neq 0\) and \(q_{0}-q_{3}\neq 0\). Again we need to consider the case where every linear combination of Solution 2.1 and Solution 2.2 is zero. This condition yields \(\|q_{0}\|=\|q_{1}\|=\|q_{2}\|=\|q_{3}\|=0\) and further \[(q_{0}-q_{3})\widetilde{q_{2}}=(q_{0}+q_{3})\widetilde{q_{2}}=(q_{0}+q_{3})\widetilde{q_{1}}=(q_{0}-q_{3})\widetilde{q_{1}}=0.\] Denoting, once more, the intersection of the null quadric and \(\mathbb{P}(\mathbb{H})=\mathbb{P}^{3}(\mathbb{R})\) by \(\mathcal{N}\), the above equations show that \([q_{0}+q_{3}]\) and \([q_{0}-q_{3}]\) lie on a ruling of \(\mathcal{N}\) through \([q_{1}]\) and on a ruling of \(\mathcal{N}\) through \([q_{2}]\). Thus, these rulings must coincide and therefore all four projective points in \(\mathbb{P}(\mathbb{H})\) lie on a common ruling. Through each of these points passes a ruling of the other (second) family of rulings. These rulings intersect the plane of vectorial quaternions; the intersection points can be obtained as \([U_{1}]\), \([U_{2}]\), \([U_{3}]\), and \([U_{4}]\) where \[U_{1}\coloneqq q_{1}r\widetilde{q_{1}},\quad U_{2}\coloneqq q_{2}r\widetilde{q_{2}},\quad U_{3}\coloneqq(q_{0}+q_{3})r\widetilde{(q_{0}+q_{3})},\quad U_{4}\coloneqq(q_{0}-q_{3})r\widetilde{(q_{0}-q_{3})},\] and \(r\) is an arbitrary quaternion such that \(U_{1}\), \(U_{2}\), \(U_{3}\), and \(U_{4}\) are all different from zero. Case 3.1: Let us assume that the \(U_{i}\), \(i=1,\ldots,4\), are pairwise linearly independent. Then there exist coefficients \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), \(\lambda_{4}\in\mathbb{C}\) such that \[\lambda_{1}U_{1}+\lambda_{3}U_{3}=\lambda_{2}U_{2}+\lambda_{4}U_{4}. \tag{19}\]
To solve our system of equations, we will choose \(X=\lambda_{1}U_{1}+\lambda_{3}U_{3}=\lambda_{2}U_{2}+\lambda_{4}U_{4}\). Plugging this into Equation (i) and using a Study condition as well as (18), we obtain \[x_{0}(q_{0}-q_{3})-\lambda_{4}U_{4}q_{2} =x_{0}(q_{0}-q_{3})-\lambda_{4}(q_{0}-q_{3})r\widetilde{(q_{0}-q_{3})}q_{2}\] \[=x_{0}(q_{0}-q_{3})+\lambda_{4}(q_{0}-q_{3})r\widetilde{q_{2}}(q_{0}-q_{3})\] \[=(q_{0}-q_{3})(x_{0}+\lambda_{4}S(q_{0}-q_{3},q_{2}\widetilde{r})).\] This suggests the solution \(x_{0}=-\lambda_{4}S(q_{0}-q_{3},q_{2}\widetilde{r})\). In the same way, Equation (iv) suggests the solution \(x_{0}=\lambda_{1}S(q_{1},(q_{0}+q_{3})\widetilde{r})/2\). Equations (ii) and (iii) lead us to \(x_{\infty}=-\lambda_{3}S(q_{0}+q_{3},q_{1}\widetilde{r})\) and \(x_{\infty}=\lambda_{2}S(q_{2},(q_{0}-q_{3})\widetilde{r})/2\), respectively. To see that the two solutions for \(x_{0}\) coincide, let us multiply Equation (19) with \((q_{0}+q_{3})\) from the right-hand side and with \(\widetilde{q_{2}}\) from the left-hand side and use Equation (17) to obtain \[\lambda_{1}\widetilde{q_{2}}U_{1}(q_{0}+q_{3}) =\lambda_{4}\widetilde{q_{2}}U_{4}(q_{0}+q_{3}),\] \[-\lambda_{1}\widetilde{q_{2}}q_{1}r\widetilde{(q_{0}+q_{3})}q_{1} =2\lambda_{4}\widetilde{q_{2}}(q_{0}-q_{3})r\widetilde{q_{2}}q_{1},\] \[\widetilde{q_{2}}q_{1}(-\lambda_{1}S(q_{1},(q_{0}+q_{3})\widetilde{r})) =\widetilde{q_{2}}q_{1}(2\lambda_{4}S(\widetilde{q_{2}},\widetilde{r(q_{0}-q_{3})})).\] Further, it holds that \(S(\widetilde{q_{2}},\widetilde{r(q_{0}-q_{3})})=S(q_{2},(q_{0}-q_{3})r)=S(q_{2}\widetilde{r},q_{0}-q_{3})\). This shows that both solutions for \(x_{0}\) coincide, provided \(\widetilde{q_{2}}q_{1}\neq 0\). If this expression were zero, we would have \(q_{1}\widetilde{q_{2}}=\widetilde{q_{1}}q_{2}=0\), which implies that \(q_{1}\) and \(q_{2}\), and in turn \(U_{1}\) and \(U_{2}\), are linearly dependent. This is a contradiction to our assumptions for Case 3.1. To finish the discussion of Case 3.1 we have to show that the two solutions for \(x_{\infty}\) are the same. Multiplying Equation (19) with \(\widetilde{q_{1}}\) from the left and with \((q_{0}-q_{3})\) from the right yields \[\widetilde{q_{1}}q_{2}(2\lambda_{3}S(q_{0}+q_{3},q_{1}\widetilde{r}))=\widetilde{q_{1}}q_{2}(-\lambda_{2}S(q_{2},(q_{0}-q_{3})\widetilde{r})),\] which shows that the solutions for \(x_{\infty}\) coincide, provided \(\widetilde{q_{1}}q_{2}\neq 0\). Case 3.2: If, however, \(\widetilde{q_{2}}q_{1}=0\), it follows from Equation (17) that also \(\widetilde{(q_{0}+q_{3})}(q_{0}-q_{3})=0\), which shows that \([q_{1}]=[q_{2}]\) and \([q_{0}+q_{3}]=[q_{0}-q_{3}]\). But this in turn implies that \(U_{1}\) and \(U_{2}\) as well as \(U_{3}\) and \(U_{4}\) are linearly dependent. Thus we have two degrees of freedom for choosing \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), and \(\lambda_{4}\) such that (19) holds, and two linear conditions that need to be fulfilled, i.e. the solutions for \(x_{0}\) and \(x_{\infty}\) should coincide. Therefore, it is possible to choose \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), and \(\lambda_{4}\) such that our system of equations is fulfilled. Case 3.3: In Case 3.2 we have actually treated the situation where \(U_{1}\) and \(U_{2}\) are linearly dependent. There, we also showed that this is the case if and only if \(U_{3}\) and \(U_{4}\) are linearly dependent. Hence, let us assume here that \(U_{1}\) and \(U_{3}\) are linearly dependent.
We choose \(X=\lambda_{2}U_{2}+\lambda_{4}U_{4}\) for some yet to be determined \(\lambda_{2}\) and \(\lambda_{4}\). This means, by the arguments used in Case 3.1, that Equations (i) and (iii) are fulfilled with \(x_{0}=-\lambda_{4}S(q_{0}-q_{3},q_{2}\widetilde{r})\) and \(x_{\infty}=\lambda_{2}S(q_{2},(q_{0}-q_{3})\widetilde{r})/2\). The map \([\lambda_{2},\lambda_{4}]\mapsto Xq_{1}\) is a projective map between two projective lines. Hence we can select \(\lambda_{2}\) and \(\lambda_{4}\) such that \(Xq_{1}\) is a scalar multiple of \(q_{1}\) and \(X(q_{0}+q_{3})\) is a scalar multiple of \(q_{0}+q_{3}\) (and hence also of \(q_{1}\)). Now, by left-multiplying Equation (i) with \(X\), we obtain \[x_{0}X(q_{0}-q_{3})-X^{2}q_{2}=0. \tag{20}\] Note that \(X^{2}\) is a scalar. We plug (20) into Equation (iii) to get \(X^{2}=-2x_{0}x_{\infty}\). Left-multiplying Equation (ii) with \(X\), we can transform it into Equation (iv). Thus, Equations (ii) and (iv) actually just give _one further_ linear condition on \(\lambda_{2}\) and \(\lambda_{4}\), so that a solution does exist. Case 4: Here, we discuss the cases where at least one of \(q_{1}\), \(q_{2}\), \(q_{0}+q_{3}\) or \(q_{0}-q_{3}\) is zero. Case 4.1: Let us assume \(q_{1}=0\). Then (ii) gives \(x_{\infty}=0\). Using this, we see from (iii) that we should choose \(X=\lambda_{4}U_{4}\). From this, we get \(x_{0}=-\lambda_{4}S(q_{0}-q_{3},q_{2}\widetilde{r})\) by Equation (i). Equation (17) implies that (iv) is fulfilled; hence we have found a solution. The other cases, where only one of the quaternions \(q_{2}\), \(q_{0}+q_{3}\), \(q_{0}-q_{3}\) is zero, are similar. Case 4.2: Let us assume \(q_{1}=q_{0}+q_{3}=0\), i.e., \(U_{1}=U_{3}=0\). In this case, Eqs. (ii) and (iv) are already fulfilled. Thus, after choosing \(X\coloneqq\lambda_{2}U_{2}+\lambda_{4}U_{4}\), we are left with one equation to determine \(x_{0}=-\lambda_{4}S(q_{0}-q_{3},q_{2}\widetilde{r})\) and one to determine \(x_{\infty}=\lambda_{2}S(q_{2},(q_{0}-q_{3})\widetilde{r})/2\). Case 4.3: The case \(q_{2}=q_{0}-q_{3}=0\), i.e., \(U_{2}=U_{4}=0\), is similar to Case 4.2. Case 4.4: Let us assume \(q_{1}=q_{0}-q_{3}=0\), i.e., \(U_{1}=U_{4}=0\). In this case, Eqs. (ii) and (iii) immediately yield \(x_{\infty}=0\). We are only left with the two equations \(Xq_{2}=0\) and \(X(q_{0}+q_{3})=0\), which in general will only be fulfilled for \(X=0\). However, \(x_{0}\) can be chosen arbitrarily and we have found a non-zero solution. Case 4.5: Let us assume \(q_{2}=q_{0}+q_{3}=0\). In this case, we have \(x_{0}=0\), \(X=0\) and \(x_{\infty}\) can be chosen arbitrarily.
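Identity (18), which is used repeatedly in Cases 2 and 3 of the above proof, can be spot-checked numerically. The following Python sketch samples a complex quaternion \(x\) with \(x\widetilde{x}=0\) and an arbitrary \(y\), and verifies \(x\widetilde{y}x=S(x,y)x\), where, as in (18), \(S(x,y)=x\widetilde{y}+y\widetilde{x}\); the helper names are ours.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product, valid for real or complex coefficients."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

rng = np.random.default_rng(0)
b, c, d = rng.standard_normal(3)
x = np.array([1j * np.sqrt(b*b + c*c + d*d), b, c, d])  # null: x * x~ = 0
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

S = qmul(x, qconj(y)) + qmul(y, qconj(x))  # scalar (vector part cancels)
lhs = qmul(qmul(x, qconj(y)), x)
rhs = S[0] * x
print(np.allclose(lhs, rhs))  # True up to rounding
```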
2304.05693
A Persistent-Excitation-Free Method for System Disturbance Estimation Using Concurrent Learning
Observer-based methods are widely used to estimate the disturbances of different dynamic systems. However, a drawback of the conventional disturbance observers is that they all assume persistent excitation (PE) of the systems. As a result, they may lead to poor estimation precision when PE is not ensured, for instance, when the disturbance gain of the system is close to the singularity. In this paper, we propose a novel disturbance observer based on concurrent learning (CL) with time-variant history stacks, which ensures high estimation precision even in PE-free cases. The disturbance observer is designed in both continuous and discrete time. The estimation errors of the proposed method are proved to converge to a bounded set using the Lyapunov method. A history-sample-selection procedure is proposed to reduce the estimation error caused by the accumulation of old history samples. A simulation study on epidemic control shows that the proposed method produces higher estimation precision than the conventional disturbance observer when PE is not satisfied. This justifies the correctness of the proposed CL-based disturbance observer and verifies its applicability to solving practical problems.
Zengjie Zhang, Fangzhou Liu, Tong Liu, Jianbin Qiu, Martin Buss
2023-04-12T08:32:39Z
http://arxiv.org/abs/2304.05693v2
# A Persistent-Excitation-Free Method for System Disturbance Estimation Using Concurrent Learning ###### Abstract Observer-based methods are widely used to estimate the disturbances of different dynamic systems. However, a drawback of the conventional disturbance observers is that they all assume persistent excitation (PE) of the systems. As a result, they may lead to poor estimation precision when PE is not ensured, for instance, when the disturbance gain of the system is close to the singularity. In this paper, we propose a novel disturbance observer based on concurrent learning (CL) with time-variant history stacks, which ensures high estimation precision even in PE-free cases. The disturbance observer is designed in both continuous and discrete time. The estimation errors of the proposed method are proved to converge to a bounded set using the Lyapunov method. A history-sample-selection procedure is proposed to reduce the estimation error caused by the accumulation of old history samples. A simulation study on epidemic control shows that the proposed method produces higher estimation precision than the conventional disturbance observer when PE is not satisfied. This justifies the correctness of the proposed CL-based disturbance observer and verifies its applicability to solving practical problems. robust control, fault detection and identification, disturbance estimation, disturbance resistant control, persistent excitation, unknown-input observer, concurrent learning, networked epidemic model. ## I Introduction The disturbance is an important reason for performance degradation of many practical systems, such as switched systems [1], circuit systems [2], and multi-agent systems [3]. Disturbances are often recognized as unexpected unknown inputs of the systems, such as actuator faults [4], external impacts [5], impulses [6], or vibrations [7]. To attenuate and mitigate the influences of the disturbances on system performance, various disturbance-tolerant control methods are proposed to compensate for the disturbance effects by refining the control inputs [8, 9, 10]. These methods require precise estimation of the system disturbances. The most effective methods for disturbance estimation are mainly based on analytical redundancy technology, namely disturbance observers. A classical type of disturbance observer is the unknown-input observer that reconstructs the disturbances using the linear observer theory. Nevertheless, the main drawback is that it assumes the smoothness of the system nonlinearity [11]. Also, this method requires the decoupling between the disturbance and the observation of the system. To improve the robustness of the estimation, the sliding mode observer is proposed to compensate for the unmodeled dynamics using high-frequency switching [12]. Besides, the nonlinear disturbance observer [13, 14] utilizes feedback linearization to construct linear error dynamics and provide precise disturbance estimation. It does not require the decoupling property but needs the derivatives of the system states [15, 16]. To obtain the exact state derivatives, the second-order and the integral sliding mode observers [17, 18] are proposed. These are the most representative disturbance-estimation methods presented in the previous work. Recent surveys on the variants of these methods can be referred to in [19, 20]. Nevertheless, the conventional observer-based methods are only applicable to cases where the persistent excitation (PE) condition is satisfied for the system. 
PE is an important concept in system identification expressing that the system is sufficiently actuated by a rich spectrum of components of the input signals [21]. It is a necessary condition to guarantee that the system parameter or structure to be identified can be precisely reconstructed under the actuation of these input signals. If the input signals of a system do not satisfy the PE condition, there may exist large deviations between the identified system parameters and their true values [22]. Many works are devoted to solving system identification problems without PE conditions [23]. Nevertheless, disturbance estimation under PE-free conditions has not attracted much attention. To the best knowledge of the authors, there has not been a work that solves the estimation of disturbances or time-variant parameters for PE-free systems. The main reason is that PE-free cases are not very common in practice. Most practical systems satisfy the PE assumption since they usually have non-singular disturbance gains [19]. For example, the disturbance gain is a constant non-singular matrix in [24]. Also, the disturbance gain of a robotic system is usually its inverse inertia matrix, which is always positive-definite [18, 25]. The conventional disturbance observers have no problems when applied to these systems. However, there exist some systems whose disturbance gain may become singular at some states, making the systems lose PE. Examples of such systems include the networked epidemic model [30], the population dynamic model [26], the underactuated robot model with external collisions [27], circuit networks with noise [28], and general networked systems with impulse disturbances [29], which will be elaborated in detail in Sec. II-C. For these systems, the conventional disturbance-observer-based methods may produce large estimation errors when the systems are close to the singularity states. This issue attracted our attention due to our previous work on the control and filtering of networked epidemic models [30, 31]. We believe that investigating PE-free disturbance estimation is valuable work, considering the completeness of the observation theory. This work is the first attempt to solve this problem. The system disturbance losing PE reflects the lack of global diffeomorphic mappings for the disturbance-output feedback linearization of the system, which will be explained in Sec. II-A and Sec. II-B. An effective method for PE-free estimation problems is concurrent learning (CL). Since it was proposed in [32], CL has been widely applied to system identification [33, 34], adaptive control [35], robust control [36], optimal control [37], observer design [38], and differential games [39]. By utilizing the history stack, a queue structure storing history system states and inputs, CL ensures precise approximation of system parameters without PE [40]. However, compared to constant parameters, estimating time-variant disturbances is challenging due to the accumulated errors brought up by the history stacks. Even though CL has been applied to state observation [41], where the accumulated errors are avoided by utilizing the known intrinsic dynamics, similar techniques cannot be applied to disturbances that are exerted by unknown extrinsic dynamics. Another solution to precisely estimate disturbances for PE-free systems is to utilize the higher-order derivatives of the system states to reconstruct the disturbance [16, 42], which, however, increases the complexity of the observers.
Also, solving state derivatives is difficult in practice due to the existence of noise. Thus, such methods are not widely used in previous work due to their lack of applicability. In this paper, we propose a novel CL-based disturbance observer to estimate the disturbance of PE-free systems. The systems we are concerned with have state-dependent disturbance gains which become singular or nearly singular at some states. The main contribution of this work is reflected from the following two perspectives. * Firstly, for the first time, we present how to use CL to precisely estimate disturbances for PE-free systems. Specifically, an extrinsic model adapted from the previous work [42] is used to approximate the external dynamics of the disturbance. Two time-variant history stacks are constructed to refine the updates of the disturbance estimation. Also, we define and analyze the accumulated errors brought up by CL, which are unique and nontrivial for disturbance estimation problems. The Lyapunov-based method is used to establish the boundedness of the estimation error. * Secondly, we propose a history-sample-selection procedure to reduce the accumulated errors caused by CL. We apply the proposed CL-based disturbance observer to a networked epidemic model, which is a very practical model for epidemic prediction and reaction. Specifically, the proposed observer is used to predict the infection rates of a simulated epidemic process. The precise prediction of the infection rates indicates the success of the proposed method. The simulation results indicate that the proposed methods can be realized on a normal PC without graphics cards. Compared to the conventional disturbance observers, the proposed CL-based disturbance observer has lower efficiency due to the stacking of the history data. Nevertheless, it achieves higher estimation precision. Our method is promising to promote the precision of unknown input estimation for large-scale network systems such as circuit networks. The rest of this paper is organized as follows: Section II formulates the disturbance estimation problem, and Section III presents the main results of the CL-based disturbance observer. In Section IV, an epidemic-control case is simulated to validate the feasibility and efficacy of the proposed method. Finally, Section V concludes the paper. _Notations:_\(\mathbb{R}_{\geq 0}\) and \(\mathbb{R}^{+}\) are the sets of non-negative and positive real numbers. \(\mathbb{N}\), \(\mathbb{N}_{\geq 0}\), and \(\mathbb{N}^{+}\) are the sets of integers, non-negative integers, and positive integers, respectively. For a vector \(x\in\mathbb{R}^{n}\), \(x_{i}\) or \((x)_{i}\) denotes its \(i\)-th element, \(i=1,\cdots,n\), \(\|x\|\) is its 2-norm, and \(\operatorname{diag}(x)\in\mathbb{R}^{n\times n}\) is the diagonal matrix composed of \(x\). For any differentiable vector function \(v(x):\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), \(n,m\in\mathbb{N}^{+}\), \(\nabla v(x)=\partial v(x)/\partial x\in\mathbb{R}^{m\times n}\) denotes its gradient. For any \(h(x):\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(f(x):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), \(L_{f}h(x)=\nabla h(x)f\) is the Lie-derivative of \(h\) for \(f\) and \(L_{f}^{m}h(x)=\underbrace{L_{f}L_{f}\cdots L_{f}}_{m}h(x)\) is the \(m\)-th order Lie-derivative. For a matrix \(M\in\mathbb{R}^{m\times n}\), \(\|M\|\) denotes its spectral norm. \(I\) and \(O\) are the identity and zero matrices.
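As a quick illustration of the Lie-derivative notation just introduced, the following sympy sketch computes \(L_{f}h\) and \(L_{f}^{2}h\) for an arbitrary example pair \(h\), \(f\); this notation is used heavily in the regularity conditions of Section II-A. The concrete functions below are our own illustrative choices.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])  # example drift vector field f(x)
h = x1**2 + x2                    # example scalar function h(x)

def lie(h_expr, f_vec, x_vec, order=1):
    """Iterated Lie derivative L_f^order h = (dh/dx) f, applied recursively."""
    for _ in range(order):
        h_expr = (sp.Matrix([h_expr]).jacobian(x_vec) * f_vec)[0]
    return sp.simplify(h_expr)

print(lie(h, f, x))     # L_f h   = 2*x1*x2 - sin(x1)
print(lie(h, f, x, 2))  # L_f^2 h
```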
## II Preliminaries In this section, we present the preliminary knowledge that is needed to interpret the results of our work. Firstly, we formulate the disturbance estimation problem for a nonlinear system. Secondly, we address the PE-free conditions for the introduced system and explain the main challenges of PE-free disturbance estimation. Finally, we introduce three dynamic models as examples that can meet PE-free conditions. ### _Problem Formulation_ We consider the following general nonlinear system, \[\dot{x}(t)=f(x(t))+E(x(t))u(t)+G(x(t))d(t), \tag{1}\] where \(x(t)\in\Omega_{x}\subseteq\mathbb{R}^{n}\) is the system state, \(\Omega_{x}\) is the feasible state domain, \(u(t)\in\mathbb{R}^{m}\) and \(d(t)\in\mathbb{R}^{p}\) are respectively the control input and time-dependent disturbance of the system, \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), \(E:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\), and \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times p}\) are smooth vector functions. The problem studied in this paper is to estimate the disturbance \(d(t)\) using the measurable state \(x(t)\) and its history data \(x(t_{i})\), \(0<t_{1}<t_{2}<\cdots<t_{i}<\cdots<t\). To clarify the PE-free situation for system (1), we reformulate it into a regularized form using feedback linearization. Suppose the existence of a smooth vector function \(h(x)\in\mathbb{R}^{p}\) and \(p\) integer scalars \(r_{1}\), \(\cdots\), \(r_{p}\in\mathbb{N}^{+}\), such that the following conditions hold. 1. There exists \(r\in\mathbb{N}^{+}\), such that \(p\leq r=\sum_{i=1}^{p}r_{i}\leq n\). 2. For all \(x\in\Omega_{x}\), \(i=1\), \(\cdots\), \(p\), and \(k=1\), \(\cdots\), \(r_{i}-1\), \[\left[L_{g_{1}}L_{f}^{k-1}h_{i}(x)\ \ \cdots\ \ L_{g_{p}}L_{f}^{k-1}h_{i}(x)\right]=0,\] where \(g_{j}\), \(j=1\), \(\cdots\), \(p\), is the \(j\)-th column of \(G(x)\). 3.
There exists \(x\in\Omega_{x}\), such that for all \(i=1\), \(\cdots\), \(p\), \[l_{i}(x)=\left[\begin{array}{cccc}L_{g_{1}}L_{f}^{r_{i}-1}h_{i}(x)&\cdots&L_{g_{p}}L_{f}^{r_{i}-1}h_{i}(x)\end{array}\right]\neq 0.\] Then, for \(x\in\Omega_{x}\) where 3) holds, there exist diffeomorphic mappings \(z=\psi(x)=\left[\begin{array}{cccc}\psi_{1}(x)&\cdots&\psi_{p}(x)\end{array}\right]^{\top}\in\mathbb{R}^{r}\) and \(w=\phi(x)=\left[\begin{array}{cccc}\phi_{1}(x)&\cdots&\phi_{n-r}(x)\end{array}\right]^{\top}\in\mathbb{R}^{n-r}\), where for each \(i=1,\cdots,p\), \(\psi_{i}(x)=\left[\begin{array}{cccc}h_{i}(x)&L_{f}h_{i}(x)&\cdots&L_{f}^{r_{i}-1}h_{i}(x)\end{array}\right]\), and the elements of \(\phi\) satisfy \(L_{g_{j}}\phi_{i}(x)=0\), for \(i=1,\cdots,n-r\), \(j=1,\cdots,p\), such that system (1) can be represented as \[\dot{w} =\eta(x,u) \tag{2a}\] \[\dot{z} =\gamma(x,u)+BL(x)d(t),\] (2b) \[y =Cz, \tag{2c}\] where \(\gamma(x,u)=A\psi(x)+\iota(x,u)+B\alpha(x)\), and \(A\in\mathbb{R}^{r\times r}\), \(B\in\mathbb{R}^{r\times p}\), \(C\in\mathbb{R}^{p\times r}\), \(\alpha(x)\in\mathbb{R}^{p}\) and \(L(x)\in\mathbb{R}^{p\times p}\) are \[A =\operatorname{diag}\left(A_{1},\,A_{2},\,\cdots,\,A_{p}\right), \tag{3}\] \[B =\left[\begin{array}{cccc}B_{1}^{\top}&B_{2}^{\top}&\cdots&B_{p}^{\top}\end{array}\right]^{\top},\ C=\left[\begin{array}{cccc}C_{1}&C_{2}&\cdots&C_{p}\end{array}\right],\] \[\alpha(x) =\left[\begin{array}{cccc}L_{f}^{r_{1}}h_{1}(x)&L_{f}^{r_{2}}h_{2}(x)&\cdots&L_{f}^{r_{p}}h_{p}(x)\end{array}\right]^{\top},\] \[L(x) =\left[\begin{array}{cccc}l_{1}^{\top}(x)&l_{2}^{\top}(x)&\cdots&l_{p}^{\top}(x)\end{array}\right]^{\top},\] where \(A_{i}\in\mathbb{R}^{r_{i}\times r_{i}}\), \(B_{i}\in\mathbb{R}^{r_{i}\times p}\) and \(C_{i}\in\mathbb{R}^{p\times r_{i}}\), \(i=1,\cdots,p\), are sub-blocks of the matrices \(A\), \(B\) and \(C\), \[A_{i}=\left[\begin{array}{cc}&I_{(r_{i}-1)}\\ O_{1\times 1}&\end{array}\right],\quad B_{i}=\left[\begin{array}{cc}O_{(i-1)\times(r_{i}-1)}&\\ &I_{1\times 1}\\ O_{(p-i)\times(r_{i}-1)}&\end{array}\right]^{\top}, \tag{4}\] and \(C_{i}=B_{i}^{\top}\), where all blank positions in the matrices are zero. The state-input-dependent smooth functions \(\iota(x,u)\in\mathbb{R}^{r}\) and \(\eta(x,u)\in\mathbb{R}^{n-r}\) are determined by the mappings \(\psi\) and \(\phi\). Specifically, the \(i\)-th elements of \(\iota(x,u)\), \(i=1,\cdots,p\) (blockwise), and of \(\eta(x,u)\), \(i=1,\cdots,n-r\), are respectively represented as \[\iota_{i}(x,u)= \sum_{j=1}^{m}\sum_{k=1}^{r_{i}}L_{e_{j}}L_{f}^{r_{i}-k}h_{i}(x)u_{j}^{(k-1)}(t),\] \[\eta_{i}(x,u)= \nabla\phi_{i}(x)\left(f(x)+E(x)u\right),\] where \(e_{j}\) is the \(j\)-th column vector of \(E(x)\) and \(u_{j}^{(k-1)}(t)\) is the \(j\)-th element of \(u^{(k-1)}(t)\), the \((k-1)\)-th derivative of \(u(t)\). Note that we assume \(u(t)\) is \((\max_{i}(r_{i})-1)\)-times differentiable. The matrix \(L(x)\) in (2) is the state-dependent disturbance gain.
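The block structure of the matrices in (3)-(4) is easy to sanity-check in code. The following numpy sketch builds \(A=\operatorname{diag}(A_{1},\cdots,A_{p})\), \(B\), and \(C=B^{\top}\) (since \(C_{i}=B_{i}^{\top}\)) from given relative degrees \((r_{1},\cdots,r_{p})\); the function name is ours and the example degrees are arbitrary.

```python
import numpy as np

def brunovsky_blocks(r):
    """A, B, C of (3)-(4) for relative degrees r = (r_1, ..., r_p)."""
    p, rsum = len(r), sum(r)
    A = np.zeros((rsum, rsum))
    B = np.zeros((rsum, p))
    row = 0
    for i, ri in enumerate(r):
        # A_i: identity I_(r_i - 1) on the superdiagonal (integrator chain)
        A[row:row + ri - 1, row + 1:row + ri] = np.eye(ri - 1)
        # B_i: a single 1 in the last row of chain i, column i
        B[row + ri - 1, i] = 1.0
        row += ri
    return A, B, B.T  # C_i = B_i^T, hence C = B^T

A, B, C = brunovsky_blocks([2, 3])
print(A, B, C, sep="\n")
```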
### _The PE-Free Conditions_

For the regularized system (2), if the diffeomorphic mappings \(\psi(x)\) and \(\phi(x)\) exist globally for all \(x\in\Omega_{x}\), the regularized form (2) holds in the entire state domain \(\Omega_{x}\) [43] and the disturbance gain \(L(x)\) contains no zero rows. If \(L(x)\) is further assumed to be non-singular for all \(x\in\Omega_{x}\) (such as for Euler-Lagrange systems [18]), then for any \(t\in\mathbb{R}_{\geq 0}\) there exist \(T,\tilde{\tau}\in\mathbb{R}^{+}\) such that \(d(t)\) and \(L(x)\) satisfy \[\int_{t}^{t+T}L(x(\tau))d(\tau)\left(L(x(\tau))d(\tau)\right)^{\top}\mathrm{d}\tau\geq\tilde{\tau}I, \tag{5}\] which indicates that \(d(t)\) persistently excites the system [32]. The conventional disturbance observers are eligible for this situation.

However, for a system for which no global diffeomorphic mappings exist, condition 3) in Sec. II-A does not hold for all \(x\in\Omega_{x}\). Instead, there exists \(x\in\Omega_{x}\) at which some rows of \(L(x)\) become zero and \(L(x)\) becomes singular. In this situation, PE is not ensured and the conventional PE-based observers may provide imprecise estimation results. A typical example is the networked epidemic model introduced in the simulation study in Sec. IV, where we will see that the conventional disturbance observer produces large estimation errors in a PE-free situation. Besides, mobile robots subject to external forces in singular positions [27] and population systems with empty regions [26] face a similar issue. The critical point in resolving this issue is to exploit the history data of the system in the closed loop of the observer, rather than only the current state. To solve this problem, in this paper we design a CL-based disturbance observer that exploits the history data of the system, such that precise estimation is ensured even when PE is not satisfied.

### _Examples of Disturbance-PE-Free Systems_

PE-free conditions are not typical for practical system disturbances. This is also why PE-free disturbance estimation has not attracted much attention. Nevertheless, there exist systems for which PE-free cases should be incorporated. Here, we give three examples.

#### II-C1 Networked epidemic model [30]

The epidemic process over a social network with \(n\in\mathbb{N}^{+}\) nodes is represented by the following continuous-time model \[\dot{x}(t)=(I-\operatorname{diag}(x(t)))W\operatorname{diag}(x(t))d(t)-\operatorname{diag}(x(t))\delta(t), \tag{6}\] where \(x(t)\in\mathbb{R}^{n}\) contains the infection probabilities of the nodes, \(W\in\mathbb{R}^{n\times n}\) is the adjacency matrix of the network, and \(d(t),\delta(t)\in\mathbb{R}^{n}\) are respectively the infection and curing rates of the social nodes. This model will be interpreted in detail in Sec. IV-A. Recognizing the infection rate \(d(t)\) as the unknown disturbance of the system, this model is already in the regular form (2) without performing feedback linearization. In this sense, \((I-\operatorname{diag}(x(t)))W\operatorname{diag}(x(t))\) is the disturbance gain, and the system loses PE when at least one individual has an infection probability of zero or one.

#### II-C2 Population model [26]

In a certain region that contains \(n\in\mathbb{N}^{+}\) areas, the continuous-time dynamic model of a population system is denoted as \(\dot{x}(t)=Hx(t)+\operatorname{diag}(x(t))Fb(t)+w(t)\), where \(x(t)\in\mathbb{R}^{n}\) is the population in the different areas, \(H\in\mathbb{R}^{n\times n}\) is the population transition matrix, \(b(t)\in\mathbb{R}^{n}\) is the fertility rate, \(F\in\mathbb{R}^{n\times n}\) is the fertility matrix, and \(w(t)\) is a vector depicting the migration. If the fertility \(Fb(t)\) is recognized as a disturbance, the system complies with the regularized form in (2).
The system loses PE when the disturbance gain \(\operatorname{diag}(x(t))\) becomes singular, i.e., when the population of at least one area decreases to zero.

#### II-C3 Wheeled robot [27]

The continuous-time dynamic model of a unicycle robot subject to a collision force reads \(M_{\theta}\ddot{\theta}=\tau+J_{\theta}^{\top}F\), where \(\theta\in\mathbb{R}^{2}\) contains the rotation angles of the robot wheels, \(M_{\theta},J_{\theta}\in\mathbb{R}^{2\times 2}\) are the inertia and Jacobian matrices, \(\tau\in\mathbb{R}^{2}\) is the actuation torque, and \(F\in\mathbb{R}^{2}\) is the collision force exerted on the mobile robot. If the collision force \(F\) is recognized as a disturbance, the system loses PE when the disturbance gain \(M_{\theta}^{-1}J_{\theta}^{\top}\) becomes singular. This condition is also referred to as a _singular configuration_ in robotics.

For all these systems, the conventional disturbance estimation methods may lead to large errors because they do not account for PE-free conditions. In Sec. IV, we use the networked epidemic model to show how CL resolves this issue.

## III Main Results

In this section, we introduce the design of the CL-based PE-free disturbance observer. In subsections III-A and III-B, we present the continuous-time and discrete-time forms of the observer, respectively, and prove the convergence of the estimation errors using Lyapunov methods. Then, the accumulated errors caused by the history stacks are analyzed. To restrict the accumulated errors, we present the history-sample-selection procedure in subsection III-C.

### _Disturbance Observer in Continuous Time_

Most conventional disturbance estimation methods use predefined disturbance models to provide the necessary prior knowledge of the disturbance. In this paper, we use the following disturbance observer model adapted from [42], \[\dot{\hat{d}}(t)=\Lambda\hat{d}(t)-\Lambda d(t), \tag{7}\] where \(\hat{d}(t)\in\mathbb{R}^{p}\) is the estimated value of the disturbance \(d(t)\) and \(\Lambda=\operatorname{diag}(\lambda_{1},\,\lambda_{2},\,\cdots,\,\lambda_{p})\) is a constant Hurwitz diagonal matrix with \(\lambda_{i}<0\) for all \(i=1,\cdots,p\). In this sense, (7) serves as a linear low-pass filter of \(d(t)\), i.e., a linear time-invariant system with unknown input \(d(t)\). The target of the disturbance estimation problem is to estimate \(d(t)\) precisely in real time. Another commonly used disturbance model is the _exogenously-driven model_ addressed in [44, 45, 46], which assumes that the dynamics of the disturbance are precisely depicted by a linear time-invariant system with unknown initial conditions. The disturbance model (7) is a generalized version of the _exogenously-driven model_, since we do not impose strict assumptions on the dynamic model of the disturbance \(d(t)\).
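For instance (our illustration), a scalar (\(p=1\)) sinusoidal disturbance \(d(t)=\sin(\omega_{0}t)\) with frequency \(\omega_{0}\in\mathbb{R}^{+}\) is not an exact trajectory of any fixed linear filter of the form (7), but its model mismatch \[\dot{d}(t)-\Lambda d(t)=\omega_{0}\cos(\omega_{0}t)-\Lambda\sin(\omega_{0}t)\] remains bounded for all \(t\geq 0\), which is all that the subsequent analysis requires of \(d(t)\).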
Based on this, we present the following CL-based disturbance observer \[\dot{\hat{d}}(t)=(\Lambda-\kappa S(\tau_{s},t))\hat{d}(t)+\kappa X(\tau_{s},t), \tag{8}\] where \(\kappa\in\mathbb{R}^{+}\) is a constant gain parameter, and \(S(\tau_{s},t)\) and \(X(\tau_{s},t)\) are the history stacks defined as \[S(\tau_{s},t)=\sum_{j=1}^{n_{s}}S(t_{j},t),\quad X(\tau_{s},t)=\sum_{j=1}^{n_{s}}X(t_{j},t),\] where \(n_{s}\in\mathbb{N}^{+}\) is the depth of the stacks and \(\tau_{s}=\{t_{1},\,t_{2},\,\cdots,\,t_{n_{s}}\}\) is a queue that contains the sampling instants of the samples \(x(\tau_{s})=\{x(t_{1}),\,x(t_{2}),\,\cdots,\,x(t_{n_{s}})\}\), where we assume the samples are ordered by the sampling sequence, i.e., \(0<t_{1}<t_{2}<\cdots<t_{n_{s}}\leq t\). For each \(j=1,2,\cdots,n_{s}\), \[S(t_{j},t)=e^{\Lambda(t_{j}-t)}L_{j}^{\top}L_{j}e^{\Lambda(t_{j}-t)}, \tag{9a}\] \[X(t_{j},t)=e^{\Lambda(t_{j}-t)}L_{j}^{\top}B^{\top}\zeta_{j}, \tag{9b}\] where \(L_{j}=L(x(t_{j}))\) and \(\zeta_{j}\) is the difference term \[\zeta_{j}=\nabla\psi(x(t_{j}))\dot{x}(t_{j})-\gamma(x(t_{j}),u(t_{j})), \tag{10}\] where \(u(\tau_{s})=\{u(t_{1}),u(t_{2}),\cdots,u(t_{n_{s}})\}\) and \(\dot{x}(\tau_{s})=\{\dot{x}(t_{1}),\,\dot{x}(t_{2}),\,\cdots,\,\dot{x}(t_{n_{s}})\}\) are the inputs and state derivatives at the history sampling instants. From (9a), it is noticed that \(S(t_{j},t)\) is symmetric positive semidefinite for all \(j=1\), \(2\), \(\cdots\), \(n_{s}\) since \(\Lambda\) is diagonal. Thus, \(S(\tau_{s},t)\) is also symmetric positive semidefinite.

**Remark 1**.: _The time-variant history stacks \(S(\tau_{s},t)\) and \(X(\tau_{s},t)\) are the critical technical points of the CL-based disturbance observer (8). Different from the conventional CL methods in [40, 47], the history stacks in this paper contain a time-variant coefficient \(e^{\Lambda(t_{j}-t)}\) for each sample \(x(t_{j})\), \(t_{j}\in\tau_{s}\), where \(\Lambda\) comes from the basic observer model (7) and determines its filtering bandwidth. The state derivatives \(\dot{x}(\tau_{s})\) used to construct the stack \(X(\tau_{s},t)\) can be estimated using exact-differentiator [48] or derivative-estimator [49] based methods, which are beyond the scope of this paper._

**Remark 2**.: _In the disturbance observer (8), the diagonal matrix \(\Lambda\) is designed to be Hurwitz, which causes \(e^{\Lambda(t_{j}-t)}\) to grow unboundedly as \(t-t_{j}\to+\infty\). To avoid this, an upper limit should be imposed on \(t-t_{j}\) for any sample. This indicates that old samples should be eliminated from the stacks, which will be discussed in the sample selection procedure in Sec. III-C._

To assist the following analysis, we define the residual signal \[\xi_{d}(t)=\dot{d}(t)-\Lambda d(t) \tag{11}\] and the accumulated error \[\xi(\tau_{s},t)=\xi_{d}(t)+\sum_{j=1}^{n_{s}}\int_{t_{j}}^{t}S(t_{j},t)e^{\Lambda(t-\tau)}\xi_{d}(\tau)\mathrm{d}\tau. \tag{12}\] Then, the convergence of the estimation error \(\tilde{d}(t)=d(t)-\hat{d}(t)\) is given by the following theorem.
**Theorem 1**.: _For the dynamic system (1) and the disturbance observer (8), the estimation error \(\tilde{d}(t)\) is uniformly ultimately bounded (UUB) by_ \[\mathcal{D}(\varrho)=\left\{\tilde{d}(t)\,\Big{|}\,\|\tilde{d}(t)\|<\frac{\varrho+1}{\omega}\overline{\xi}_{t}\right\}, \tag{13}\] _where \(\overline{\xi}_{t}\in\mathbb{R}^{+}\) is an upper bound of \(\|\xi(\tau_{s},t)\|\), for all \(x(0)\in\mathbb{R}^{n}\), \(d(0)\in\mathbb{R}^{p}\), and \(\varrho\in\mathbb{R}^{+}\), if there exists \(\omega\in\mathbb{R}^{+}\) such that the history stack \(S(\tau_{s},t)\) satisfies_ \[S(\tau_{s},t)>\frac{\omega I+\Lambda}{\kappa}. \tag{14}\]

Proof.: See Sec. V-A.

**Remark 3**.: _Theorem 1 shows that the estimator (8) guarantees a UUB property \(\tilde{d}(t)\in\mathcal{D}(\varrho)\), which regulates the precision level of the disturbance estimation. With a determined \(\omega\), the bounding scalar \(\overline{\xi}_{t}\) of the accumulated error \(\xi(\tau_{s},t)\) is the main factor that affects the ultimate error bound \(\mathcal{D}(\varrho)\). For (12), by the mean value theorem there exists \(t_{j}^{\prime}\in(t_{j},t)\) for each sample, such that_ \[\begin{split}\|\xi(\tau_{s},t)\|&\leq\|\xi_{d}(t)\|+\sum_{j=1}^{n_{s}}\int_{t_{j}}^{t}\|S(t_{j},t)e^{\Lambda(t-\tau)}\xi_{d}(\tau)\|\mathrm{d}\tau\\ &=\|\xi_{d}(t)\|+\sum_{j=1}^{n_{s}}(t-t_{j})\|S(t_{j},t)e^{\Lambda(t-t_{j}^{\prime})}\xi_{d}(t_{j}^{\prime})\|\\ &\leq\sup_{\tau\in[0,t]}\|\xi_{d}(\tau)\|\Big{(}1+\sum_{j=1}^{n_{s}}(t-t_{j})\|S(t_{j},t)\|\,\|e^{\Lambda(t-t_{j}^{\prime})}\|\Big{)}.\end{split}\] _Therefore, the bounding scalar \(\overline{\xi}_{t}\) is determined by both the residual signal \(\xi_{d}(t)\) and the matrix norms \(\|S(t_{j},t)\|\) and \(\|e^{\Lambda(t-t_{j}^{\prime})}\|\). While the former is mainly determined by the properties of the disturbance \(d(t)\), the latter are affected by the time increment \(t-t_{j}\), which captures the accumulation of the residual signal \(\xi_{d}(t)\) as time increases. In this sense, the time increment \(t-t_{j}\) should be confined to limit the accumulated error \(\xi(\tau_{s},t)\), which raises a similar concern to Remark 2. This can be achieved by eliminating old samples from the stacks, which will be considered in the history-sample-selection procedure in Sec. III-C. Besides, the constant \(\kappa\) can also adjust the bound \(\overline{\xi}_{t}\) by relaxing the requirement on the stack \(S(\tau_{s},t)\): with a larger \(\kappa\), fewer samples are needed in the history stack \(S(\tau_{s},t)\) and the bound \(\overline{\xi}_{t}\) can be restricted._

When the history stacks contain only the most recent sample, i.e., \(\tau_{s}=\{t\}\) and \(S(\tau_{s},t)=L^{\top}(x(t))L(x(t))\) for all \(t\in\mathbb{R}^{+}\), the proposed CL-based disturbance observer reduces to the conventional disturbance observers proposed in [42, 50]. In this situation, the convergence condition (14) may not hold when the disturbance gain \(L(x(t))\) is close to singularity, which reflects the lack of PE according to (5). Compared to the conventional disturbance observers, the CL-based observer can still ensure the convergence condition (14), thanks to the accumulation of the history stack \(S(\tau_{s},t)\), even when PE is not satisfied. Besides, compared to the constant \(\kappa\), \(S(\tau_{s},t)\) can adaptively adjust the feedforward gain of the observer to avoid it becoming too large. Note that an overlarge gain may cause instability of the closed-loop dynamics of the observer under a finite sampling frequency.
This will be further discussed for the discrete-time form in the next subsection.

### _Disturbance Observer in Discrete Time_

For the application of the method to discrete-time systems, we present the discrete-time form of the disturbance observer (8), taking the finite sampling rate into account. We study the following discrete-sampled system as a substitute for (2b), \[z(k+1)=z(k)+h\gamma(x(k),u(k))+hBL(x(k))d(k), \tag{15}\] where \(x(k)\in\mathbb{R}^{n}\), \(z(k)\in\mathbb{R}^{r}\), \(u(k)\in\mathbb{R}^{m}\) and \(d(k)\in\mathbb{R}^{p}\) are respectively the system state, input and disturbance at sampling instant \(k\in\mathbb{N}_{\geq 0}\), and \(h\) is the sampling period. The discrete-time disturbance observer is formulated as \[\hat{d}(k+1)=\big{(}e^{h\Lambda}-\kappa hS(\kappa_{s},k)\big{)}\hat{d}(k)+\kappa X(\kappa_{s},k), \tag{16}\] where \(\hat{d}(k)\in\mathbb{R}^{p}\) is the estimate of \(d(k)\), \(\kappa_{s}=\{k_{1}\), \(\cdots\), \(k_{n_{s}}\}\) contains the sampling instants of the history data, and \[S(\kappa_{s},k)=\sum_{j=1}^{n_{s}}S(k_{j},k),\quad X(\kappa_{s},k)=\sum_{j=1}^{n_{s}}X(k_{j},k)\] are the discrete-time history stacks, where \[S(k_{j},k)=e^{h\Lambda(k_{j}-k)}L_{j}^{\top}L_{j}e^{h\Lambda(k_{j}-k)}, \tag{17a}\] \[X(k_{j},k)=e^{h\Lambda(k_{j}-k)}L_{j}^{\top}B^{\top}\breve{\zeta}_{j}, \tag{17b}\] where \(L_{j}=L(x(k_{j}))\) and \(\breve{\zeta}_{j}\) is the difference term \[\breve{\zeta}_{j}=\psi(x(k_{j}+1))-\psi(x(k_{j}))-h\gamma(x(k_{j}),u(k_{j})), \tag{18}\] where \(x(k_{j})\) and \(u(k_{j})\) are respectively the system state and input at sampling instant \(k_{j}\). Compared to the continuous-time observer (8), the discrete-time form (16) does not need the state derivatives but requires the successor states instead. Therefore, for any sampling instant \(k\), the current state \(x(k)\) is not stacked, since its successor state \(x(k+1)\) is not yet available. Similar to (12), we define the discrete-time residual signal \[\xi_{d}(k)=d(k+1)-e^{h\Lambda}d(k) \tag{19}\] and the accumulated error \[\xi(\kappa_{s},k)=\xi_{d}(k)+\sum_{j=1}^{n_{s}}\sum_{i=k_{j}}^{k-1}e^{h\Lambda(k_{j}-k)}L_{j}^{\top}L_{j}e^{h\Lambda(k_{j}-k)}\xi_{d}(i).\] Then, the convergence of the estimation error \(\tilde{d}(k)=d(k)-\hat{d}(k)\) is given by the following theorem.

**Theorem 2**.: _For the discrete-time system (15) and the disturbance observer (16), the estimation error \(\tilde{d}(k)\) is UUB by_ \[\tilde{\mathcal{D}}(\varrho)=\left\{\tilde{d}(k)\,\Big{|}\,\|\tilde{d}(k)\|<\left(\frac{1}{2\omega}+\sqrt{\frac{1}{4\omega^{2}}+\frac{h}{\omega}}\right)(\varrho+1)\overline{\xi}_{k}\right\} \tag{20}\] _for all \(x(0)\in\mathbb{R}^{n}\), \(d(0)\in\mathbb{R}^{p}\), and \(\varrho\in\mathbb{R}^{+}\), if there exists \(\omega\in\mathbb{R}^{+}\) such that the history stack \(S(\kappa_{s},k)\) satisfies_ \[S_{L}<S(\kappa_{s},k)<S_{U}, \tag{21}\] _where \(\overline{\xi}_{k}\) is an upper bound of \(\|\xi(\kappa_{s},k)\|\) and_ \[S_{L}=\frac{1}{h\kappa}\Big{(}e^{h\Lambda}-\Big{(}\frac{1}{2}+\sqrt{\frac{1}{4}-h\omega}\Big{)}I\Big{)}, \tag{22a}\] \[S_{U}=\frac{1}{h\kappa}\Big{(}e^{h\Lambda}-\Big{(}\frac{1}{2}-\sqrt{\frac{1}{4}-h\omega}\Big{)}I\Big{)} \tag{22b}\] _are constant matrices prescribing the bounds of \(S(\kappa_{s},k)\)._

Proof.: See Sec. V-B.
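To make the update law concrete, the following Python sketch implements one step of the observer (16) with the stacks (17) for a diagonal \(\Lambda\). It is a minimal illustration under stated assumptions rather than the reference implementation from [52]; `L_hist`, `zeta_hist`, and `B` are hypothetical stand-ins for the stored disturbance gains \(L_{j}\), the difference terms \(\breve{\zeta}_{j}\) from (18), and the matrix \(B\) from (4), all supplied by the surrounding simulation.

```python
import numpy as np

def expm_diag(lam, s):
    """e^{Lambda * s} for the diagonal Hurwitz matrix Lambda = diag(lam)."""
    return np.diag(np.exp(lam * s))

def observer_step(d_hat, stack, k, L_hist, zeta_hist, B, lam, kappa, h):
    """One update of the CL-based observer (16); `stack` holds the instants k_j."""
    p = d_hat.shape[0]
    S = np.zeros((p, p))   # history stack S(kappa_s, k), eq. (17a)
    X = np.zeros(p)        # history stack X(kappa_s, k), eq. (17b)
    for kj in stack:
        E = expm_diag(lam, h * (kj - k))   # e^{h Lambda (k_j - k)}
        Lj = L_hist[kj]
        S += E @ Lj.T @ Lj @ E
        X += E @ Lj.T @ B.T @ zeta_hist[kj]
    # eq. (16): d_hat(k+1) = (e^{h Lambda} - kappa h S) d_hat(k) + kappa X
    return (expm_diag(lam, h) - kappa * h * S) @ d_hat + kappa * X, S
```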
**Remark 4**.: _Compared to the continuous-time form (8), the discrete-time observer (16) imposes a stricter convergence condition due to the sampling period \(h\), although both forms ensure that the estimation errors are UUB. Besides, the ultimate error bound \(\tilde{\mathcal{D}}(\varrho)\) is more conservative than \(\mathcal{D}(\varrho)\) by an additional multiplicative factor that is greater than \(1\) for all \(h\in\mathbb{R}^{+}\); in the limiting case, \(\tilde{\mathcal{D}}(\varrho)\rightarrow\mathcal{D}(\varrho)\) as \(h\to 0\). Note that an overlarge gain \(\kappa\) may violate the convergence condition (21) and lead to imprecise estimation results. Thus, the history stack \(S(\kappa_{s},k)\) can adaptively adjust the gain of the observer to consistently ensure the error convergence._

**Remark 5**.: _Note that the convergence condition in (21) is subject to bilateral constraints. If the value of \(h\kappa\) is too large, there may exist cases in which (21) is not feasible. To avoid this issue, either \(h\) or \(\kappa\) should be selected sufficiently small._

The CL-based disturbance observer has been presented in both continuous-time and discrete-time forms. Although \(X(\tau_{s},t)\) and \(X(\kappa_{s},k)\) respectively use the state derivatives \(\dot{x}(\tau_{s})\) and the successor states \(x(\kappa_{s}+1)\), \(S(\tau_{s},t)\) and \(S(\kappa_{s},k)\) have the same structure under the correspondence \(t=hk\). Also, for both forms, the estimation errors are mainly affected by the accumulated errors, which are inevitable for the CL-based disturbance observer since the disturbance is actuated by unknown extrinsic dynamics. In this sense, the accumulated errors reflect the compromise of the estimation to the imperfect knowledge of the disturbance. Note that the arguments on the continuous-time accumulated error \(\xi(\tau_{s},t)\) in Remarks 2 and 3 also hold for the discrete-time one \(\xi(\kappa_{s},k)\). In the next subsection, we will discuss how the accumulated errors can be restricted using a history-sample-selection procedure.

### _History Sample Selection_

As addressed in Remarks 2 and 3, too much data in the history stacks may lead to large accumulated errors. Also, to ensure the convergence of the estimation errors, the history stack \(S(\kappa_{s},k)\) in (21) is limited by both upper and lower bounds. Thus, a procedure is needed to limit the amount of history data in the stacks so as to ensure both the static and the dynamic performance of the estimation, namely the ultimate estimation accuracy and the convergence of the estimation errors, respectively. Similar procedures used to purge stacks and remove erroneous data are referred to as _history stack management_ in previous work on CL [51]. In this subsection, we propose a history-sample-selection procedure, shown in Algorithm 1, to resolve this problem. The algorithm is only presented in discrete time for brevity, although it can be adapted to continuous time according to the correspondence \(t=hk\). It requires the current instant \(k\), the sampling instants \(\kappa_{s}\), the terms \(\breve{\zeta}_{j}\) and \(L_{j}\) for all \(j\in\kappa_{s}\cup\{k-1\}\), and the history stacks \(S\) and \(X\). The main technical points of the history-sample-selection procedure are introduced as follows.

#### III-C1 Updating New Sample

At every run-time instant \(k\in\mathbb{N}^{+}\), we add new data to the history stacks. Line 1 stores the latest instant \(k-1\) in the queue \(\kappa_{s}\).
Lines 2 and 3 use incremental updates for the history stacks \(S\) and \(X\).

#### III-C2 Purging Old Samples

After collecting new data, we purge old data from the history stacks. The principle is that we always start purging from the oldest sampling instant. The objective of the purging is to ensure that condition (21) holds. Also, the stack \(S(\kappa_{s},k)\) should be kept as close to \(S_{L}\) as possible to guarantee a small error bound. As shown in the _for_-loop between Line 4 and Line 12, we purge the old samples one by one, until \(S(\kappa_{s},k)>S_{L}\) is still satisfied but about to be violated. The bounds \(S_{L}\) and \(S_{U}\) are calculated using (22) with a feasible \(\omega\). The symbol END refers to the last index of the queue \(\kappa_{s}\).

**Remark 6**.: _Algorithm 1 does not affect the convergence of the estimation error, although it renders a non-trivial sampling process for the history stacks. The reason is that the procedure only adapts the feedforward gain of the closed loop of the estimation but does not interfere with its stability._

The result of Algorithm 1 is that only the newest samples are kept in the history stacks and the accumulated errors are restricted to the lowest possible level. When the disturbance gain approaches singularity, more samples are stacked to ensure the convergence of the estimation error, which renders the method PE-free. Otherwise, redundant samples are eliminated and the observer behaves closely to a conventional PE-based disturbance observer. Thus, the CL-based disturbance observer ensures precise estimation when PE is not ensured while maintaining a low error bound when PE is satisfied. This indicates its advantage over the conventional observers in terms of both flexibility and adaptability.

```
Require: k, kappa_s, zeta_j and L_j for all j in kappa_s U {k-1}, S, X
Ensure: kappa_s, S, X
 1: kappa_s = kappa_s U {k-1}
 2: S = e^{-h Lambda} (S + L_{k-1}^T L_{k-1}) e^{-h Lambda}
 3: X = e^{-h Lambda} (X + L_{k-1}^T B^T zeta_{k-1})
 4: for j = 1 to END do
 5:    S' = S - e^{h Lambda (k_j - k)} L_j^T L_j e^{h Lambda (k_j - k)}
 6:    if S' > S_L or S > S_U then
 7:       S = S'
 8:       X = X - e^{h Lambda (k_j - k)} L_j^T B^T zeta_j
 9:    else
10:       break
11:    end if
12: end for
13: kappa_s = kappa_s(j : END)
```
**Algorithm 1** History Sample Selection Procedure
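A minimal Python sketch (ours, under the same assumptions as the observer sketch in Sec. III-B, whose `expm_diag`, `L_hist`, `zeta_hist`, and `B` it reuses) of the purging loop in Lines 4-12 could read as follows; the matrix inequality \(S^{\prime}>S_{L}\) is checked via the eigenvalues of \(S^{\prime}-S_{L}\), with `S_L` computed from (22a).

```python
import numpy as np

def expm_diag(lam, s):
    return np.diag(np.exp(lam * s))

def purge(stack, S, X, k, L_hist, zeta_hist, B, lam, h, S_L):
    """Purge old samples, oldest first, while the reduced stack stays above S_L."""
    while len(stack) > 1:
        kj = stack[0]                                # oldest sampling instant
        E = expm_diag(lam, h * (kj - k))
        Lj = L_hist[kj]
        S_prime = S - E @ Lj.T @ Lj @ E              # Line 5 of Algorithm 1
        if np.all(np.linalg.eigvalsh(S_prime - S_L) > 0.0):  # S' > S_L still holds
            S = S_prime                              # Lines 7-8: accept removal
            X = X - E @ Lj.T @ B.T @ zeta_hist[kj]
            stack.pop(0)
        else:
            break                                    # next removal would violate (21)
    return stack, S, X
```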
## IV Simulation Study: Epidemic Control

In this section, we evaluate the proposed CL-based disturbance observer in an epidemic-control case that simulates the spread of an epidemic over a social network. The networked epidemic model is widely used to characterize the epidemic spreading process and to predict the course of the contagion when the population is not well-mixed. Sec. II-C addressed that the model may lose the PE condition when the network is close to zero infection probabilities. Therefore, it is an ideal model to validate the advantage of our proposed method over conventional disturbance observers. Note that the epidemic model is just a baseline model that can best reflect the advantage of the proposed method; in theory, the proposed method can be applied to any PE-free dynamic system, including circuit or grid networks [28]. The study applies the proposed observer to the estimation of the infection rates of the epidemic to improve the control scheme. A comparative study is conducted to show the superior precision of the proposed observer over the conventional disturbance observer when the system fails to ensure PE. The simulation is conducted in MATLAB R2020b using a first-order Euler solver. The hardware used to run the simulation is a ThinkPad laptop without dedicated graphics cards. The simulation program and the dataset are available in our GitHub repository [52].

### _The Networked Epidemic Model_

In this study, we use the discrete-time susceptible-infected-susceptible (SIS) model [30, 31] to formulate the spread of epidemics over a social network. It considers a weighted digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E},W)\) with \(n\in\mathbb{N}^{+}\) nodes, where \(\mathcal{V}=\{1,2,\ldots,n\}\) and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) are respectively the set of vertices and the set of edges, and \(W=[w_{ij}]\in\mathbb{R}^{n\times n}\) is the adjacency matrix of \(\mathcal{G}\). Here, we only consider a graph \(\mathcal{G}\) with no self-loops, i.e., \(w_{ii}=0\), \(\forall\,i\in\mathcal{V}\). We also require that \(w_{ij}>0\), \(\forall\,i,j\in\mathcal{V}\), if there exists an edge from \(j\) to \(i\). Then, the discrete-time dynamic model of the SIS model is [53] \[x_{i}(k+1)=h(1-x_{i}(k))\sum_{j=1}^{n}w_{ij}d_{j}x_{j}(k)+(1-h\delta_{i})x_{i}(k), \tag{23}\] \(i\in\mathcal{V}\), where for each node \(i\), \(x_{i}(k)\) denotes its infection probability at time step \(k\in\mathbb{N}_{\geq 0}\), \(d_{i}\in\mathbb{R}_{\geq 0}\) is the proactive infection rate, \(\delta_{i}\in\mathbb{R}_{\geq 0}\) is the passive curing rate at instant \(k\), and \(h\in\mathbb{R}^{+}\) is the sampling period. Under appropriate assumptions [54], the infection probability \(x_{i}(k)\), for all \(k\in\mathbb{N}_{\geq 0}\) and \(i\in\mathcal{V}\), is well defined, i.e., given any initial condition \(x_{i}(0)\in\left[\,0,\,1\,\right]\), the system state remains confined to \(x_{i}(k)\in\left[\,0,\,1\,\right]\). Note that the model (23) is the discretization of the continuous-time model in (6).

In practice, the graph \(\mathcal{G}\) of the SIS model represents the social network of the people in a community. Each node \(i\in\mathcal{V}\) is recognized as an individual. The edge set \(\mathcal{E}\) and the adjacency matrix \(W\) denote the consistent contacts among the nodes. The system state \(x_{i}(k)\) represents how likely an individual is to be infected in the statistical sense. The infection rate \(d_{i}\) measures how susceptible an individual is to the epidemic, and the curing rate \(\delta_{i}\) describes how easily one recovers from the epidemic. High infection rates tend to drive the infection probabilities up, while high curing rates drive them down. In general, people with weaker immunity usually correspond to high infection rates, and those receiving positive treatments tend to have higher curing rates. Also, the infection rates may change slowly due to seasons, food, or health conditions. On the contrary, the curing rates can be improved by manual intervention such as active medical treatment. The essential objective of epidemic control is to design a control scheme for the curing rates such that the infection probabilities are driven to zero in the presence of the infection rates.
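For concreteness, the SIS recursion (23) can be iterated directly; the following Python sketch (ours, with small placeholder parameters rather than the experimental data from [52]) illustrates this.

```python
import numpy as np

def sis_step(x, W, d, delta, h):
    """One step of (23): x_i(k+1) = h(1-x_i) sum_j w_ij d_j x_j + (1 - h delta_i) x_i."""
    return h * (1.0 - x) * (W @ (d * x)) + (1.0 - h * delta) * x

rng = np.random.default_rng(0)
n = 5                                  # placeholder size (the study uses n = 67)
W = rng.uniform(0.0, 1.0, (n, n))
np.fill_diagonal(W, 0.0)               # no self-loops, w_ii = 0
x = rng.uniform(0.0, 1.0, n)           # x_i(0) ~ U(0, 1)
d = 0.3 * np.ones(n)                   # placeholder infection rates
delta = 0.2 * np.ones(n)               # placeholder curing rates
h = 1e-4                               # sampling period as in Sec. IV-B
for _ in range(50000):                 # simulate up to t = 5
    x = sis_step(x, W, d, delta, h)
```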
### _Experimental Configuration_

In this experiment, we use a graph containing \(n=67\) nodes to represent the social connections of the residents in a community. The connectivity of the graph is visualized in Fig. 1. It is noticed that this graph has high complexity due to both its large scale and the strong connectivity among its nodes, which is sufficient to validate the efficacy of the proposed method. We use the networked epidemic model in (23) to simulate the spreading of the disease. The parameters of the model, namely the adjacency matrix \(W\), the initial infection probabilities \(x_{i}(0)\), the ground truth of the disturbance \(d_{i}\), and the baseline curing rates \(\delta_{i}\), \(i=1,2,\cdots,n\), can be found in our online repository [52] and are not enumerated here due to their large size. We assume the infection rates to be time-variant and sinusoidal, as shown in Fig. 2, to simulate their changes as time increases. Also, the initial infection probabilities are sampled from a uniform distribution \(x_{i}(0)\sim\mathsf{U}(0,1)\), \(i=1,2,\cdots,n\), to simulate general cases. The simulation duration is \(T=5\) with a sampling period \(h=10^{-4}\). The infection probabilities of the nodes under the influence of the given infection rates are shown in Fig. 3. It is noticed that the infection probabilities are close to zero around the time instant \(t=3\). This indicates that the disturbance gain \((I-\operatorname{diag}(x(t)))W\operatorname{diag}(x(t))\) of the epidemic model (6) is close to singularity around this time point and the system loses PE.

For comparison, the conventional observer has the same parameters \(\kappa\) and \(\Lambda\) as the proposed one. History-data selection is not needed for the conventional observer since its history stacks only contain a single sample. The disturbance estimation errors of the CL-based observer and the conventional observer are respectively shown in Fig. 4(a) and Fig. 4(c). The depth of the history stacks of the CL-based observer is also shown in Fig. 4(b). The comparative study indicates that the conventional observer produces larger estimation errors than the CL-based observer. The errors are especially large when the infection probabilities are close to zero and the system loses PE, from \(t=3\) to \(t=4\). The reason is that PE is not ensured during this time period and the constant gain \(\kappa\) is not sufficient to guarantee the convergence of the estimation error. Nevertheless, the CL-based observer still provides precise estimation results, with only a slight deviation. This is because the stacked history samples provide past information about the system rather than only its most recent state. Therefore, we can infer that the history stacks strengthen the convergence condition (21). Fig. 4(b) shows that more samples are stacked when the system is close to the singular states where PE is lost. Specifically, the largest stack depth appears around \(t=3\), when the infection probabilities are closest to zero (see Fig. 3). This indicates that the application of CL is the main reason for the estimation precision of the proposed observer when PE is lost.

### _Disturbance Compensation Control_

Since the proposed CL-based disturbance observer provides accurate disturbance estimates even when the system does not satisfy PE, it is promising for improving the performance of epidemic control. To verify this, we design a feedforward control law to compensate for the time-variant infection rates during epidemic control. In this case, the curing rate for \(i=1,2,\cdots,n\) reads \[\delta_{i}(k)=\bar{\delta}_{i}+(1-x_{i}(k))\sum_{j=1}^{n}w_{ij}\hat{d}_{j}(k)x_{j}(k)/x_{i}(k), \tag{24}\]
where \(\bar{\delta}_{i}\) is the baseline curing rate used to generate the infection probabilities in Fig. 3. Therefore, on top of the baseline curing rates, the control scheme (24) has an additional term that compensates for the effects of the infection rates using the estimates. The resulting controlled infection probabilities are illustrated in Fig. 5. Compared to Fig. 3, it is noticed that Fig. 5 presents a decent control performance, since its infection probabilities are consistently kept close to zero, which means that the epidemic is well controlled. In contrast, Fig. 3 shows an inferior control result, where the infection probabilities rebound after \(t=3\). This simulation study indicates that the epidemic control performance can be greatly improved by estimating the infection rates at run time and compensating for them in the control input. This use case verifies the potential of the proposed CL-based observer for solving practical problems.

Fig. 4: The performance of the CL-based and the conventional observers.

Fig. 5: The infection probabilities under the compensation control with timing discretization \(t=hk\). Each line shows the infection probability of a node. Note that we only show nodes \(i=1,8,15,\cdots,64\), for brevity.

From the simulation results in this section, we notice that the proposed CL-based disturbance observer achieves higher estimation precision than the conventional method when PE is not ensured. It also helps to improve the performance of a nominal control scheme. Besides, the application of the proposed method to epidemic control indicates its value in solving practical problems. From this perspective, the target of this paper, to propose a high-precision disturbance observer for a PE-free system, is achieved.

## V Conclusion

We propose a CL-based disturbance observer to resolve the inferior precision issue of conventional observers in PE-free situations. We prove the convergence of the observer using a Lyapunov method and obtain a convergence condition as a substitute for PE. During the application of CL, we notice that large estimation errors may be caused by accumulated errors. To restrict the accumulated errors, we present a history-sample-selection procedure to eliminate the old samples in a timely manner. The proposed method serves as a disturbance observer with an adaptive feedforward gain, which ensures precise estimation results even when PE is not guaranteed. We use an epidemic-control study to show the advantage of the proposed observer over the conventional method and how it can benefit an epidemic control scheme, although it also has the potential to be applied to circuit and grid network systems. Therefore, we have addressed both the theoretical feasibility and the practical effectiveness of the CL-based disturbance observer. It is worth mentioning that the main advantage of the CL-based disturbance observer is its higher disturbance estimation precision in PE-free cases. If PE is satisfied, its performance is very similar to that of a conventional disturbance observer, since the history stacks do not need to store much history data. In future work, we will investigate its possible applications to a wider range of systems, such as the collision-force estimation of mobile robots in singular configurations.

## Appendix: Proofs

In the appendix, we provide the proofs of the theorems proposed in this article.
### _Proof of Theorem 1_

Subtracting (8) from (11), we obtain the error dynamics \[\dot{\tilde{d}}(t)=\Lambda\tilde{d}(t)+\kappa S(\tau_{s},t)\tilde{d}(t)-\kappa X(\tau_{s},t)+\xi_{d}(t). \tag{25}\] Note that (11) is a linear model, i.e., for any \(t_{j}\in\mathbb{R}^{+}\) with \(t_{j}<t\), we have \(d(t)=e^{\Lambda(t-t_{j})}d(t_{j})+\int_{t_{j}}^{t}e^{\Lambda(t-\tau)}\xi_{d}(\tau)\mathrm{d}\tau\). Considering the non-singularity of \(e^{\Lambda(t-t_{j})}\), it leads to \[d(t_{j})=e^{\Lambda(t_{j}-t)}d(t)-\int_{t_{j}}^{t}e^{\Lambda(t_{j}-\tau)}\xi_{d}(\tau)\mathrm{d}\tau. \tag{26}\] Also, substituting the system dynamics (2) into (10), we obtain \(\zeta_{j}=BL_{j}d(t_{j})\), \(j=1,\cdots,n_{s}\), which leads the history stack \(X(\tau_{s},t)\) in (9b) to \[X(\tau_{s},t)=\sum_{j=1}^{n_{s}}e^{\Lambda(t_{j}-t)}L_{j}^{\top}B^{\top}BL_{j}d(t_{j}). \tag{27}\] Meanwhile, from (4), it is straightforward to infer \(B^{\top}B=I\). Therefore, substituting (26) into (27), we obtain \[\begin{split}X(\tau_{s},t)&=\sum_{j=1}^{n_{s}}e^{\Lambda(t_{j}-t)}L_{j}^{\top}L_{j}e^{\Lambda(t_{j}-t)}d(t)\\ &\quad-\sum_{j=1}^{n_{s}}\int_{t_{j}}^{t}e^{\Lambda(t_{j}-t)}L_{j}^{\top}L_{j}e^{\Lambda(t_{j}-\tau)}\xi_{d}(\tau)\mathrm{d}\tau\\ &=S(\tau_{s},t)d(t)-\xi(\tau_{s},t)+\xi_{d}(t).\end{split}\] Substituting this into (25), we obtain \[\dot{\tilde{d}}(t)=(\Lambda-\kappa S(\tau_{s},t))\tilde{d}(t)+\xi(\tau_{s},t). \tag{28}\] We define the Lyapunov function \(V(t)=\frac{1}{2}\tilde{d}^{\top}(t)\tilde{d}(t)\). Substituting (28) into its derivative \(\dot{V}(t)=\tilde{d}^{\top}(t)\dot{\tilde{d}}(t)\), we obtain \[\dot{V}(t)=\tilde{d}^{\top}(t)\left(\Lambda-\kappa S(\tau_{s},t)\right)\tilde{d}(t)+\tilde{d}^{\top}(t)\xi(\tau_{s},t).\] Substituting the condition (14) into it, we obtain \[\dot{V}(t)<-\omega\tilde{d}^{\top}(t)\tilde{d}(t)+\|\tilde{d}(t)\|\left\|\xi(\tau_{s},t)\right\|.\] Since \(\|\tilde{d}(t)\|=\sqrt{2V(t)}\) and \(\|\xi(\tau_{s},t)\|\leq\overline{\xi}_{t}\), we have \[\dot{V}(t)<-2\omega V(t)+\overline{\xi}_{t}\sqrt{2V(t)}=\sqrt{2V(t)}\left(\overline{\xi}_{t}-\omega\sqrt{2V(t)}\right). \tag{29}\] Let \(\overline{\mathcal{D}}(\varrho)\) be the complementary set of \(\mathcal{D}(\varrho)\). Thus, \(\forall\,\tilde{d}(t)\in\overline{\mathcal{D}}(\varrho)\), we have \(\sqrt{2V(t)}=\|\tilde{d}(t)\|\geq\frac{\varrho+1}{\omega}\overline{\xi}_{t}\), which leads (29) to \[\dot{V}(t)<-\varrho\overline{\xi}_{t}\sqrt{2V(t)}, \tag{30}\] which ensures \(\dot{V}(t)<0\), \(\forall\,\tilde{d}(t)\in\overline{\mathcal{D}}(\varrho)\). Therefore, \(\tilde{d}(t)\) ultimately converges to the bounded set \(\mathcal{D}(\varrho)\), which uniformly holds for all \(x(0)\in\mathbb{R}^{n}\), \(d(0)\in\mathbb{R}^{p}\), and \(\varrho\in\mathbb{R}^{+}\).

### _Proof of Theorem 2_

Subtracting (16) from (19), we obtain the error dynamics \[\tilde{d}(k+1)=e^{h\Lambda}\tilde{d}(k)+h\kappa S(\kappa_{s},k)\tilde{d}(k)-\kappa X(\kappa_{s},k)+h\xi_{d}(k).\] From (19), for any \(k_{j}\in\mathbb{N}^{+}\) with \(k_{j}<k\), we obtain \[d(k)=e^{h\Lambda(k-k_{j})}d(k_{j})+\sum_{i=k_{j}}^{k-1}e^{h\Lambda(k-i-1)}\xi_{d}(i).\] Considering the non-singularity of \(e^{h\Lambda(k-k_{j})}\), it leads to \[d(k_{j})=e^{h\Lambda(k_{j}-k)}d(k)-\sum_{i=k_{j}}^{k-1}e^{h\Lambda(k_{j}-i-1)}\xi_{d}(i). \tag{31}\]
Substituting the discrete-time system (15) into (18), we have \[\breve{\zeta}_{j}=hBL_{j}d(k_{j}).\] Thus, the history stack \(X(\kappa_{s},k)\) in (17b) leads to \[\begin{split}X(\kappa_{s},k)&=h\sum_{j=1}^{n_{s}}e^{h\Lambda(k_{j}-k)}L_{j}^{\top}L_{j}e^{h\Lambda(k_{j}-k)}d(k)\\ &\quad-h\sum_{j=1}^{n_{s}}\sum_{i=k_{j}}^{k-1}e^{h\Lambda(k_{j}-k)}L_{j}^{\top}L_{j}e^{h\Lambda(k_{j}-k)}\xi_{d}(i)\\ &=hS(\kappa_{s},k)d(k)-h\xi(\kappa_{s},k)+h\xi_{d}(k).\end{split}\] We define the Lyapunov function \[V(k)=\tfrac{1}{2}\tilde{d}^{\top}(k)\tilde{d}(k). \tag{32}\] Its time increment \(\Delta V(k)=V(k+1)-V(k)\) reads \[\Delta V(k)=\tilde{d}^{\top}(k)\Delta\tilde{d}(k)+\frac{1}{2}\|\Delta\tilde{d}(k)\|^{2}, \tag{33}\] where \[\begin{split}\Delta\tilde{d}(k)&=\tilde{d}(k+1)-\tilde{d}(k)\\ &=\big{(}\overline{e^{h\Lambda}}-h\kappa S(\kappa_{s},k)\big{)}\tilde{d}(k)+h\xi(\kappa_{s},k),\end{split}\] where \(\overline{e^{h\Lambda}}=e^{h\Lambda}-I\). Since we have \[\tilde{d}^{\top}(k)\Delta\tilde{d}(k)=\tilde{d}^{\top}(k)\Big{(}\overline{e^{h\Lambda}}-h\kappa S(\kappa_{s},k)\Big{)}\tilde{d}(k)+h\tilde{d}^{\top}(k)\xi, \tag{34a}\] \[\frac{1}{2}\|\Delta\tilde{d}(k)\|^{2}\leq\tilde{d}^{\top}(k)\Big{(}\overline{e^{h\Lambda}}-h\kappa S(\kappa_{s},k)\Big{)}^{2}\tilde{d}(k)+h^{2}\|\xi\|^{2}, \tag{34b}\] and the condition (21) leads to \[\Big{(}\overline{e^{h\Lambda}}-h\kappa S(\kappa_{s},k)\Big{)}+\Big{(}\overline{e^{h\Lambda}}-h\kappa S(\kappa_{s},k)\Big{)}^{2}<-h\omega I, \tag{35}\] substituting (34) and (35) into (33) leads to \[\Delta V(k)<-h\omega\|\tilde{d}(k)\|^{2}+h\|\tilde{d}(k)\|\|\xi\|+h^{2}\|\xi\|^{2}.\] Considering \(\|\tilde{d}(k)\|=\sqrt{2V(k)}\) and \(\|\xi\|\leq\overline{\xi}_{k}\), we have \[\Delta V(k)<-2h\omega V(k)+h\overline{\xi}_{k}\sqrt{2V(k)}+h^{2}\overline{\xi}_{k}^{2}. \tag{36}\] Using (32), it leads to \[\Delta V(k)<-h\omega\|\tilde{d}(k)\|^{2}+h\overline{\xi}_{k}\|\tilde{d}(k)\|+h^{2}\overline{\xi}_{k}^{2}. \tag{37}\] Letting \(\overline{\tilde{\mathcal{D}}}(\varrho)\) be the complementary set of \(\tilde{\mathcal{D}}(\varrho)\), we have \[\Delta V(k)<-h\omega\|\tilde{d}(k)\|^{2}+h\overline{\xi}_{k}\|\tilde{d}(k)\|+h^{2}\overline{\xi}_{k}^{2}<0 \tag{38}\] for all \(\tilde{d}(k)\in\overline{\tilde{\mathcal{D}}}(\varrho)\), \(\varrho\in\mathbb{R}^{+}\). Note that (38) uniformly holds for all \(V(0)\in\mathbb{R}^{+}\), which indicates that \(\tilde{d}(k)\) is uniformly ultimately bounded by \(\tilde{\mathcal{D}}(\varrho)\).
2310.17308
Wild Bootstrap for Counting Process-Based Statistics
The wild bootstrap is a popular resampling method in the context of time-to-event data analyses. Previous works established the large sample properties of it for applications to different estimators and test statistics. It can be used to justify the accuracy of inference procedures such as hypothesis tests or time-simultaneous confidence bands. This paper consists of two parts: in Part~I, a general framework is developed in which the large sample properties are established in a unified way by using martingale structures. The framework includes most of the well-known non- and semiparametric statistical methods in time-to-event analysis and parametric approaches. In Part II, the Fine-Gray proportional sub-hazards model exemplifies the theory for inference on cumulative incidence functions given the covariates. The model falls within the framework if the data are censoring-complete. A simulation study demonstrates the reliability of the method and an application to a data set about hospital-acquired infections illustrates the statistical procedure.
Marina T. Dietrich, Dennis Dobler, Mathisca C. M. de Gunst
2023-10-26T11:07:24Z
http://arxiv.org/abs/2310.17308v1
# Wild Bootstrap for Counting Process-Based Statistics ###### Abstract The wild bootstrap is a popular resampling method in the context of time-to-event data analyses. Previous works established the large sample properties of it for applications to different estimators and test statistics. It can be used to justify the accuracy of inference procedures such as hypothesis tests or time-simultaneous confidence bands. This paper consists of two parts: in Part I, a general framework is developed in which the large sample properties are established in a unified way by using martingale structures. The framework includes most of the well-known non- and semiparametric statistical methods in time-to-event analysis and parametric approaches. In Part II, the Fine-Gray proportional sub-hazards model exemplifies the theory for inference on cumulative incidence functions given the covariates. The model falls within the framework if the data are censoring-complete. A simulation study demonstrates the reliability of the method and an application to a data set about hospital-acquired infections illustrates the statistical procedure. **Keywords:** censored data, confidence regions, inference, resampling, survival analysis ## Part I: A Martingale Theory Approach ### Introduction In medical studies about, say, the 5-year survival chances of patients who underwent a novel treatment, not only the point estimate after five years is of interest, but also a confidence interval which quantifies the estimation uncertainty. Furthermore, it makes an essential difference for the patient whether the survival chances fall rather swiftly or slowly towards the 5-year survival chance, because the rate of decrease of the survival chance affects, for instance, the expected remaining lifetime. For this reason, it is more instructive to inspect confidence _regions_ for the entire run of the survival curve, such as _time-simultaneous bands_, than confidence intervals for the survival chances at single time points. In order to construct confidence regions, naturally information about the uncertainty of the estimation along the entire trajectory is required. Thus, one is interested in the distribution of the estimator around the target quantity as a function in time. Likewise, in the context of statistical testing, the distribution of the test statistic under the null hypothesis has to be determined. In both cases, because of the complex nature of the involved stochastic processes, the exact distribution of the estimator or the test statistic is generally unknown and needs to be approximated. A solution to the problem of assessing the distribution of a time-dependent statistic or the null distribution of an intricate test statistic is given by resampling techniques like random permutation, algebraic group-based re-randomization (Dobler, 2023), the bootstrap (Efron, 1979) or many variants thereof such as the wild bootstrap (Wu, 1986). Certain variants of these techniques were also proposed in survival analysis contexts where time-to-event data could be incomplete due to, e.g., independent left-truncation or right-censoring. Early references are Efron (1981) and Akritas (1986) for the classical bootstrap (drawing with replacement from the individual data points), Neuhaus (1993) for random permutation (of the censoring indicators), and Lin et al. (1993) for the wild bootstrap (mimicking martingale increments related to counting processes). 
Because of its popularity, elegance, and flexibility, in this Part I we focus on the wild bootstrap as the method of choice in the context of survival and event history analysis. Indeed, the wild bootstrap has been used frequently and in various models, though most often with normally distributed multipliers--an unnecessary restriction. For example, in Lin (1994) and Dobler et al. (2019) the wild bootstrap is applied to Cox models, and in Lin (1997), Beyersmann et al. (2013), and Dobler et al. (2017) the wild bootstrap is applied to cumulative incidence functions in competing risks models. In contrast to the pioneering papers of Lin and coauthors, the publications of Dobler et al. and Beyersmann et al. allow for generally distributed and data-dependent multipliers, respectively. Furthermore, in Spiekerman and Lin (1998) multivariate failure time models are considered, in Fine and Gray (1999) proportional subdistribution hazard models, in Lin et al. (2000) means in semiparametric models, and in Scheike and Zhang (2003) Cox-Aalen models are studied. More recently, Bluhmki and colleagues analyzed Aalen-Johansen estimators in general Markovian multi-state models (Bluhmki et al. (2018)) and general Nelson-Aalen estimators (Bluhmki et al. (2019)), and Feifel and Dobler treated nested case-control design models (Feifel and Dobler (2021)).

In this Part I, we develop a rigorous theory to justify the use of the wild bootstrap under various survival analysis models. As in the above-mentioned articles, we employ the wild bootstrap for mimicking the martingale processes related to individual counting processes. We allow the individual counting processes to have multiple jumps each. Nonparametric models, parametric models and semiparametric (regression) models are covered in a unified approach. In this sense, the present Part I provides an umbrella theory for a large variety of specific applications of the wild bootstrap in the context of counting processes. In particular, we show that the asymptotic distribution of the resampled process coincides with that of the statistic of interest. In this way we verify the asymptotic validity of the wild bootstrap as an approximation procedure. Our proofs rely on weak regularity conditions and, differently from those in the above-mentioned articles, are developed in a novel way based on the martingale theory for counting processes as given in Rebolledo's original paper Rebolledo (1980). In particular, our approach solves an open problem of handling the Lindeberg condition in a suitable way. We also illustrate our approach for a couple of frequently used models.

The present Part I is organized as follows. In Section I.2 we introduce the general set-up, the precise form of the counting process-based statistic, and derive its asymptotic distribution. In Section I.3 we define the wild bootstrap counterpart of the statistic under consideration and study its asymptotic distribution. Furthermore, we illustrate our findings with some examples in Section I.4. Finally, in Section I.5 we provide a discussion. All proofs are presented in the appendix.

### I.2 General Set-Up and a Weak Convergence Result for Counting Process-Based Estimators

Let \(N_{1}(t),\ldots,N_{n}(t)\), \(t\in\mathcal{T}\), be independent and identically distributed counting processes, where each individual counting process \(N_{i}\), \(i=1,\ldots,n\), has in total \(n_{i}\) jumps of size \(1\) at the observed event times \(T_{i,1},\ldots,T_{i,n_{i}}\).
Here, \(\mathcal{T}=[0,\tau]\) is a finite time window. The multivariate counting process \((N_{1},\ldots,N_{n})\) containing all \(n\) individual counting processes is denoted by \(\mathbf{N}(t)\), \(t\in\mathcal{T}\), and it is assumed that no two counting processes \(N_{i}\) jump simultaneously. The corresponding at-risk indicator for individual \(i\) is denoted by \(Y_{i}(t)\), \(t\in\mathcal{T}\), \(i=1,\ldots,n\). The multivariate at-risk indicator \((Y_{1},\ldots,Y_{n})\) is denoted by \(\mathbf{Y}(t)\), \(t\in\mathcal{T}\). Additionally, an individual \(d\)-variate covariate vector \(\tilde{\mathbf{Z}}_{i}(t)\), \(t\in\mathcal{T}\), possibly time-dependent, may also be available for individuals \(i=1,\ldots,n\). In general, \(\tilde{\mathbf{Z}}_{i}\) is available only as long as \(Y_{i}=1\). The observable vector of covariates \(\tilde{\mathbf{Z}}_{i}Y_{i}\) is denoted by \(\mathbf{Z}_{i}(t)\), \(t\in\mathcal{T}\), \(i=1,\ldots,n\). The list of all \(n\) observable covariate vectors each of dimension \(d\) is denoted by \(\mathbf{Z}(t)\), \(t\in\mathcal{T}\). We assume a parametric model for the data \((\mathbf{N}(t),\mathbf{Y}(t),\mathbf{Z}(t),t\in\mathcal{T})\), but our approach is suitable for nonparametric or semiparametric models as well. In the case of a parametric regression model, a parameter coefficient \(\boldsymbol{\beta}\in\mathbb{R}^{q}\) with \(q\geq d\) contains the \(d\)-dimensional parameter coefficient that specifies the influence of the covariates \(\mathbf{Z}\) on the jump times of \(\mathbf{N}\), but additional parameters may be included in \(\boldsymbol{\beta}\). If a nonparametric or semiparametric regression model is preferred, the set-up changes accordingly, cf. Examples I.4.1 and I.4.3. Finally, \((\Omega,\mathcal{A},\mathbb{P})\) denotes the underlying probability space, and \(\stackrel{{\mathbb{P}}}{{\longrightarrow}}\), \(\stackrel{{\mathcal{L}}}{{\longrightarrow}}\) denote convergence in probability and convergence in law, respectively. We usually write multivariate quantities in bold type and when we specify a stochastic quantity as finite, this is always to be understood as almost surely finite. In the present context, one is often interested in the estimation of a vector-valued stochastic function \(\mathbf{X}(t)\), \(t\in\mathcal{T}\), of dimension \(p\) by a counting process-based statistic of the form \[\mathbf{X}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u, \boldsymbol{\hat{\beta}}_{n})dN_{i}(u),\quad t\in\mathcal{T},\] (I.1) where the \(p\)-dimensional integrands \(\mathbf{k}_{n,i}(t,\boldsymbol{\beta})\) defined on \(\mathcal{T}\times\mathbb{R}^{q}\) are stochastic processes that are not necessarily independent, with \(\mathbf{k}_{n,i}(\cdot,\boldsymbol{\beta})\) locally bounded and predictable for \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\), and \(\mathbf{k}_{n,i}(t,\cdot)\) almost surely continuously differentiable in \(\boldsymbol{\beta}\), \(i=1,\ldots,n\). We assume that \(\boldsymbol{\hat{\beta}}_{n}\) is a consistent estimator of the true model parameter \(\boldsymbol{\beta}_{0}\) with \[\boldsymbol{\hat{\beta}}_{n}-\boldsymbol{\beta}_{0}=O_{p}(n^{-1/2}).\] (I.2) Additionally, we impose an assumption on the asymptotic representation of \(\sqrt{n}(\boldsymbol{\hat{\beta}}_{n}-\boldsymbol{\beta}_{0})\) for \(n\to\infty\), which will be specified later in this section. 
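As a concrete instance of (I.1) (our illustration, not part of the formal development), take \(p=1\), no parameter dependence, and \(\mathbf{k}_{n,i}(u)=n/\sum_{j=1}^{n}Y_{j}(u)\); then \(\mathbf{X}_{n}\) is the Nelson-Aalen estimator of the cumulative hazard. A minimal Python sketch evaluates it on simulated right-censored data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
T = rng.exponential(1.0, n)            # latent event times (hypothetical data)
C = rng.exponential(2.0, n)            # independent right-censoring times
obs = np.minimum(T, C)                 # observed times
event = T <= C                         # uncensored indicators

# jump times of the aggregated counting process sum_i N_i
times = np.sort(obs[event])
# number at risk sum_j Y_j(u) at each jump time
at_risk = np.array([(obs >= t).sum() for t in times])
# X_n(t) = (1/n) sum_i int_0^t [n / sum_j Y_j(u)] dN_i(u): Nelson-Aalen estimate
nelson_aalen = np.cumsum(1.0 / at_risk)
```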
In other contexts, one may be interested in employing univariate test statistics of the form (I.1) to test a null hypothesis \(H\) against an alternative hypothesis \(K\). Obviously, useful estimation of the process \(\mathbf{X}\) is only achievable if the distribution of \(\mathbf{X}_{n}-\mathbf{X}\) is appropriately analyzed, and approximated if necessary. Likewise for the null distribution of a test statistic \(X_{n}\) in the case of testing. In the following, we focus on estimation in the situation in which the exact distribution of \(\mathbf{X}_{n}-\mathbf{X}\) is unknown. Thus, the goal of this section is to determine the asymptotic distribution of the stochastic process \(\sqrt{n}\big{(}\mathbf{X}_{n}-\mathbf{X}\big{)}\) for \(n\to\infty\), which will be used in Section I.3 to identify the wild bootstrap as a suitable approximation procedure. A special feature of such counting process-based statistics is that they have a strong connection to martingales, and martingale theory can be used to analyze the asymptotic distribution. The connection to martingale theory is established by means of the Doob-Meyer decomposition, which links the counting process \(N_{i}\) uniquely to the process \[M_{i}(t)=N_{i}(t)-\Lambda_{i}(t,\mathbf{\beta}_{0}),\quad t\in\mathcal{T},\] (I.3) which is a martingale with respect to the filtration \[\mathcal{F}_{1}(t)=\sigma\{N_{i}(u),Y_{i}(u),\mathbf{Z}_{i}(u),0\leq u\leq t,i= 1,\ldots,n\},\quad t\in\mathcal{T}.\] The cumulative intensity process \(\Lambda_{i}(t,\mathbf{\beta}_{0})\) as introduced in (I.3) is the compensator of \(N_{i}(t)\), \(t\in\mathcal{T}\); it is a non-decreasing predictable function in \(t\) with \(\Lambda_{i}(0,\mathbf{\beta}_{0})=0\), \(i=1,\ldots,n\). Additionally, we assume \(\Lambda_{i}(t,\mathbf{\beta}_{0})\) to be absolutely continuous with rate process \(\lambda_{i}=\dfrac{d}{dt}\Lambda_{i}\) and expected value \(E(\Lambda_{i}(\tau,\mathbf{\beta}_{0}))<\infty\). Furthermore, some event times may be unobservable due to independent right-censoring, left-truncation, or more general incomplete data patterns such as independent censoring on intervals. These censoring mechanisms are captured by the at-risk function \(Y_{i}\), \(i=1,\ldots,n\), and incorporated in the structure of the rate process by assuming that the individual counting process \(N_{i}\) satisfies the multiplicative intensity model. In particular, we assume for \(i=1,\ldots,n\), \[\lambda_{i}(t,\mathbf{\beta}_{0})=Y_{i}(t)\alpha_{i}(t,\mathbf{\beta}_{0}),\quad t\in \mathcal{T},\] where \(\alpha_{i}(\cdot,\mathbf{\beta}_{0})\) is the hazard rate related to the events registered by the counting process \(N_{i}\), and does not depend on the censoring or the truncation. In the case of a parametric or semiparametric model the hazard rate \(\alpha_{i}(t,\mathbf{\beta}_{0})\) takes the form \(\alpha_{0}(t,\mathbf{\beta}_{1;0})r(\mathbf{\beta}_{2;0}^{\top}\mathbf{Z}_{i}(t))\) or \(\alpha_{0}(t)r(\mathbf{\beta}_{0}^{\top}\mathbf{Z}_{i}(t))\), \(t\in\mathcal{T}\), respectively, with \(\mathbf{\beta}_{0}=(\mathbf{\beta}_{1;0},\mathbf{\beta}_{2;0})\). Here, \(r(\cdot)\) is some relative risk function and \(\alpha_{0}(\cdot,\mathbf{\beta}_{1;0})\), respectively, \(\alpha_{0}\) is the corresponding parametric or nonparametric baseline hazard function. For a general reference on counting processes and the ingredients of the model that we introduced above, we refer to Andersen et al. (1993). 
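As a standard special case, spelled out here for concreteness, consider right-censored survival data: for a latent survival time \(\tilde{T}_{i}\) with hazard rate \(\alpha\) and an independent censoring time \(C_{i}\), one observes \[N_{i}(t)=\mathbb{1}\{\tilde{T}_{i}\leq t,\,\tilde{T}_{i}\leq C_{i}\},\qquad Y_{i}(t)=\mathbb{1}\{\tilde{T}_{i}\wedge C_{i}\geq t\},\quad t\in\mathcal{T},\] so that the multiplicative intensity model holds with \(\lambda_{i}(t,\boldsymbol{\beta}_{0})=Y_{i}(t)\alpha(t)\), and \(M_{i}(t)=N_{i}(t)-\int_{0}^{t}Y_{i}(u)\alpha(u)du\) is the associated martingale from (I.3); in this case each \(N_{i}\) has at most one jump, i.e., \(n_{i}\leq 1\).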
We now focus on the derivation of an asymptotic representation for \(\sqrt{n}\big{(}\mathbf{X}_{n}-\mathbf{X}\big{)}\) that plays a key role in deducing the corresponding asymptotic distribution. In this regard we make a number of assumptions. In Section I.4 we will illustrate with some examples that these assumptions are commonly satisfied. We start by rewriting \(\sqrt{n}\big{(}\mathbf{X}_{n}-\mathbf{X}\big{)}\) in basically two steps. In particular, we consecutively apply the Doob-Meyer decomposition (I.3) and a Taylor expansion around \(\boldsymbol{\beta}_{0}\). Here, we recall that, for fixed \(t\in\mathcal{T}\), the integrands \(\mathbf{k}_{n,i}(t,\cdot)\) are almost surely continuously differentiable in \(\boldsymbol{\beta}\), \(i=1,\ldots,n\). We thus find for \(t\in\mathcal{T}\) \[\sqrt{n}(\mathbf{X}_{n}(t)-\mathbf{X}(t))\] \[=\sqrt{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big{[}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})-\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})+\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})\big{]}dN_{i}(u)-\mathbf{X}(t)\Big{)}\] \[=\sqrt{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})\big{(}dM_{i}(u)+d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\big{)}-\mathbf{X}(t)\] \[\qquad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big{[}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})-\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})\big{]}dN_{i}(u)\Big{)}\] \[=\sqrt{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\] (I.4) \[\qquad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-\mathbf{X}(t)\] \[\qquad+\big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)\big{)}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})+o_{p}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\Big{)},\] where \(\mathrm{D}\mathbf{f}\) denotes the Jacobian of a function \(\mathbf{f}\) with respect to \(\boldsymbol{\beta}\).
For the next step we make the following regularity assumption: \[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-\mathbf{X}(t)=o_{p}(n^{-1/2})\text{ for all }t\in\mathcal{T}.\] (I.5) We now continue from the right hand side of the equality labeled by (I.4), and with (I.2) in combination with (I.5) we obtain for \(t\in\mathcal{T}\) \[\sqrt{n}(\mathbf{X}_{n}(t)-\mathbf{X}(t))\] \[=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\] (I.6) \[\qquad+\big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)\big{)}\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})+o_{p}(1),\] where we denote the \((p\times q)\)-dimensional counting process integral in (I.6) by \[\mathbf{B}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dN_{i}(u),\quad t\in\mathcal{T}.\] (I.7) Moreover, we assume the following asymptotic representation: \[\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})=\mathbf{C}_{n}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{\tau}\mathbf{g}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)+o_{p}(1),\] (I.8) where \({\bf C}_{n}\) is a \((q\times b)\)-dimensional random matrix that we leave unspecified and the \(b\)-dimensional integrands \({\bf g}_{n,i}(t,\boldsymbol{\beta})\) defined on \({\cal T}\times\mathbb{R}^{q}\) are locally bounded stochastic processes that are predictable for \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\), \(i=1,\ldots,n\). In Remark I.2.7 at the end of this section, we illustrate why (I.8) is a natural condition. Combining (I.6), (I.7) and (I.8) we obtain the asymptotic representation of \(\sqrt{n}({\bf X}_{n}-{\bf X})\) we were aiming for, i.e., \[\begin{split}&\sqrt{n}({\bf X}_{n}(t)-{\bf X}(t))\\ &=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}{\bf k}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\\ &\qquad+{\bf B}_{n}(t){\bf C}_{n}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{\tau}{\bf g}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)+o_{p}(1),\quad t\in{\cal T}.\end{split}\] (I.9) In view of the similar structure of the two martingale integrals displayed in (I.9), we introduce the joint \((p+b)\)-dimensional stochastic process \({\bf D}_{n,h}=({\bf D}_{n,k}^{\top},{\bf D}_{n,g}^{\top})^{\top}\) with \[{\bf D}_{n,h}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}{\bf h}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u),\quad t\in{\cal T},\] (I.10) where the \((p+b)\)-dimensional integrands \({\bf h}_{n,i}(t,\boldsymbol{\beta})=({\bf k}_{n,i}(t,\boldsymbol{\beta})^{\top},{\bf g}_{n,i}(t,\boldsymbol{\beta})^{\top})^{\top}\) defined on \({\cal T}\times\mathbb{R}^{q}\) are locally bounded stochastic processes that are predictable for \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\), \(i=1,\ldots,n\). In particular, \({\bf D}_{n,h}\) is composed of the \(p\)-dimensional stochastic process \({\bf D}_{n,k}\) and the \(b\)-dimensional stochastic process \({\bf D}_{n,g}\) with which we denote the first and second martingale integral on the right hand side of (I.9). With this notation, (I.9) becomes \[\sqrt{n}({\bf X}_{n}(t)-{\bf X}(t))={\bf D}_{n,k}(t)+{\bf B}_{n}(t){\bf C}_{n}{\bf D}_{n,g}(\tau)+o_{p}(1),\quad t\in{\cal T}.\] (I.11) In order to derive the asymptotic distribution of the right-hand side of (I.11), we focus on the asymptotic distribution of its components \(({\bf D}_{n,k},{\bf D}_{n,g})\), \({\bf B}_{n}\), and \({\bf C}_{n}\) first.
For this, we start by analyzing the joint asymptotic distribution of \(\mathbf{D}_{n,h}=(\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}^{\top})^{\top}\). According to Proposition II.4.1 of Andersen et al. (1993), \(\mathbf{D}_{n,h}\) is a local square integrable martingale with respect to \(\mathcal{F}_{1}\). Using this property, we will show that under regularity conditions \(\mathbf{D}_{n,h}\) converges in law to a Gaussian martingale in \((D(\mathcal{T}))^{p+b}\), as \(n\to\infty\). Here, \((D(\mathcal{T}))^{p+b}\) is the space of càdlàg functions in \(\mathbb{R}^{p+b}\) equipped with the product Skorohod topology. In the sequel, the \(p\times p\) matrix \(\boldsymbol{v}\cdot\boldsymbol{v}^{\top}\) for some \(\boldsymbol{v}\in\mathbb{R}^{p}\) will be denoted by \(\boldsymbol{v}^{\otimes 2}\), \(\|\cdot\|\) will denote a norm, e.g., the Euclidean norm, and \(\mathcal{B}\) a neighborhood of \(\boldsymbol{\beta}_{0}\). Furthermore, we need the following regularity assumptions.

**Assumption I.2.1**.: For each \(i\in\mathbb{N}\) there exists a \((p+b)\)-dimensional stochastic process \(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta})\) defined on \(\mathcal{T}\times\mathcal{B}\) such that

(i) \(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0\), as \(n\to\infty\), for any consistent estimator \(\tilde{\boldsymbol{\beta}}_{n}\) of \(\boldsymbol{\beta}_{0}\);

(ii) \(\tilde{\mathbf{h}}_{i}(t,\cdot)\) is a continuous function in \(\boldsymbol{\beta}\in\mathcal{B}\) and bounded on \(\mathcal{T}\times\mathcal{B}\);

(iii) the \((p+b+1)\)-tuples \((\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}),\lambda_{i}(t,\boldsymbol{\beta}_{0}))\), \(i=1,\ldots,n\), are pairwise independent and identically distributed for all \(t\in\mathcal{T}\).

We are now ready to formulate the following result on the limit in distribution of \(\mathbf{D}_{n,h}\).

**Lemma I.2.2**.: If Assumption I.2.1 holds, then \[\mathbf{D}_{n,h}\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}},\quad\text{in }(D(\mathcal{T}))^{p+b},\text{ as }n\to\infty,\] where \(\mathbf{D}_{\tilde{h}}=(\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top})^{\top}\) is a continuous zero-mean Gaussian \((p+b)\)-dimensional vector martingale with \(\langle\mathbf{D}_{\tilde{h}}\rangle(t)=\mathbf{V}_{\tilde{h}}(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\), \(t\in\mathcal{T}\).
In particular, \[\mathbf{V}_{\tilde{h}}=\begin{pmatrix}\mathbf{V}_{\tilde{k}}&\mathbf{V}_{\tilde{k},\tilde{g}}\\ \mathbf{V}_{\tilde{g},\tilde{k}}&\mathbf{V}_{\tilde{g}}\end{pmatrix},\] with \[\mathbf{V}_{\tilde{k}}(t)=\langle\mathbf{D}_{\tilde{k}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{k}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\quad t\in\mathcal{T},\] \[\mathbf{V}_{\tilde{g}}(t)=\langle\mathbf{D}_{\tilde{g}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\quad t\in\mathcal{T},\] and cross-covariance \[\mathbf{V}_{\tilde{k},\tilde{g}}(t)=\mathbf{V}_{\tilde{g},\tilde{k}}(t)^{\top}=\langle\mathbf{D}_{\tilde{k}},\mathbf{D}_{\tilde{g}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{k}}_{1}(u,\boldsymbol{\beta}_{0})\tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})^{\top}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\quad t\in\mathcal{T}.\]

Proof.: See Appendix.

We note that \(\mathbf{V}_{\tilde{h}}(t)\), \(t\in\mathcal{T}\), in Lemma I.2.2 is by construction a continuous, deterministic and positive semidefinite matrix-valued function with \(\mathbf{V}_{\tilde{h}}(0)=0\). Next, we study the limiting behaviour of the counting process integral \(\mathbf{B}_{n}\), and characterize the limit in probability of the random matrix \(\mathbf{C}_{n}\). The following assumptions are required.

**Assumption I.2.3**.: For each \(i\in\mathbb{N}\) there exists a \((p\times q)\)-dimensional stochastic process \(\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta})\) defined on \(\mathcal{T}\times\mathcal{B}\) such that

(i) \(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathrm{D}\mathbf{k}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0\), as \(n\to\infty\), for any consistent estimator \(\tilde{\boldsymbol{\beta}}_{n}\) of \(\boldsymbol{\beta}_{0}\);

(ii) \(\tilde{\mathbf{K}}_{i}(\cdot,\boldsymbol{\beta}_{0})\) is predictable w.r.t. \(\mathcal{F}_{1}\) and bounded on \(\mathcal{T}\);

(iii) the \((pq+1)\)-tuples \((\mathrm{vec}(\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})),\lambda_{i}(t,\boldsymbol{\beta}_{0}))\), \(i=1,\ldots,n\), are pairwise independent and identically distributed for all \(t\in\mathcal{T}\).

The next lemma describes the limiting behaviour of \(\mathbf{B}_{n}\).

**Lemma I.2.4**.: If Assumption I.2.3 holds, then \[\sup_{t\in\mathcal{T}}\|\mathbf{B}_{n}(t)-\mathbf{B}(t)\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\text{ as }n\to\infty,\] where \(\mathbf{B}(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\), \(t\in\mathcal{T}\), is a \((p\times q)\)-dimensional continuous, deterministic function.

Proof.: See Appendix. \(\blacksquare\)

With respect to the limiting behaviour of \(\mathbf{C}_{n}\), we require the following.

**Assumption I.2.5**.: There exists a deterministic \((q\times b)\)-dimensional matrix \(\mathbf{C}\) such that \[\|\mathbf{C}_{n}-\mathbf{C}\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\text{ as }n\to\infty.\]

Finally, we can state the limit in distribution of \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})\). For this, we combine the results we have obtained on the weak limits of \(\mathbf{D}_{n,h}\) and \(\mathbf{B}_{n}\) with our assumption on that of \(\mathbf{C}_{n}\).
**Theorem I.2.6**.: If the asymptotic representation (I.11) is fulfilled, and Assumptions I.2.1, I.2.3, and I.2.5 hold, then \[\sqrt{n}\big(\mathbf{X}_{n}-\mathbf{X}\big)=\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)+o_{p}(1)\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\mathbf{B}\mathbf{C}\mathbf{D}_{\tilde{g}}(\tau),\text{ in }(D(\mathcal{T}))^{p},\] as \(n\to\infty\), with \(\mathbf{D}_{\tilde{k}}\) and \(\mathbf{D}_{\tilde{g}}\) as in Lemma I.2.2, and \(\mathbf{B}\) as in Lemma I.2.4. Moreover, the matrix-valued variance function of \(\mathbf{D}_{\tilde{k}}+\mathbf{B}\mathbf{C}\mathbf{D}_{\tilde{g}}(\tau)\) is given as \[t\mapsto\mathbf{V}_{\tilde{k}}(t)+\mathbf{B}(t)\mathbf{C}\mathbf{V}_{\tilde{g}}(\tau)\mathbf{C}^{\top}\mathbf{B}(t)^{\top}+\mathbf{V}_{\tilde{k},\tilde{g}}(t)\mathbf{C}^{\top}\mathbf{B}(t)^{\top}+\mathbf{B}(t)\mathbf{C}\mathbf{V}_{\tilde{g},\tilde{k}}(t).\]

Proof.: See Appendix. \(\blacksquare\)

The proof of Theorem I.2.6 is based on martingale theory which we will also use in Section I.3. For this we make use of the following notation. Given a multi-dimensional vector of local square integrable martingales \(\mathbf{H}_{n}(t)\), \(t\in\mathcal{T}\), its predictable covariation process and its optional covariation process are denoted by \(\langle\mathbf{H}_{n}\rangle(t)\) and \([\mathbf{H}_{n}](t)\), respectively. Moreover, \(\mathcal{L}(\mathbf{H}_{n})\) and \(\mathcal{L}(\mathbf{H}_{n}|\cdot)\) denote the law and the conditional law of \(\mathbf{H}_{n}\), respectively. Additionally, \(d[\cdot,\cdot]\) is an appropriate distance measure between probability distributions, for example the Prohorov distance.

**Remark I.2.7**.: To illustrate that (I.8) is a natural condition, we note that for parametric models it is common practice to take the maximum likelihood estimator as the estimator \(\hat{\boldsymbol{\beta}}_{n}\) for estimating the true parameter \(\boldsymbol{\beta}_{0}\). In Borgan (1984) parametric survival models are considered, where for \(n\)-variate counting processes \((N_{1},\ldots,N_{n})\) the likelihood equations take the form \[\sum_{i=1}^{n}\int_{0}^{\tau}\nabla\alpha_{i}(u,\boldsymbol{\beta})\alpha_{i}(u,\boldsymbol{\beta})^{-1}dN_{i}(u)-\sum_{i=1}^{n}\int_{0}^{\tau}\nabla\alpha_{i}(u,\boldsymbol{\beta})Y_{i}(u)du=0,\] for some parametric functions \(\alpha_{i}\), \(i=1,\ldots,n\), where \(\nabla\alpha_{i}\) denotes the gradient of \(\alpha_{i}\) with respect to \(\boldsymbol{\beta}\). Let us denote the left-hand side of the likelihood equations above by \(\mathbf{U}_{n}(\boldsymbol{\beta},\tau)\). Then \(\mathbf{U}_{n}(\boldsymbol{\beta},\cdot)\) evaluated at \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\) is a local square integrable martingale. In particular, \[\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)=\sum_{i=1}^{n}\int_{0}^{\tau}\frac{\nabla\alpha_{i}(u,\boldsymbol{\beta}_{0})}{\alpha_{i}(u,\boldsymbol{\beta}_{0})}dM_{i}(u),\] as \(\alpha_{i}(t,\boldsymbol{\beta}_{0})Y_{i}(t)dt=d\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) is the compensator of \(dN_{i}(t)\).
Under regularity conditions a Taylor expansion of \(\mathbf{U}_{n}(\hat{\boldsymbol{\beta}}_{n},\tau)\) around \(\boldsymbol{\beta}_{0}\) yields \[\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})=-\Big(\frac{1}{n}\mathrm{D}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)\Big)^{-1}\frac{1}{\sqrt{n}}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)+o_{p}(1).\] Thus, (I.8) holds with \(\mathbf{g}_{n,i}(u,\boldsymbol{\beta}_{0})=\nabla\alpha_{i}(u,\boldsymbol{\beta}_{0})\alpha_{i}(u,\boldsymbol{\beta}_{0})^{-1}\) and \(\mathbf{C}_{n}=-\big(\frac{1}{n}\mathrm{D}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)\big)^{-1}\), where \[\mathrm{D}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)=\sum_{i=1}^{n}\int_{0}^{\tau}\nabla^{2}\log(\alpha_{i}(u,\boldsymbol{\beta}_{0}))dN_{i}(u)-\sum_{i=1}^{n}\int_{0}^{\tau}\nabla^{2}\alpha_{i}(u,\boldsymbol{\beta}_{0})Y_{i}(u)du.\] Note that \(-\frac{1}{n}\mathrm{D}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\tau)\) is asymptotically equivalent to the optional covariation process \(\frac{1}{n}[\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)]\) of \(\frac{1}{\sqrt{n}}\,\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)\) at \(\tau\), which will be of use in Remark I.3.11.

## The Wild Bootstrap for Counting Process-Based Estimators and a Weak Convergence Result

In Section I.2 we have introduced the counting process-based statistic \(\mathbf{X}_{n}\) given in (I.1) as an estimator of the multidimensional function \(\mathbf{X}\). In the current section we use the wild bootstrap as an approximation procedure to recover the unknown distribution of \(\mathbf{X}_{n}-\mathbf{X}\). The wild bootstrap counterpart of \(\mathbf{X}_{n}\) will be denoted by \(\mathbf{X}_{n}^{*}\). In order to verify the validity of the approximation procedure, we will prove that under regularity conditions the distributions of \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})\) and \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\) are asymptotically equivalent. To this end we will show that \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\) can be represented by an expression with the same structure as \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})=\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)+o_{p}(1)\). Additionally, we will show with the proof of Theorem I.3.10 that the joint distribution of the components involved in the representation of \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\) converges to the same asymptotic distribution as the joint distribution of the components of \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})\). With the help of the continuous mapping theorem we then obtain the asymptotic equivalence of the distributions of \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})\) and \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\).

In order to define the wild bootstrap estimator \(\mathbf{X}_{n}^{*}\), we first introduce the core idea of the wild bootstrap. Naturally, the realisations of \(\mathbf{X}_{n}\) vary with the underlying data sets. If we had many data sets and thus many estimates, we could draw conclusions about the distribution of the estimator. The wild bootstrap mimics exactly this: the variation inherent in estimates arising from different data sets is produced by so-called random multipliers, so that only the one available data set \(\{\mathbf{N}(t),\mathbf{Y}(t),\mathbf{Z}(t),t\in\mathcal{T}\}\) is needed. In particular, the estimate calculated from that data set is perturbed by random multipliers such that each draw of multipliers creates a new estimate. Based on these so-called wild bootstrap estimates the distribution of the estimator can be inferred.
Thus, the multiplier processes, denoted by \(G_{i}(t)\), \(t\in\mathcal{T}\), with \(\mathbb{E}(G_{i})=0\) and \(\mathbb{E}(G_{i}^{2})=1\), \(i=1,\ldots,n\), lie at the heart of the wild bootstrap. They are random piecewise constant functions that we consider in further detail below. The construction of the wild bootstrap counterpart \(\mathbf{X}_{n}^{*}\) of \(\mathbf{X}_{n}\), \(\mathbf{B}_{n}^{*}\) of \(\mathbf{B}_{n}\), \(\mathbf{C}_{n}^{*}\) of \(\mathbf{C}_{n}\), \(\mathbf{D}_{n,h}^{*}\) of \(\mathbf{D}_{n,h}\), or of any of the quantities that arise in this context, can be attributed to the following replacements:

**Replacement I.3.1.**

(i) The square integrable martingale increment \(dM_{i}(t)\) is replaced by the randomly perturbed counting process increment \(G_{i}(t)dN_{i}(t)\), \(i=1,\ldots,n\);

(ii) the unknown increment of the cumulative intensity process \(\Lambda_{i}(dt,\boldsymbol{\beta}_{0})\) is replaced by the estimator \(dN_{i}(t)\), \(i=1,\ldots,n\);

(iii) the unknown parameter coefficient \(\boldsymbol{\beta}_{0}\) is replaced by the estimator \(\hat{\boldsymbol{\beta}}_{n}\);

(iv) we set all \(o_{p}(1)\) terms in asymptotic representations to \(0\).

Note that the substitute \(G_{i}(t)dN_{i}(t)\) for \(dM_{i}(t)\), \(t\in\mathcal{T}\), in Replacement I.3.1 (i) is a square integrable martingale increment itself, given the data set, cf. Lemma I.3.2. Moreover, for wider applicability we chose in Replacement I.3.1 (ii) the nonparametric estimator \(dN_{i}(t)\) rather than a semiparametric estimator \(\hat{\Lambda}_{i}(dt,\hat{\boldsymbol{\beta}}_{n})\), \(t\in\mathcal{T}\). As a consequence of Replacement I.3.1, we also replace the counting process increments \(dN_{i}(t)\), in two steps. First, each increment is decomposed into \(dM_{i}(t)+d\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) according to the Doob-Meyer decomposition given in (I.3). Second, Replacement I.3.1 (i) and (ii) are applied. Steps one and two combined yield \[\big(G_{i}(t)+1\big)dN_{i}(t),\quad t\in\mathcal{T},\] as the replacement for \(dN_{i}\). Furthermore, we obtain a wild bootstrap counterpart of \(\hat{\boldsymbol{\beta}}_{n}\) via its asymptotic representation given in (I.8). According to that equation we have \[\hat{\boldsymbol{\beta}}_{n}=\boldsymbol{\beta}_{0}+\mathbf{C}_{n}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\mathbf{g}_{n,i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\ +\ o_{p}(n^{-1/2}).\] (I.12) In order to define the wild bootstrap counterpart \(\hat{\boldsymbol{\beta}}_{n}^{*}\) of \(\hat{\boldsymbol{\beta}}_{n}\), we replace \(\mathbf{C}_{n}\) by some \((q\times b)\)-dimensional random matrix \(\mathbf{C}_{n}^{*}\) which is a wild bootstrap counterpart of \(\mathbf{C}_{n}\), and apply Replacement I.3.1 to the other terms on the right-hand side of (I.12). This yields \[\hat{\boldsymbol{\beta}}_{n}^{*}=\hat{\boldsymbol{\beta}}_{n}+\mathbf{C}_{n}^{*}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\mathbf{g}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u).\] (I.13) Note that \(\mathbf{C}_{n}^{*}\) could take many different forms as long as it is asymptotically equivalent to \(\mathbf{C}_{n}\), i.e., as long as \(\|\mathbf{C}_{n}^{*}-\mathbf{C}_{n}\|=o_{p}(1)\) holds as \(n\to\infty\), cf. Assumption I.3.9. When working with a particular model a natural choice for \(\mathbf{C}_{n}^{*}\) might be apparent, as we shall demonstrate in Remark I.3.11. We now consider the multiplier processes \(G_{i}(t)\), \(t\in\mathcal{T}\), \(i=1,\ldots,n\), in more detail.
We define \(G_{i}\) as a random piecewise constant function with jump time points identical to those of the counting process \(N_{i}\), i.e., at \[\mathcal{T}_{n,i}^{\Delta}=\{t\in\mathcal{T}:\Delta N_{i}(t)=1\}=\{T_{i,1},\ldots,T_{i,n_{i}}\}.\] (I.14) We note that the number of jumps for the \(i\)-th process is the random number \(n_{i}=N_{i}(\tau)\geq 0\). Moreover, the multiplier processes \(G_{i}\) are constructed such that at the jump time points \(T_{i,j}\in\mathcal{T}_{n,i}^{\Delta}\) they take the values of i.i.d. random variables \(G_{i,j}\), \(j=1,2,\ldots\), that have mean zero, unit variance and finite fourth moment, and that are independent of \(\mathcal{F}_{1}(\tau)\). In particular, \(G_{i}(t)=0\) for \(t<T_{i,1}\) and \(G_{i}(t)=G_{i,j}\) for \(T_{i,j}\leq t<T_{i,j+1}\), where \(T_{i,n_{i}+1}=\infty\). Furthermore, the multiplier processes \(G_{1}(t),\ldots,G_{n}(t)\), \(t\in\mathcal{T}\), are pairwise independent and identically distributed. Conditionally on \(\mathcal{F}_{1}(\tau)\), however, their jump times are fixed and the identical distribution is lost. See Bluhmki et al. (2018, 2019) for similar approaches.

Let us revisit Replacement I.3.1 and the direct consequences of its application to \(N_{i}\) and \(\hat{\boldsymbol{\beta}}_{n}\). Due to the construction of the multiplier processes \(G_{i}\), \(i=1,\ldots,n\), the wild bootstrap replacement \(\big(G_{i}+1\big)N_{i}\) varies vertically around \(N_{i}\), i.e., the jump size deviates from \(1\), while the jump time points are fixed. A similar behaviour holds for the wild bootstrap estimator \(\hat{\boldsymbol{\beta}}_{n}^{*}\) around \(\hat{\boldsymbol{\beta}}_{n}\), since, as we will see in Lemma I.3.2, the integral on the right-hand side of (I.13) is a zero-mean martingale evaluated at \(t=\tau\). Finally, we obtain the wild bootstrap counterpart \(\mathbf{X}_{n}^{*}\) of \(\mathbf{X}_{n}\) by applying Replacement I.3.1 to (I.1), which results in the following definition: \[\mathbf{X}_{n}^{*}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n}^{*})\big(G_{i}(u)+1\big)dN_{i}(u),\quad t\in\mathcal{T}.\] (I.15) Recall that the replacement of \(\hat{\boldsymbol{\beta}}_{n}\) by \(\hat{\boldsymbol{\beta}}_{n}^{*}\) can be traced back to Replacement I.3.1 by first substituting \(\hat{\boldsymbol{\beta}}_{n}\) in (I.15) by the right-hand side of (I.12) and then applying Replacement I.3.1 to the corresponding components. Moreover, we point out that, due to the fluctuation of \(\big(G_{i}+1\big)N_{i}\) around \(N_{i}\) and of \(\hat{\boldsymbol{\beta}}_{n}^{*}\) around \(\hat{\boldsymbol{\beta}}_{n}\), a reasonable amount of variation of the wild bootstrap estimator \(\mathbf{X}_{n}^{*}\) around \(\mathbf{X}_{n}\) is induced. The remaining part of this section concerns the asymptotic behaviour of the wild bootstrap estimator \(\mathbf{X}_{n}^{*}\) around \(\mathbf{X}_{n}\). In order to study the asymptotic distribution of \(\sqrt{n}\big(\mathbf{X}_{n}^{*}-\mathbf{X}_{n}\big)\), we start by deriving a representation of \(\sqrt{n}\big(\mathbf{X}_{n}^{*}-\mathbf{X}_{n}\big)\) similar to the one stated in (I.11).
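Before turning to this derivation, the construction of the multiplier processes just described can be illustrated with a minimal simulation sketch. This is our illustration, not part of the formal development; standard normal multipliers are one admissible choice, since they have mean zero, unit variance and finite fourth moment, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def multiplier_path(jump_times, rng):
    """Construct one multiplier process G_i: piecewise constant with
    G_i(t) = 0 for t < T_{i,1} and G_i(t) = G_{i,j} for T_{i,j} <= t < T_{i,j+1},
    where the G_{i,j} are i.i.d. standard normal values (mean zero, unit
    variance, finite fourth moment), drawn independently of the data."""
    jump_times = np.sort(np.asarray(jump_times, dtype=float))
    values = rng.standard_normal(len(jump_times))        # G_{i,1}, ..., G_{i,n_i}
    def G(t):
        j = np.searchsorted(jump_times, t, side="right")  # number of jumps in [0, t]
        return 0.0 if j == 0 else float(values[j - 1])
    return G

# one process whose counting process N_i has three observed jump times:
G1 = multiplier_path([0.4, 1.1, 1.7], rng)
print([G1(t) for t in (0.2, 0.5, 1.2, 1.9)])              # 0.0 before the first jump
```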
For this, we rewrite \(\sqrt{n}\big(\mathbf{X}_{n}^{*}-\mathbf{X}_{n}\big)\) as follows, i.e., for \(t\in\mathcal{T}\) we have \[\begin{split}&\sqrt{n}\big(\mathbf{X}_{n}^{*}(t)-\mathbf{X}_{n}(t)\big)\\ &=\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big[\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n}^{*})-\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})+\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big]\big(G_{i}(u)+1\big)dN_{i}(u)\\ &\quad-\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})dN_{i}(u)\Big)\\ &=\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big[\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})(G_{i}(u)+1)-\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big]dN_{i}(u)\\ &\quad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big[\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n}^{*})-\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big](G_{i}(u)+1)dN_{i}(u)\Big)\\ &=\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u)\\ &\quad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big[\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n}^{*})-\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big](G_{i}(u)+1)dN_{i}(u)\Big).\end{split}\] (I.16) Next, we apply a Taylor expansion around \(\hat{\boldsymbol{\beta}}_{n}\) to the second term on the right-hand side of the last equality of (I.16). Here, we recall that, for fixed \(t\in\mathcal{T}\), the \(\mathbf{k}_{n,i}(t,\cdot)\) are almost surely continuously differentiable in \(\boldsymbol{\beta}\), \(i=1,\ldots,n\). The Taylor expansion yields \[\begin{split}&\sqrt{n}\big(\mathbf{X}_{n}^{*}(t)-\mathbf{X}_{n}(t)\big)\\ &=\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u)\\ &\quad+\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big(G_{i}(u)+1\big)dN_{i}(u)\Big)(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})+o_{p}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})\Big)\\ &=\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u)+\mathbf{B}_{n}^{*}(t)(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})+o_{p}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})\Big),\end{split}\] (I.17) where \[\mathbf{B}_{n}^{*}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})(G_{i}(u)+1)dN_{i}(u),\quad t\in\mathcal{T}.\] (I.18) We thus retrieve \(\mathbf{B}_{n}^{*}\) as the wild bootstrap version of \(\mathbf{B}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)\), \(t\in\mathcal{T}\), as if we had applied Replacement I.3.1 directly to \(\mathbf{B}_{n}\). Finally, combining (I.13) and (I.17), we obtain the following representation of \(\sqrt{n}\big(\mathbf{X}_{n}^{*}-\mathbf{X}_{n}\big)\): \[\begin{split}&\sqrt{n}\big(\mathbf{X}_{n}^{*}(t)-\mathbf{X}_{n}(t)\big)\\ &=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u)\\ &\quad+\mathbf{B}_{n}^{*}(t)\mathbf{C}_{n}^{*}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{\tau}\mathbf{g}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u)+o_{p}(1),\quad t\in\mathcal{T}.\end{split}\] (I.19) Indeed, as we will see later, \(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n}=O_{p}(n^{-1/2})\), so that \(\sqrt{n}\,o_{p}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})=o_{p}(1)\).
Additionally, we point out that the components of (I.19) are the wild bootstrap counterparts of the components specified in (I.9). In particular, the first term of (I.19) is the wild bootstrap counterpart of \(\mathbf{D}_{n,k}\) and the second term of (I.19) contains the wild bootstrap counterpart of \(\mathbf{D}_{n,g}\), both of which could also have been obtained by applying Replacement I.3.1 directly to \(\mathbf{D}_{n,k}\) and \(\mathbf{D}_{n,g}\), respectively. This leads us to the definition of the wild bootstrap counterpart \(\mathbf{D}^{*}_{n,h}=(\mathbf{D}^{*\top}_{n,k},\mathbf{D}^{*\top}_{n,g})^{\top}\) of \(\mathbf{D}_{n,h}\), \[\mathbf{D}^{*}_{n,h}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u),\quad t\in\mathcal{T},\] (I.20) where, as before, \(\mathbf{h}_{n,i}=(\mathbf{k}^{\top}_{n,i},\mathbf{g}^{\top}_{n,i})^{\top}\). We assume that \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\), \(t\in\mathcal{T}\), is a known, \(\mathcal{F}_{1}(\tau)\)-measurable multi-dimensional function. We still need to specify a filtration that reflects the available information: (i) at time zero, all data are available from the resampling point of view, i.e., \(\mathcal{F}_{1}(\tau)\); (ii) during the course of time \(t\in\mathcal{T}\), the wild bootstrap multiplier processes \(G_{i}\) evolve. Hence, the following filtration is a sensible choice: \[\mathcal{F}_{2}(t)=\sigma\{G_{i}(s),N_{i}(u),Y_{i}(u),\mathbf{Z}_{i}(u),0<s\leq t,u\in\mathcal{T},i=1,\ldots,n\},\quad t\in\mathcal{T}.\] Note that \(\mathcal{F}_{2}(0)=\mathcal{F}_{1}(\tau)\) represents the available data. From now on, the underlying filtered probability space is \((\Omega,\mathcal{A},\mathbb{P},\mathcal{F}_{2})\). In the following lemma, we identify \(\mathbf{D}^{*}_{n,h}\) as a square integrable martingale with respect to the proposed filtration and state its predictable and optional covariation processes.

**Lemma I.3.2**.: \(\mathbf{D}^{*}_{n,h}\) is a square integrable martingale with respect to \(\mathcal{F}_{2}\). Moreover, its predictable and optional covariation processes are \[\langle\mathbf{D}^{*}_{n,h}\rangle(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\,dN_{i}(u),\ t\in\mathcal{T},\] and \[[\mathbf{D}^{*}_{n,h}](t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}G_{i}^{2}(u)\,dN_{i}(u),\ t\in\mathcal{T},\] respectively.

Proof.: See Appendix.

Next, we aim at deriving the asymptotic distribution of \(\mathbf{D}^{*}_{n,h}\) by making use of martingale theory. Recall that \(\mathbf{D}^{*}_{n,h}\) is the wild bootstrap counterpart of \(\mathbf{D}_{n,h}\) defined in (I.10). In particular, \(\mathbf{D}_{n,h}\) is an integral with respect to a counting process martingale. To prove the convergence in distribution of \(\mathbf{D}_{n,h}\) in Lemma I.2.2, we used Rebolledo's martingale central limit theorem as stated in Theorem II.5.1 of Andersen et al. (1993) for counting process martingales (see Appendix). Although it is tempting to apply this theorem to \(\mathbf{D}^{*}_{n,h}\) as well, this does not work for the following reason. In Theorem II.5.1 of Andersen et al. (1993) the predictable covariation process of the process which contains all the jumps of the martingales that exceed some \(\epsilon>0\) in absolute value is considered. Let us call this process the \(\epsilon\)-jump process.
As we will see in Example I.3.3, the \(\epsilon\)-jump process of the wild bootstrap counterpart \(\mathbf{D}_{n,h}^{*}\) of \(\mathbf{D}_{n,h}\) is in general not a martingale. Hence, it does not make sense to speak of its predictable covariation process. Consequently, the above-mentioned variant of Rebolledo's theorem cannot be used to analyze the asymptotic behaviour of the martingale \(\mathbf{D}_{n,h}^{*}\).

**Example I.3.3**.: Let us consider the case where \(N_{i}\leq 1\) and the square integrable martingale \(D_{n,h}^{*}\) with integrand \(h_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\equiv 1\), i.e., \(D_{n,h}^{*}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}1\cdot G_{i}dN_{i}(u)\), \(t\in\mathcal{T}\), where the \(G_{i}\) may be considered time-constant. Then, for the \(\epsilon\)-jump process \(D_{n,h}^{\epsilon,*}(t)=\int_{0}^{t}\mathbb{1}\{|\Delta D_{n,h}^{*}(u)|\geq\epsilon\}\,D_{n,h}^{*}(du)\), \(t\in\mathcal{T}\), we have \[\mathbb{E}(D_{n,h}^{\epsilon,*}(t)|\mathcal{F}_{2}(s))=\mathbb{E}\Big(\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbb{1}\Big\{\Big|\frac{1}{\sqrt{n}}\sum_{j=1}^{n}G_{j}\Delta N_{j}(u)\Big|\geq\epsilon\Big\}G_{i}dN_{i}(u)\Big|\mathcal{F}_{2}(s)\Big)\] \[=D_{n,h}^{\epsilon,*}(s)+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{s}^{t}\mathbb{E}\Big(\mathbb{1}\Big\{\Big|\frac{1}{\sqrt{n}}\sum_{j=1}^{n}G_{j}\Delta N_{j}(u)\Big|\geq\epsilon\Big\}G_{i}\Big|\mathcal{F}_{2}(s)\Big)dN_{i}(u)\] \[=D_{n,h}^{\epsilon,*}(s)+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\mathbb{E}\Big(\mathbb{1}\Big\{\Big|\frac{1}{\sqrt{n}}G_{i}\Big|\geq\epsilon\Big\}G_{i}\Big)(N_{i}(t)-N_{i}(s)),\] which is in general not equal to \(D_{n,h}^{\epsilon,*}(s)\) if the zero-mean random variables \(G_{1},\ldots,G_{n}\) follow an asymmetric distribution. Hence, \(D_{n,h}^{\epsilon,*}(t)\), \(t\in\mathcal{T}\), does not fulfill the martingale property for the multiplier processes \(G_{1},\ldots,G_{n}\) as defined above.

The non-applicability of the mentioned version of Rebolledo's theorem constitutes a gap in the literature that needs to be filled. Even though one may argue in a different way why the \(\epsilon\)-jump process is asymptotically negligible and then draw conclusions for the convergence in law of a wild bootstrap-based martingale (Bluhmki et al., 2019; Dobler et al., 2019), it is of general interest to have a broadly applicable solution that makes ad hoc workarounds superfluous. As a solution, we revisit Rebolledo's original paper, Rebolledo (1980), to examine his Lindeberg condition, which requires the squared \(\epsilon\)-jump process to converge to zero in \(\mathrm{L}_{1}\), as \(n\to\infty\). We combine this easily accessible Lindeberg condition with Rebolledo's theorem for square integrable martingales by using the Lindeberg condition as a replacement for the rather technical ARJ(2) condition of that theorem; see also Proposition 1.5 of the same reference. For the sake of completeness we now state this version of Rebolledo's theorem.

**Theorem I.3.4** (Rebolledo's martingale central limit theorem, Theorem V.1 of Rebolledo (1980)).: Let \(H_{n}\) be a locally square integrable zero-mean martingale which satisfies the Lindeberg condition, i.e., for each \(\epsilon>0\) and \(t\in\mathcal{T}\), \[\mathbb{E}(\sigma^{\epsilon}[H_{n}](t))=\mathbb{E}\Big(\sum_{s\leq t}(\Delta H_{n}(s))^{2}\mathbb{1}\{|\Delta H_{n}(s)|>\epsilon\}\Big)\to 0,\quad\text{as }n\to\infty.\] (I.21) Consider the following two relations:
1. \(\langle H_{n}\rangle(t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}V(t)\), as \(n\to\infty\), for all \(t\in\mathcal{T}\);

2. \([H_{n}](t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}V(t)\), as \(n\to\infty\), for all \(t\in\mathcal{T}\).

If 1 (respectively 2) holds, then relation 2 (respectively 1) is also valid and \[H_{n}\stackrel{{\mathcal{L}}}{{\longrightarrow}}H,\text{ in }D(\mathcal{T}),\text{ as }n\to\infty.\] Here, \(H\) denotes the one-dimensional centered continuous Gaussian martingale with covariance function \(\Sigma(s,t)=V(s\wedge t)\), \((s,t)\in\mathcal{T}^{2}\), where \(V(t)=\langle H\rangle(t)\) is a continuous increasing real function with \(V(0)=0\).

We remark that Rebolledo considers one-dimensional martingales in the aforementioned paper. In contrast, we consider multi-dimensional martingales. To bridge this gap, we will make use of the Cramér-Wold theorem. The following lemma takes care of the convergence of the predictable covariation process of \(\mathbf{D}^{*}_{n,h}\), as required in Condition 1 of Theorem I.3.4.

**Lemma I.3.5**.: If Assumption I.2.1 holds, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\langle\mathbf{D}^{*}_{n,h}\rangle(t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbf{V}_{\tilde{h}}(t),\text{ as }n\to\infty,\text{ for all }t\in\mathcal{T},\] with \(\mathbf{V}_{\tilde{h}}\) as defined in Lemma I.2.2.

Proof.: See Appendix.

Based on the discussed theory, we study the convergence in law of the process \(\mathbf{D}^{*}_{n,h}\) in the proof of the upcoming Lemma I.3.6. From Lemmas I.2.2 and I.3.5 it follows that the predictable covariation process \(\langle\mathbf{D}^{*}_{n,h}\rangle\) of \(\mathbf{D}^{*}_{n,h}\) converges to the same matrix-valued function \(\mathbf{V}_{\tilde{h}}\) as the predictable covariation process \(\langle\mathbf{D}_{n,h}\rangle\) of \(\mathbf{D}_{n,h}\). This gives rise to the conjecture that the two processes converge in distribution to the same Gaussian martingale. In fact, we show that the conditional distribution of \(\mathbf{D}^{*}_{n,h}\) asymptotically coincides with the distribution of \(\mathbf{D}_{n,h}\).

**Lemma I.3.6**.: If Assumption I.2.1 holds, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\mathbf{D}_{n,h}^{*}\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}},\quad\text{in }(D(\mathcal{T}))^{p+b},\text{ as }n\to\infty\] in probability, with \(\mathbf{D}_{\tilde{h}}=(\mathbf{D}_{\tilde{k}},\mathbf{D}_{\tilde{g}})\) as given in Lemma I.2.2.

Proof.: See Appendix.

In the proof of Lemma I.3.6 in the appendix one can see that under Assumption I.2.1 the stochastic process \(\mathbf{D}_{n,h}^{*}\) fulfills the Lindeberg condition. Thus, Corollary I.3.7 below is a direct consequence of Theorem I.3.4 and Lemma I.3.5. However, instead of employing Theorem I.3.4 we provide an alternative proof of Corollary I.3.7 in the appendix based on Lenglart's inequality.

**Corollary I.3.7**.: If Assumption I.2.1 holds, then, conditionally on \(\mathcal{F}_{2}(0)\), \[[\mathbf{D}_{n,h}^{*}](t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbf{V}_{\tilde{h}}(t),\text{ as }n\to\infty,\text{ for all }t\in\mathcal{T},\] with \(\mathbf{V}_{\tilde{h}}\) as defined in Lemma I.2.2.

Proof.: See Appendix.
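As a quick numerical plausibility check of Lemma I.3.5 and Corollary I.3.7 (ours, not part of the formal development), consider the toy setting of Example I.3.3 with \(N_{i}\leq 1\) and \(h_{n,i}\equiv 1\), in which both covariation processes of Lemma I.3.2 reduce to simple averages. A simulation sketch then shows \(\langle\mathbf{D}^{*}_{n,h}\rangle(t)\) and \([\mathbf{D}^{*}_{n,h}](t)\) concentrating around the same deterministic limit; all distributional choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def covariations(n, t=1.0):
    """Toy setting of Example I.3.3 (N_i <= 1, h = 1): predictable covariation
    <D*>(t) = (1/n) sum_i 1{N_i jumps by t} and optional covariation
    [D*](t) = (1/n) sum_i G_i^2 1{N_i jumps by t}, cf. Lemma I.3.2."""
    T = rng.exponential(1.0, n)          # survival times, hazard 1 (assumption)
    C = rng.exponential(2.0, n)          # censoring times, rate 0.5 (assumption)
    jumped = (T <= C) & (T <= t)         # N_i has jumped on [0, t]
    G = rng.standard_normal(n)           # multiplier value at the jump
    return jumped.mean(), (G**2 * jumped).mean()

for n in (100, 10_000, 1_000_000):
    pred, opt = covariations(n)
    print(n, round(pred, 4), round(opt, 4))
# both columns approach V(1) = E N_1(1) = (1 - exp(-1.5)) / 1.5 ~ 0.518,
# illustrating Lemma I.3.5 and Corollary I.3.7 in this special case
```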
After having assessed the joint convergence in distribution of \(\mathbf{D}_{n,h}^{*}=(\mathbf{D}_{n,k}^{*},\mathbf{D}_{n,g}^{*})\) by means of Lemma I.3.6, we focus again on the representation of \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})=\mathbf{D}_{n,k}^{*}+\mathbf{B}_{n}^{*}\mathbf{C}_{n}^{*}\mathbf{D}_{n,g}^{*}(\tau)+o_{p}(1)\) given in (I.19) together with (I.20). We first address the convergence of the components \(\mathbf{B}_{n}^{*}\) and \(\mathbf{C}_{n}^{*}\) before we eventually consider the representation as a whole.

**Lemma I.3.8**.: If Assumption I.2.1 (iii) and Assumption I.2.3 hold, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\sup_{t\in\mathcal{T}}\lVert\mathbf{B}_{n}^{*}(t)-\mathbf{B}(t)\rVert\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\text{ as }n\to\infty,\] with \(\mathbf{B}\) as in Lemma I.2.4.

Proof.: See Appendix.

**Assumption I.3.9**.: Under Assumption I.2.5 we further assume that the \((q\times b)\)-dimensional random matrices \(\mathbf{C}_{n}\) and \(\mathbf{C}_{n}^{*}\) are asymptotically equivalent, i.e., \[\lVert\mathbf{C}_{n}^{*}-\mathbf{C}_{n}\rVert\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\quad n\to\infty.\]

Finally, we are ready to derive the asymptotic distribution of \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\).

**Theorem I.3.10**.: If the representation (I.19) is fulfilled, and Assumptions I.2.1, I.2.3, I.2.5, and I.3.9 hold, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\sqrt{n}\big(\mathbf{X}_{n}^{*}-\mathbf{X}_{n}\big)=\mathbf{D}_{n,k}^{*}+\mathbf{B}_{n}^{*}\mathbf{C}_{n}^{*}\mathbf{D}_{n,g}^{*}(\tau)+o_{p}(1)\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\mathbf{B}\mathbf{C}\mathbf{D}_{\tilde{g}}(\tau),\ \text{in }(D(\mathcal{T}))^{p},\] in probability, as \(n\to\infty\), with \(\mathbf{D}_{\tilde{k}}\), \(\mathbf{D}_{\tilde{g}}\), and \(\mathbf{B}\) as stated in Lemma I.2.2 and Lemma I.2.4, respectively. If additionally (I.11) is satisfied, we have \[d[\mathcal{L}(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})|\mathcal{F}_{2}(0)),\mathcal{L}(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X}))]\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\ \text{as }n\to\infty.\]

Proof.: See Appendix.

In conclusion, with Theorem I.3.10 we verify the asymptotic validity of the wild bootstrap as an appropriate approximation procedure for counting process-based statistics of the form given in (I.1).

**Remark I.3.11**.: We continue Remark I.2.7 in order to illustrate how to choose the wild bootstrap counterpart \(\mathbf{C}_{n}^{*}\) of \(\mathbf{C}_{n}\) in parametric survival models such that Assumption I.3.9 holds. In this way, we underline the wild bootstrap as an alternative to the parametric bootstrap. As stated in Remark I.2.7, \(\mathbf{C}_{n}\) is asymptotically related to the optional covariation process \(\frac{1}{n}[\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)]\) of \(\frac{1}{\sqrt{n}}\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)\). Hence, we propose to choose \(\mathbf{C}_{n}^{*}\) similarly, based on the optional covariation process \(\frac{1}{n}[\mathbf{U}_{n}^{*}(\hat{\boldsymbol{\beta}}_{n},\cdot)]\) of the wild bootstrap version \(\frac{1}{\sqrt{n}}\mathbf{U}_{n}^{*}(\hat{\boldsymbol{\beta}}_{n},\cdot)\) of the martingale \(\frac{1}{\sqrt{n}}\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)\).
Application of Replacement I.3.1 to \(\frac{1}{\sqrt{n}}\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)\) yields \[\mathbf{D}_{n,g}^{*}(\tau)=\frac{1}{\sqrt{n}}\mathbf{U}_{n}^{*}(\hat{\boldsymbol{\beta}}_{n},\tau)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{\tau}\frac{\nabla\alpha_{i}(u,\hat{\boldsymbol{\beta}}_{n})}{\alpha_{i}(u,\hat{\boldsymbol{\beta}}_{n})}G_{i}(u)dN_{i}(u).\] According to Lemma I.3.2 we obtain the following structure: \[\mathbf{C}_{n}^{*}=\Big(\frac{1}{n}[\mathbf{U}_{n}^{*}(\hat{\boldsymbol{\beta}}_{n},\cdot)](\tau)\Big)^{-1}=\Big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\frac{(\nabla\alpha_{i}(u,\hat{\boldsymbol{\beta}}_{n}))^{\otimes 2}}{\alpha_{i}(u,\hat{\boldsymbol{\beta}}_{n})^{2}}G_{i}^{2}(u)dN_{i}(u)\Big)^{-1}.\] This is a natural choice for \(\mathbf{C}_{n}^{*}\) in the present context, because under regularity conditions the (conditional) distributions of \(\mathbf{D}_{n,g}^{*}\) and \(\mathbf{D}_{n,g}=\frac{1}{\sqrt{n}}\mathbf{U}_{n}(\boldsymbol{\beta}_{0},\cdot)\) are asymptotically equivalent, and the same holds for their optional covariation processes, cf. Lemma I.2.2 and Lemma I.3.6 in combination with Theorem I.3.4.

## Examples

We will now present a series of examples, which is by no means exhaustive, of specific cases of the general set-up described in Sections I.2 and I.3. In particular, it is briefly outlined how the theory developed in this Part I can be applied to these models. In Part II we apply the present approach to the Fine-Gray model under censoring-complete data and work out the details of the wild bootstrap for this specific model.

**Example I.4.1**.: (Nelson-Aalen estimator) Let \(X(t)=A(t)=\int_{0}^{t}\alpha(u)du\), \(t\in\mathcal{T}\), be the cumulative hazard function of a continuous survival time \(T\), i.e., \(\alpha(u)du=\mathbb{P}(T\in[u,u+du]|T\geq u)\). Let \(N_{1}(t),\ldots,N_{n}(t)\), \(t\in\mathcal{T}\), be the counting processes that are related to \(n\) independent copies of \(T\) which possibly involve right-censoring. For \(\hat{X}_{n}(t)\), \(t\in\mathcal{T}\), we take the Nelson-Aalen estimator \(\hat{A}_{n}(t)=\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{Y(u)}dN_{i}(u)\), \(t\in\mathcal{T}\), Aalen (1978), where \(Y_{i}(t)\) is the at-risk indicator for individual \(i\) at time \(t\), \(Y(t)=\sum_{i=1}^{n}Y_{i}(t)\), and \(J(t)=\mathbb{1}\{Y(t)>0\}\). Thus, the counting process-based estimator \(\hat{A}_{n}\) exhibits the general structure stated in (I.1) with \(k_{n}(t)=\frac{nJ(t)}{Y(t)}\), \(t\in\mathcal{T}\). Furthermore, we have for \(t\in\mathcal{T}\), \[\sqrt{n}(\hat{A}_{n}(t)-A(t))=\sqrt{n}\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{Y(u)}(dN_{i}(u)-d\Lambda_{i}(u))+\sqrt{n}\int_{0}^{t}(J(u)-1)dA(u),\] (I.22) where \(d\Lambda_{i}=Y_{i}dA\). As the integrand \(k_{n}=\frac{nJ}{Y}\) is bounded (by \(n\), since \(Y\geq 1\) whenever \(J=1\)) and predictable due to the predictability of \(Y\), the first term on the right-hand side of (I.22) is a local square integrable martingale. This martingale corresponds to \(D_{n,k}\), cf. (I.10). The second term on the right-hand side of (I.22) is asymptotically negligible as \(n\to\infty\), because \(J(t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}1\) as \(n\to\infty\), \(t\in\mathcal{T}\). Hence, (I.5) is satisfied. Furthermore, we make the natural assumption that there exists a deterministic function \(y\), which is bounded away from zero on \(\mathcal{T}\) and such that \[\sup_{t\in\mathcal{T}}\big|\frac{Y(t)}{n}-y(t)\big|=o_{p}(1).\] (I.23) This weak assumption implies Assumption I.2.1.
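To make Example I.4.1 concrete, the following minimal sketch (our illustration, on simulated data with hypothetical names) computes the Nelson-Aalen estimate together with wild bootstrap paths \(\sqrt{n}(\hat{A}_{n}^{*}-\hat{A}_{n})\) of the form derived just below; standard normal multipliers satisfy the moment conditions of Section I.3, and time-constant multipliers suffice since each \(N_{i}\) jumps at most once here.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200
T = rng.exponential(1.0, n)          # survival times, hazard alpha = 1 (assumption)
C = rng.exponential(2.0, n)          # right-censoring times (assumption)
X = np.minimum(T, C)                 # observed times
delta = (T <= C).astype(float)       # event indicators

order = np.argsort(X)
X, delta = X[order], delta[order]
Y = n - np.arange(n)                 # Y(X_(i)): number at risk just before X_(i)
                                     # (valid since continuous times have no ties a.s.)
A_hat = np.cumsum(delta / Y)         # Nelson-Aalen estimate on the ordered time grid

# wild bootstrap: sqrt(n)(A*_n - A_hat) = sqrt(n) sum_i int (J/Y) G_i dN_i,
# with one multiplier per individual
n_boot = 1000
G = rng.standard_normal((n_boot, n))
paths = np.sqrt(n) * np.cumsum(G * delta / Y, axis=1)

se = paths.std(axis=0)               # pointwise bootstrap standard error over time
print(round(A_hat[-1], 3), round(se[-1], 3))
```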
Moreover, we deal with a nonparametric model and as such we have \(\mathrm{D}k_{n}(t)\equiv 0\) for \(t\in\mathcal{T}\). This implies that Assumption I.2.3 is trivially satisfied and that \(\mathbf{B}_{n}\equiv 0\). Additionally, due to the nonparametric model, the assumption on the asymptotic representation of the parameter estimator stated in (I.8) is superfluous and we set \(\mathbf{C}_{n}=0\) and \(\mathbf{D}_{n,g}(\tau)=0\). Therefore, also Assumptions I.2.5 and I.3.9 are redundant. In conclusion, we point out that for the normalized Nelson-Aalen process \(\sqrt{n}(\hat{A}_{n}-A)\) stated in (I.22) the asymptotic representation (I.11) holds with \(\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)\equiv 0\), i.e., \(\sqrt{n}(\hat{A}_{n}-A)=D_{n,k}+o_{p}(1)\). According to Replacement I.3.1, the wild bootstrap version of the normalized Nelson-Aalen process is \[\begin{split}\sqrt{n}(\hat{A}_{n}^{*}(t)-\hat{A}_{n}(t))&=\sqrt{n}\big(\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{Y(u)}(G_{i}+1)dN_{i}(u)-\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{Y(u)}dN_{i}(u)\big)\\ &=\sqrt{n}\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{Y(u)}G_{i}dN_{i}(u),\quad t\in\mathcal{T},\end{split}\] where the right-hand side of the second equality above is \(D_{n,k}^{*}\), cf. (I.20). Thus, also (I.19) holds with \(\mathbf{B}_{n}^{*}\mathbf{C}_{n}^{*}\mathbf{D}_{n,g}^{*}(\tau)\equiv 0\) and \(o_{p}(1)\) set to zero, i.e., \(\sqrt{n}(\hat{A}_{n}^{*}-\hat{A}_{n})=D_{n,k}^{*}\). Note that the multipliers \(G_{i}\) can be chosen time-independent, \(i=1,\ldots,n\). Finally, Theorem I.3.10 can be used to justify the wild bootstrap as a suitable resampling method for the Nelson-Aalen estimator. In particular, the (conditional) distributions of \(\sqrt{n}(\hat{A}_{n}(t)-A(t))\) and \(\sqrt{n}(\hat{A}_{n}^{*}(t)-\hat{A}_{n}(t))\) are asymptotically equivalent. Furthermore, similar structures hold for more general multivariate Nelson-Aalen estimators in not necessarily survival set-ups, except that the multiplier processes might be time-dependent (Bluhmki et al., 2019).

**Example I.4.2**.: (Weighted logrank test) The two-sample weighted logrank statistic is \[\begin{split}T_{n_{1},n_{2}}(w)&=\sqrt{\frac{n_{1}+n_{2}}{n_{1}n_{2}}}\int_{0}^{\infty}w(\hat{S}_{n}(t-))\frac{Y^{(1)}(t)Y^{(2)}(t)}{Y(t)}(d\hat{A}_{n}^{(1)}(t)-d\hat{A}_{n}^{(2)}(t))\\ &=\frac{1}{\sqrt{n_{1}}}\sum_{i=1}^{n_{1}}\int_{0}^{\infty}\sqrt{\frac{n_{1}+n_{2}}{n_{2}}}w(\hat{S}_{n}(t-))\frac{Y^{(2)}(t)}{Y(t)}dN_{i}^{(1)}(t)\\ &\quad-\frac{1}{\sqrt{n_{2}}}\sum_{i=1}^{n_{2}}\int_{0}^{\infty}\sqrt{\frac{n_{1}+n_{2}}{n_{1}}}w(\hat{S}_{n}(t-))\frac{Y^{(1)}(t)}{Y(t)}dN_{i}^{(2)}(t),\end{split}\] (I.24) where \(\hat{A}_{n}^{(j)}\) are the Nelson-Aalen estimators, \(N_{i}^{(j)}\), \(i=1,\ldots,n_{j}\), the counting processes, and \(Y^{(j)}\) the at-risk counters in samples \(j=1,2\), \(n_{1},n_{2}\) are the sample sizes, \(Y=Y^{(1)}+Y^{(2)}\), \(w\) is a positive weight function, and \(\hat{S}_{n}\) is the Kaplan-Meier estimator (Kaplan and Meier, 1958) in the pooled sample; cf., e.g., Ditzhaus and Friedrich (2020), who conducted weighted logrank tests as permutation tests, and Ditzhaus and Pauly (2019), who used the wild bootstrap.
Hence, \(T_{n_{1},n_{2}}(w)\) is the sum of two counting process-based statistics, say, \(X_{n_{1},n_{2}}^{(1)}(\infty)\) and \(X_{n_{1},n_{2}}^{(2)}(\infty)\), of a form similar to the one given in (I.1) evaluated at the upper integration bound \(\infty\), where the integrand of the statistic \(X_{n_{1},n_{2}}^{(1)}(\infty)\) equals \(k_{n_{1},n_{2}}^{(1)}(t)=\sqrt{\frac{n_{1}+n_{2}}{n_{2}}}w(\hat{S}_{n}(t-))\frac{Y^{(2)}(t)}{Y(t)}\) and the integrand of the statistic \(X_{n_{1},n_{2}}^{(2)}(\infty)\) equals \(k_{n_{1},n_{2}}^{(2)}(t)=-\sqrt{\frac{n_{1}+n_{2}}{n_{1}}}w(\hat{S}_{n}(t-))\frac{Y^{(1)}(t)}{Y(t)}\), \(t\geq 0\). Under the null hypothesis of equal hazards or, equivalently, equal survival functions, \(H_{0}:A^{(1)}=A^{(2)}\), we have \[\begin{split}&Y^{(2)}\sum_{i=1}^{n_{1}}dN_{i}^{(1)}-Y^{(1)}\sum_{i=1}^{n_{2}}dN_{i}^{(2)}\\ &=Y^{(2)}\big(\sum_{i=1}^{n_{1}}dM_{i}^{(1)}+Y^{(1)}dA^{(1)}\big)-Y^{(1)}\big(\sum_{i=1}^{n_{2}}dM_{i}^{(2)}+Y^{(2)}dA^{(2)}\big)\\ &\stackrel{{ H_{0}}}{{=}}Y^{(2)}\sum_{i=1}^{n_{1}}dM_{i}^{(1)}-Y^{(1)}\sum_{i=1}^{n_{2}}dM_{i}^{(2)},\end{split}\] (I.25) where we have applied the Doob-Meyer decomposition (cf. (I.3)) in the first step of (I.25), and \(M_{i}^{(j)}\), \(i=1,\ldots,n_{j}\), are the sample-\(j\)-specific counting process martingales. Due to (I.25), the test statistic \(T_{n_{1},n_{2}}(w)\) has the following form under the null hypothesis: \[\begin{split}T_{n_{1},n_{2}}(w)&\stackrel{{ H_{0}}}{{=}}\frac{1}{\sqrt{n_{1}}}\sum_{i=1}^{n_{1}}\int_{0}^{\infty}\sqrt{\frac{n_{1}+n_{2}}{n_{2}}}w(\hat{S}_{n}(t-))\frac{Y^{(2)}(t)}{Y(t)}dM_{i}^{(1)}(t)\\ &\quad-\frac{1}{\sqrt{n_{2}}}\sum_{i=1}^{n_{2}}\int_{0}^{\infty}\sqrt{\frac{n_{1}+n_{2}}{n_{1}}}w(\hat{S}_{n}(t-))\frac{Y^{(1)}(t)}{Y(t)}dM_{i}^{(2)}(t).\end{split}\] (I.26) Under regularity conditions on the weight function and the sample sizes (\(\frac{n_{j}}{n_{1}+n_{2}}\to\nu_{j}\) as \(\min(n_{1},n_{2})\to\infty\), with \(\nu_{j}\in(0,1)\), \(j=1,2\)), the stochastic processes \(k_{n_{1},n_{2}}^{(j)}\), \(j=1,2\), are uniformly bounded on any interval \(\mathcal{T}=[0,\tau]\). Clearly, they are also predictable. Thus, under \(H_{0}\), the test statistic can be written as the sum of two local square integrable martingales of a form similar to the one given in (I.10) evaluated at the upper integration bound \(\infty\), i.e., \(T_{n_{1},n_{2}}(w)\stackrel{{ H_{0}}}{{=}}D_{n_{1},n_{2},k^{(1)}}(\infty)+D_{n_{1},n_{2},k^{(2)}}(\infty)\), where the local square integrable martingale \(D_{n_{1},n_{2},k^{(1)}}(t)\), \(t\geq 0\), relates to the first term on the right-hand side of (I.26) and the local square integrable martingale \(D_{n_{1},n_{2},k^{(2)}}(t)\), \(t\geq 0\), relates to the second term on the right-hand side of (I.26). In order to obtain a similar structure for \(T_{n_{1},n_{2}}(w)\) as given in (I.11), we consider the two-dimensional vectors \(\mathbf{M}_{n_{1},n_{2}}=(\frac{1}{\sqrt{n_{1}}}\sum_{i=1}^{n_{1}}M_{i}^{(1)},\frac{1}{\sqrt{n_{2}}}\sum_{i=1}^{n_{2}}M_{i}^{(2)})^{\top}\) and \(\mathbf{k}_{n_{1},n_{2}}=(k_{n_{1},n_{2}}^{(1)},k_{n_{1},n_{2}}^{(2)})^{\top}\), \(t\geq 0\). With this notation we get \[T_{n_{1},n_{2}}(w)\stackrel{{ H_{0}}}{{=}}\int_{0}^{\infty}\mathbf{k}_{n_{1},n_{2}}(t)^{\top}d\mathbf{M}_{n_{1},n_{2}}(t),\] (I.27) where the right-hand side of (I.27) is the multidimensional martingale counterpart of the first term on the right-hand side of (I.11).
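As a small illustration of (I.24) and its event-sum form, the following sketch (ours; the constant weight \(w\equiv 1\) is used for simplicity, so that no pooled Kaplan-Meier evaluation is needed) computes the statistic from two simulated samples. For this weight, the wild bootstrap version (I.28) introduced below would simply replace \(dN_{i}^{(j)}\) by \(G_{i}^{(j)}dN_{i}^{(j)}\) in the event sums.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample(n):
    """Right-censored exponential sample (assumed setup): observed times, event flags."""
    T, C = rng.exponential(1.0, n), rng.exponential(2.0, n)
    return np.minimum(T, C), (T <= C)

def at_risk(times, t):
    """Y^(j)(t): number of subjects in one sample still at risk just before t."""
    return np.sum(times >= t)

def logrank_stat(X1, d1, X2, d2):
    """T_{n1,n2}(w) of (I.24) with constant weight w = 1; a general weight
    would evaluate w at the pooled Kaplan-Meier left limit S_hat(t-)."""
    n1, n2 = len(X1), len(X2)
    stat = 0.0
    for t in X1[d1]:                              # event times in sample 1
        Y1, Y2 = at_risk(X1, t), at_risk(X2, t)
        stat += np.sqrt((n1 + n2) / n2) * Y2 / (Y1 + Y2) / np.sqrt(n1)
    for t in X2[d2]:                              # event times in sample 2
        Y1, Y2 = at_risk(X1, t), at_risk(X2, t)
        stat -= np.sqrt((n1 + n2) / n1) * Y1 / (Y1 + Y2) / np.sqrt(n2)
    return stat

X1, d1 = sample(100)
X2, d2 = sample(120)
print(logrank_stat(X1, d1, X2, d2))   # approximately centered normal under H0
```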
With (I.27) we thus obtained a similar structure for \(T_{n_{1},n_{2}}(w)\) as in (I.11), with the second term on the right-hand side of (I.11) set to zero due to the nonparametric setting. The wild bootstrap version \(T^{*}_{n_{1},n_{2}}(w)\) of \(T_{n_{1},n_{2}}(w)\) under \(H_{0}\) is obtained by applying Replacement I.3.1 to (I.27): \[T^{*}_{n_{1},n_{2}}(w)\stackrel{{ H_{0}}}{{=}}\int_{0}^{\infty}\mathbf{k}^{*}_{n_{1},n_{2}}(t)^{\top}d\mathbf{M}^{*}_{n_{1},n_{2}}(t),\] (I.28) where \(\mathbf{M}^{*}_{n_{1},n_{2}}=(\frac{1}{\sqrt{n_{1}}}\sum_{i=1}^{n_{1}}G^{(1)}_{i}N^{(1)}_{i},\frac{1}{\sqrt{n_{2}}}\sum_{i=1}^{n_{2}}G^{(2)}_{i}N^{(2)}_{i})^{\top}\) is the wild bootstrap counterpart of \(\mathbf{M}_{n_{1},n_{2}}\), and \(\mathbf{k}^{*}_{n_{1},n_{2}}=(k^{*(1)}_{n_{1},n_{2}},k^{*(2)}_{n_{1},n_{2}})^{\top}\) with \[k^{*(j)}_{n_{1},n_{2}}(t)=(-1)^{j+1}\sqrt{\frac{n_{1}+n_{2}}{n_{3-j}}}w(\hat{S}^{*}_{n}(t-))\frac{Y^{(3-j)}(t)}{Y(t)},\quad t\geq 0,\ j=1,2,\] is the wild bootstrap counterpart of \(\mathbf{k}_{n_{1},n_{2}}\). Here, the multiplier processes \(G^{(1)}_{1},\ldots,G^{(1)}_{n_{1}},G^{(2)}_{1},\ldots,G^{(2)}_{n_{2}}\) are pairwise independent and identically distributed. Note that this definition of \(T^{*}_{n_{1},n_{2}}(w)\) deviates slightly from the corresponding definition given in Ditzhaus and Pauly (2019), as it contains the wild bootstrap counterpart \(\hat{S}^{*}_{n}\) of the pooled Kaplan-Meier estimator \(\hat{S}_{n}\). In Part II we will give an idea of how such a resampling version may be constructed based on a functional relationship between the estimator of interest and Nelson-Aalen estimators; we will exemplify this by means of cumulative incidence functions in semiparametric models. With (I.28) we thus obtained a similar structure for \(T^{*}_{n_{1},n_{2}}(w)\) as stated in (I.19), with \(\mathbf{B}^{*}_{n}\mathbf{C}^{*}_{n}\mathbf{D}^{*}_{n,g}(\tau)\equiv 0\) due to the nonparametric setting and \(o_{p}(1)\) set to zero. It is left to show that a result as stated in Theorem I.3.10 holds for \(T_{n_{1},n_{2}}(w)\) and \(T^{*}_{n_{1},n_{2}}(w)\) under the null hypothesis. For this, one may first argue with respect to any finite upper bound of integration \(\tau\). With one additional argument, the remaining integral from \(\tau\) to \(\infty\) can be shown to be asymptotically negligible for \(n\to\infty\) followed by \(\tau\to\infty\); use for instance Theorem 3.2 in Billingsley (1999). In this way, one obtains a justification of the wild bootstrap for the weighted logrank test within a multidimensional martingale framework, which can be seen as an extension of the setting presented in this Part I.

**Example I.4.3**.: (Cox model) Given the \(d\)-variate predictable covariate vectors \(\mathbf{Z}_{i}(t)\), \(t\in\mathcal{T}\), the intensity process of the counting process \(N_{i}\) is \(\mathbb{E}(dN_{i}(t)|\mathbf{Z}_{i}(t))=\lambda_{i}(t,\mathbf{Z}_{i}(t),\boldsymbol{\beta}_{0})dt=Y_{i}(t)\exp(\mathbf{Z}_{i}^{\top}(t)\boldsymbol{\beta}_{0})\alpha_{0}(t)dt\), \(t\in\mathcal{T}\), \(i=1,\ldots,n\). Here, \(\alpha_{0}\) is the so-called baseline hazard rate for an individual with the zero covariate vector. In this case the processes \(M_{i}(t)=N_{i}(t)-\Lambda_{i}(t,\mathbf{Z}_{i}(t),\boldsymbol{\beta}_{0})\), \(t\in\mathcal{T}\), are martingales, where \(\Lambda_{i}(t,\mathbf{Z}_{i}(t),\boldsymbol{\beta})=\int_{0}^{t}\lambda_{i}(u,\mathbf{Z}_{i}(u),\boldsymbol{\beta})du\).
The Breslow estimator for the cumulative baseline hazard function \(X(t)=A_{0}(t)=\int_{0}^{t}\alpha_{0}(u)du\), \(t\in\mathcal{T}\), is given by \[\hat{X}_{n}(t)=\hat{A}_{0,n}(t,\hat{\boldsymbol{\beta}}_{n})=\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}dN_{i}(u),\quad t\in\mathcal{T},\] where \(\hat{\boldsymbol{\beta}}_{n}\) is the solution to the score equation \[\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\mathbf{Z}_{i}(u)-\frac{\mathbf{S}_{n}^{(1)}(u,\boldsymbol{\beta})}{S_{n}^{(0)}(u,\boldsymbol{\beta})}\Big)dN_{i}(u)=0,\] \(\tau>0\) is the terminal evaluation time, and \(S_{n}^{(0)}(t,\boldsymbol{\beta})=\sum_{i=1}^{n}Y_{i}(t)\exp(\mathbf{Z}_{i}^{\top}(t)\boldsymbol{\beta})\), \(\mathbf{S}_{n}^{(1)}(t,\boldsymbol{\beta})=\sum_{i=1}^{n}Y_{i}(t)\mathbf{Z}_{i}(t)\exp(\mathbf{Z}_{i}^{\top}(t)\boldsymbol{\beta})\), \(\mathbf{S}_{n}^{(2)}(t,\boldsymbol{\beta})=\sum_{i=1}^{n}Y_{i}(t)\mathbf{Z}_{i}(t)^{\otimes 2}\exp(\mathbf{Z}_{i}^{\top}(t)\boldsymbol{\beta})\), \(t\in\mathcal{T}\). In particular, \(\hat{A}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\) follows the general counting process-based structure stated in (I.1) with \(k_{n}(t,\boldsymbol{\beta}_{0})=\frac{nJ(t)}{S_{n}^{(0)}(t,\boldsymbol{\beta}_{0})}\), \(t\in\mathcal{T}\). For the Breslow estimator it is well known that for \(t\in\mathcal{T}\) \[\begin{split}\sqrt{n}(\hat{A}_{0,n}(t,\hat{\boldsymbol{\beta}}_{n})-A_{0}(t))&=\sqrt{n}\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{S_{n}^{(0)}(u,\boldsymbol{\beta}_{0})}dM_{i}(u)\\ &\quad-\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)\mathbf{S}_{n}^{(1)}(u,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(u,\boldsymbol{\beta}_{0})^{2}}dN_{i}(u)\\ &\qquad\cdot\mathbf{C}_{n}\frac{1}{\sqrt{n}}\Big(\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\mathbf{Z}_{i}(u)-\frac{\mathbf{S}_{n}^{(1)}(u,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(u,\boldsymbol{\beta}_{0})}\Big)dM_{i}(u)\Big)+o_{p}(1),\end{split}\] (I.29) where \(\mathbf{C}_{n}\) is a certain (random) \(d\times d\) matrix. Note that in (I.29) it has been used that (I.5) and (I.8) are satisfied, i.e., \[\sqrt{n}\big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}k_{n}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-A_{0}(t)\big)=\sqrt{n}\int_{0}^{t}(J(u)-1)dA_{0}(u)=o_{p}(1),\quad t\in\mathcal{T},\] and \[\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})=\mathbf{C}_{n}\frac{1}{\sqrt{n}}\Big(\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\mathbf{Z}_{i}(u)-\frac{\mathbf{S}_{n}^{(1)}(u,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(u,\boldsymbol{\beta}_{0})}\Big)dM_{i}(u)\Big)+o_{p}(1).\] Additionally, we have \(\mathrm{D}k_{n}(t,\boldsymbol{\beta}_{0})=-\frac{nJ(t)\mathbf{S}_{n}^{(1)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\boldsymbol{\beta}_{0})^{2}}\) and \(\mathbf{g}_{n,i}(t,\boldsymbol{\beta}_{0})=\mathbf{Z}_{i}(t)-\frac{\mathbf{S}_{n}^{(1)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\boldsymbol{\beta}_{0})}\), \(t\in\mathcal{T}\). As a result of the boundedness of the covariates and the boundedness of \(n^{-1}S_{n}^{(0)}\) away from zero on \(\mathcal{T}\), \(k_{n}\), \(\mathrm{D}k_{n}\), and \(\mathbf{g}_{n,i}\) as functions in \(t\) are bounded on \(\mathcal{T}\). Additionally, they are predictable due to the predictability of the covariates. Thus, the first term and the martingale integral in the second term of the form (I.10) on the right-hand side of (I.29) are local square integrable martingales. In conclusion, with (I.29) we retrieve the asymptotic representation (I.11), i.e., \(\sqrt{n}(\hat{A}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{0})=D_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)+o_{p}(1)\).
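To illustrate this example end to end, here is a minimal sketch (ours; a single covariate, simulated data, and hypothetical names) of the Cox fit, the Breslow estimator, and wild bootstrap paths following the representation (I.30) stated below, with \(\mathbf{C}^{*}_{n}\) as in Remark I.3.11. Since each subject experiences at most one event, time-constant multipliers suffice.

```python
import numpy as np

rng = np.random.default_rng(5)

# simulated Cox data with one covariate (d = 1); true beta_0 = 0.7 (assumption)
n, beta0 = 300, 0.7
Z = rng.binomial(1, 0.5, n).astype(float)
T = rng.exponential(1.0 / np.exp(Z * beta0))      # baseline hazard alpha_0 = 1
C = rng.exponential(2.0, n)                       # independent censoring
X, d = np.minimum(T, C), (T <= C).astype(float)

order = np.argsort(X)
X, d, Z = X[order], d[order], Z[order]

def S(beta):
    """S^(0), S^(1), S^(2) at each ordered time X_(i), with at-risk set
    {j : X_j >= X_(i)} (no ties a.s.): reversed cumulative sums."""
    w = np.exp(Z * beta)
    s0 = np.cumsum(w[::-1])[::-1]
    s1 = np.cumsum((w * Z)[::-1])[::-1]
    s2 = np.cumsum((w * Z * Z)[::-1])[::-1]
    return s0, s1, s2

# Newton iteration for the Cox score equation defining beta-hat
beta = 0.0
for _ in range(25):
    s0, s1, s2 = S(beta)
    score = np.sum(d * (Z - s1 / s0))
    info = np.sum(d * (s2 / s0 - (s1 / s0) ** 2))
    beta += score / info
s0, s1, s2 = S(beta)

A0 = np.cumsum(d / s0)            # Breslow estimator on the ordered time grid

# wild bootstrap paths following (I.30), everything evaluated at beta-hat
n_boot = 1000
G = rng.standard_normal((n_boot, n))              # one multiplier per subject
Dk_star = np.sqrt(n) * np.cumsum(G * d / s0, axis=1)
B_star = -np.cumsum((G + 1) * d * s1 / s0**2, axis=1)
Dg_star = np.sum(G * d * (Z - s1 / s0), axis=1) / np.sqrt(n)
C_star = n / np.sum(G**2 * d * (s2 / s0 - (s1 / s0) ** 2), axis=1)
paths = Dk_star + B_star * (C_star * Dg_star)[:, None]   # sqrt(n)(A0* - A0)

print(round(beta, 3), round(A0[-1], 3), round(paths.std(axis=0)[-1], 3))
```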
The uniform limits in probability of \(k_{n}\) and \(\mathbf{g}_{n,i}\) are \(\tilde{k}=\frac{1}{s^{(0)}}\) and \(\tilde{\mathbf{g}}_{i}=\mathbf{Z}_{i}-\frac{s^{(1)}}{s^{(0)}}\), respectively, where \(s^{(j)}\) are the uniform deterministic limits in probability of \(n^{-1}S^{(j)}_{n}\), \(j=0,1\). Under the typically made assumptions (Condition VII.2.1 of Andersen et al. 1993) and under the assumption that the covariate vectors \(\mathbf{Z}_{i}\), \(i=1,\ldots,n\), are pairwise independent and identically distributed, Assumption I.2.1 is fulfilled. Similarly, the uniform limit in probability of \(\mathrm{D}k_{n}\) is \(\tilde{K}=-\frac{s^{(1)}}{(s^{(0)})^{2}}\). Again, under Condition VII.2.1 and (7.2.28) of Andersen et al. (1993), Assumptions I.2.3 and I.2.5 are valid. In particular, \(\mathbf{C}_{n}\) in Assumption I.2.5 takes the form \[\Big[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\frac{\mathbf{S}^{(2)}_{n}(u,\boldsymbol{\beta}_{0})}{S^{(0)}_{n}(u,\boldsymbol{\beta}_{0})}-\Big(\frac{\mathbf{S}^{(1)}_{n}(u,\boldsymbol{\beta}_{0})}{S^{(0)}_{n}(u,\boldsymbol{\beta}_{0})}\Big)^{\otimes 2}\Big)dN_{i}(u)\Big]^{-1}.\] Eventually, the wild bootstrap counterpart \(\sqrt{n}(\hat{A}^{*}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}^{*}_{n})-\hat{A}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))\) of \(\sqrt{n}(\hat{A}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{0})\) can be formulated by applying Replacement I.3.1 to (I.29). This yields for \(t\in\mathcal{T}\) \[\begin{split}\sqrt{n}(\hat{A}^{*}_{0,n}(t,\hat{\boldsymbol{\beta}}^{*}_{n})-\hat{A}_{0,n}(t,\hat{\boldsymbol{\beta}}_{n}))&=\sqrt{n}\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}G_{i}\,dN_{i}(u)\\ &\quad-\sum_{i=1}^{n}\int_{0}^{t}\frac{J(u)\mathbf{S}^{(1)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})^{2}}(G_{i}+1)dN_{i}(u)\\ &\qquad\cdot\mathbf{C}^{*}_{n}\frac{1}{\sqrt{n}}\Big(\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\mathbf{Z}_{i}(u)-\frac{\mathbf{S}^{(1)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}\Big)G_{i}\,dN_{i}(u)\Big).\end{split}\] (I.30) Here \(\mathbf{C}^{*}_{n}\), as given in Remark I.3.11, simplifies for the Cox model to \[\mathbf{C}^{*}_{n}=\Big[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\Big(\frac{\mathbf{S}^{(2)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}-\Big(\frac{\mathbf{S}^{(1)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}{S^{(0)}_{n}(u,\hat{\boldsymbol{\beta}}_{n})}\Big)^{\otimes 2}\Big)G_{i}^{2}dN_{i}(u)\Big]^{-1}.\] Additionally, Assumption I.3.9 is satisfied, as argued in Remark I.3.11. In conclusion, (I.30) implies that (I.19) holds with \(o_{p}(1)\) set to zero, i.e., \(\sqrt{n}(\hat{A}^{*}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}^{*}_{n})-\hat{A}_{0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))=D^{*}_{n,k}+\mathbf{B}^{*}_{n}\mathbf{C}^{*}_{n}\mathbf{D}^{*}_{n,g}(\tau)\). Finally, Theorem I.3.10 can be applied to verify the asymptotic validity of the wild bootstrap for statistical inference on the Breslow estimator. Note that all expressions used in this example are similar to the ones in Dobler et al. (2019).

## Discussion

We have proposed and validated a widely applicable wild bootstrap procedure for general nonparametric and (semi-)parametric counting process-based statistics. We gave a step-by-step description of how to construct the wild bootstrap counterpart of the statistic. In particular, it is crucial to match each individual with one multiplier process.
In order to justify the validity of the wild bootstrap, we have studied the asymptotic distributions of the statistic of interest and of its wild bootstrap counterpart, which turned out to coincide. We have found that the wild bootstrap counterparts of the martingales involved are martingales as well. Thus, in the corresponding proof, we made use of a carefully chosen variant of Rebolledo's martingale central limit theorem. We illustrated the method for several main models in survival analysis. As we have seen in Examples I.4.1-I.4.3, the assumptions we have made throughout Part I are rather weak: they are satisfied under very natural regularity conditions. However, Assumption I.2.1 (iii) is, for example, not satisfied in shared frailty models, because in these models it is assumed that common unobserved variables influence the intensity processes of multiple individuals.

For the construction of the wild bootstrap counterpart of a given counting process-based statistic we have chosen the nonparametric estimator \(G_{i}dN_{i}\) for the martingale increment \(dM_{i}\), cf. Replacement I.3.1 (i). This choice guarantees a more general applicability of the proposed wild bootstrap resampling procedure, because no specifications on the form of the cumulative hazard rate have to be made. In contrast, Spiekerman and Lin (1998) proposed a semiparametric approach by choosing \(G_{i}[dN_{i}-d\hat{\Lambda}_{i}(\cdot,\hat{\boldsymbol{\beta}}_{n})]\) as the replacement for the martingale increment. Under this semiparametric estimator the information encoded in the parameter \(\boldsymbol{\beta}\) is incorporated in the wild bootstrap estimators, which could potentially lead to more accurate results. However, their approach is not as widely applicable as the nonparametric one that we decided to employ. Moreover, in the context of Cox models, a substantial simulation study in Dobler et al. (2019) reveals that the difference between the results of the two methods is not significant.

In conclusion, the wild bootstrap procedure as proposed in this Part I is applicable to a wide range of models and simple to implement. By means of this method, one may easily approximate the unknown distribution of a counting process-based statistic around the target quantity. Aside from the theoretical justification of this resampling procedure, in Part II we present an extensive simulation study based on which we explore the small-sample performance of the method. Part II concentrates on Fine-Gray models for censoring-complete data. In particular, we explain on the basis of the cumulative incidence function how to obtain wild bootstrap confidence bands for a functional applied to a vector of two statistics of the form considered in the present Part I.

## Appendix A: Proofs

For the proofs we introduce some additional notation: we write \(\|\cdot\|_{\infty}\) for the maximum norm of a vector \(\mathbf{v}\in\mathbb{R}^{p}\) or a matrix \(\mathbf{G}\in\mathbb{R}^{p\times p}\), i.e., the largest element in absolute value of \(\mathbf{v}\) or \(\mathbf{G}\), respectively. Moreover, \(\mathcal{C}[0,\tau]^{m}\) denotes the set of all continuous functions from \([0,\tau]\) to \(\mathbb{R}^{m}\) for any \(m\in\mathbb{N}\).

### A.1 Proofs of Section I.2

**Proof of Lemma I.2.2.** As explained in Section I.2 below (I.10), \(\mathbf{D}_{n,h}\) is a local square integrable counting process martingale. Thus, we can apply Rebolledo's martingale central limit theorem as stated in Theorem II.5.1 of Andersen et al.
## Appendix A: Proofs

For the proofs we introduce some additional notation: we write \(\|\cdot\|_{\infty}\) for the maximum norm of a vector \(\mathbf{v}\in\mathbb{R}^{p}\) or a matrix \(\mathbf{G}\in\mathbb{R}^{p\times p}\), which denotes the largest element in absolute value of \(\mathbf{v}\) and \(\mathbf{G}\), respectively. Moreover, \(\mathcal{C}[0,\tau]^{m}\) denotes the set of all continuous functions from \([0,\tau]\) to \(\mathbb{R}^{m}\) for any \(m\in\mathbb{N}\).

### A.1 Proofs of Section I.2

**Proof of Lemma I.2.2.** As explained in Section I.2 below (I.10), \(\mathbf{D}_{n,h}\) is a local square integrable counting process martingale. Thus, we can apply Rebolledo's martingale central limit theorem as stated in Theorem II.5.1 of Andersen et al. (1993). It follows that two conditions have to be verified. The predictable covariation process \(\langle\mathbf{D}_{n,h}\rangle(t)\) or the optional covariation process \([\mathbf{D}_{n,h}](t)\) of \(\mathbf{D}_{n,h}\) must converge in probability, as \(n\to\infty\), to a continuous, deterministic and positive semidefinite matrix-valued function \(\mathbf{V}_{\tilde{h}}\) on \(\mathcal{T}\) with \(\mathbf{V}_{\tilde{h}}(0)=0\). Additionally, condition (2.5.3) of Andersen et al. (1993) on the jumps of \(\mathbf{D}_{n,h}\) must hold.

We first show the convergence in probability of the predictable covariation process \(\langle\mathbf{D}_{n,h}\rangle(t)\) to the matrix-valued function \(\mathbf{V}_{\tilde{h}}(t)\) for all \(t\in\mathcal{T}\), as \(n\to\infty\). According to Proposition II.4.1 of Andersen et al. (1993) together with (I.10), we have

\[\begin{split}\langle\mathbf{D}_{n,h}\rangle(t)&=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}]d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&\quad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0}).\end{split}\] (I.31)

We start by focusing on the first term of the second step of (I.31). We want to show that

\[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}]d\Lambda_{i}(u,\boldsymbol{\beta}_{0})=o_{p}(1),\text{ for all }t\in\mathcal{T},\text{ as }n\to\infty.\] (I.32)

For this it suffices to bound its largest component:

\[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\lVert\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&\leq\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})\\&\leq\Big{(}\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})^{\top}\rVert_{\infty}\\&\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))^{\top}\rVert_{\infty}\Big{)}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0}),\end{split}\] (I.33)

where the last step is due to the triangle inequality and \(\boldsymbol{a}^{\otimes 2}-\boldsymbol{b}^{\otimes 2}=(\boldsymbol{a}-\boldsymbol{b})\boldsymbol{a}^{\top}+\boldsymbol{b}(\boldsymbol{a}-\boldsymbol{b})^{\top}\) for two vectors \(\boldsymbol{a},\boldsymbol{b}\). Both terms in brackets converge to zero in probability, as \(n\to\infty\), according to Assumption I.2.1 (i), (ii), and since \(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\) is locally bounded for \(i=1,\ldots,n\). Note that Assumption I.2.1 (i) holds for any consistent estimator \(\tilde{\boldsymbol{\beta}}_{n}\), in particular for \(\boldsymbol{\beta}_{0}\) itself.
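For completeness, the elementary rank-one identity invoked in the last step of (I.33) can be checked directly:

\[(\boldsymbol{a}-\boldsymbol{b})\boldsymbol{a}^{\top}+\boldsymbol{b}(\boldsymbol{a}-\boldsymbol{b})^{\top}=\boldsymbol{a}\boldsymbol{a}^{\top}-\boldsymbol{b}\boldsymbol{a}^{\top}+\boldsymbol{b}\boldsymbol{a}^{\top}-\boldsymbol{b}\boldsymbol{b}^{\top}=\boldsymbol{a}^{\otimes 2}-\boldsymbol{b}^{\otimes 2}.\]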
From Assumption I.2.1 (iii) in combination with the integrability of the cumulative intensities and the law of large numbers, we get \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbb{E}(\Lambda_{1}(\tau,\boldsymbol{\beta}_{0}))\), as \(n\to\infty\). Hence, the whole expression converges to zero in probability, as \(n\to\infty\), and we conclude that (I.32) holds.

The subsequent considerations relate to the second term of the second step of (I.31). According to Assumption I.2.1 (ii) it holds that \(\sup_{t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{1}(t,\boldsymbol{\beta}_{0})\rVert_{\infty}\) is bounded. Moreover, we have \(\mathbb{E}(\Lambda_{1}(\tau,\boldsymbol{\beta}_{0}))<\infty\) by assumption. These two statements combined yield, for all \(t\in\mathcal{T}\),

\[\mathbb{E}\Big{(}\int_{0}^{t}\lVert\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}\,d\Lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)}\leq\mathbb{E}\Big{(}\sup_{t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{1}(t,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}\Lambda_{1}(t,\boldsymbol{\beta}_{0})\Big{)}<\infty.\] (I.34)

On the basis of (I.34) and Assumption I.2.1 (iii), we make use of the law of large numbers and get for the second term of the second step of (I.31)

\[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbb{E}\Big{(}\int_{0}^{t}\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}d\Lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)},\quad n\to\infty,\]

for any fixed \(t\in\mathcal{T}\). Note that the integrability of the intensity process \(\lambda_{1}(t,\boldsymbol{\beta}_{0})\) follows from the integrability of the cumulative intensity process \(\Lambda_{1}(t,\boldsymbol{\beta}_{0})\). Thus, due to the integrability of the cumulative intensities and Assumption I.2.1 (ii), we can make use of Fubini's theorem to exchange the order of integration and obtain

\[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\stackrel{{\mathbb{P}}}{{\longrightarrow}}\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\] (I.35)

for all \(t\in\mathcal{T}\), as \(n\to\infty\).
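Spelled out, the exchange of the order of integration used in the last step reads, entrywise,

\[\mathbb{E}\Big{(}\int_{0}^{t}\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0})\,du\Big{)}=\int_{0}^{t}\mathbb{E}\big{(}\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0})\big{)}\,du,\]

which is justified because each entry of the integrand is dominated in absolute value by \(\sup_{t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{1}(t,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}\,\lambda_{1}(u,\boldsymbol{\beta}_{0})\), whose expected time integral is finite by (I.34).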
Finally, combining (I.31) with (I.32) and (I.35) yields

\[\langle\mathbf{D}_{n,h}\rangle(t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du=\mathbf{V}_{\tilde{h}}(t),\text{ for all }t\in\mathcal{T},\text{ as }n\to\infty.\]

Taking into consideration that \(\tilde{\mathbf{h}}=(\tilde{\mathbf{k}},\tilde{\mathbf{g}})\), we can write the covariance matrix in block form

\[\mathbf{V}_{\tilde{h}}=\mathbf{V}_{(\tilde{k},\tilde{g})}=\begin{pmatrix}\mathbf{V}_{\tilde{k}}&\mathbf{V}_{\tilde{k},\tilde{g}}\\ \mathbf{V}_{\tilde{g},\tilde{k}}&\mathbf{V}_{\tilde{g}}\end{pmatrix},\]

where for \(t\in\mathcal{T}\),

\[\mathbf{V}_{\tilde{k}}(t)=\langle\mathbf{D}_{\tilde{k}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{k}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\]

\[\mathbf{V}_{\tilde{g}}(t)=\langle\mathbf{D}_{\tilde{g}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\]

\[\mathbf{V}_{\tilde{k},\tilde{g}}(t)=\mathbf{V}_{\tilde{g},\tilde{k}}(t)=\langle\mathbf{D}_{\tilde{k}},\mathbf{D}_{\tilde{g}}\rangle(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{k}}_{1}(u,\boldsymbol{\beta}_{0})\cdot\tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})^{\top}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du.\]

Second, we verify condition (2.5.3) of Rebolledo's theorem of Andersen et al. (1993). For this we introduce the stochastic process \(\mathbf{D}_{n,h}^{\epsilon}\) given by

\[\mathbf{D}_{n,h}^{\epsilon}(t)=\int_{0}^{t}\mathbb{1}\{|\Delta\mathbf{D}_{n,h}(u)|>\epsilon\}\mathbf{D}_{n,h}(du),\quad t\in\mathcal{T},\] (I.36)

which we refer to as the \(\epsilon\)-jump process of \(\mathbf{D}_{n,h}\). Here, the indicator function is to be understood vector-wise, specifying for each element \(D_{n,h}^{j}(t)\) of the \((p+b)\)-dimensional vector \(\mathbf{D}_{n,h}(t)=(D_{n,h}^{1}(t),\ldots,D_{n,h}^{p+b}(t))\) whether the jump at time \(t\) is larger in absolute value than \(\epsilon\). Note that the elements of the indicator function \(\mathbb{1}\{|\Delta\mathbf{D}_{n,h}(u)|>\epsilon\}\) may be unequal to zero only at discontinuities of \(D_{n,h}^{j}\), which correspond to discontinuities of the martingale \(M_{i}\). In addition, the jumps of the martingale \(M_{i}\) occur only at event times registered by the counting processes \(N_{i}\), because we assumed the cumulative intensity process \(\Lambda_{i}(\cdot,\boldsymbol{\beta}_{0})\) to be absolutely continuous. This means that the \(\epsilon\)-jump process \(\mathbf{D}_{n,h}^{\epsilon}\) accumulates all the jumps of components of \(\mathbf{D}_{n,h}\) that are larger in absolute value than \(\epsilon\). Recall that no two counting processes \(N_{i}\), \(i=1,\ldots,n\), jump simultaneously.
Combining (I.36) with the above reasoning yields

\[\begin{split}\mathbf{D}^{\epsilon}_{n,h}(t)&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\sum_{l=1}^{n}\mathbf{h}_{n,l}(u,\boldsymbol{\beta}_{0})\Delta N_{l}(u)\Big{|}>\epsilon\Big{\}}dM_{i}(u)\\&=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})\Delta N_{i}(u)\Big{|}>\epsilon\Big{\}}dM_{i}(u).\end{split}\]

The aforementioned condition (2.5.3) is fulfilled if the predictable covariation process \(\langle\mathbf{D}^{\epsilon}_{n,h}\rangle(t)\) of \(\mathbf{D}^{\epsilon}_{n,h}\) converges to zero in probability for all \(t\in\mathcal{T},\epsilon>0\), as \(n\to\infty\). Note that the predictable covariation process \(\langle\mathbf{D}^{\epsilon}_{n,h}\rangle(t)\) is defined as the \((p+b)\times(p+b)\)-dimensional matrix of the predictable covariation processes \(\big{(}\langle D^{\epsilon,j}_{n,h},D^{\epsilon,l}_{n,h}\rangle(t)\big{)}^{p+b}_{j,l=1}\) of the components

\[D^{\epsilon,j}_{n,h}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}h^{j}_{n,i}(u,\boldsymbol{\beta}_{0})\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}h^{j}_{n,i}(u,\boldsymbol{\beta}_{0})\Delta N_{i}(u)\Big{|}>\epsilon\Big{\}}dM_{i}(u),\]

where \(h^{j}_{n,i}\) denotes the \(j\)-th component of the \((p+b)\)-dimensional function \(\mathbf{h}_{n,i}\), \(j=1,\ldots,p+b\). It is easy to see that the largest entry (in absolute value) of \(\langle\mathbf{D}^{\epsilon}_{n,h}\rangle(t)\) is located on the diagonal and that a diagonal element takes the following form:

\[\langle D^{\epsilon,j}_{n,h}\rangle(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}h^{j}_{n,i}(u,\boldsymbol{\beta}_{0})^{2}\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}h^{j}_{n,i}(u,\boldsymbol{\beta}_{0})\Delta N_{i}(u)\Big{|}>\epsilon\Big{\}}d\Lambda_{i}(u,\boldsymbol{\beta}_{0}),\]

\(j=1,\ldots,p+b\). Thus, it suffices to show that the diagonal elements \(\langle D^{\epsilon,j}_{n,h}\rangle(t)\) of \(\langle\mathbf{D}^{\epsilon}_{n,h}\rangle(t)\) converge to \(0\) in probability as \(n\to\infty\) for each \(t\in\mathcal{T}\), \(j=1,\ldots,p+b\). That is, for every \(\delta>0\) the probability \(\mathbb{P}(\langle D^{\epsilon,j}_{n,h}\rangle(t)\geq\delta)\) must go to zero for all \(j=1,\ldots,p+b\).
For this, we bound this probability from above as follows:

\[\begin{split}&\mathbb{P}(\langle D_{n,h}^{\epsilon,j}\rangle(t)\geq\delta)\\&\leq\mathbb{P}\Big{(}\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\mathbb{1}\{\frac{1}{\sqrt{n}}\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\|_{\infty}>\epsilon\}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}h_{n,i}^{j}(u,\boldsymbol{\beta}_{0})^{2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\geq\delta\Big{)}\\&\leq\mathbb{P}\Big{(}\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\mathbb{1}\{\frac{1}{\sqrt{n}}\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\|_{\infty}>\epsilon\}=1\Big{)}\\&=1-\mathbb{P}\Big{(}\text{for all }i,t:\frac{1}{\sqrt{n}}\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\leq\epsilon\Big{)}\\&=o(1)+1-\mathbb{P}\Big{(}\text{for all }i,t:\frac{1}{\sqrt{n}}\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\leq\epsilon,\ \sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}<\eta\Big{)}\\&\leq o(1)+1-\mathbb{P}\Big{(}\text{for all }i,t:\frac{1}{\sqrt{n}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}+\frac{\eta}{\sqrt{n}}<\epsilon\Big{)},\end{split}\] (I.37)

where the penultimate equality of (I.37) is due to Assumption I.2.1 (i) and holds for any \(\eta>0\). The inequality in the last line of (I.37) was obtained by adding and subtracting \(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\) in the norm two lines above it, namely by writing \(\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\|_{\infty}=\|\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})+\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\). Under Assumption I.2.1 (ii) the probability \(\mathbb{P}(\text{for all }i,t:\ \frac{1}{\sqrt{n}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}+\frac{\eta}{\sqrt{n}}<\epsilon)\) converges to one and, hence, the initial probability \(\mathbb{P}(\langle D_{n,h}^{\epsilon,j}\rangle(t)\geq\delta)\) converges to zero as \(n\to\infty\) for each \(t\in\mathcal{T}\) and all components \(j=1,\ldots,p+b\). Thus, condition (2.5.3) of Rebolledo's theorem as stated in Theorem II.5.1 of Andersen et al. (1993) is fulfilled. In conclusion, both requirements of Rebolledo's theorem have been verified and the proof of Lemma I.2.2 is complete. \(\blacksquare\)

**Proof of Lemma I.2.4.** We wish to show that

\[\sup_{t\in\mathcal{T}}\|\mathbf{B}_{n}(t)-\mathbf{B}(t)\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\ \text{as }n\to\infty,\]

where \(\mathbf{B}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)\) and \(\mathbf{B}(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\), \(t\in\mathcal{T}\). For this we point out that the compensator of \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)\) is equal to \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})\). From the integrability of the cumulative intensities, Assumption I.2.3 (iii), and the law of large numbers, we can conclude that \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})=O_{p}(1)\). Thus, we get from Lenglart's inequality that \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=O_{p}(1)\).
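For ease of reference, we recall the form of Lenglart's inequality used here and repeatedly below (cf. Andersen et al. (1993)): if a nonnegative cadlag adapted process \(X\) is dominated by a nondecreasing predictable process \(Y\), then for all \(\eta,\delta>0\),

\[\mathbb{P}\Big{(}\sup_{s\leq t}X(s)>\eta\Big{)}\leq\frac{\delta}{\eta}+\mathbb{P}(Y(t)\geq\delta);\]

in particular, for a local square integrable martingale \(M\) with predictable variation process \(\langle M\rangle\),

\[\mathbb{P}\Big{(}\sup_{s\leq t}|M(s)|>\eta\Big{)}\leq\frac{\delta}{\eta^{2}}+\mathbb{P}(\langle M\rangle(t)\geq\delta).\]

Above, the first form is applied with \(X=\frac{1}{n}\sum_{i=1}^{n}N_{i}\) and \(Y=\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\cdot,\boldsymbol{\beta}_{0})\).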
Combining this argument with Assumption I.2.3 (i) yields

\[\begin{split}&\sup_{t\in\mathcal{T}}\lVert\mathbf{B}_{n}(t)-\mathbf{B}(t)\rVert\\&\leq\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathrm{D}\mathbf{k}_{n,i}(u,\boldsymbol{\beta}_{0})-\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})]dN_{i}(u)\Big{\|}\\&\quad+\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)-\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}\\&\leq\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\Big{\|}\\&\quad+\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}+o_{p}(1),\end{split}\] (I.38)

where in the last step the Doob-Meyer decomposition (I.3) has been applied. With Assumption I.2.3 (ii) and Proposition II.4.1 of Andersen et al. (1993) it follows that the integral \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})dM_{i}(u)\) is a local square integrable martingale. The elements of the corresponding predictable covariation process at \(\tau\) can be bounded from above by

\[\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\lVert\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\rVert_{\infty}^{2}d\Lambda_{i}(u,\boldsymbol{\beta}_{0}).\]

According to Assumption I.2.3 (ii), \(\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})\rVert_{\infty}^{2}\) is bounded for all \(n\in\mathbb{N}\), and, as stated above, it holds that \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})=O_{p}(1)\). Hence, the considered predictable covariation process converges to zero in probability and, by Lenglart's inequality, so does the first term of the second step on the right-hand side of (I.38), as \(n\to\infty\). It is only left to show that

\[\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}=o_{p}(1),\] (I.39)

as \(n\to\infty\). According to the integrability of the cumulative intensities and Assumption I.2.3 (ii) it follows that \(\mathbb{E}(\int_{0}^{t}\lVert\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\rVert_{\infty}\lambda_{1}(u,\boldsymbol{\beta}_{0})du)<\infty\). From this argument in combination with Assumption I.2.3 (iii) and the law of large numbers, we have that \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du\) converges almost surely to \(\mathbb{E}(\int_{0}^{t}\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0})du)\) for any fixed \(t\in\mathcal{T}\), as \(n\to\infty\). Note that the integrability of the intensity process \(\lambda_{1}(t,\boldsymbol{\beta}_{0})\) follows from the integrability of the cumulative intensity process \(\Lambda_{1}(t,\boldsymbol{\beta}_{0})\).
Thus, due to the integrability of the cumulative intensities and Assumption I.2.3 (ii), we can make use of Fubini's theorem, by which we can exchange the order of integration. We can conclude that

\[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\stackrel{{\mathbb{P}}}{{\longrightarrow}}\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du,\] (I.40)

pointwise in \(t\in\mathcal{T}\), as \(n\to\infty\). Next, we show the corresponding uniform convergence in probability on \(\mathcal{T}\). For this, we divide the interval \(\mathcal{T}=[0,\tau]\) into \(N\) equidistant subintervals \([t_{l},t_{l+1}]\) with \(t_{0}=0\), \(t_{N}=\tau\), and \(l\in\{0,1,\ldots,N-1\}\). The width of a subinterval is chosen such that

\[\int_{t_{l}}^{t_{l+1}}\mathbb{E}(\|\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\|\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\leq\delta/2\]

for all \(l\in\{0,1,\ldots,N-1\}\). For \(t\in[0,\tau)\) we denote the lower and upper endpoint of the subinterval containing \(t\) by \(t_{l(t)}=\max_{l\in\{0,1,\ldots,N-1\}}\{t_{l}:t_{l}\leq t\}\) and \(t_{l(t)+1}=\min_{l\in\{1,\ldots,N\}}\{t_{l}:t_{l}>t\}\), respectively. For \(t=\tau\) we choose \(t_{l(\tau)}=t_{l(\tau)+1}=\tau\). In the following derivation we make use of (I.40) and get

\[\begin{split}&\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du-\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}\\&=\sup_{t\in\mathcal{T}}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du-\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t_{l(t)}}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du\\&\quad+\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t_{l(t)}}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du-\int_{0}^{t_{l(t)}}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\\&\quad+\int_{0}^{t_{l(t)}}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du-\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}\\&\leq\sup_{t\in\mathcal{T}}\Big{(}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\int_{t_{l(t)}}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\lambda_{i}(u,\boldsymbol{\beta}_{0})du-\int_{t_{l(t)}}^{t}\mathbb{E}(\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{\|}\Big{)}+o_{p}(1)\\&\leq\sup_{t\in\mathcal{T}}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{t_{l(t)}}^{t}\|\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\|\lambda_{i}(u,\boldsymbol{\beta}_{0})du+\int_{t_{l(t)}}^{t}\mathbb{E}(\|\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\|\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{)}+o_{p}(1)\\&\leq\max_{l\in\{0,\ldots,N-1\}}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{t_{l}}^{t_{l+1}}\|\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\|\lambda_{i}(u,\boldsymbol{\beta}_{0})du+\int_{t_{l}}^{t_{l+1}}\mathbb{E}(\|\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\|\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{)}+o_{p}(1)\\&\stackrel{{\mathbb{P}}}{{\longrightarrow}}2\cdot\max_{l\in\{0,\ldots,N-1\}}\Big{(}\int_{t_{l}}^{t_{l+1}}\mathbb{E}(\|\tilde{\mathbf{K}}_{1}(u,\boldsymbol{\beta}_{0})\|\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\Big{)}\leq\delta,\quad n\to\infty.\end{split}\]
The convergence involved in the last step of the considerations above follows from the same arguments that led to (I.40). As we can choose the length of the subintervals \([t_{l},t_{l+1}]\) such that \(\delta>0\) is arbitrarily small, we obtain (I.39). \(\blacksquare\)

**Proof of Theorem I.2.6.** We aim to derive the limit in law of \(\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)\), as \(n\to\infty\), where \(\mathbf{D}_{n,k}\) and \(\mathbf{D}_{n,g}\) are vector-valued local square integrable martingales, \(\mathbf{B}_{n}\) is a matrix-valued stochastic process and \(\mathbf{C}_{n}\) is a random matrix. For this, we first show that the weak limit of \((\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}^{\top},\mathrm{vec}(\mathbf{B}_{n})^{\top},\mathrm{vec}(\mathbf{C}_{n})^{\top})\) is \((\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top},\mathrm{vec}(\mathbf{B})^{\top},\mathrm{vec}(\mathbf{C})^{\top})\), as \(n\to\infty\). According to Lemma I.2.2, we have

\[(\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}^{\top})^{\top}=\mathbf{D}_{n,h}\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}}=(\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top})^{\top},\quad\text{in }(D(\mathcal{T}))^{p+b},\text{ as }n\to\infty,\]

where \(\mathbf{D}_{\tilde{h}}\) is a continuous zero-mean Gaussian \((p+b)\)-dimensional vector martingale with covariance function \(\mathbf{V}_{\tilde{h}}(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\), \(t\in\mathcal{T}\). As \(\mathbf{D}_{\tilde{h}}\in\mathcal{C}[0,\tau]^{p+b}\), we know that \(\mathbf{D}_{\tilde{h}}\) is separable. Furthermore, we have shown in Lemma I.2.4 that there exists a \(p\times q\)-dimensional continuous, deterministic function \(\mathbf{B}(t)\), \(t\in\mathcal{T}\), such that \(\sup_{t\in\mathcal{T}}\|\mathbf{B}_{n}(t)-\mathbf{B}(t)\|\stackrel{{\mathbb{P}}}{{\longrightarrow}}0\), as \(n\to\infty\). In other words, the limit in law \(\mathrm{vec}(\mathbf{B})\) of \(\mathrm{vec}(\mathbf{B}_{n})\) is a constant element of the space \(\mathcal{C}[0,\tau]^{pq}\). Thus, we conclude with Example 1.4.7 of van der Vaart and Wellner (1996) that

\[(\mathbf{D}_{n,h}^{\top},\mathrm{vec}(\mathbf{B}_{n})^{\top})\stackrel{{\mathcal{L}}}{{\longrightarrow}}(\mathbf{D}_{\tilde{h}}^{\top},\mathrm{vec}(\mathbf{B})^{\top}),\text{ in }D[0,\tau]^{p+b+pq},\text{ as }n\to\infty.\]

As the last step of the first part of this proof we argue that

\[(\mathbf{D}_{n,h}^{\top},\mathrm{vec}(\mathbf{B}_{n})^{\top},\mathrm{vec}(\mathbf{C}_{n})^{\top})\stackrel{{\mathcal{L}}}{{\longrightarrow}}(\mathbf{D}_{\tilde{h}}^{\top},\mathrm{vec}(\mathbf{B})^{\top},\mathrm{vec}(\mathbf{C})^{\top}),\] (I.41)

in \(D[0,\tau]^{p+b+pq}\times\mathbb{R}^{pq}\), as \(n\to\infty\). For this, we point out that \((\mathbf{D}_{\tilde{h}}^{\top},\mathrm{vec}(\mathbf{B})^{\top})\in\mathcal{C}[0,\tau]^{p+b+pq}\). Thus, \((\mathbf{D}_{\tilde{h}}^{\top},\mathrm{vec}(\mathbf{B})^{\top})\) is separable. Additionally, we have assumed in Assumption I.2.5 that the random \(q\times p\)-dimensional matrix \(\mathbf{C}_{n}\) converges in probability to the deterministic matrix \(\mathbf{C}\), as \(n\to\infty\). Because \(\mathbf{C}_{n}\) is asymptotically degenerate and \((\mathbf{D}_{\tilde{h}}^{\top},\mathrm{vec}(\mathbf{B})^{\top})\) is separable, we again use Example 1.4.7 of van der Vaart and Wellner (1996) and infer that (I.41) holds.
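For clarity, we recall the form in which Example 1.4.7 of van der Vaart and Wellner (1996) is applied in both instances above: if \(X_{n}\stackrel{{\mathcal{L}}}{{\longrightarrow}}X\) with a separable limit \(X\), and \(Y_{n}\) converges in probability to a constant \(c\), then the joint weak convergence \((X_{n},Y_{n})\stackrel{{\mathcal{L}}}{{\longrightarrow}}(X,c)\) holds as well.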
It only remains to apply the continuous mapping theorem to (I.41) in order to derive the weak limit of \(\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}\), as \(n\to\infty\). In particular, we use the following three maps:

\[f_{1}:(\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}(\tau)^{\top},\mathrm{vec}(\mathbf{B}_{n})^{\top},\mathrm{vec}(\mathbf{C}_{n})^{\top})\mapsto(\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}(\tau)^{\top},\mathrm{vec}(\mathbf{B}_{n}\mathbf{C}_{n})^{\top}),\]
\[f_{2}:(\mathbf{D}_{n,k}^{\top},\mathbf{D}_{n,g}(\tau)^{\top},\mathrm{vec}(\mathbf{B}_{n}\mathbf{C}_{n})^{\top})\mapsto(\mathbf{D}_{n,k}^{\top},(\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau))^{\top}),\]
\[f_{3}:(\mathbf{D}_{n,k}^{\top},(\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau))^{\top})\mapsto(\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)).\]

Recall that \((\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top},\mathrm{vec}(\mathbf{B})^{\top},\mathrm{vec}(\mathbf{C})^{\top})\in\mathcal{C}[0,\tau]^{p+b+2pq}\). Thus, it follows successively with the continuous mapping theorem and the maps \(f_{1},f_{2}\) and \(f_{3}\) that

\[\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)\stackrel{{\mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\mathbf{B}\mathbf{C}\mathbf{D}_{\tilde{g}}(\tau)\ \text{in }D[0,\tau]^{p},\]

as \(n\to\infty\). Moreover, the covariance function of \(\mathbf{D}_{\tilde{k}}+\mathbf{B}\mathbf{C}\mathbf{D}_{\tilde{g}}(\tau)\) maps \(t\in\mathcal{T}\) to

\[\begin{split}&\mathbf{V}_{\tilde{k}}(t)+\mathbf{B}(t)\mathbf{C}\mathbf{V}_{\tilde{g}}(\tau)\mathbf{C}^{\top}\mathbf{B}(t)^{\top}+[\mathbf{V}_{\tilde{k},\tilde{g}}(t)+\mathrm{Cov}(\mathbf{D}_{\tilde{k}}(t),\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))]\mathbf{C}^{\top}\mathbf{B}(t)^{\top}\\&\quad+\mathbf{B}(t)\mathbf{C}[\mathbf{V}_{\tilde{g},\tilde{k}}(t)+\mathrm{Cov}(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t),\mathbf{D}_{\tilde{k}}(t))]\\&=\mathbf{V}_{\tilde{k}}(t)+\mathbf{B}(t)\mathbf{C}\mathbf{V}_{\tilde{g}}(\tau)\mathbf{C}^{\top}\mathbf{B}(t)^{\top}+\mathbf{V}_{\tilde{k},\tilde{g}}(t)\mathbf{C}^{\top}\mathbf{B}(t)^{\top}+\mathbf{B}(t)\mathbf{C}\mathbf{V}_{\tilde{g},\tilde{k}}(t),\end{split}\]

where \(\mathrm{Cov}(\mathbf{D}_{\tilde{k}}(t),\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))=\mathrm{Cov}(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t),\mathbf{D}_{\tilde{k}}(t))^{\top}=0\), because

\[\begin{split}\mathbb{E}(\mathbf{D}_{\tilde{k}}(t)(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))^{\top})&=\mathbb{E}(\mathbb{E}(\mathbf{D}_{\tilde{k}}(t)(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))^{\top}|\mathcal{F}_{1}(t)))\\&=\mathbb{E}(\mathbf{D}_{\tilde{k}}(t)\mathbb{E}((\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))^{\top}))\\&=0.\end{split}\]

Here the penultimate step holds because \(\sigma(\mathbf{D}_{\tilde{k}}(t))\subset\mathcal{F}_{1}(t)\) and \(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t)\) is independent of \(\mathcal{F}_{1}(t)\). In the last step we have used that \(\mathbb{E}(\mathbf{D}_{\tilde{g}}(\tau)-\mathbf{D}_{\tilde{g}}(t))=0\). \(\blacksquare\)

### A.2 Proofs of Section I.3

**Proof of Lemma I.3.2.** In the first part of this proof, we show that, conditionally on the initial \(\sigma\)-algebra \(\mathcal{F}_{2}(0)\), the stochastic process \(\mathbf{D}_{n,h}^{*}(t)=(D_{n,h}^{*,1}(t),\ldots,D_{n,h}^{*,p+b}(t))\), \(t\in\mathcal{T}\), is a \((p+b)\)-dimensional vector of square integrable martingales with respect to \(\mathcal{F}_{2}(t)\).
Here, the \(j\)-th element \(D_{n,h}^{*,j}\) of \(\mathbf{D}_{n,h}^{*}\), \(j=1,\ldots,p+b\), is given by

\[D_{n,h}^{*,j}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}h_{n,i}^{j}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,dN_{i}(u),\quad t\in\mathcal{T},\]

where \(h^{j}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) denotes the \(j\)-th element of the \((p+b)\)-dimensional function \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\). For later use we write \(D^{*,j}_{n,h}\) as the scaled sum over \(D^{*,j}_{n,h,i}=\int_{0}^{\cdot}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,dN_{i}(u)\), namely \(D^{*,j}_{n,h}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}D^{*,j}_{n,h,i}(t)\), \(t\in\mathcal{T}\). Furthermore, by incorporating the jump time points \(T_{i,1},\ldots,T_{i,n_{i}}\) of the counting process \(N_{i}\), we can write

\[D^{*,j}_{n,h}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sum_{r:T_{i,r}\leq t}h^{j}_{n,i}(T_{i,r},\hat{\boldsymbol{\beta}}_{n})G_{i}(T_{i,r}),\quad t\in\mathcal{T}.\]

Clearly, all stochastic processes \(D^{*,j}_{n,h}(t)\), \(t\in\mathcal{T}\), \(j=1,\ldots,p+b\), are adapted to the filtration \(\mathcal{F}_{2}(t)\), \(t\in\mathcal{T}\). Moreover, for all \(j=1,\ldots,p+b\), \(D^{*,j}_{n,h}\) is cadlag, as the same holds for the counting processes \(N_{i}\), \(i=1,\ldots,n\). Since we work on a probability space, square integrability of a stochastic process implies its integrability. Thus, we directly show that \(D^{*,j}_{n,h}\) is square integrable for all \(j=1,\ldots,p+b\). For this we wish to show that

\[\sup_{t\in\mathcal{T}}\mathbb{E}_{0}(D^{*,j}_{n,h}(t)^{2})=\sup_{t\in\mathcal{T}}\mathbb{E}_{0}\Big{(}\frac{1}{n}\Big{(}\sum_{i=1}^{n}D^{*,j}_{n,h,i}(t)\Big{)}^{2}\Big{)}<\infty,\]

where \(\mathbb{E}_{0}\) denotes the conditional expectation \(\mathbb{E}(\cdot|\mathcal{F}_{2}(0))\). In preparation for this, we state

\[\begin{split}&\frac{1}{n}\Big{(}\sum_{i=1}^{n}D^{*,j}_{n,h,i}(t)\Big{)}^{2}=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{n}D^{*,j}_{n,h,i}(t)D^{*,j}_{n,h,l}(t)\\&=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{n}\sum_{r:T_{i,r}\leq t}\sum_{v:T_{l,v}\leq t}h^{j}_{n,i}(T_{i,r},\hat{\boldsymbol{\beta}}_{n})h^{j}_{n,l}(T_{l,v},\hat{\boldsymbol{\beta}}_{n})G_{i}(T_{i,r})G_{l}(T_{l,v}).\end{split}\] (I.42)

In the next step we use that the functions \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\), \(i=1,\ldots,n\), are \(\mathcal{F}_{2}(0)\)-measurable. Additionally, we apply that the values of the multiplier process \(G_{i}(t)\), \(t\in\mathcal{T}^{\Delta}_{n,i}\), are independent of the \(\sigma\)-algebra \(\mathcal{F}_{2}(0)\). Combining these assumptions with (I.42), we get

\[\begin{split}&\mathbb{E}_{0}((D^{*,j}_{n,h}(t))^{2})\\&=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{n}\sum_{r:T_{i,r}\leq t}\sum_{v:T_{l,v}\leq t}h^{j}_{n,i}(T_{i,r},\hat{\boldsymbol{\beta}}_{n})h^{j}_{n,l}(T_{l,v},\hat{\boldsymbol{\beta}}_{n})\mathbb{E}(G_{i}(T_{i,r})G_{l}(T_{l,v})).\end{split}\] (I.43)

By construction of the multiplier processes we have for \(i\neq l\) or \(\{i=l,r\neq v\}\)

\[\mathbb{E}(G_{i}(T_{i,r})G_{l}(T_{l,v}))=\mathbb{E}(G_{i}(T_{i,r}))\mathbb{E}(G_{l}(T_{l,v}))=0,\]

and for \(\{i=l,r=v\}\)

\[\mathbb{E}(G_{i}(T_{i,r})G_{l}(T_{l,v}))=\mathbb{E}(G_{i}(T_{i,r})^{2})=1.\]

Thus, (I.43) simplifies to \(\mathbb{E}_{0}((D^{*,j}_{n,h}(t))^{2})=\frac{1}{n}\sum_{i=1}^{n}\sum_{r:T_{i,r}\leq t}h^{j}_{n,i}(T_{i,r},\hat{\boldsymbol{\beta}}_{n})^{2}\).
Finally, it holds that

\[\sup_{t\in\mathcal{T}}\mathbb{E}_{0}(D^{*,j}_{n,h}(t)^{2})\leq\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}h^{j}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{2}\cdot\max_{i\in\{1,\ldots,n\}}N_{i}(\tau)<\infty,\]

since \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) is a known function and, hence, all components \(h^{j}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\), \(j=1,\ldots,p+b\), are bounded on \(\mathcal{T}\). Moreover, the observed number of events within the time frame \(\mathcal{T}=[0,\tau]\), \(N_{i}(\tau)\), is finite for all individuals \(i=1,\ldots,n\). In conclusion, \(D^{*,j}_{n,h}(t)\), \(t\in\mathcal{T}\), is square integrable for all \(j=1,\ldots,p+b\), given the initial \(\sigma\)-algebra \(\mathcal{F}_{2}(0)\).

Next, we consider the martingale property for the stochastic process \(D^{*,j}_{n,h}(t)\), \(t\in\mathcal{T}\). Due to the linearity of the conditional expectation, it suffices to verify the martingale property for the summands \(D^{*,j}_{n,h,i}(t)\) of the scaled sum \(D^{*,j}_{n,h}(t)\), \(i=1,\ldots,n\). For this, we recall that the function \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) and the counting process \(N_{i}(t)\) are \(\mathcal{F}_{2}(0)\subset\mathcal{F}_{2}(t)\)-measurable for \(t\in\mathcal{T}\), respectively, \(i=1,\ldots,n\). Furthermore, for a jump at \(u\leq s\), the multiplier process \(G_{i}(u)\) is \(\mathcal{F}_{2}(s)\)-measurable, and, if \(u\) is greater than or equal to the earliest jump time point, say \(T_{i}(s^{+})\), of process \(i\) in \((s,\tau]\), the values of \(G_{i}(u)\) are independent of the filtration \(\mathcal{F}_{2}(s)\), \(i=1,\ldots,n\). Moreover, we use that the multiplier process \(G_{i}(t)\), \(t\in\mathcal{T}\), has mean zero. This yields for any \(t>s\),

\[\begin{split}&\mathbb{E}[D^{*,j}_{n,h,i}(t)|\mathcal{F}_{2}(s)]\\&=\mathbb{E}\Big{[}\int_{0}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,dN_{i}(u)\Big{|}\mathcal{F}_{2}(s)\Big{]}\\&=\mathbb{E}\Big{[}\int_{0}^{s}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,dN_{i}(u)+\int_{s}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,dN_{i}(u)\Big{|}\mathcal{F}_{2}(s)\Big{]}\\&=D^{*,j}_{n,h,i}(s)+\int_{s}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,\mathbb{E}(G_{i}(u)|\mathcal{F}_{2}(s))\,dN_{i}(u)\\&=D^{*,j}_{n,h,i}(s)+\int_{T_{i}(s^{+})}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,\mathbb{E}(G_{i}(u))\,dN_{i}(u)\\&=D^{*,j}_{n,h,i}(s).\end{split}\]

Thus, we have shown that all elements \(D^{*,j}_{n,h}\) of \(\mathbf{D}^{*}_{n,h}\), \(j=1,\ldots,p+b\), fulfill the martingale property. In conclusion, the stochastic process \(\mathbf{D}^{*}_{n,h}\) is a \((p+b)\)-dimensional vector of square integrable martingales with respect to \(\mathcal{F}_{2}(t)\), \(t\in\mathcal{T}\). With this the first part of Lemma I.3.2 has been proven. In the second part of this proof we derive the predictable covariation process \(\langle\mathbf{D}^{*}_{n,h}\rangle\) and the optional covariation process \([\mathbf{D}^{*}_{n,h}]\) of \(\mathbf{D}^{*}_{n,h}\).
First, we consider the predictable covariation process \(\langle\mathbf{D}^{*}_{n,h}\rangle(t)\):

\[\begin{split}\langle\mathbf{D}^{*}_{n,h}\rangle&=\frac{1}{n}\Big{\langle}\sum_{i=1}^{n}(D^{*,1}_{n,h,i},\ldots,D^{*,p+b}_{n,h,i})\Big{\rangle}\\&=\frac{1}{n}\Big{(}\Big{\langle}\sum_{i=1}^{n}D^{*,j}_{n,h,i},\sum_{i=1}^{n}D^{*,r}_{n,h,i}\Big{\rangle}\Big{)}^{p+b}_{j,r=1}\\&=\frac{1}{n}\Big{(}\sum_{i=1}^{n}\sum_{l=1}^{n}\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle\Big{)}^{p+b}_{j,r=1}\\&=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=i}\big{(}\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle\big{)}^{p+b}_{j,r=1}+\frac{1}{n}\sum_{i=1}^{n}\sum_{l\neq i}\big{(}\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle\big{)}^{p+b}_{j,r=1},\end{split}\] (I.44)

where in the second step of (I.44) we used that the predictable covariation process of a vector-valued martingale is the matrix of the predictable covariation processes of its components. In the following we consider the predictable covariation processes \(\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle\) for \(i=l\) and \(i\neq l\) separately. Recall that the functions \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) and the counting processes \(N_{i}\) are \(\mathcal{F}_{2}(0)\subset\mathcal{F}_{2}(t)\)-measurable, respectively, and that the values of the multiplier processes \(G_{i}(t)\), \(t\in\mathcal{T}\), are independent of the \(\sigma\)-algebra \(\mathcal{F}_{2}(t-)\), \(i=1,\ldots,n\). We then get for \(i=l\),

\[\begin{split}\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,i}\rangle(t)&=\int_{0}^{t}\text{Cov}\big{(}dD^{*,j}_{n,h,i}(u),dD^{*,r}_{n,h,i}(u)|\mathcal{F}_{2}(u-)\big{)}\\&=\int_{0}^{t}\text{Cov}\big{(}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,G_{i}(u)\,dN_{i}(u),h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,G_{i}(u)\,dN_{i}(u)|\mathcal{F}_{2}(u-)\big{)}\\&=\int_{0}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\text{Var}(G_{i}(u))dN_{i}(u)\\&=\int_{0}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})dN_{i}(u),\end{split}\] (I.45)

where for the last equation above we have used that the multiplier processes \(G_{i}(t)\), \(t\in\mathcal{T}\), have unit variance, \(i=1,\ldots,n\). For \(i\neq l\) it holds that

\[\begin{split}d\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle(u)&=\text{Cov}\big{(}dD^{*,j}_{n,h,i}(u),dD^{*,r}_{n,h,l}(u)|\mathcal{F}_{2}(u-)\big{)}\\&=\text{Cov}\big{(}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)dN_{i}(u),h^{r}_{n,l}(u,\hat{\boldsymbol{\beta}}_{n})G_{l}(u)dN_{l}(u)|\mathcal{F}_{2}(u-)\big{)}\\&=h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})h^{r}_{n,l}(u,\hat{\boldsymbol{\beta}}_{n})\text{Cov}\big{(}G_{i}(u),G_{l}(u)\big{)}dN_{i}(u)dN_{l}(u)\\&=0,\end{split}\] (I.46)

where in the last step we have applied that the multiplier processes \(G_{1}(t),\ldots,G_{n}(t)\), \(t\in\mathcal{T}\), are pairwise independent and no two processes jump simultaneously. Hence, \(\langle D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}\rangle(t)=0\) for \(i\neq l\).
Combining (I.44), (I.45), and (I.46), we can state the final form of the predictable covariation process \(\langle\mathbf{D}^{*}_{n,h}\rangle\) of \(\mathbf{D}^{*}_{n,h}\) at \(t\in\mathcal{T}\) in matrix notation:

\[\langle\mathbf{D}^{*}_{n,h}\rangle(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big{(}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big{)}_{j,r=1}^{p+b}dN_{i}(u)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}dN_{i}(u),\]

which proves the second part of Lemma I.3.2. For the optional covariation process \([\mathbf{D}^{*}_{n,h}]\) of \(\mathbf{D}^{*}_{n,h}\) we can write analogously to (I.44)

\[[\mathbf{D}^{*}_{n,h}](t)=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=i}\big{(}[D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}](t)\big{)}_{j,r=1}^{p+b}+\frac{1}{n}\sum_{i=1}^{n}\sum_{l\neq i}\big{(}[D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}](t)\big{)}_{j,r=1}^{p+b}.\] (I.47)

Again, we consider the optional covariation process \([D^{*,j}_{n,h,i},D^{*,r}_{n,h,l}]\) for \(i=l\) and \(i\neq l\) separately. For \(i=l\) we get

\[\begin{split}[D^{*,j}_{n,h,i},D^{*,r}_{n,h,i}](t)&=\sum_{u\leq t}\Delta D^{*,j}_{n,h,i}(u)\Delta D^{*,r}_{n,h,i}(u)\\&=\sum_{u\leq t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,G_{i}(u)\,\Delta N_{i}(u)h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,G_{i}(u)\,\Delta N_{i}(u)\\&=\int_{0}^{t}h^{j}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})h^{r}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\,G_{i}^{2}(u)\,dN_{i}(u).\end{split}\] (I.48)

For \(i\neq l\) it holds that
Note the similarity of the integral with respect to \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) to that of \(\langle\mathbf{D}_{n,h}\rangle(t)\) in (I.31), the only difference being that the integrand is evaluated at \(\hat{\boldsymbol{\beta}}_{n}\) instead of at \(\boldsymbol{\beta}_{0}\). We make use of the result about \(\langle\mathbf{D}_{n,h}\rangle(t)\) and consider \[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i} (u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})- \langle\mathbf{D}_{n,h}\rangle(t)+\langle\mathbf{D}_{n,h}\rangle(t)\\ &=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i}(u,\hat{ \boldsymbol{\beta}}_{n})^{\otimes 2}-\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{ \otimes 2}]\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})+\langle\mathbf{D}_{n,h} \rangle(t),\end{split}\] (I.50) where the first term on the right-hand side can be bounded from above in the following way. \[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i} (u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}-\mathbf{h}_{n,i}(u,\boldsymbol{ \beta}_{0})^{\otimes 2}]\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\ &\leq\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(t, \boldsymbol{\beta}_{0})^{\otimes 2}+\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}) ^{\otimes 2}-\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{ \infty}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0})\\ &\leq\Big{(}\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert( \mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t, \boldsymbol{\beta}_{0}))\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{ \top}\rVert_{\infty}\\ &\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{ \mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})(\mathbf{h}_{n,i}(t,\hat{\boldsymbol {\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))^{\top}\rVert_{ \infty}\\ &\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert( \mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t, \boldsymbol{\beta}_{0}))\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})^{\top}\rVert _{\infty}\\ &\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{ \mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})(\mathbf{h}_{n,i}(t,\boldsymbol{ \beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))^{\top}\rVert_{ \infty}\Big{)}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0}). \end{split}\] All four terms in brackets converge to zero in probability, as \(n\to\infty\), according to Assumption I.2.1 (i), (ii), and the fact that \(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\) and \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) are (locally) bounded. In the following we make use of results of the proof of Lemma I.2.2. For this we note that convergence in probability is equivalent to convergence in conditional probability, cf. Fact 1 of the supplement of Dobler et al. (2019). As stated in the proof of Lemma I.2.2, \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0})=O_{p}(1)\), according to Assumption I.2.1 (iii), the integrability of \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) and the law of large numbers. Hence, the first term on the right-hand side of (I.50) converges to zero in probability, as \(n\to\infty\). 
Note the similarity of the integral with respect to \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) to that of \(\langle\mathbf{D}_{n,h}\rangle(t)\) in (I.31), the only difference being that the integrand is evaluated at \(\hat{\boldsymbol{\beta}}_{n}\) instead of at \(\boldsymbol{\beta}_{0}\). We make use of the result about \(\langle\mathbf{D}_{n,h}\rangle(t)\) and consider

\[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})-\langle\mathbf{D}_{n,h}\rangle(t)+\langle\mathbf{D}_{n,h}\rangle(t)\\&=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}-\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}]\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})+\langle\mathbf{D}_{n,h}\rangle(t),\end{split}\] (I.50)

where the first term on the right-hand side can be bounded from above in the following way.

\[\begin{split}&\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}-\mathbf{h}_{n,i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}]\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&\leq\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2}+\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2}-\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})^{\otimes 2}\rVert_{\infty}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0})\\&\leq\Big{(}\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\top}\rVert_{\infty}\\&\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))^{\top}\rVert_{\infty}\\&\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})^{\top}\rVert_{\infty}\\&\qquad+\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0}))^{\top}\rVert_{\infty}\Big{)}\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0}).\end{split}\]

All four terms in brackets converge to zero in probability, as \(n\to\infty\), according to Assumption I.2.1 (i), (ii), and the fact that \(\mathbf{h}_{n,i}(t,\boldsymbol{\beta}_{0})\) and \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) are (locally) bounded. In the following we make use of results of the proof of Lemma I.2.2. For this we note that convergence in probability is equivalent to convergence in conditional probability, cf. Fact 1 of the supplement of Dobler et al. (2019). As stated in the proof of Lemma I.2.2, \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(t,\boldsymbol{\beta}_{0})=O_{p}(1)\), according to Assumption I.2.1 (iii), the integrability of \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) and the law of large numbers. Hence, the first term on the right-hand side of (I.50) converges to zero in probability, as \(n\to\infty\). Additionally, according to Assumption I.2.1 (ii), (iii), the integrability of \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})\) and the law of large numbers, we have shown in the proof of Lemma I.2.2 that

\[\langle\mathbf{D}_{n,h}\rangle(t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}\int_{0}^{t}\mathbb{E}\Big{(}\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)}du=\mathbf{V}_{\tilde{h}}(t),\text{ for all }t\in\mathcal{T},\text{ as }n\to\infty;\]

cf. Assumption I.2.1 (iii). In particular, \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\) and \(\langle\mathbf{D}_{n,h}\rangle(t)\) are asymptotically equivalent. Next, we consider the integral with respect to the local square integrable martingale \(M_{i}\), \(i=1,\ldots,n\). As, conditionally on \(\mathcal{F}_{2}(0)\), the integrands \(\mathbf{h}_{n,i}(\cdot,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\), \(i=1,\ldots,n\), are known and, hence, predictable with respect to \(\mathcal{F}_{2}\) and locally bounded, the corresponding integral \(\mathbf{W}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2}\,dM_{i}(u)\) is a local square integrable martingale (Proposition II.4.1, Andersen et al. 1993, p. 78). Hence, we apply Lenglart's inequality in order to show that \(\mathbf{W}_{n}(t)\) converges to zero in probability for all \(t\in\mathcal{T}\), as \(n\to\infty\). For this purpose, we consider its predictable covariation process

\[\begin{split}\langle\mathrm{vec}(\mathbf{W}_{n})\rangle(\tau)&=\Big{\langle}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\cdot}\mathrm{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})\,dM_{i}(u)\Big{\rangle}(\tau)\\&=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\mathrm{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}[\mathrm{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\mathrm{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}]\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&\quad+\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\mathrm{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0}),\end{split}\] (I.51)

where in the second equality it has been used that the martingales \(M_{1}(t),\ldots,M_{n}(t)\) are independent. We wish to show that the first term on the right-hand side of the third step converges to zero in probability, as \(n\to\infty\).
For this, it suffices to consider the largest component

\[\begin{split}&\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\|\mathrm{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\mathrm{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\|_{\infty}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\\&\leq\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\|\mathrm{vec}(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\mathrm{vec}(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\|_{\infty}\frac{1}{n^{2}}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0}).\end{split}\]

It holds that

\[\begin{split}&\|\mathrm{vec}(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\mathrm{vec}(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\|_{\infty}\\&\leq\ \|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}^{2}\Big{[}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}\\&\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\Big{]}\\&\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}\Big{[}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}\\&\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\Big{]},\end{split}\]

where we used the triangle inequality and applied \(\mathbf{a}^{\otimes 2}-\mathbf{b}^{\otimes 2}=(\mathbf{a}-\mathbf{b})\mathbf{a}^{\top}+\mathbf{b}(\mathbf{a}-\mathbf{b})^{\top}\) for two vectors \(\mathbf{a},\mathbf{b}\) twice, i.e.,

\[\begin{split}\mathrm{vec}[\mathbf{a}^{\otimes 2}]^{\otimes 2}-\mathrm{vec}[\mathbf{b}^{\otimes 2}]^{\otimes 2}&=\mathrm{vec}[(\mathbf{a}-\mathbf{b})\mathbf{a}^{\top}+\mathbf{b}(\mathbf{a}-\mathbf{b})^{\top}]\mathrm{vec}[\mathbf{a}\mathbf{a}^{\top}]^{\top}\\&\quad+\mathrm{vec}[\mathbf{b}\mathbf{b}^{\top}]\mathrm{vec}[(\mathbf{a}-\mathbf{b})\mathbf{a}^{\top}+\mathbf{b}(\mathbf{a}-\mathbf{b})^{\top}]^{\top}.\end{split}\] (I.52)

Hence, according to Assumption I.2.1 (i), (ii), and since \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) is locally bounded for \(i=1,\ldots,n\), it follows that \(\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\|\mathrm{vec}(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\mathrm{vec}(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\|_{\infty}=o_{p}(1)\). As explained before, we have \(\frac{1}{n}\sum_{i=1}^{n}\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})=O_{p}(1)\). In conclusion, the first term on the right-hand side of the third step of (I.51) converges to zero in probability, as \(n\to\infty\). We further need to show that the corresponding second term vanishes asymptotically. For this we consider the largest component of \(\mathbb{E}\Big{(}\int_{0}^{\tau}\mathrm{vec}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\,d\Lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)}\), for which it holds that

\[\mathbb{E}\Big{(}\int_{0}^{\tau}\|\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})\|_{\infty}^{4}\,d\Lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)}\leq\mathbb{E}\Big{(}\sup_{t\in\mathcal{T}}\|\tilde{\mathbf{h}}_{1}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{4}\Lambda_{1}(\tau,\boldsymbol{\beta}_{0})\Big{)}<\infty,\]

due to Assumption I.2.1 (ii) and the integrability of \(\Lambda_{1}(\tau,\boldsymbol{\beta}_{0})\).
Combining this with Assumption I.2.1 (iii) and the law of large numbers yields

\[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\mathrm{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbb{E}\Big{(}\int_{0}^{\tau}\mathrm{vec}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\,d\Lambda_{1}(u,\boldsymbol{\beta}_{0})\Big{)},\]

as \(n\to\infty\). Finally, for the second term on the right-hand side of the third step of (I.51) we have \(\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\mathrm{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\,d\Lambda_{i}(u,\boldsymbol{\beta}_{0})=o(1)\cdot O_{p}(1)\). Thus, \(\mathbf{W}_{n}(t)\) converges to zero in probability for all \(t\in\mathcal{T}\), as \(n\to\infty\), according to Lenglart's inequality. In conclusion, the predictable covariation process \(\langle\mathbf{D}_{n,h}^{*}\rangle(t)\) of \(\mathbf{D}_{n,h}^{*}\) at \(t\) converges to the matrix-valued function \(\mathbf{V}_{\tilde{h}}(t)=\int_{0}^{t}\mathbb{E}(\tilde{\mathbf{h}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\,\lambda_{1}(u,\boldsymbol{\beta}_{0}))du\) in probability, as \(n\to\infty\), for all \(t\in\mathcal{T}\) (cf. Assumption I.2.1 (iii)). This completes the proof of Lemma I.3.5. \(\blacksquare\)

**Proof of Lemma I.3.6.** We use the modified version of Rebolledo's central limit theorem as stated in Theorem I.3.4 to prove the weak convergence of \(\mathbf{D}_{n,h}^{*}\) to the zero-mean Gaussian martingale \(\mathbf{D}_{\tilde{h}}\). For this purpose, we first consider the term \(\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}_{n,h}^{*}](\tau)\) for some \(\boldsymbol{\lambda}\in S^{p+b-1}\), where \(S^{p+b-1}\) denotes the unit \((p+b-1)\)-sphere. It can be seen that

\[\begin{split}&\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}^{*}_{n,h}](\tau)=\sum_{u\leq\tau}|\Delta\boldsymbol{\lambda}^{\top}\mathbf{D}^{*}_{n,h}(u)|^{2}\,\mathbb{1}\{|\Delta\boldsymbol{\lambda}^{\top}\mathbf{D}^{*}_{n,h}(u)|>\epsilon\}\\&=\sum_{u\leq\tau}\Big{|}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,\Delta N_{i}(u)\Big{|}^{2}\,\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,\Delta N_{i}(u)\Big{|}>\epsilon\Big{\}}\\&\leq\frac{1}{n}\sum_{u\leq\tau}\sum_{i=1}^{n}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}(u)\,\Delta N_{i}(u)|^{2}\,\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\sum_{l=1}^{n}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,l}(u,\hat{\boldsymbol{\beta}}_{n})G_{l}(u)\,\Delta N_{l}(u)\Big{|}>\epsilon\Big{\}}\\&=\frac{1}{n}\sum_{i=1}^{n}\sum_{j:T_{i,j}\in\mathcal{T}^{\Delta}_{n,i}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n}))^{2}G_{i}^{2}(T_{i,j})\,\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n})G_{i}(T_{i,j})\Big{|}>\epsilon\Big{\}},\end{split}\]

where in the third step of the derivation above it has been used that no two counting processes jump at the same time, i.e., \(\Delta N_{i}(t)\Delta N_{j}(t)=0\), for \(i\neq j\).
From this it follows that

\[\begin{split}&\mathbb{E}_{0}(\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}^{*}_{n,h}](\tau))\\&\leq\mathbb{E}_{0}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\sum_{j:T_{i,j}\in\mathcal{T}^{\Delta}_{n,i}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n}))^{2}G_{i}^{2}(T_{i,j})\,\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n})G_{i}(T_{i,j})\Big{|}>\epsilon\Big{\}}\Big{)}\\&=\frac{1}{n}\sum_{i=1}^{n}\sum_{j:T_{i,j}\in\mathcal{T}^{\Delta}_{n,i}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n}))^{2}\,\mathbb{E}_{0}\Big{(}G_{i}^{2}(T_{i,j})\,\mathbb{1}\Big{\{}\Big{|}\frac{1}{\sqrt{n}}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n})G_{i}(T_{i,j})\Big{|}>\epsilon\Big{\}}\Big{)}\\&\leq\frac{1}{n}\sum_{i=1}^{n}\sum_{j:T_{i,j}\in\mathcal{T}^{\Delta}_{n,i}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n}))^{2}\big{(}\mathbb{E}(G_{1,1}^{4})\mathbb{P}_{0}(|\frac{1}{\sqrt{n}}\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(T_{i,j},\hat{\boldsymbol{\beta}}_{n})G_{1,1}|>\epsilon)\big{)}^{1/2}\\&\leq\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n}))^{2}(\mathbb{E}(G_{1,1}^{4}))^{1/2}\big{[}\mathbb{P}_{0}\big{(}\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})||G_{1,1}|>\epsilon\sqrt{n}\big{)}\big{]}^{1/2}\cdot\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau),\end{split}\]

where \(\mathbb{E}_{0}(\cdot)\) and \(\mathbb{P}_{0}(\cdot)\) denote the conditional expectation \(\mathbb{E}(\cdot|\mathcal{F}_{2}(0))\) and the conditional probability \(\mathbb{P}(\cdot|\mathcal{F}_{2}(0))\), respectively, given the initial filtration \(\mathcal{F}_{2}(0)\). In the first step of the equation above we have used that \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\in\mathcal{F}_{2}(0)\). In the second step, the Cauchy-Schwarz inequality has been applied. In the same step it has additionally been used that the multiplier processes \(G_{i}(t)\), \(t\in\mathcal{T},i=1,\ldots,n\), are i.i.d. and independent of \(\mathcal{F}_{2}(0)\). As our first goal is to verify the conditional Lindeberg condition in probability, i.e., \(\mathbb{E}_{0}(\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}^{*}_{n,h}](\tau))\overset{\mathbb{P}}{\longrightarrow}0\), \(n\to\infty\), we point out that for the terms of the last step of the equation above we have \(\mathbb{E}(G_{1,1}^{4})<\infty\) and \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=O_{p}(1)\). The latter holds according to the integrability of \(\Lambda_{i}(\tau,\boldsymbol{\beta}_{0})\) and Assumption I.2.1 (iii), as explained at the beginning of the proof of Lemma I.2.4 in combination with Fact 1 of the supplement of Dobler et al. (2019). Furthermore, the limiting function \(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\) of \(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\) exists and is assumed to be bounded on \(\mathcal{T}\) for all \(i\in\mathbb{N}\), according to Assumption I.2.1 (i) and (ii).
Therefore, \(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n}))^{2}\) is stochastically bounded:

\[\begin{split}&\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}(\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n}))^{2}\\&\leq(p+b)\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\sum_{j=1}^{p+b}\lambda_{j}^{2}(h_{n,i}^{j}(t,\hat{\boldsymbol{\beta}}_{n}))^{2}\\&\leq(p+b)^{2}\|\boldsymbol{\lambda}\|_{\infty}^{2}\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})+\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}\\&\leq 2(p+b)^{2}\|\boldsymbol{\lambda}\|_{\infty}^{2}\big{(}\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}+\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}\big{)}\\&=2(p+b)^{2}\|\boldsymbol{\lambda}\|_{\infty}^{2}\big{(}o_{p}(1)+\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}\big{)}\\&=O_{p}(1).\end{split}\]

Hence, it is only left to show that \(\mathbb{P}(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})||G_{1,1}|>\epsilon\sqrt{n}|\mathcal{F}_{2}(0))=o_{p}(1)\). For this purpose, recall that \(\mathbb{1}\{\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}<\delta\}\) converges to one in probability, according to Assumption I.2.1 (i). Thus, we can proceed with the following term:

\[\begin{split}&\mathbb{P}_{0}(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})||G_{1,1}|>\sqrt{n}\epsilon)\,\mathbb{1}\{\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}<\delta\}\\&=\mathbb{P}_{0}(\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\boldsymbol{\lambda}^{\top}\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})+\boldsymbol{\lambda}^{\top}\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})||G_{1,1}|>\sqrt{n}\epsilon,\\&\qquad\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}<\delta)\\&\leq\mathbb{P}_{0}((p+b)\|\boldsymbol{\lambda}\|_{\infty}(\delta+\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty})|G_{1,1}|>\sqrt{n}\epsilon)\\&\qquad\cdot\mathbb{1}\{\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}<\delta\}\\&\leq\mathbb{P}_{0}\big{(}|G_{1,1}|>\frac{\sqrt{n}\epsilon}{(p+b)\|\boldsymbol{\lambda}\|_{\infty}(\delta+\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty})}\big{)}\\&\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\;n\to\infty.\end{split}\]

Here, the convergence in probability of the conditional probability in the last step holds because \(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\) is bounded on \(\mathcal{T}\) for all \(i\in\mathbb{N}\), as stated in Assumption I.2.1 (ii).
We can conclude that \(\mathbb{P}(\sup_{t\in\mathcal{T},i\in\{1,\dots,n\}}|\boldsymbol{\lambda}^{\top}\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})||G_{1,1}|>\epsilon\sqrt{n}|\mathcal{F}_{2}(0))=o_{p}(1)\). Thus, the conditional Lindeberg condition in probability is fulfilled for \(\boldsymbol{\lambda}^{\top}\mathbf{D}_{n,h}^{*}(t)\) with \(\boldsymbol{\lambda}\in S^{p+b-1}\). As \(\|\boldsymbol{\lambda}\|_{\infty}\leq 1\), we can get an upper bound independent of \(\boldsymbol{\lambda}\), and thus we in fact know that the asserted Lindeberg condition holds for all \(\boldsymbol{\lambda}\in S^{p+b-1}\). We would like to point out that the probability space can more conveniently be modelled as a product space \((\Omega,\mathcal{A},\mathbb{P})=(\Omega_{1}\times\Omega_{2},\mathcal{A}_{1}\otimes\mathcal{A}_{2},\mathbb{P}_{1}\otimes\mathbb{P}_{2})=(\Omega_{1},\mathcal{A}_{1},\mathbb{P}_{1})\otimes(\Omega_{2},\mathcal{A}_{2},\mathbb{P}_{2})\). In the following we make use of this notation to explicitly refer to the probability space \((\Omega_{1},\mathcal{A}_{1},\mathbb{P}_{1})\) underlying the data sets \(\{\mathbf{N}(t),\mathbf{Y}(t),\mathbf{Z}(t),t\in\mathcal{T}\}\), and the probability space \((\Omega_{2},\mathcal{A}_{2},\mathbb{P}_{2})\) underlying the sets of multiplier processes \(\{G_{1}(t),\dots,G_{n}(t),\,t\in\mathcal{T}\}\). Additionally, we denote by \(\xrightarrow{\mathcal{L}_{\mathbb{P}_{2}}}\) the convergence in law w.r.t. the probability measure \(\mathbb{P}_{2}\). Moreover, for some stochastic quantity \(\mathbf{H}_{n}\), we denote \(\mathbf{H}_{n}\) conditional on a particular data set as \(\mathbf{H}_{n}|\mathcal{F}_{2}(0)(\omega)\), \(\omega\in\Omega_{1}\). From the conditional Lindeberg condition in probability it follows that there exists for all subsequences \(n_{1}\) of \(n\) a further subsequence \(n_{2}\) such that \(\mathbb{E}_{\mathbb{P}_{2}}(\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}_{n_{2},h}^{*}](\tau)|\mathcal{F}_{2}(0))(\omega)\longrightarrow 0\), \(n\rightarrow\infty\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\) and for all \(\boldsymbol{\lambda}\in S^{p+b-1}\). Here, \(\mathbb{E}_{\mathbb{P}_{2}}(\cdot)\) indicates that the expectation is taken with respect to \(\mathbb{P}_{2}\). Hence, the (unconditional) Lindeberg condition holds along the subsequence \(n_{2}\) for \(\mathbb{P}_{1}\)-almost all data sets. Next, we consider the predictable covariation process of \(\boldsymbol{\lambda}^{\top}\mathbf{D}_{n,h}^{*}\) for some \(\boldsymbol{\lambda}\in S^{p+b-1}\) and get, conditionally on \(\mathcal{F}_{2}(0)\), \[\langle\boldsymbol{\lambda}^{\top}\mathbf{D}_{n,h}^{*}\rangle(t)=\boldsymbol{\lambda}^{\top}\langle\mathbf{D}_{n,h}^{*}\rangle(t)\boldsymbol{\lambda}\xrightarrow{\mathbb{P}_{1}\otimes\mathbb{P}_{2}}\boldsymbol{\lambda}^{\top}\mathbf{V}_{\tilde{h}}(t)\boldsymbol{\lambda},\text{ as }n\rightarrow\infty,\text{ for all }t\in\mathcal{T},\] according to Lemma I.3.5. Furthermore, we have \[\boldsymbol{\lambda}^{\top}\big{(}(\langle\mathbf{D}_{n,h}^{*}\rangle-\mathbf{V}_{\tilde{h}})(t)\big{)}\boldsymbol{\lambda}=\sum_{j=1}^{p+b}\sum_{l=1}^{p+b}\lambda_{j}(\langle\mathbf{D}_{n,h}^{*}\rangle-\mathbf{V}_{\tilde{h}})_{j,l}(t)\lambda_{l}\leq(p+b)^{2}\|\boldsymbol{\lambda}\|_{\infty}^{2}\cdot\|\langle\mathbf{D}_{n,h}^{*}\rangle(t)-\mathbf{V}_{\tilde{h}}(t)\|_{\infty},\] where \((\langle\mathbf{D}_{n,h}^{*}\rangle-\mathbf{V}_{\tilde{h}})_{j,l}\) denotes the \((j,l)\)-th entry of the corresponding matrix.
As \(\|\boldsymbol{\lambda}\|_{\infty}\leq 1\) and \(\|\langle\mathbf{D}_{n,h}^{*}\rangle(t)-\mathbf{V}_{\tilde{h}}(t)\|_{\infty}=o_{p}(1)\), in view of Lemma I.3.5 we thus obtain \[\langle\boldsymbol{\lambda}^{\top}\mathbf{D}_{n,h}^{*}\rangle(t)\xrightarrow{\mathbb{P}_{1}\otimes\mathbb{P}_{2}}\boldsymbol{\lambda}^{\top}\mathbf{V}_{\tilde{h}}(t)\boldsymbol{\lambda},\text{ as }n\rightarrow\infty,\text{ for all }t\in\mathcal{T},\text{ and all }\boldsymbol{\lambda}\in S^{p+b-1}.\] Hence, there exists for every subsequence \(n_{3}\) of \(n_{2}\) a further subsequence \(n_{4}\) such that \(\langle\boldsymbol{\lambda}^{\top}\mathbf{D}_{n_{4},h}^{*}\rangle(t)|\mathcal{F}_{2}(0)(\omega)\xrightarrow{\mathbb{P}_{2}}\boldsymbol{\lambda}^{\top}\mathbf{V}_{\tilde{h}}(t)\boldsymbol{\lambda}\), as \(n\rightarrow\infty\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\), all \(t\in\mathcal{T}\), and all \(\boldsymbol{\lambda}\in S^{p+b-1}\). Clearly, it also holds that \(\mathbb{E}_{\mathbb{P}_{2}}(\sigma^{\epsilon}[\boldsymbol{\lambda}^{\top}\mathbf{D}_{n_{4},h}^{*}](\tau)|\mathcal{F}_{2}(0))(\omega)\to 0\), \(n\rightarrow\infty\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\) and all \(\boldsymbol{\lambda}\in S^{p+b-1}\). Thus, with Theorem I.3.4 it follows that \[\boldsymbol{\lambda}^{\top}\mathbf{D}_{n_{4},h}^{*}|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\boldsymbol{\lambda}^{\top}\mathbf{D}_{\tilde{h}},\text{ in }D(\mathcal{T}),\text{ as }n\rightarrow\infty,\] for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\) and all \(\boldsymbol{\lambda}\in S^{p+b-1}\). As the weak convergence of \(\boldsymbol{\lambda}^{\top}\mathbf{D}_{n_{4},h}^{*}|\mathcal{F}_{2}(0)(\omega)\) holds for all \(\boldsymbol{\lambda}\in S^{p+b-1}\), the Cramér-Wold device yields \(\mathbf{D}_{n_{4},h}^{*}|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}}\), in \(D(\mathcal{T})^{p+b}\), as \(n\rightarrow\infty\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Finally, we get, conditionally on \(\mathcal{F}_{2}(0)\), \[\mathbf{D}_{n,h}^{*}\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}},\text{ in }D(\mathcal{T})^{p+b},\text{ as }n\rightarrow\infty,\] in \(\mathbb{P}_{1}\)-probability. This completes the proof of Lemma I.3.6. \(\blacksquare\) **Proof of Corollary I.3.7.** We relate the optional covariation process \([\mathbf{D}_{n,h}^{*}](t)\) and the predictable covariation process \(\langle\mathbf{D}_{n,h}^{*}\rangle(t)\) of \(\mathbf{D}_{n,h}^{*}(t)\) to each other by noting the obvious identity \[[\mathbf{D}_{n,h}^{*}](t)=[\mathbf{D}_{n,h}^{*}](t)-\langle\mathbf{D}_{n,h}^{*}\rangle(t)+\langle\mathbf{D}_{n,h}^{*}\rangle(t).\] Consequently, if the predictable covariation process \(\langle\mathbf{D}_{n,h}^{*}\rangle(t)\) converges in probability to \(\mathbf{V}_{\tilde{h}}(t)\), as \(n\rightarrow\infty\), and it holds that \([\mathbf{D}_{n,h}^{*}](t)-\langle\mathbf{D}_{n,h}^{*}\rangle(t)=o_{p}(1)\), then the optional covariation process \([\mathbf{D}_{n,h}^{*}](t)\) also converges in probability to \(\mathbf{V}_{\tilde{h}}(t)\), as \(n\rightarrow\infty\), and vice versa. Hence, for this proof we assume that Lemma I.3.5 holds and show that the difference between the optional covariation process and the predictable covariation process of \(\mathbf{D}_{n,h}^{*}(t)\) vanishes asymptotically.
Let us consider the vectorized version \(\mathbf{Q}_{n}\) of the difference between the optional covariation process and the predictable covariation process of \(\mathbf{D}_{n,h}^{*}(t)\), \(t\in\mathcal{T}\), \[\mathbf{Q}_{n}(t)=\text{vec}([\mathbf{D}_{n,h}^{*}](t)-\langle\mathbf{D}_{n,h}^{*}\rangle(t))=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})(G_{i}^{2}(u)-1)dN_{i}(u).\] The integrands \(\text{vec}(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})\) are known, locally bounded, and predictable. Hence, according to Theorem II.3.1 of Andersen et al. (1993), \(\mathbf{Q}_{n}\) is a vector of local square integrable martingales if \(\int_{0}^{\cdot}(G_{i}^{2}(u)-1)dN_{i}(u)\) is a finite variation local square integrable martingale for all \(i=1,\ldots,n\). This is what we show in the following three steps.

1. The finite variation holds, because \[\int_{0}^{\tau}\lvert(G_{i}^{2}(u)-1)dN_{i}(u)\rvert\leq(\sup_{t\in\mathcal{T}}G_{i}^{2}(t)+1)N_{i}(\tau),\] and the term on the right-hand side is almost surely finite as \(N_{i}(\tau)<\infty\), and the supremum is a maximum of almost surely finitely many random variables.

2. It is square integrable, since \[\sup_{t\in\mathcal{T}}\mathbb{E}_{0}\Big{(}\Big{[}\int_{0}^{t}(G_{i}^{2}(u)-1)dN_{i}(u)\Big{]}^{2}\Big{)}=\sup_{t\in\mathcal{T}}\mathbb{E}_{0}\Big{(}\Big{[}\sum_{j:T_{i,j}\leq t}(G_{i,j}^{2}-1)\Big{]}^{2}\Big{)}\leq N_{i}(\tau)\sum_{j=1}^{n_{i}}\mathbb{E}_{0}\big{(}(G_{i,j}^{2}-1)^{2}\big{)}=N_{i}(\tau)\sum_{j=1}^{n_{i}}\mathbb{E}(G_{i,j}^{4}-2G_{i,j}^{2}+1)\leq N_{i}(\tau)^{2}\mathbb{E}(G_{1,1}^{4})<\infty,\] where \(\mathbb{E}_{0}(\cdot)\) denotes the conditional expectation \(\mathbb{E}(\cdot|\mathcal{F}_{2}(0))\) and \(\big{|}\{j:T_{i,j}\leq t\}\big{|}\) the cardinality of the corresponding set. Moreover, in the third step we have applied that the counting processes \(N_{i}(t)\), \(t\in\mathcal{T}\), are \(\mathcal{F}_{2}(0)\)-measurable, whereas the values of \(G_{i,j}\) and the filtration \(\mathcal{F}_{2}(0)\) are independent for all \(j=1,\ldots,n_{i}\) and \(i=1,\ldots,n\). Additionally, in the fourth step we used that \(G_{i,1},\ldots,G_{i,n_{i}}\) are identically distributed with zero mean, unit variance and finite fourth moment for all \(i=1,\ldots,n\).

3. The martingale property is valid, as \[\begin{split}&\mathbb{E}\big{(}\int_{0}^{t}(G_{i}^{2}(u)-1)dN_{i}(u)|\mathcal{F}_{2}(s)\big{)}\\ &=\mathbb{E}\big{(}\int_{0}^{s}(G_{i}^{2}(u)-1)dN_{i}(u)+\int_{s}^{t}(G_{i}^{2}(u)-1)dN_{i}(u)|\mathcal{F}_{2}(s)\big{)}\\ &=\int_{0}^{s}(G_{i}^{2}(u)-1)dN_{i}(u)+\int_{s}^{t}\big{(}\mathbb{E}(G_{i}^{2}(u))-1\big{)}dN_{i}(u)\\ &=\int_{0}^{s}(G_{i}^{2}(u)-1)dN_{i}(u),\end{split}\] where in the second step we have used that the counting process \(N_{i}(t)\) is \(\mathcal{F}_{2}(0)\subset\mathcal{F}_{2}(t)\)-measurable for \(t\in\mathcal{T}\), \(i=1,\ldots,n\). Furthermore, for a jump at \(u\leq s\), the multiplier process \(G_{i}(u)\) is \(\mathcal{F}_{2}(s)\)-measurable, and, if \(u\) is greater than or equal to the earliest jump time point, say \(T_{i}(s^{+})\), of process \(N_{i}\) in \((s,\tau]\), the values of \(G_{i}(u)\) and the filtration \(\mathcal{F}_{2}(s)\) are independent, \(i=1,\ldots,n\). In the third step we used that the multiplier processes \(G_{i}(t)\), \(t\in\mathcal{T}\), have zero mean and unit variance, \(i=1,\ldots,n\).

In conclusion, \(\mathbf{Q}_{n}\) is a vector of local square integrable martingales. Next, we wish to show that \(\mathbf{Q}_{n}(t)\) converges to zero in probability, as \(n\to\infty\).
For this we apply Lenglart's inequality and consider the predictable covariation process \(\langle\mathbf{Q}_{n}\rangle(\tau)\) of the martingale \(\mathbf{Q}_{n}\) at \(\tau\): \[\langle\mathbf{Q}_{n}\rangle(\tau)=\Big{\langle}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\cdot}\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})(G_{i}^{2}(u)-1)dN_{i}(u)\Big{\rangle}(\tau)\] \[=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}d\Big{\langle}\int_{0}^{\cdot}(G_{i}^{2}(v)-1)dN_{i}(v)\Big{\rangle}(u)\] \[=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}(\mathbb{E}(G_{i}^{4}(u))-1)dN_{i}(u),\] where in the second step we have used that \[d\Big{\langle}\int_{0}^{\cdot}(G_{i}^{2}(u)-1)dN_{i}(u),\int_{0}^{\cdot}(G_{l}^{2}(u)-1)dN_{l}(u)\Big{\rangle}(t)=\text{Cov}\big{(}(G_{i}^{2}(t)-1)dN_{i}(t),(G_{l}^{2}(t)-1)dN_{l}(t)|\mathcal{F}_{t-}\big{)}=\text{Cov}\big{(}G_{i}^{2}(t),G_{l}^{2}(t)\big{)}dN_{i}(t)dN_{l}(t)=0,\] because \(G_{1}(t),\ldots,G_{n}(t)\), \(t\in\mathcal{T}\), are pairwise independent and no two counting processes jump simultaneously. The third step holds due to \[d\Big{\langle}\int_{0}^{\cdot}(G_{i}^{2}(u)-1)dN_{i}(u)\Big{\rangle}(t)=\mathbb{E}\big{(}[(G_{i}^{2}(t)-1)dN_{i}(t)]^{2}|\mathcal{F}_{t-}\big{)}=\big{(}\mathbb{E}(G_{i}^{4}(t))-2\mathbb{E}(G_{i}(t)^{2})+1\big{)}dN_{i}(t)=\big{(}\mathbb{E}(G_{i}^{4}(t))-1\big{)}dN_{i}(t).\] We continue by stating that \[\langle\mathbf{Q}_{n}\rangle(\tau)=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\text{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}(\mathbb{E}(G_{i}^{4}(u))-1)dN_{i}(u)\] (I.53) \[+\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}[\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\text{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}](\mathbb{E}(G_{i}^{4}(u))-1)dN_{i}(u).\] For the first term on the right-hand side we have \[\begin{split}&\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\text{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}(\mathbb{E}(G_{i}^{4}(u))-1)dN_{i}(u)\\ &\leq\frac{1}{n}\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{4}(\mathbb{E}(G_{1,1}^{4})-1)\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=o_{p}(1),\text{ as }n\to\infty,\end{split}\] (I.54) since \(\mathbb{E}(G_{1,1}^{4})<\infty\) according to Assumption I.2.1 (ii), and \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=O_{p}(1)\), as was derived at the beginning of the proof of Lemma I.2.4.
Additionally, for the second term on the right-hand side we find \[\begin{split}&\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}[\text{vec}(\mathbf{h}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\text{vec}(\tilde{\mathbf{h}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}](\mathbb{E}(G_{i}^{4}(u))-1)dN_{i}(u)\\ &\leq\frac{1}{n}\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\|\text{vec}(\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})^{\otimes 2})^{\otimes 2}-\text{vec}(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})^{\otimes 2})^{\otimes 2}\|_{\infty}\\ &\quad\cdot(\mathbb{E}(G_{1,1}^{4})-1)\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)\\ &\leq\sup_{i\in\{1,\ldots,n\},t\in\mathcal{T}}\Big{(}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}^{2}\Big{[}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}\\ &\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\Big{]}\\ &\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}^{2}\Big{[}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}\\ &\quad+\|\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty}\Big{]}\Big{)}\\ &\quad\cdot\frac{1}{n}(\mathbb{E}(G_{1,1}^{4})-1)\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)\\ &=o_{p}(1),\text{ as }n\to\infty,\end{split}\] (I.55) where we used \(\mathbb{E}(G_{1,1}^{4})<\infty\), \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=O_{p}(1)\), \(\|\mathbf{h}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\|_{\infty}<\infty\), Assumption I.2.1 (i), (ii), and (I.52) in combination with the triangle inequality. In particular, the terms in brackets vanish asymptotically, as \(n\to\infty\). Combining (I.53), (I.54) and (I.55), we get \(\langle\mathbf{Q}_{n}\rangle(\tau)=o_{p}(1)\) as \(n\to\infty\), and with Lenglart's inequality it follows that \[\mathbf{Q}_{n}(t)=\text{vec}\big{(}[\mathbf{D}_{n,h}^{*}](t)-\langle\mathbf{D}_{n,h}^{*}\rangle(t)\big{)}\stackrel{{\mathbb{P}}}{{\longrightarrow}}0,\text{ as }n\to\infty,\text{ for all }t\in\mathcal{T}.\] In combination with Lemma I.3.5, we have \([\mathbf{D}_{n,h}^{*}](t)\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbf{V}_{\tilde{h}}(t)\), as \(n\to\infty\), for all \(t\in\mathcal{T}\). This completes the proof of Corollary I.3.7. \(\blacksquare\) **Proof of Lemma I.3.8.** Recall from (I.18) that \(\mathbf{B}_{n}^{*}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})\big{(}G_{i}(u)+1\big{)}dN_{i}(u)\).
Then, we have \[\begin{split}&\sup_{t\in\mathcal{T}}\lVert\mathbf{B}_{n}^{*}(t)-\mathbf{B}(t)\rVert\\ &\leq\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})](G_{i}(u)+1)dN_{i}(u)\Big{\rVert}\\ &\quad+\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})(G_{i}(u)+1)dN_{i}(u)-\mathbf{B}(t)\Big{\rVert}\\ &\leq\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})](G_{i}(u)+1)dN_{i}(u)\Big{\rVert}\\ &\quad+\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})G_{i}(u)dN_{i}(u)\Big{\rVert}\\ &\quad+\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})dN_{i}(u)-\mathbf{B}(t)\Big{\rVert}.\end{split}\] (I.56) We consider the second term on the right-hand side of the second step of (I.56) first. According to Lemma I.3.2 with \(h_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\equiv 1\), \(\int_{0}^{t}G_{i}(u)\,dN_{i}(u)\) is a square integrable martingale w.r.t. \(\mathcal{F}_{2}\). Moreover, it holds that \(\int_{0}^{\tau}\lvert G_{i}(u)\,dN_{i}(u)\rvert\leq\max_{j=1,\ldots,n_{i}}\lvert G_{i,j}\rvert N_{i}(\tau)<\infty\) almost surely, as the maximum is taken over finitely many almost surely finite random variables. Thus, the martingale is also of finite variation. Due to Assumption I.2.3 (ii) and with Theorem II.3.1 of Andersen et al. (1993), it follows that \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})G_{i}(u)dN_{i}(u)\) is a local square integrable martingale w.r.t. \(\mathcal{F}_{2}\). Furthermore, its predictable covariation process at \(\tau\) is given by \[\begin{split}&\Big{\langle}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\cdot}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})G_{i}(u)dN_{i}(u)\Big{\rangle}(\tau)\\ &=\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\Big{\langle}\int_{0}^{\cdot}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})G_{i}(u)dN_{i}(u),\int_{0}^{\cdot}\tilde{\mathbf{K}}_{j}(u,\boldsymbol{\beta}_{0})G_{j}(u)dN_{j}(u)\Big{\rangle}(\tau)\\ &=\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}\int_{0}^{\tau}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})\,d\Big{\langle}\int_{0}^{\cdot}G_{i}(s)dN_{i}(s),\int_{0}^{\cdot}G_{j}(s)dN_{j}(s)\Big{\rangle}(u)\,\tilde{\mathbf{K}}_{j}(u,\boldsymbol{\beta}_{0})^{\top}\\ &=\frac{1}{n^{2}}\sum_{i=1}^{n}\int_{0}^{\tau}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})^{\otimes 2}\,dN_{i}(u),\end{split}\] (I.57) because \(\langle\int_{0}^{\cdot}G_{i}(s)dN_{i}(s),\int_{0}^{\cdot}G_{j}(s)dN_{j}(s)\rangle(u)=N_{i}(u)\), for \(i=j\), and zero otherwise, according to Lemma I.3.2. Additionally, in the second step of (I.57) the aforementioned Theorem II.3.1 has been used. For the remaining part of this proof we use unconditional convergence in probability instead of convergence conditionally on \(\mathcal{F}_{2}(0)\), because due to Fact 1 of the supplement of Dobler et al. (2019) these two types of convergence are equivalent. We wish to show that the last term on the right-hand side of (I.57) converges to zero in probability, as \(n\to\infty\). For this, we bound that term from above by \(\frac{1}{n}\sup_{i\in\{1,\dots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})\rVert_{\infty}^{2}\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)\).
Recall that \(\sup_{i\in\{1,\dots,n\},t\in\mathcal{T}}\lVert\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})\rVert_{\infty}^{2}<\infty\), by Assumption I.2.3 (ii), and \(\frac{1}{n}\sum_{i=1}^{n}N_{i}(\tau)=O_{p}(1)\), by the integrability of \(\Lambda_{i}(\tau,\beta_{0})\) and Assumption I.2.1 (iii), as stated at the beginning of the proof of Lemma I.2.4. Hence, the predictable covariation process of \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\cdot}\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})G_{i}(u)dN_{i}(u)\) at \(\tau\) converges to zero in probability, as \(n\to\infty\). With Lenglart's inequality it follows that the corresponding martingale converges to zero in probability, as \(n\to\infty\), for all \(t\in\mathcal{T}\). In other words, the second term on the right-hand side of the second step of (I.56) vanishes asymptotically. Next, we consider the first term on the right-hand side of the second step of (I.56). For this term we get \[\sup_{t\in\mathcal{T}}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}[\mathrm{D}\mathbf{k}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{K}}_{i}(u,\boldsymbol{\beta}_{0})](G_{i}(u)+1)dN_{i}(u)\Big{\rVert}\leq\sup_{i\in\{1,\dots,n\},t\in\mathcal{T}}\lVert\mathrm{D}\mathbf{k}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{K}}_{i}(t,\boldsymbol{\beta}_{0})\rVert\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}|G_{i}(u)+1|\,dN_{i}(u).\] According to Assumption I.2.3 (i), the first term on the right-hand side of the inequality above converges to zero in probability, as \(n\to\infty\). We now address the corresponding second term, which can be rewritten as \(\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n_{i}}\lvert G_{i,j}+1\rvert\). Furthermore, we have \[\begin{split}\mathbb{E}\Big{(}\sum_{j=1}^{n_{i}}\lvert G_{i,j}+1\rvert\Big{)}&=\mathbb{E}\Big{(}\mathbb{E}\Big{(}\sum_{j=1}^{n_{i}}\lvert G_{i,j}+1\rvert\Big{|}\mathcal{F}_{2}(0)\Big{)}\Big{)}\\ &=\mathbb{E}\Big{(}\sum_{j=1}^{n_{i}}\mathbb{E}(\lvert G_{i,j}+1\rvert)\Big{)}\\ &\leq 2\mathbb{E}(N_{i}(\tau))<\infty,\end{split}\] (I.58) where in the second step we have used that \(N_{i}(t)\) with \(N_{i}(\tau)=n_{i}\) is \(\mathcal{F}_{2}(0)\)-measurable and \(G_{i}(t)\), \(t\in\mathcal{T}\), is independent of \(\mathcal{F}_{2}(0)\). Additionally, in the last step of (I.58) we employed that \(\operatorname{Var}(\lvert G_{i,j}\rvert)=\mathbb{E}(G_{i,j}^{2})-\mathbb{E}(\lvert G_{i,j}\rvert)^{2}\geq 0\) and \(\mathbb{E}(G_{i,j}^{2})=1\) imply \(\mathbb{E}(\lvert G_{i,j}\rvert)\leq 1\). As the pairs \((G_{i}(t),N_{i}(t))\) are pairwise independent and identically distributed, it follows with (I.58) and the law of large numbers that \(\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n_{i}}\lvert G_{i,j}+1\rvert\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbb{E}(\sum_{j=1}^{n_{1}}\lvert G_{1,j}+1\rvert)\), as \(n\to\infty\). Finally, we conclude that \(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}\lvert G_{i}(u)+1\rvert\,dN_{i}(u)=O_{p}(1)\), so that the first term on the right-hand side of the second step of (I.56) also converges to zero in probability, as \(n\to\infty\). It only remains to consider the third term on the right-hand side of the second step of (I.56). In fact, we have already shown in the proof of Lemma I.2.4 that this term converges to zero in probability, as \(n\to\infty\). Thus, we have proven that all three terms of (I.56) converge to zero in probability, as \(n\to\infty\), which completes the proof of Lemma I.3.8.
\(\blacksquare\) **Proof of Theorem I.3.10.** We aim to derive the weak limit of the term \(\mathbf{D}_{n,k}^{*}+\mathbf{B}_{n}^{*}\mathbf{C}_{n}^{*}\mathbf{D}_{n,g}^{*}(\tau)\), as \(n\to\infty\), where \(\mathbf{D}_{n,k}^{*}\) and \(\mathbf{D}_{n,g}^{*}\) are vector-valued stochastic processes, \(\mathbf{B}_{n}^{*}\) is a matrix-valued stochastic process and \(\mathbf{C}_{n}^{*}\) is a random matrix. Recall the notation introduced in the proof of Lemma I.3.6 regarding the product probability space \((\Omega_{1},\mathcal{A}_{1},\mathbb{P}_{1})\otimes(\Omega_{2},\mathcal{A}_{2},\mathbb{P}_{2})\), the convergence in law w.r.t. \(\mathbb{P}_{2}\), \(\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\), and \(\cdot\lvert\mathcal{F}_{2}(0)(\omega)\). According to Lemma I.3.6, we have, conditionally on \(\mathcal{F}_{2}(0)\), \((\mathbf{D}_{n,k}^{*}{}^{\top},\mathbf{D}_{n,g}^{*}{}^{\top})^{\top}=\mathbf{D}_{n,h}^{*}\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}}\), in \((D(\mathcal{T}))^{p+b}\), as \(n\to\infty\), in \(\mathbb{P}_{1}\)-probability, where \(\mathbf{D}_{\tilde{h}}\) is given in Theorem I.2.6. Thus, for every subsequence \(n_{1}\) of \(n\) there exists a further subsequence \(n_{2}\) such that \[\mathbf{D}_{n_{2},h}^{*}\lvert\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{h}},\ \text{in}\ (D(\mathcal{T}))^{p+b},\ \text{as}\ n\to\infty,\] (I.59) for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Moreover, with Lemma I.3.8 it follows that, conditionally on \(\mathcal{F}_{2}(0)\), \(\mathbf{B}_{n_{2}}^{*}(t)\stackrel{{\mathbb{P}_{1}\otimes\mathbb{P}_{2}}}{{\longrightarrow}}\mathbf{B}(t)\) uniformly in \(t\in\mathcal{T}\), as \(n\to\infty\). Hence, for every subsequence \(n_{3}\) of \(n_{2}\) there exists a further subsequence \(n_{4}\) such that \(\mathbf{B}_{n_{4}}^{*}(t)\lvert\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathbb{P}_{2}}}{{\longrightarrow}}\mathbf{B}(t)\), as \(n\to\infty\), uniformly in \(t\in\mathcal{T}\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Consequently, we have \[\mathbf{B}_{n_{4}}^{*}|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{B},\text{ in }(D(\mathcal{T}))^{pq},\text{ as }n\to\infty,\] (I.60) for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Clearly, (I.59) also holds along the subsequence \(n_{4}\). Furthermore, we assume that, conditionally on \(\mathcal{F}_{2}(0)\), \(\mathbf{C}_{n}^{*}\) converges in \(\mathbb{P}_{1}\otimes\mathbb{P}_{2}\)-probability to \(\mathbf{C}\), i.e., the limits of \(\mathbf{C}_{n}^{*}\) and \(\mathbf{C}_{n}\), given in Section I.2, are identical. Thus, for every subsequence \(n_{5}\) of \(n_{4}\) there exists a further subsequence \(n_{6}\) such that \(\mathbf{C}_{n_{6}}^{*}|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathbb{P}_{2}}}{{\longrightarrow}}\mathbf{C}\), as \(n\to\infty\), for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Again, it follows that \[\mathbf{C}_{n_{6}}^{*}|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{C},\text{ as }n\to\infty,\] for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Obviously, (I.59) and (I.60) also hold along the subsequence \(n_{6}\).
Then, \[(\mathbf{D}_{n_{6},h}^{*},\mathbf{B}_{n_{6}}^{*},\mathbf{C}_{n_{6}}^{*})|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}(\mathbf{D}_{\tilde{h}},\mathbf{B},\mathbf{C})\text{ in }(D(\mathcal{T}))^{p+b+pq}\times\mathbb{R}^{pq},\text{ as }n\to\infty,\] for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\) follows analogously to the proof of Theorem I.2.6. Eventually, the continuous mapping theorem with, successively, the functions \(f_{1},f_{2}\), and \(f_{3}\) given in the proof of Theorem I.2.6 is applied to \((\mathbf{D}_{n_{6},h}^{*},\mathbf{B}_{n_{6}}^{*},\mathbf{C}_{n_{6}}^{*})|\mathcal{F}_{2}(0)(\omega)\). In particular, we get \(\mathbf{D}_{n_{6},k}^{*}+\mathbf{B}_{n_{6}}^{*}\mathbf{C}_{n_{6}}^{*}\mathbf{D}_{n_{6},g}^{*}(\tau)|\mathcal{F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\mathbf{BCD}_{\tilde{g}}(\tau)\) for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Finally, by invoking the help of the subsequence principle again, we can conclude that, conditionally on \(\mathcal{F}_{2}(0)\), \[\mathbf{D}_{n,k}^{*}+\mathbf{B}_{n}^{*}\mathbf{C}_{n}^{*}\mathbf{D}_{n,g}^{*}(\tau)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\mathbf{BCD}_{\tilde{g}}(\tau),\text{ in }(D(\mathcal{T}))^{p},\text{ as }n\to\infty,\] in \(\mathbb{P}_{1}\)-probability. Moreover, we can summarize the results of Theorem I.2.6 and Theorem I.3.10 with the following statement: \[d[\mathcal{L}_{\mathbb{P}_{2}}(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})|\mathcal{F}_{2}(0)),\mathcal{L}_{\mathbb{P}_{1}}(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X}))]\stackrel{{\mathbb{P}_{1}}}{{\longrightarrow}}0,\text{ as }n\to\infty.\]

## Part II: Application in Fine-Gray Models

### 1 Introduction

In this Part II, we apply the wild bootstrap as described in Part I to the estimators involved in the Fine-Gray model (Fine and Gray, 1999) under censoring-complete data. The Fine-Gray model, which is also called the subdistribution hazards model, has been developed for the competing risks setting. In competing risks analyses, the considered survival outcome is divided into several endpoints that preclude each other. This means that for each individual only one transition out of the initial state into one of the competing endpoints is possible. Although one is often primarily interested in only one particular endpoint, the so-called event of interest, it is important to choose a model that appropriately adjusts for the competing risks. For example, in Wolbers et al. (2009) the authors compared the results of a data set analysed with and without accounting for competing risks and thereby illustrated the bias that is introduced when the competing event is ignored. Perhaps the two most popular types of regression models that take competing risks into account are the cause-specific hazard model--based on fitting multiple Cox models (Cox, 1972)--and the subdistribution hazard model, which is also called the Fine-Gray model. As stated in Austin et al. (2016), in the cause-specific hazard model "the effect of the covariates on the rate of occurrence of the outcome" is modeled, whereas in the subdistribution hazard model "the effect of covariates on the cumulative incidence function" is described. As a consequence, in the subdistribution hazard model there is a direct and easily interpretable link between the covariates and the cumulative incidence function for one type of event.
This is beneficial, especially because the cumulative incidence function is often used to summarize competing risks data. In the cause-specific hazard model the cumulative incidence function depends on the cause-specific hazards of all event types. Thus, in this model the effect of a covariate on the cause-specific hazard of the event of interest may differ from the effect of the covariate on the corresponding cumulative incidence function due to the effect of the covariate on the cause-specific hazard(s) of the competing event(s) (Gray, 1988). In the subdistribution hazard model this is avoided by directly modeling the cumulative incidence function. In Fine and Gray (1999) a Cox proportional hazards model is proposed for this. Although the Fine-Gray model enjoys great popularity due to this direct relation, in Austin et al. (2021) it has been found that in certain situations the sum of multiple estimated cumulative incidence functions following Fine-Gray models might exceed 1. When to use which of the two models is discussed in Austin et al. (2016) and illustrated by means of a simulation study in Dignam et al. (2012). Further comparison of the cause-specific hazard model and the Fine-Gray model can be found in Putter et al. (2007) and Putter et al. (2020), where in the former paper the comparison is handled from a practical point of view and in the latter the so-called reduction factor has been introduced in order to relate the two models from a theoretical perspective. Several ways to extend the subdistribution hazards model have been introduced. For example, in Fine and Gray (1999) complete data, censoring-complete data and right-censored data have been considered, while in Li (2016) the subdistribution hazards model is extended to the case of interval censored data. Furthermore, instead of using the Cox proportional hazards model for the subdistribution, an additive hazards model has been suggested in Sun et al. (2006). All in all, the Fine-Gray model as proposed in Fine and Gray (1999) plays an important role in the competing risks setting, which is why in the present Part II we chose to justify the use of the wild bootstrap as an approximation procedure for the associated estimators under censoring-complete data. At the same time, this exemplifies how to apply the theory developed in Part I. In comparison to the examples given in Part I, the present application is more involved: we show in detail that the proposed assumptions hold and we extend the theory to the cumulative incidence function as a functional of counting process-based estimators. In this regard, the estimators of the Fine-Gray model are either of the general counting process-based form we assumed in Part I or they have the asymptotic martingale representation we considered there. In both cases the theory established in Part I is applicable. Additionally, the exact distributions of these estimators around their target quantities are unknown, which is why approximating the distribution is a natural solution, e.g., when the aim is an interval or band estimation. Due to the structure of the estimators and the need for an approximation procedure, this situation is exemplary for the general setting in which the wild bootstrap has been studied in Part I. The present Part II is organized as follows. The Fine-Gray model and the underlying notation are introduced in Section II.2.1.
In Section II.2.2 we employ the theory developed in Part I to derive the limiting distribution of all relevant basic estimators. Furthermore, in Section II.2.3, we define the wild bootstrap estimators according to Part I and use the theory provided there to derive the corresponding limiting distributions. Additionally, in Section II.2.4 we extend the theory of Part I by considering a functional of the corresponding estimators, the cumulative incidence function. In particular, we study the weak limit of the cumulative incidence function by means of the functional \(\delta\)-method. In Section II.3 we derive time-simultaneous confidence bands for the cumulative incidence function. Section II.4 contains the results of an extensive simulation study with which various resampling details for small sample sizes are evaluated. A real data example is given in Section II.5 to illustrate the usefulness of wild bootstrap-based confidence bands. We conclude this part with a short discussion in Section II.6. All proofs are given in the Appendix.

### Application of the Wild Bootstrap to Fine-Gray Models

#### The Fine-Gray Model under Censoring-Complete Data: Preliminaries and Notation

For each of \(n\) individuals \(i=1,\ldots,n\), we let \(T_{i}\) be the survival time in a competing risk setting with \(K\) event types, and \(C_{i}\) the right-censoring time, which are both defined on a probability space \((\Omega,\mathcal{A},\mathbb{P})\). The individuals may be observed within the time frame \(\mathcal{T}=[0,\tau]\), where \(\tau\) is the maximum follow-up time, but \(T_{i}\) is only observable if \(T_{i}\leq C_{i}\). On the other hand, \(C_{i}\) is assumed to be always observable, e.g., when there is only administrative loss to follow-up. In other words, we consider here the case of censoring-complete data only. Moreover, for each \(i\) we observe bounded \(q\)-dimensional vectors of time-constant covariates \(\mathbf{Z}_{i}\), measured at baseline, and, if \(T_{i}\leq C_{i}\), the type \(\epsilon_{i}\in\{1,\ldots,K\}\) of the event that occurred. In the competing risks setting, the event types are mutually exclusive. It is assumed that the data \((\min(T_{i},C_{i}),1\,(T_{i}\leq C_{i}),1\,(T_{i}\leq C_{i})\epsilon_{i},C_{i},\mathbf{Z}_{i})\), \(i=1,\ldots,n\), are independent and identically distributed, and that the event times and event types are conditionally independent of the censoring times given the covariates. In the Fine-Gray model setting we focus on events of type 1 only, and individuals who have experienced an event of type other than 1 remain in the so-called risk set until their censoring times. Thus, the risk set at the event time of individual \(i\) is \[R_{i}=\{j:(\min(C_{j},T_{j})\geq T_{i})\text{ or }(T_{j}\leq T_{i}\leq C_{j}\text{ and }\epsilon_{j}\neq 1)\}.\] Note that, as Fine and Gray discussed in their original paper (Fine and Gray, 1999), the notion "risk set" is actually misleading because if an individual \(i\) has experienced some event of type other than 1, it is of course impossible that this individual experiences the event of type 1 in the future. However, this definition of the risk set leads to the particular form of the cumulative incidence function under the Fine-Gray model, see (II.1) below. Finally, multivariate quantities are written in bold type and, whenever there is no ambiguity or no need for specification, we will suppress the subscript \(i\) that indicates the individual.
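To make the somewhat counterintuitive definition of \(R_{i}\) concrete, the following minimal Python sketch computes the risk set from simulated censoring-complete data. The data-generating mechanism and all names are hypothetical and serve only as an illustration of the set definition above.

```python
import numpy as np

# Hypothetical toy data for a censoring-complete competing risks setting:
# the censoring time C_i is always observed, T_i only if T_i <= C_i.
rng = np.random.default_rng(0)
n = 8
T = rng.exponential(1.0, size=n)      # latent event times
C = rng.uniform(0.5, 2.0, size=n)     # administrative censoring times
eps = rng.integers(1, 3, size=n)      # event type in {1, 2}

def fine_gray_risk_set(i):
    """Indices j in the Fine-Gray 'risk set' R_i at time T_i: individuals
    still event-free and uncensored, plus those who already failed from a
    competing cause but are not yet censored."""
    t = T[i]
    event_free = np.minimum(T, C) >= t               # min(C_j, T_j) >= T_i
    competing = (T <= t) & (t <= C) & (eps != 1)     # T_j <= T_i <= C_j, eps_j != 1
    return np.flatnonzero(event_free | competing)

for i in np.flatnonzero((T <= C) & (eps == 1)):      # observed type-1 events
    print(f"i={i}: R_i = {fine_gray_risk_set(i).tolist()}")
```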
The central role in the Fine-Gray model is played by the cumulative incidence function (CIF) of the event of type 1, which is denoted by \(F_{1}\) and defined as the probability that the event of type 1 has already occurred by time \(t\), given a particular covariate vector \(\mathbf{Z}\), that is, \[F_{1}(t|\mathbf{Z})=\mathbb{P}(T\leq t,\epsilon=1|\mathbf{Z}),\quad t\in\mathcal{T}.\] Moreover, the instantaneous risk of a type 1 event, given that one is "at risk" and given the covariate vector \(\mathbf{Z}\), is quantified by the so-called subdistribution hazard \(\alpha_{1}\). The subdistribution hazard is defined as \[\begin{split}&\alpha_{1}(t|\mathbf{Z})\\ &=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\mathbb{P}[t\leq T\leq t+\Delta t,\epsilon=1|\{\min(C,T)\geq t\}\cup(\{T\leq t\leq C\}\cap\{\epsilon\neq 1\}),\mathbf{Z}]\\ &=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\mathbb{P}[t\leq T\leq t+\Delta t,\epsilon=1|\{T\geq t\}\cup(\{T\leq t\}\cap\{\epsilon\neq 1\}),\mathbf{Z}],\quad t\in\mathcal{T};\end{split}\] cf. Gray (1988) and Fine and Gray (1999). Due to the particular definition of the risk set, there is a direct relation between \(F_{1}\) and \(\alpha_{1}\), which is \(\alpha_{1}(t|\mathbf{Z})=-d\log\{1-F_{1}(t|\mathbf{Z})\}/dt\) or, equivalently, \[F_{1}(t|\mathbf{Z})=1-\exp\Big{\{}-\int_{0}^{t}\alpha_{1}(u|\mathbf{Z})du\Big{\}},\quad t\in\mathcal{T}.\] (II.1) As proposed in Fine and Gray (1999), we choose the following proportional hazards model for the subdistribution, through which the covariates are included in a semi-parametric manner: \[\alpha_{1}(t|\mathbf{Z})=\alpha_{1}(t,\boldsymbol{\beta}_{0}|\mathbf{Z})=\alpha_{1;0}(t)\exp(\mathbf{Z}^{\top}\boldsymbol{\beta}_{0}),\quad t\in\mathcal{T},\] (II.2) where \(\alpha_{1;0}(t)\) denotes the unknown non-negative baseline subdistribution hazard of event type 1 at time \(t\), and \(\boldsymbol{\beta}_{0}\) denotes the unknown vector of regression coefficients. Combining (II.1) and (II.2), we specify the cumulative incidence function of event type 1 as follows: \[F_{1}(t|\mathbf{Z})=1-\exp\{-\exp(\mathbf{Z}^{\top}\boldsymbol{\beta}_{0})\cdot A_{1;0}(t)\},\quad t\in\mathcal{T},\] (II.3) where \(A_{1;0}(t)=\int_{0}^{t}\alpha_{1;0}(u)du\) is the cumulative baseline subdistribution hazard. We assume that \(A_{1;0}(\tau)<\infty\). Note that \(F_{1}\) is a functional, say \(\Gamma\), of \(\boldsymbol{\theta}_{0}(t)=(\boldsymbol{\beta}_{0}^{\top},A_{1;0}(t))^{\top}\), \(t\in\mathcal{T}\), i.e., \[F_{1}(t|\mathbf{Z})=\Gamma(\boldsymbol{\theta}_{0}(t)|\mathbf{Z}),\quad t\in\mathcal{T}.\] As a consequence, we may obtain an estimator \(\hat{F}_{1,n}\) for \(F_{1}\) via estimators \(\hat{\boldsymbol{\beta}}_{n}\) and \(\hat{A}_{1;0,n}\) for \(\boldsymbol{\beta}_{0}\) and \(A_{1;0}\), respectively. For \(\hat{\boldsymbol{\beta}}_{n}\) we will take the well-known maximum partial likelihood estimator (MPLE), and for \(\hat{A}_{1;0,n}\) the Breslow estimator (see Section II.2.2).
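The direct link (II.3) between \(\boldsymbol{\beta}_{0}\), \(A_{1;0}\) and \(F_{1}\) is straightforward to evaluate numerically. The following sketch does so on a time grid; the baseline, coefficients and covariate vector are made-up toy values, so this illustrates the formula only, not any estimation step.

```python
import numpy as np

def cif_type1(A10, beta, z):
    """F_1(t|Z) = 1 - exp(-exp(Z'beta) * A_{1;0}(t)), cf. (II.3);
    A10 holds the cumulative baseline subdistribution hazard A_{1;0}
    evaluated on a time grid."""
    return 1.0 - np.exp(-np.exp(z @ beta) * A10)

# hypothetical inputs: an increasing baseline and one covariate vector
t_grid = np.linspace(0.0, 2.0, 5)
A10 = 0.4 * t_grid**1.5
beta = np.array([0.5, -0.3])
z = np.array([1.0, 2.0])
print(cif_type1(A10, beta, z))   # nondecreasing values in [0, 1)
```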
Thus, \(\hat{F}_{1,n}\) is given as the functional \(\Gamma\) of \(\hat{\boldsymbol{\theta}}_{n}(t)=(\hat{\boldsymbol{\beta}}_{n}^{\top},\hat{A}_{1;0,n}(t,\hat{\boldsymbol{\beta}}_{n}))^{\top}\), \(t\in\mathcal{T}\), so that \[\hat{F}_{1,n}(t|\mathbf{Z})=\Gamma(\hat{\boldsymbol{\theta}}_{n}(t)|\mathbf{Z})=1-\exp\{-\exp(\mathbf{Z}^{\top}\hat{\boldsymbol{\beta}}_{n})\cdot\hat{A}_{1;0,n}(t,\hat{\boldsymbol{\beta}}_{n})\},\quad t\in\mathcal{T}.\] Considering \(F_{1}\) and \(\hat{F}_{1,n}\) as functionals of \(\boldsymbol{\theta}_{0}\) and \(\hat{\boldsymbol{\theta}}_{n}\), respectively, will be of use when studying the (limiting) distribution of the stochastic process \(\sqrt{n}(\hat{F}_{1,n}-F_{1})\). From a practical point of view, one is typically interested in an interval or band estimate of \(F_{1}\). For this, one needs the distribution of \(\hat{F}_{1,n}-F_{1}\). As the exact distribution of the corresponding stochastic process is unknown, we suggest approximating it via the wild bootstrap. Therefore, we will introduce a wild bootstrap estimator \(\hat{\boldsymbol{\theta}}_{n}^{*}(t)=(\hat{\boldsymbol{\beta}}_{n}^{*\top},\hat{A}_{1;0,n}^{*}(t,\hat{\boldsymbol{\beta}}_{n}^{*}))^{\top}\), \(t\in\mathcal{T}\), for \(\boldsymbol{\theta}_{0}\) in Section II.2.3. Based on \(\hat{\boldsymbol{\theta}}_{n}^{*}\), we define the resampled cumulative incidence function \(\hat{F}_{1,n}^{*}\) by \[\hat{F}_{1,n}^{*}(t|\mathbf{Z})=\Gamma(\hat{\boldsymbol{\theta}}_{n}^{*}(t)|\mathbf{Z})=1-\exp\{-\exp(\mathbf{Z}^{\top}\hat{\boldsymbol{\beta}}_{n}^{*})\cdot\hat{A}_{1;0,n}^{*}(t,\hat{\boldsymbol{\beta}}_{n}^{*})\},\quad t\in\mathcal{T}.\] Furthermore, we approximate the distribution of \(\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n}|\mathbf{Z})-\Gamma(\boldsymbol{\theta}_{0}|\mathbf{Z}))\) by the conditional distribution, given the data, of \(\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n}^{*}|\mathbf{Z})-\Gamma(\hat{\boldsymbol{\theta}}_{n}|\mathbf{Z}))\). In fact, we will show with Theorem II.2.10 in Section II.2.4 that the (conditional) distributions of these two stochastic processes are asymptotically equivalent. The derivation of this result relies on results on the level of the estimators and on the functional \(\delta\)-method. For this reason, we will first study the limiting distributions of \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\) and \(\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{1;0}(\cdot))\) in Section II.2.2 and the limit distributions of their wild bootstrap counterparts \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})\) and \(\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))\) in Section II.2.3. Then, with Theorem II.2.8 of Section II.2.3 we will prove that the (conditional) distributions of \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\) and \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})\) are asymptotically equivalent. **Remark II.2.1**.: In this remark, we wish to distinguish the Fine-Gray model in the competing risks setting under censoring-complete data from the ordinary Cox model without competing events. In both models one describes the transition of an individual from the state "event (of interest) has not yet happened and individual has not yet been censored" to the state "event (of interest) has already occurred".
In that sense, the Fine-Gray model can be understood as a reduction of a competing risks model, in which the transitions to all competing events are considered separately and simultaneously, to a model in which, as in the ordinary Cox survival model, only one type of state transition is modelled. Additionally, in both models the (subdistribution) hazard is based on the same proportional model. The differences between the two models are in the definition of the counting process, the at-risk set, the at-risk indicator, and the filtration, while the remaining structures stay the same. In fact, for \(K=1\) the Fine-Gray model reduces to the ordinary Cox model. As a consequence, the structure of the theoretical results for the (wild bootstrap) estimators in the context of the Fine-Gray model coincides with the structure of the results for the (wild bootstrap) estimators in Cox models. Hence, one may compare the results presented in this part for the Fine-Gray model with those stated in Chapter VII of Andersen et al. (1993) for the standard estimators in Cox models and with those in Dobler et al. (2019) for their wild bootstrap counterparts.

#### The Estimators involved in the Fine-Gray Model and Weak Convergence Results

We will now introduce the counting process notation by means of which the estimators are formulated. The counting process \(N_{i}(t)=\mathbb{1}\left\{\min(T_{i},C_{i})\leq t,T_{i}\leq C_{i},\epsilon_{i}=1\right\}\) records for individual \(i\) the observable type 1 event time and \(Y_{i}(t)=\mathbb{1}\{C_{i}\geq t\}(1-N_{i}(t-))\) is the at-risk indicator of individual \(i\), \(i=1,\ldots,n\), \(t\in\mathcal{T}\). Note that each counting process jumps at most once in the present competing risks setting. Moreover, given \(\mathbf{Z}\), the cumulative intensity process for individual \(i\) is given by \(\Lambda_{i}(t,\boldsymbol{\beta}_{0})=\Lambda_{i}(t,\boldsymbol{\beta}_{0}|\mathbf{Z})=\int_{0}^{t}Y_{i}(u)\alpha_{1}(u,\boldsymbol{\beta}_{0}|\mathbf{Z}_{i})du\), which can be shown to be the compensator of the counting process \(N_{i}(t)\). In other words, conditionally on \(\mathbf{Z}\) the process \[M_{i}(t)=N_{i}(t)-\Lambda_{i}(t,\boldsymbol{\beta}_{0})\] is a square integrable martingale with respect to the filtration \[\mathcal{F}_{1}(t)=\sigma\{\mathbb{1}\{C_{i}\geq u\},N_{i}(u),Y_{i}(u),\mathbf{Z}_{i},0<u\leq t,i=1,\ldots,n\},\quad t\in\mathcal{T};\] (II.4) cf. Fine and Gray (1999). Furthermore, denoting \(\mathbf{Z}_{i}^{\otimes 0}=1\), \(\mathbf{Z}_{i}^{\otimes 1}=\mathbf{Z}_{i}\), and \(\mathbf{Z}_{i}^{\otimes 2}=\mathbf{Z}_{i}\cdot\mathbf{Z}_{i}^{\top}\), we define for \(m\in\{0,1,2\}\) (in non-bold-type for \(m=0\)), \[\begin{split}\mathbf{S}_{n}^{(m)}(t,\boldsymbol{\beta})&=\frac{1}{n}\sum_{i=1}^{n}\mathbf{Z}_{i}^{\otimes m}Y_{i}(t)\exp\{\mathbf{Z}_{i}^{\top}\boldsymbol{\beta}\},\\ \mathbf{E}_{n}(t,\boldsymbol{\beta})&=\mathbf{S}_{n}^{(1)}(t,\boldsymbol{\beta})\cdot S_{n}^{(0)}(t,\boldsymbol{\beta})^{-1},\\ \mathbf{R}_{n}(t,\boldsymbol{\beta})&=\mathbf{S}_{n}^{(2)}(t,\boldsymbol{\beta})\cdot S_{n}^{(0)}(t,\boldsymbol{\beta})^{-1}-\mathbf{E}_{n}(t,\boldsymbol{\beta})^{\otimes 2}.\end{split}\] (II.5) In preparation for the upcoming results we state the following regularity assumptions. **Assumption II.2.2**.: There exists a bounded neighborhood \(\mathcal{B}\subset\mathbb{R}^{q}\) of \(\boldsymbol{\beta}_{0}\) and deterministic functions \(s^{(0)}\), \(\mathbf{s}^{(1)}\), and \(\mathbf{s}^{(2)}\) defined on \(\mathcal{T}\times\mathcal{B}\) such that, for \(m=0,1,2\):

(i)
\[\sup_{t\in\mathcal{T},\boldsymbol{\beta}\in\mathcal{B}}\left\|\mathbf{S}_{n}^{(m)}(t,\boldsymbol{\beta})-\mathbf{s}^{(m)}(t,\boldsymbol{\beta})\right\|\underset{n\rightarrow\infty}{\longrightarrow}0;\]

(ii) \(\mathbf{s}^{(m)}\) is a continuous function of \(\boldsymbol{\beta}\in\mathcal{B}\) uniformly in \(t\in\mathcal{T}\) and bounded on \(\mathcal{T}\times\mathcal{B}\);

(iii) \(s^{(0)}(\cdot,\boldsymbol{\beta})\) is bounded away from zero on \(\mathcal{T}\);

(iv) \((Y_{i},N_{i},\mathbf{Z}_{i})\), \(i=1,\ldots,n\), are pairwise independent and identically distributed;

(v) \(\mathbf{V}_{\tilde{g}}(\tau)=\int_{0}^{\tau}\mathbf{r}(u,\boldsymbol{\beta}_{0})s^{(0)}(u,\boldsymbol{\beta}_{0})dA_{1;0}(u)\) is positive definite, where \(\mathbf{r}(t,\boldsymbol{\beta})=\mathbf{s}^{(2)}(t,\boldsymbol{\beta})\cdot s^{(0)}(t,\boldsymbol{\beta})^{-1}-\mathbf{e}(t,\boldsymbol{\beta})^{\otimes 2}\) and \(\mathbf{e}(t,\boldsymbol{\beta})=\mathbf{s}^{(1)}(t,\boldsymbol{\beta})\cdot s^{(0)}(t,\boldsymbol{\beta})^{-1}\).

Note that, due to the continuous mapping theorem, \(\mathbf{e}(t,\boldsymbol{\beta})=\mathbf{s}^{(1)}(t,\boldsymbol{\beta})\cdot s^{(0)}(t,\boldsymbol{\beta})^{-1}\) and \(\mathbf{r}(t,\boldsymbol{\beta})=\mathbf{s}^{(2)}(t,\boldsymbol{\beta})\cdot s^{(0)}(t,\boldsymbol{\beta})^{-1}-\mathbf{e}(t,\boldsymbol{\beta})^{\otimes 2}\) are the respective limits in probability of \(\mathbf{E}_{n}(t,\boldsymbol{\beta})\) and \(\mathbf{R}_{n}(t,\boldsymbol{\beta})\) as \(n\to\infty\). In fact, with Assumption II.2.2 (iv), the boundedness of the covariates, and the law of large numbers, we have \[\mathbf{s}^{(m)}(t,\boldsymbol{\beta})=\mathbb{E}(Y_{1}(t)\mathbf{Z}_{1}^{\otimes m}\exp(\mathbf{Z}_{1}^{\top}\boldsymbol{\beta})),\] (II.6) for all fixed \(t\in\mathcal{T}\), \(m\in\{0,1,2\}\) (in non-bold-type for \(m=0\)), and \(\boldsymbol{\beta}\in\mathcal{B}\). Furthermore, with the following Lemma II.2.3 we connect Assumption II.2.2 above with Assumption I.2.1 and Assumption I.2.3 of Part I, as well as with the assumptions stated in Condition VII.2.1 of Andersen et al. (1993). The relation with the assumptions made in Part I is needed when employing the corresponding results, and the relation with Condition VII.2.1 of Andersen et al. (1993) is needed for the asymptotic representation of the MPLE.

**Lemma II.2.3**.:

(i) If Assumption II.2.2 (i) - (iv) hold, then Assumption I.2.1 and Assumption I.2.3 of Part I hold.

(ii) If Assumption II.2.2 holds, then Assumption I.2.5 and Assumption I.3.9 of Part I hold.

(iii) If Assumption II.2.2 holds, then Condition VII.2.1 of Andersen et al. (1993) holds.

Proof.: See Appendix. \(\blacksquare\)

As we aim at translating the results of the general setting into results for (the estimators involved in) the Fine-Gray model, we recall the essential notation of Part I: \[\mathbf{X}_{n}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\tilde{\boldsymbol{\beta}}_{n})dN_{i}(u),\quad t\in\mathcal{T},\] (II.7) that is, the statistic \(\mathbf{X}_{n}\) is a counting process integral with respect to a locally bounded stochastic process \(\mathbf{k}_{n,i}(\cdot,\boldsymbol{\beta})\) evaluated at a consistent estimator \(\boldsymbol{\beta}=\tilde{\boldsymbol{\beta}}_{n}\) of the true model parameter \(\boldsymbol{\beta}_{0}\), cf. (I.1).
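Computationally, the general form (II.7) is simple: since each \(N_{i}\) jumps at most once in our setting, the counting process integral collapses to a finite sum over the observed jump times. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def X_n(k, jump_times, beta_hat, t):
    """X_n(t) = (1/n) sum_i int_0^t k_{n,i}(u, beta_hat) dN_i(u), cf. (II.7).
    Since each N_i jumps at most once here, the integral is just the sum of
    the integrand over the observed jump times up to t."""
    n = len(jump_times)
    return sum(k(i, u, beta_hat) for i, u in enumerate(jump_times)
               if u is not None and u <= t) / n

# hypothetical example: with k identically 1, X_n(t) is the averaged
# aggregated counting process (1/n) sum_i N_i(t)
jump_times = [0.3, None, 1.2, 0.9, None]   # None: no observed type-1 event
print(X_n(lambda i, u, b: 1.0, jump_times, None, 1.0))  # -> 2/5 = 0.4
```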
Under mild regularity assumptions, the asymptotic representation of \(\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})\) is given by \[\sqrt{n}(\mathbf{X}_{n}-\mathbf{X})=\mathbf{D}_{n,k}+\mathbf{B}_{n}\cdot\mathbf{C}_{n}\cdot\mathbf{D}_{n,g}(\tau)+o_{p}(1),\] (II.8) where \(\mathbf{D}_{n,k}\) and \(\mathbf{D}_{n,g}\) are local square integrable martingales with respect to \(\mathcal{F}_{1}\), cf. (I.10) and (I.11) of Part I. In particular, \(\mathbf{D}_{n,k}\) and \(\mathbf{D}_{n,g}\) are martingale integrals whose integrands are the locally bounded stochastic processes \(\mathbf{k}_{n,i}(\cdot,\boldsymbol{\beta})\) and \(\mathbf{g}_{n,i}(\cdot,\boldsymbol{\beta})\), respectively, evaluated at \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\), where they are predictable. Moreover, \(\mathbf{B}_{n}\) is a matrix-valued counting-process integral and \(\mathbf{C}_{n}\) is a random matrix, cf. (I.7) of Part I. Lemmas I.2.2 and I.2.4, and Assumption I.2.5 of Part I give the conditions for \(\mathbf{D}_{n,k}\), \(\mathbf{D}_{n,g}\), \(\mathbf{B}_{n}\), and \(\mathbf{C}_{n}\) to converge to a continuous zero-mean Gaussian vector martingale \(\mathbf{D}_{\tilde{k}}\), a continuous zero-mean Gaussian vector martingale \(\mathbf{D}_{\tilde{g}}\), a continuous matrix-valued deterministic function \(\mathbf{B}(t)\), and a deterministic matrix \(\mathbf{C}\), respectively. Since we will use the general notation of (II.7) and (II.8) for both the MPLE \(\hat{\boldsymbol{\beta}}_{n}\) and the Breslow estimator \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\), we will add superscripts to the corresponding components to specify whether they refer to the MPLE (superscript (1)) or to the Breslow estimator (superscript (2)). The notation of the asymptotic results is not ambiguous, which is why we omit the superscripts there. Finally, we write \(D(\mathcal{T})^{p}\) for the space of càdlàg functions mapping from \(\mathcal{T}\) to \(\mathbb{R}^{p}\) equipped with the product Skorohod topology, \(p\in\mathbb{N}\). We now investigate the MPLE \(\hat{\boldsymbol{\beta}}_{n}\) and the Breslow estimator \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\). As the name suggests, the MPLE \(\hat{\boldsymbol{\beta}}_{n}\) maximizes a partial likelihood, which has a counting process-based expression. In other words, the estimator \(\hat{\boldsymbol{\beta}}_{n}\) of \(\boldsymbol{\beta}_{0}\) is defined as the root of the score statistic \[\mathbf{U}_{n}(t,\boldsymbol{\beta})=\sum_{i=1}^{n}\int_{0}^{t}(\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\boldsymbol{\beta}))dN_{i}(u)\] at \(t=\tau\), see (7.2.16) on p. 486 of Andersen et al. (1993). With a Taylor expansion of \(\mathbf{0}=\mathbf{U}_{n}(\tau,\hat{\boldsymbol{\beta}}_{n})\) around \(\boldsymbol{\beta}_{0}\) and due to the consistency of \(\hat{\boldsymbol{\beta}}_{n}\) according to Lemma II.2.3 (iii) in combination with Theorem VII.2.1 of Andersen et al. (1993) (see Remark II.6.1), we obtain under Assumption II.2.2 that \[\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})=\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0})\big{)}^{-1}\frac{1}{\sqrt{n}}\mathbf{U}_{n}(\tau,\boldsymbol{\beta}_{0})+o_{p}(1),\] (II.9) where \(\mathbf{I}_{n}(t,\boldsymbol{\beta})=\sum_{i=1}^{n}\int_{0}^{t}\mathbf{R}_{n}(u,\boldsymbol{\beta})dN_{i}(u)\) is the negative Jacobian of the score statistic \(\mathbf{U}_{n}(t,\cdot)\) at \(\boldsymbol{\beta}=\boldsymbol{\beta}_{0}\).
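For illustration, the score equation \(\mathbf{U}_{n}(\tau,\hat{\boldsymbol{\beta}}_{n})=\mathbf{0}\) can be solved by a plain Newton-Raphson iteration built from \(\mathbf{E}_{n}\) and \(\mathbf{R}_{n}\) of (II.5). The following sketch is a toy implementation under censoring-complete data; the data-generating mechanism is ad hoc (not a genuine Fine-Gray mechanism), there is no step-size control, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 200, 2
Z = rng.normal(size=(n, q))
T = rng.exponential(np.exp(-Z @ np.array([0.7, -0.4])))  # toy event times
eps = 1 + (rng.uniform(size=n) < 0.3).astype(int)        # event type 1 or 2
C = rng.uniform(0.5, 3.0, size=n)                        # censoring times
d1 = (T <= C) & (eps == 1)                               # observed type-1 events

def score_and_information(beta):
    """U_n(tau, beta) and I_n(tau, beta) as sums over type-1 event times,
    with E_n = S_n^{(1)} / S_n^{(0)} and R_n as in (II.5)."""
    U, I = np.zeros(q), np.zeros((q, q))
    for i in np.flatnonzero(d1):
        Y = (C >= T[i]) & ~(d1 & (T < T[i]))   # Y_j(T_i) = 1{C_j >= T_i}(1 - N_j(T_i-))
        w = Y * np.exp(Z @ beta)               # weights Y_j exp(Z_j' beta)
        E = (w @ Z) / w.sum()                  # E_n(T_i, beta)
        U += Z[i] - E
        Zc = Z - E
        I += (w[:, None] * Zc).T @ Zc / w.sum()  # R_n(T_i, beta) accumulated
    return U, I

beta = np.zeros(q)
for _ in range(20):                            # plain Newton-Raphson iteration
    U, I = score_and_information(beta)
    beta = beta + np.linalg.solve(I, U)
se = np.sqrt(np.diag(np.linalg.inv(I)))        # plug-in SEs via ((1/n) I_n)^{-1} / n
print("MPLE:", beta, " SE:", se)
```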
Note that, although the MPLE \(\hat{\mathbf{\beta}}_{n}\) is related to a counting process-based statistic via the score statistic, it does not have the general counting process-based form (II.7) itself. However, the general results established in Part I hold as long as the asymptotic representation (II.8) is retrieved. Thus, we wish to relate the asymptotic representation of \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})\) on the right-hand side of (II.9) with the right-hand side of (II.8), i.e., with \(\mathbf{D}_{n,k}^{(1)}+\mathbf{B}_{n}^{(1)}\mathbf{C}_{n}^{(1)}\mathbf{D}_{n,g }^{(1)}(\tau)\). In particular, we identify the corresponding components as follows: \[\mathbf{C}_{n}^{(1)}=\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\mathbf{\beta}_{0}) \big{)}^{-1},\] (II.10) which is to be understood as a generalized inverse of \(\mathbf{I}_{n}(\tau,\mathbf{\beta}_{0})\), e.g., the corresponding Moore-Penrose inverse, if the inverse does not exist, and \[\mathbf{D}^{(1)}_{n,g}(t)=\frac{1}{\sqrt{n}}\mathbf{U}_{n}(t,\mathbf{\beta}_{0}), \quad t\in\mathcal{T},\] where \(\mathbf{U}_{n}(\cdot,\mathbf{\beta})\) evaluated at \(\mathbf{\beta}=\mathbf{\beta}_{0}\) is a local square integrable martingale with respect to \(\mathcal{F}_{1}\) according to Remark II.6.2. Additionally, the integrands \(\mathbf{g}^{(1)}_{n,i}\) of \(\mathbf{D}^{(1)}_{n,g}\) evaluated at \(\mathbf{\beta}=\mathbf{\beta}_{0}\) are given via \[\mathbf{g}^{(1)}_{n,i}(t,\mathbf{\beta})=\mathbf{Z}_{i}-\mathbf{E}_{n}(t,\mathbf{ \beta}),\quad t\in\mathcal{T},\] for \(i=1,\ldots,n\). The remaining components on the right-hand side of (II.8) are superfluous and we define \(\mathbf{D}^{(1)}_{n,k}\) as the \(q\)-dimensional zero process and we set \(\mathbf{B}^{(1)}_{n}\) equal to the \((q\times q)\)-dimensional identity matrix, cf. (II.41). Finally, with the notation introduced above, we rewrite (II.9) as \[\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})=\mathbf{C}^{(1)}_{n}\cdot \mathbf{D}^{(1)}_{n,g}(\tau)+o_{p}(1).\] (II.11) With (II.11) we retrieved the desired asymptotic martingale representation (II.8), for which we have derived asymptotic results in Part I. In the following lemma the corresponding asymptotic distribution is given. **Lemma II.2.4**.: If Assumption II.2.2 holds, then \[\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})\stackrel{{ \mathcal{L}}}{{\longrightarrow}}\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau), \ \text{as}\ n\to\infty,\] where \(\mathbf{C}=\mathbf{V}_{\tilde{g}}(\tau)^{-1}\) and \(\mathbf{D}_{\tilde{g}}(\tau)\sim\mathcal{N}(0,\mathbf{V}_{\tilde{g}}(\tau))\) with \[\mathbf{V}_{\tilde{g}}(\tau)=\int_{0}^{\tau}\mathbb{E}\big{(}(\mathbf{Z}_{1}- \mathbf{e}(u,\mathbf{\beta}_{0}))^{\otimes 2}\lambda_{1}(u,\mathbf{\beta}_{0})\big{)}du= \int_{0}^{\tau}\mathbf{r}(u,\mathbf{\beta}_{0})s^{(0)}(u,\mathbf{\beta}_{0})dA_{1;0}( u).\] (II.12) Thus, \(\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\sim\mathcal{N}(0,\mathbf{V}_{ \tilde{g}}(\tau)^{-1})\). Proof.: The statement follows from (II.11) by means of Lemma II.2.3 (i) & (ii) in combination with Theorem I.2.6 of Part I. Moreover, the limit in probability of \(\mathbf{C}^{(1)}_{n}\) as \(n\to\infty\) is derived in the proof of Lemma II.2.3(ii). 
Next, we consider the Breslow estimator \(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})\) of \(A_{1;0}(\cdot)\) which is given by \[\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t} \frac{J_{n}(u)}{S_{n}^{(0)}(u,\hat{\mathbf{\beta}}_{n})}dN_{i}(u),\quad t\in \mathcal{T},\] (II.13) where \(J_{n}(t)=\mathbb{1}\{\sum_{i=1}^{n}Y_{i}(t)>0\}\) equals zero if and only if no individual is at-risk anymore. As this estimator has the general counting process-based form considered in (II.7), we identify \(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})=X_{n}^{(2)}(\cdot)\) and \(A_{1;0}(\cdot)=X^{(2)}(\cdot)\). In particular, the integrand \(k_{n}^{(2)}(\cdot,\hat{\mathbf{\beta}}_{n})\) of \(X_{n}^{(2)}\) is given by \[k_{n}^{(2)}(t,\mathbf{\beta})=J_{n}(t)\cdot S_{n}^{(0)}(t,\mathbf{\beta})^{-1},\quad t \in\mathcal{T},\mathbf{\beta}\in\mathbb{R}^{q}.\] According to Remark II.6.3 in the appendix, \(\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})-A_{1;0}(\cdot))=\sqrt{n} (X_{n}^{(2)}(\cdot)-X^{(2)}(\cdot))\) exhibits the desired asymptotic representation given in (II.8) as we have \[\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})-A_{1;0}(\cdot))=D_{n,k}^{ (2)}(\cdot)+\mathbf{B}_{n}^{(2)}(\cdot)\cdot\mathbf{C}_{n}^{(2)}\cdot\mathbf{ D}_{n,g}^{(2)}(\tau)+o_{p}(1),\] (II.14) with \[D_{n,k}^{(2)}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\frac{J_{n}(u)}{S _{n}^{(0)}(u,\mathbf{\beta}_{0})}dM_{i}(u),\quad t\in\mathcal{T},\] and \[\mathbf{B}_{n}^{(2)}(t)=-\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}J_{n}(u)\mathbf{ E}_{n}(u,\mathbf{\beta}_{0})^{\top}\cdot S_{n}^{(0)}(u,\mathbf{\beta}_{0})^{-1}dN_{i}(u), \quad t\in\mathcal{T}.\] (II.15) Here, \(-J_{n}(t)\cdot\mathbf{E}_{n}(t,\mathbf{\beta}_{0})^{\top}\cdot S_{n}^{(0)}(t,\mathbf{ \beta}_{0})^{-1}\) is the Jacobian of \(k_{n}(t,\mathbf{\beta})\) with respect to \(\mathbf{\beta}\) at \(\mathbf{\beta}=\mathbf{\beta}_{0}\). Note that \(D_{n,k}^{(2)}\) is a local square integrable martingale with respect to \(\mathcal{F}_{1}\) according to Proposition II.4.1 of Andersen et al. (1993), as \(k_{n}(\cdot,\mathbf{\beta})\) at \(\mathbf{\beta}=\mathbf{\beta}_{0}\) is predictable and locally bounded. Additionally, \(\mathbf{C}_{n}^{(2)}\cdot\mathbf{D}_{n,g}^{(2)}(\tau)=\mathbf{C}_{n}^{(1)} \cdot\mathbf{D}_{n,g}^{(1)}(\tau)\), because the MPLE \(\hat{\mathbf{\beta}}_{n}\) has been used as the consistent estimator \(\tilde{\mathbf{\beta}}_{n}\) of \(\mathbf{\beta}_{0}\) in the context of the Breslow estimator, cf. (II.7). We are now ready to state the limiting distribution of \(\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})-A_{1;0}(\cdot))\). 
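Before doing so, a brief computational aside: (II.13) is a step function whose increment at each observed type 1 event time \(T_{i}\) is \(n^{-1}J_{n}(T_{i})/S_{n}^{(0)}(T_{i},\hat{\boldsymbol{\beta}}_{n})\). The following hedged sketch (toy data, hypothetical names) makes this structure explicit.

```python
import numpy as np

def breslow(T, C, eps, Z, beta_hat, t_grid):
    """Breslow estimator (II.13): each observed type-1 event at T_i adds
    (1/n) * J_n(T_i) / S_n^{(0)}(T_i, beta_hat) to A_hat(t) for t >= T_i."""
    n = len(T)
    d1 = (T <= C) & (eps == 1)
    A = np.zeros_like(t_grid)
    for i in np.flatnonzero(d1):
        Y = (C >= T[i]) & ~(d1 & (T < T[i]))        # Fine-Gray at-risk indicator
        S0 = np.mean(Y * np.exp(Z @ beta_hat))      # S_n^{(0)}(T_i, beta_hat)
        if S0 > 0:                                  # J_n(T_i) = 1
            A += (t_grid >= T[i]) / (n * S0)
    return A

# hypothetical toy call; beta_hat would come from the MPLE sketch above
rng = np.random.default_rng(2)
n = 100
Z = rng.normal(size=(n, 2))
T, C = rng.exponential(1.0, size=n), rng.uniform(0.5, 2.0, size=n)
eps = 1 + (rng.uniform(size=n) < 0.3).astype(int)
print(breslow(T, C, eps, Z, np.array([0.5, -0.3]), np.linspace(0.0, 2.0, 5)))
```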
**Lemma II.2.5**.: If Assumption II.2.2 holds, then \[\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{1;0}(\cdot))\stackrel{\mathcal{L}}{\longrightarrow}D_{\tilde{k}}(\cdot)+\mathbf{B}(\cdot)\cdot\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau),\ \mbox{in}\ D(\mathcal{T}),\ \mbox{as}\ n\to\infty,\] where the zero-mean Gaussian martingale \(D_{\tilde{k}}\) is the weak limit of \(D_{n,k}^{(2)}\) and \(D_{\tilde{k}}\) has the variance function \[V_{\tilde{k}}(t)=\int_{0}^{t}\mathbb{E}(s^{(0)}(u,\boldsymbol{\beta}_{0})^{-2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du=\int_{0}^{t}s^{(0)}(u,\boldsymbol{\beta}_{0})^{-1}dA_{1;0}(u),\quad t\in\mathcal{T}.\] (II.16) Additionally, \(\mathbf{B}\) is the uniform limit in probability of \(\mathbf{B}_{n}^{(2)}\) with \[\mathbf{B}(t)=\int_{0}^{t}\mathbb{E}(-\mathbf{e}(u,\boldsymbol{\beta}_{0})^{\top}\cdot s^{(0)}(u,\boldsymbol{\beta}_{0})^{-1}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du=\int_{0}^{t}-\mathbf{e}(u,\boldsymbol{\beta}_{0})^{\top}dA_{1;0}(u),\quad t\in\mathcal{T},\] and \(\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) is as in Lemma II.2.4. Moreover, the covariance function of \(D_{\tilde{k}}+\mathbf{B}\cdot\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) is given by \[t\mapsto V_{\tilde{k}}(t)+\mathbf{B}(t)\cdot\mathbf{C}\cdot\mathbf{B}(t)^{\top}.\]

Proof.: This statement follows from (II.14) by means of Lemma II.2.3 (i) & (ii) in combination with Theorem I.2.6 of Part I. For the covariance function of \(D_{\tilde{k}}+\mathbf{B}\cdot\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) we have \[t\mapsto V_{\tilde{k}}(t)+\mathbf{B}(t)\cdot\mathbf{C}\cdot\mathbf{V}_{\tilde{g}}(\tau)\cdot\mathbf{C}^{\top}\cdot\mathbf{B}(t)^{\top}+\mathbf{V}_{\tilde{k},\tilde{g}}(t)\cdot\mathbf{C}^{\top}\cdot\mathbf{B}(t)^{\top}+\mathbf{B}(t)\cdot\mathbf{C}\cdot\mathbf{V}_{\tilde{g},\tilde{k}}(t)=V_{\tilde{k}}(t)+\mathbf{B}(t)\cdot\mathbf{C}\cdot\mathbf{B}(t)^{\top}.\] The last equality follows from \(\mathbf{C}=\mathbf{V}_{\tilde{g}}(\tau)^{-1}\) and from \[\begin{split}\mathbf{V}_{\tilde{k},\tilde{g}}(t)^{\top}&=\mathbf{V}_{\tilde{g},\tilde{k}}(t)=\langle\mathbf{D}_{\tilde{g}},D_{\tilde{k}}\rangle(t)\\ &=\int_{0}^{t}\mathbb{E}\big((\mathbf{Z}_{1}-\mathbf{e}(u,\boldsymbol{\beta}_{0}))s^{(0)}(u,\boldsymbol{\beta}_{0})^{-1}\lambda_{1}(u,\boldsymbol{\beta}_{0})\big)du\\ &=\int_{0}^{t}\mathbb{E}\big(\mathbf{Z}_{1}Y_{1}(u)\exp(\mathbf{Z}_{1}^{\top}\boldsymbol{\beta}_{0})\big)s^{(0)}(u,\boldsymbol{\beta}_{0})^{-1}dA_{1;0}(u)-\int_{0}^{t}\mathbf{e}(u,\boldsymbol{\beta}_{0})dA_{1;0}(u)\\ &=\mathbf{0}_{q\times 1},\end{split}\] (II.17) where \(\mathbf{0}_{q\times 1}\) denotes the \(q\)-dimensional vector of zeros. In other words, \(\mathbf{D}_{n,g}\) and \(D_{n,k}\) are asymptotically orthogonal.

With Lemma II.2.4 and Lemma II.2.5 we retrieved the well-known results on the limiting distributions of \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\) and \(\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{1;0}(\cdot))\), respectively, by means of the theory established in Part I. Thereby we illustrated how to translate the general results into results for the basic estimators of a particular model.

#### II.2.3 The Wild Bootstrap Estimators and Weak Convergence Results

We will now apply the wild bootstrap to the MPLE \(\hat{\boldsymbol{\beta}}_{n}\) and to the Breslow estimator \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\). Detailed information on this resampling scheme can be found in Section I.3.
At this point, we merely want to draw attention to the most important ingredient of the wild bootstrap: the multiplier processes \(G_{1}(t),\ldots,G_{n}(t)\), \(t\in\mathcal{T}\). In the present context, in which the counting processes jump only once, the multiplier processes reduce to random variables \(G_{1},\ldots,G_{n}\) that are i.i.d. with mean zero, unit variance and finite fourth moment. Moreover, the filtration corresponding to the wild bootstrap is constructed such that at time zero it contains the data collected during follow-up, i.e., \(\mathcal{F}_{1}(\tau)\), and that at the event times of type 1, the wild bootstrap multipliers \(G_{i}\) that belong to the individuals who experienced the event of type 1 are included, \(i=1,\ldots,n\). This results in the filtration \[\mathcal{F}_{2}(t)=\sigma\{G_{i}\cdot N_{i}(s),\mathbb{1}\{C_{i}\geq u\},N_{i}(u),Y_{i}(u),\mathbf{Z}_{i},0<s\leq t,u\in\mathcal{T},i=1,\ldots,n\},\quad t\in\mathcal{T},\] (II.18) from the resampling point of view.

Let us turn to the wild bootstrap counterparts \(\hat{\boldsymbol{\beta}}_{n}^{*}\) of \(\hat{\boldsymbol{\beta}}_{n}\) and \(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})\) of \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\). For this, we recall the wild bootstrap counterpart \(\mathbf{X}_{n}^{*}\) of \(\mathbf{X}_{n}\) introduced in Part I: \[\mathbf{X}_{n}^{*}(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{k}_{n,i}(u,\tilde{\boldsymbol{\beta}}_{n}^{*})\big(G_{i}(u)+1\big)dN_{i}(u),\quad t\in\mathcal{T},\] (II.19) where \(\mathbf{X}_{n}^{*}\) is obtained by applying Replacement I.3.1 of Part I to \(\mathbf{X}_{n}\), cf. (I.15) of Part I. Note that \(\tilde{\boldsymbol{\beta}}_{n}^{*}\) is the wild bootstrap counterpart of \(\tilde{\boldsymbol{\beta}}_{n}\). Under mild regularity assumptions, the asymptotic representation of \(\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})\) is given by \[\sqrt{n}(\mathbf{X}_{n}^{*}-\mathbf{X}_{n})=\mathbf{D}_{n,k}^{*}+\mathbf{B}_{n}^{*}\cdot\mathbf{C}_{n}^{*}\cdot\mathbf{D}_{n,g}^{*}(\tau)+o_{p}(1),\] (II.20) where \(\mathbf{D}_{n,k}^{*}\) and \(\mathbf{D}_{n,g}^{*}\) are square integrable martingales with respect to \(\mathcal{F}_{2}\) according to Lemma I.3.2 of Part I, cf. (I.19) and (I.20) of Part I combined. Additionally, \(\mathbf{B}_{n}^{*}\) and \(\mathbf{C}_{n}^{*}\) are the wild bootstrap counterparts of \(\mathbf{B}_{n}\) and \(\mathbf{C}_{n}\), respectively. As mentioned in Section II.2.2, the estimator \(\hat{\boldsymbol{\beta}}_{n}\) does not have the general counting process-based form of the right-hand side of (II.7), but the corresponding asymptotic representation \(\mathbf{D}_{n,k}+\mathbf{B}_{n}\mathbf{C}_{n}\mathbf{D}_{n,g}(\tau)+o_{p}(1)\) of (II.8) is retrieved by (II.11). Thus, we apply the wild bootstrap to the asymptotic representation of \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\) on the right-hand side of (II.11) in order to obtain its wild bootstrap counterpart \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})\). In particular, we will apply Replacement I.3.1 of Part I to \(\mathbf{D}^{(1)}_{n,g}\) to obtain the wild bootstrap version \(\mathbf{D}^{*(1)}_{n,g}\), we replace \(\mathbf{C}^{(1)}_{n}\) by a wild bootstrap counterpart \(\mathbf{C}^{*(1)}_{n}\) such that Assumption I.3.9 of Part I holds, and we set \(o_{p}(1)\) to zero.
These steps yield \[\sqrt{n}(\hat{\boldsymbol{\beta}}^{*}_{n}-\hat{\boldsymbol{\beta}}_{n})=\mathbf{C}^{*(1)}_{n}\cdot\mathbf{D}^{*(1)}_{n,g}(\tau)+0,\] (II.21) where the wild bootstrap counterpart \(\mathbf{D}^{*(1)}_{n,g}\) of \(\mathbf{D}^{(1)}_{n,g}\) is given by \[\mathbf{D}^{*(1)}_{n,g}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}(\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}))G_{i}dN_{i}(u),\quad t\in\mathcal{T},\] and the wild bootstrap counterpart \(\mathbf{C}^{*(1)}_{n}\) of \(\mathbf{C}^{(1)}_{n}=\big(\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0})\big)^{-1}\) is defined through the optional covariation process \([\mathbf{D}^{*(1)}_{n,g}](\tau)\) of \(\mathbf{D}^{*(1)}_{n,g}\) at \(\tau\), i.e., \[\mathbf{C}^{*(1)}_{n}=\big([\mathbf{D}^{*(1)}_{n,g}](\tau)\big)^{-1}=\big(\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}(\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}))^{\otimes 2}G_{i}^{2}dN_{i}(u)\big)^{-1};\] cf. Lemma I.3.2 of Part I. According to Lemma II.2.3 (ii), Assumption I.3.9 of Part I is fulfilled for this choice of \(\mathbf{C}^{*(1)}_{n}\) under Assumption II.2.2. Moreover, we note that \(\mathbf{D}^{*(1)}_{n,g}\) is a local square integrable martingale with respect to \(\mathcal{F}_{2}\) according to Lemma I.3.2 of Part I, since the integrands \(\mathbf{g}^{(1)}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})=(\mathbf{Z}_{i}-\mathbf{E}_{n}(t,\hat{\boldsymbol{\beta}}_{n}))\) of \(\mathbf{D}^{*(1)}_{n,g}\) are known, \(\mathcal{F}_{1}(\tau)\)-measurable functions, \(i=1,\ldots,n\). In this way, we retrieved the asymptotic martingale representation (II.20) for \(\sqrt{n}(\hat{\boldsymbol{\beta}}^{*}_{n}-\hat{\boldsymbol{\beta}}_{n})\) with \(o_{p}(1)\) set to zero, namely \(\mathbf{D}^{*(1)}_{n,k}+\mathbf{B}^{*(1)}_{n}\mathbf{C}^{*(1)}_{n}\mathbf{D}^{*(1)}_{n,g}(\tau)\) with \(\mathbf{D}^{*(1)}_{n,k}\) defined as the \(q\)-dimensional zero process and \(\mathbf{B}^{*(1)}_{n}\) set equal to the \((q\times q)\)-dimensional identity matrix.

Finally, we obtain the wild bootstrap counterpart \(\hat{\boldsymbol{\beta}}^{*}_{n}\) of \(\hat{\boldsymbol{\beta}}_{n}\). By solving (II.21) for \(\hat{\boldsymbol{\beta}}^{*}_{n}\), we find \[\hat{\boldsymbol{\beta}}^{*}_{n}=\frac{1}{\sqrt{n}}\mathbf{C}^{*(1)}_{n}\cdot\mathbf{D}^{*(1)}_{n,g}(\tau)+\hat{\boldsymbol{\beta}}_{n}.\] (II.22) We are now ready to present the asymptotic distribution of \(\sqrt{n}(\hat{\boldsymbol{\beta}}^{*}_{n}-\hat{\boldsymbol{\beta}}_{n})\).

**Lemma II.2.6**.: If Assumption II.2.2 holds, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\sqrt{n}(\hat{\boldsymbol{\beta}}^{*}_{n}-\hat{\boldsymbol{\beta}}_{n})\stackrel{\mathcal{L}}{\longrightarrow}\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau),\ \mbox{in probability, as $n\to\infty$},\] with \(\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) as in Lemma II.2.4.

Proof.: This statement follows from (II.21) by means of Lemma II.2.3 (i) & (ii) in combination with Theorem I.3.10 of Part I.

We see from (II.13) that the Breslow estimator \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\) has the general counting process-based form on the right-hand side of (II.7). By applying Replacement I.3.1 of Part I directly to \(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})\), we find that its wild bootstrap counterpart \(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})\) is given by \[\hat{A}_{1;0,n}^{*}(t,\hat{\boldsymbol{\beta}}_{n}^{*})=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\frac{J_{n}(u)}{S_{n}^{(0)}(u,\hat{\boldsymbol{\beta}}_{n}^{*})}(G_{i}+1)dN_{i}(u),\quad t\in\mathcal{T},\] and we identify \(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})=X_{n}^{*(2)}\).
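As an illustration of (II.21) and (II.22), a minimal sketch of one wild bootstrap replicate \(\hat{\boldsymbol{\beta}}_{n}^{*}\) follows. It reuses the conventions of the score sketch above, and all names are again ours.

```python
import numpy as np

def wild_bootstrap_beta(beta_hat, Z, event_times, event_idx, Y, G):
    """One replicate beta*_n = beta_hat + C*_n D*_{n,g}(tau) / sqrt(n),
    cf. (II.22); G holds the i.i.d. multipliers G_1, ..., G_n
    (mean 0, variance 1, finite fourth moment)."""
    n, q = Z.shape
    D = np.zeros(q)                  # D*_{n,g}(tau), built up event by event
    A = np.zeros((q, q))             # optional covariation [D*_{n,g}](tau)
    for t, i in zip(event_times, event_idx):
        w = Y(t) * np.exp(Z @ beta_hat)
        E = (w[:, None] * Z).sum(axis=0) / w.sum()     # E_n(t, beta_hat)
        D += (Z[i] - E) * G[i]
        A += np.outer(Z[i] - E, Z[i] - E) * G[i] ** 2
    D /= np.sqrt(n)
    A /= n                           # so that C*_n = A^{-1}
    return beta_hat + np.linalg.solve(A, D) / np.sqrt(n)

# Multipliers, e.g. one of the three choices studied later in Section II.4:
# rng = np.random.default_rng(0)
# G = rng.standard_normal(n)              # N(0, 1)
# G = rng.exponential(1.0, n) - 1.0       # Exp(1) - 1
# G = rng.poisson(1.0, n) - 1.0           # Pois(1) - 1
```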
According to Remark II.6.4 in the appendix, \(\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))=\sqrt{n}(X_{n}^{*(2)}-X_{n}^{(2)})\) has the desired asymptotic representation (II.20) with \(o_{p}(1)\) set to zero. Indeed, we have \[\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))=D_{n,k}^{*(2)}(\cdot)+\mathbf{B}_{n}^{*(2)}(\cdot)\mathbf{C}_{n}^{*(2)}\mathbf{D}_{n,g}^{*(2)}(\tau),\] (II.23) where the wild bootstrap counterpart \(D_{n,k}^{*(2)}\) of \(D_{n,k}^{(2)}\) is given by \[D_{n,k}^{*(2)}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\frac{J_{n}(u)}{S_{n}^{(0)}(u,\hat{\boldsymbol{\beta}}_{n})}G_{i}(u)dN_{i}(u),\quad t\in\mathcal{T},\] and the wild bootstrap counterpart \(\mathbf{B}_{n}^{*(2)}\) of \(\mathbf{B}_{n}^{(2)}\) equals \[\mathbf{B}_{n}^{*(2)}(t)=-\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}J_{n}(u)\cdot\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n})^{\top}\cdot S_{n}^{(0)}(u,\hat{\boldsymbol{\beta}}_{n})^{-1}(G_{i}(u)+1)dN_{i}(u),\] (II.24) \(t\in\mathcal{T}\). Note that \(D_{n,k}^{*(2)}\) is a local square integrable martingale with respect to \(\mathcal{F}_{2}\) according to Lemma I.3.2 of Part I, because the integrand \(k_{n}^{(2)}(t,\hat{\boldsymbol{\beta}}_{n})=\frac{J_{n}(t)}{S_{n}^{(0)}(t,\hat{\boldsymbol{\beta}}_{n})}\) of \(D_{n,k}^{*(2)}\) is a known, \(\mathcal{F}_{1}(\tau)\)-measurable function. Additionally, \(\mathbf{C}_{n}^{*(2)}\cdot\mathbf{D}_{n,g}^{*(2)}(\tau)=\mathbf{C}_{n}^{*(1)}\cdot\mathbf{D}_{n,g}^{*(1)}(\tau)\), because the wild bootstrap counterpart \(\hat{\boldsymbol{\beta}}_{n}^{*}\) of the MPLE \(\hat{\boldsymbol{\beta}}_{n}\) has been used as wild bootstrap estimator \(\tilde{\boldsymbol{\beta}}_{n}^{*}\) of \(\tilde{\boldsymbol{\beta}}_{n}\) in the context of the Breslow estimator, cf. (II.19). Finally, we present the asymptotic distribution of \(\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))\).

**Lemma II.2.7**.: If Assumption II.2.2 holds, then, conditionally on \(\mathcal{F}_{2}(0)\), \[\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))\stackrel{\mathcal{L}}{\longrightarrow}D_{\tilde{k}}(\cdot)+\mathbf{B}(\cdot)\cdot\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau),\ \mbox{in}\ D(\mathcal{T}),\] in probability, as \(n\to\infty\), where all limit components of the statement above coincide with those given in Lemma II.2.4 and Lemma II.2.5.

Proof.: The lemma follows from (II.23) by means of Lemma II.2.3 (i) & (ii) in combination with Theorem I.3.10 of Part I.

As the final step of this section, we consider the joint (conditional) asymptotic distribution of the (wild bootstrap) estimators of \(\boldsymbol{\beta}_{0}\) and \(A_{1;0}\). This will be of use in Section II.2.4, in which we study the (conditional) asymptotic distribution of the (wild bootstrap) estimator for \(F_{1}\).
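The wild bootstrap Breslow estimator admits an equally short sketch. Assuming \(S_{n}^{(0)}\) denotes the \(1/n\)-averaged sum, the \(1/n\) factors cancel, and setting \(G_{i}\equiv 0\) recovers the Breslow estimator (II.13) itself; the names are ours.

```python
import numpy as np

def wild_bootstrap_breslow(beta_star, Z, event_times, event_idx, Y, G):
    """A*_{1;0,n}(t, beta*_n): a step function whose jump at the type-1
    event time t_j of individual i is
        (G_i + 1) / sum_l Y_l(t_j) exp(Z_l' beta*_n).
    With G = 0 this reduces to the Breslow estimator (II.13). (Sketch.)"""
    jumps = [(G[i] + 1.0) / (Y(t) * np.exp(Z @ beta_star)).sum()
             for t, i in zip(event_times, event_idx)]
    return np.asarray(event_times), np.cumsum(jumps)   # jump times and values
```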
Recall from Section II.2.1 that \[\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})(\cdot)=\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{\top}-\boldsymbol{\beta}_{0}^{\top},\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{1;0}(\cdot))^{\top},\] \[\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})(\cdot)=\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{*\top}-\hat{\boldsymbol{\beta}}_{n}^{\top},\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))^{\top},\] where \(\boldsymbol{\theta}_{0}\), \(\hat{\boldsymbol{\theta}}_{n}\), and \(\hat{\boldsymbol{\theta}}_{n}^{*}\) are viewed as elements of \(D(\mathcal{T})^{q+1}\). Here and below, \(d[\cdot,\cdot]\) is an appropriate distance measure between probability distributions, for example the Prohorov distance. With this notation in mind, we can formulate the following theorem.

**Theorem II.2.8**.: If Assumption II.2.2 holds, then \[d[\mathcal{L}(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})|\mathcal{F}_{2}(0)),\mathcal{L}(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0}))]\stackrel{\mathbb{P}}{\longrightarrow}0,\text{ as }n\to\infty.\]

Proof.: See Appendix.

Hereby, we established the asymptotic validity of the wild bootstrap as an approximation procedure for the estimators of the Fine-Gray model under censoring-complete data.

#### II.2.4 A Weak Convergence Result for CIFs

We will now infer the (conditional) limiting distributions of \(\sqrt{n}(\hat{F}_{1,n}-F_{1})=\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n})-\Gamma(\boldsymbol{\theta}_{0}))\) and \(\sqrt{n}(\hat{F}_{1,n}^{*}-\hat{F}_{1,n})=\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n}^{*})-\Gamma(\hat{\boldsymbol{\theta}}_{n}))\) from the (conditional) limiting distributions of \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\) and \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})\), respectively, with the functional \(\delta\)-method. In particular, we have for \(j=1,2\), \[\sqrt{n}(\Gamma(\tilde{\boldsymbol{\theta}}^{(j)})-\Gamma(\tilde{\boldsymbol{\theta}}^{(j-1)}))=\mathrm{d}\Gamma(\tilde{\boldsymbol{\theta}}^{(j-1)})\cdot\sqrt{n}(\tilde{\boldsymbol{\theta}}^{(j)}-\tilde{\boldsymbol{\theta}}^{(j-1)})+o_{p}(1),\] (II.25) where \(\mathrm{d}\Gamma(\tilde{\boldsymbol{\theta}}^{(j-1)})\) is the Hadamard derivative of \(\Gamma\) at \(\tilde{\boldsymbol{\theta}}^{(j-1)}\), and \(\tilde{\boldsymbol{\theta}}^{(j)}=(\tilde{\boldsymbol{\beta}}^{(j)\top},\tilde{A}_{1;0}^{(j)})^{\top}\) with \(\tilde{\boldsymbol{\theta}}^{(0)}=\boldsymbol{\theta}_{0}=(\boldsymbol{\beta}_{0}^{\top},A_{1;0})^{\top}\), \(\tilde{\boldsymbol{\theta}}^{(1)}=\hat{\boldsymbol{\theta}}_{n}=(\hat{\boldsymbol{\beta}}_{n}^{\top},\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))^{\top}\) and \(\tilde{\boldsymbol{\theta}}^{(2)}=\hat{\boldsymbol{\theta}}_{n}^{*}=(\hat{\boldsymbol{\beta}}_{n}^{*\top},\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*}))^{\top}\). The corresponding Hadamard derivative is given in the following lemma.
**Lemma II.2.9**.: For \(j=1,2\), \[\mathrm{d}\Gamma(\tilde{\mathbf{\theta}}^{(j-1)})\cdot\sqrt{n}(\tilde{ \mathbf{\theta}}^{(j)}-\tilde{\mathbf{\theta}}^{(j-1)})\] \[=\exp\{-\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)})\cdot\tilde{A} ^{(j-1)}_{1;0}\}\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)})\] \[\qquad\cdot\big{[}\tilde{A}^{(j-1)}_{1;0}\cdot\mathbf{Z}^{\top}\sqrt{ n}(\tilde{\mathbf{\beta}}^{(j)}-\tilde{\mathbf{\beta}}^{(j-1)})+\sqrt{n}(\tilde{A}^{(j) }_{1;0}-\tilde{A}^{(j-1)}_{1;0})\big{]},\quad\text{ on }\mathcal{T}.\] Proof.: See Appendix. Theorem II.2.8 and (II.25) suggest that the conditional distribution of \(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}^{*}_{n})-\Gamma(\hat{\mathbf{\theta}}_{n}))\) is asymptotically equivalent to the distribution of \(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}_{n})-\Gamma(\mathbf{\theta}_{0}))\). This is in fact what we prove with the following theorem. **Theorem II.2.10**.: If Assumption II.2.2 holds, then \[d[\mathcal{L}(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}^{*}_{n})-\Gamma(\hat{\mathbf{ \theta}}_{n}))|\mathcal{F}_{2}(0)),\mathcal{L}(\sqrt{n}(\Gamma(\hat{\mathbf{ \theta}}_{n})-\Gamma(\mathbf{\theta}_{0})))]\stackrel{{\mathbb{P}}}{{ \longrightarrow}}0,\text{ as }n\to\infty.\] Proof.: See Appendix. Due to the asymptotic result of Theorem II.2.10 we validated the wild bootstrap as an appropriate procedure to approximate the distribution of \(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}_{n})-\Gamma(\mathbf{\theta}_{0}))=\sqrt{n}( \hat{F}_{1,n}-F_{1})\) under censoring-complete data. ### II.3 Time-Simultaneous Confidence Bands for CIFs Our aim is the prediction of \(F_{1}(\cdot|\mathbf{Z})=\Gamma(\mathbf{\theta}_{0}(\cdot))\) for an individual with covariate vector \(\mathbf{Z}\), including an asymptotically valid time-simultaneous \((1-\alpha)\)-confidence band, on a time interval \([t_{1},t_{2}]\subset[0,\tau]\). The band will be based on the estimator \(\hat{F}_{1,n}(\cdot|\mathbf{Z})=\Gamma(\hat{\mathbf{\theta}}_{n}(\cdot))\) of \(F_{1}(\cdot|\mathbf{Z})\) and a wild bootstrap-based quantile. Such a quantile replaces the unknown quantile related to the stochastic process \[W_{n}(t)=\sqrt{n}(\hat{F}_{1,n}(t|\mathbf{Z})-F_{1}(t|\mathbf{Z})),\quad t\in[ t_{1},t_{2}].\] We will investigate the use of several types of quantiles, related to six different approximations of the distribution of \(W_{n}\). First, we approximate the distribution of \(W_{n}\) with that of the following three wild bootstrap counterparts: \[W_{n}^{*,0}(\cdot) =\sqrt{n}(\hat{F}_{1,n}^{*}(\cdot|\mathbf{Z})-\hat{F}_{1,n}(\cdot |\mathbf{Z}))\] \[=\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}^{*}_{n}(\cdot))-\Gamma(\hat{ \mathbf{\theta}}_{n}(\cdot))),\] \[W_{n}^{*,1}(\cdot) =\mathrm{d}\Gamma(\hat{\mathbf{\theta}}_{n})(\cdot)\cdot\sqrt{n}(\hat{ \mathbf{\theta}}_{n}^{*}(\cdot)-\hat{\mathbf{\theta}}_{n}(\cdot)),\] \[W_{n}^{*,2}(\cdot) =\mathrm{d}\Gamma(\hat{\mathbf{\theta}}_{n}^{*})(\cdot)\cdot\sqrt{n}( \hat{\mathbf{\theta}}_{n}^{*}(\cdot)-\hat{\mathbf{\theta}}_{n}(\cdot)),\] where \(\hat{F}_{1,n}^{*}(\cdot|\mathbf{Z})=\Gamma(\hat{\mathbf{\theta}}_{n}^{*}(\cdot))\). The two wild bootstrap counterparts \(W_{n}^{*,1}\) and \(W_{n}^{*,2}\) of \(W_{n}\) are motivated by (II.25). For \(j=0,1,2\), we define the wild bootstrap-based \((1-\alpha)\)-quantile \(q_{1-\alpha,n}^{*,j}\) related to \(W_{n}^{*,j}\), as the conditional \((1-\alpha)\)-quantile of \(\sup_{t\in[t_{1},t_{2}]}|W_{n}^{*,j}(t)|\), given the data. 
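In code, the functional \(\Gamma\) and the quantile \(q_{1-\alpha,n}^{*,j}\) take the following form. The sketch assumes that \(B\) bootstrap CIF paths have already been evaluated on a common time grid over \([t_{1},t_{2}]\) and is written for the untransformed counterpart \(W_{n}^{*,0}\); the array names are ours.

```python
import numpy as np

def Gamma(beta, A, Zvec):
    """CIF functional F_1(t|Z) = Gamma(theta)(t) = 1 - exp{-exp(Z'beta) A_{1;0}(t)};
    `A` holds the cumulative subdistribution baseline hazard on the time grid."""
    return 1.0 - np.exp(-np.exp(Zvec @ beta) * A)

def wb_quantile(F_hat, F_stars, n, alpha=0.05):
    """Conditional (1-alpha)-quantile of sup_t |W*(t)| over the grid,
    for W*_b = sqrt(n) (F*_b - F_hat); F_stars is a (B, len(grid)) array."""
    sups = np.abs(np.sqrt(n) * (F_stars - F_hat)).max(axis=1)
    return np.quantile(sups, 1.0 - alpha)
```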
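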
Due to Theorem II.2.10 in combination with (II.25), the corresponding unweighted and untransformed time-simultaneous \((1-\alpha)\)-confidence bands for \(F_{1}(\cdot|\mathbf{Z})\), denoted by \(CB_{1,n,j}^{*}\), are asymptotically valid and they are given by \[CB_{1,n,j}^{*}(t|\mathbf{Z})=\hat{F}_{1,n}(t|\mathbf{Z})\mp q_{1-\alpha,n}^{*,j}/\sqrt{n},\quad t\in[t_{1},t_{2}],\quad j=0,1,2.\] (II.26)

Next, in order to improve the performance of the confidence bands, especially for small sample sizes, it is advocated in Lin (1997) to use a transformed process \(W_{n,\phi,1}=\sqrt{n}(\phi(\hat{F}_{1,n}(\cdot|\mathbf{Z}))-\phi(F_{1}(\cdot|\mathbf{Z})))\), instead of \(W_{n}\). Here, \(\phi:[0,1]\to\mathbb{R}\) is a continuously differentiable one-to-one mapping. Accordingly, we will use three approximations based on this idea as well. For the case at hand, we chose for \(\phi\) the complementary log-log transformation \(\phi(t)=\log(-\log(1-t))\), cf. Lin (1997) and Beyersmann et al. (2013). In addition to this transformation, we incorporate the weight function \(g_{n}(t)=1/\hat{\sigma}_{n}(t)\), where \(\hat{\sigma}_{n}^{2}(t)\) is a consistent estimator of the variance of \(W_{n,\phi,1}(t)\). More concretely, we consider the weighted and transformed process \[W_{n,\phi,g_{n}}(t)=\sqrt{n}g_{n}(t)\big(\phi(\hat{F}_{1,n}(t|\mathbf{Z}))-\phi(F_{1}(t|\mathbf{Z}))\big),\quad t\in[t_{1},t_{2}],\] based on which we construct the so-called equal-precision wild bootstrap confidence bands. For this, we approximate the distribution of \(W_{n,\phi,g_{n}}\) by the distribution of either one of the following three wild bootstrap counterparts: \[W_{n,\phi,g_{n}^{*}}^{*,0}(\cdot)=\sqrt{n}g_{n}^{*}(\cdot)(\phi(\hat{F}_{1,n}^{*}(\cdot|\mathbf{Z}))-\phi(\hat{F}_{1,n}(\cdot|\mathbf{Z})))=\sqrt{n}g_{n}^{*}(\cdot)(\mathbf{Z}^{\top}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})+\log(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*}))-\log(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))),\] \[W_{n,\phi,g_{n}^{*}}^{*,1}(\cdot)=\sqrt{n}g_{n}^{*}(\cdot)(\mathbf{Z}^{\top}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})+\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})^{-1}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))),\] \[W_{n,\phi,g_{n}^{*}}^{*,2}(\cdot)=\sqrt{n}g_{n}^{*}(\cdot)(\mathbf{Z}^{\top}(\hat{\boldsymbol{\beta}}_{n}^{*}-\hat{\boldsymbol{\beta}}_{n})+\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})^{-1}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))),\] where all three versions are asymptotically equivalent according to the functional \(\delta\)-method and the continuous mapping theorem. Additionally, the bootstrapped weight function \(g_{n}^{*}(t)=1/\hat{\sigma}_{n}^{*}(t)\) involves a bootstrap version \(\hat{\sigma}_{n}^{*2}(t)\) of \(\hat{\sigma}_{n}^{2}(t)\). Both estimators, \(\hat{\sigma}_{n}^{2}(t)\) and \(\hat{\sigma}_{n}^{*2}(t)\), are given in the lemma below.
**Lemma II.3.1**.: If Assumption II.2.2 holds, then, for a given covariate vector \(\mathbf{Z}\), \[\hat{\sigma}_{n}^{2}(t)=\hat{A}_{1;0,n}(t,\hat{\boldsymbol{\beta}}_{n})^{-2}\Big[\int_{0}^{t}S_{n}^{(0)}(u,\hat{\boldsymbol{\beta}}_{n})^{-1}d\hat{A}_{1;0,n}(u,\hat{\boldsymbol{\beta}}_{n})+\int_{0}^{t}(\mathbf{Z}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}))^{\top}d\hat{A}_{1;0,n}(u,\hat{\boldsymbol{\beta}}_{n})\Big(\frac{1}{n}\boldsymbol{I}_{n}(\tau,\hat{\boldsymbol{\beta}}_{n})\Big)^{-1}\cdot\int_{0}^{t}(\mathbf{Z}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}))d\hat{A}_{1;0,n}(u,\hat{\boldsymbol{\beta}}_{n})\Big],\] (II.27) and \[\hat{\sigma}_{n}^{*2}(t)=\hat{A}_{1;0,n}^{*}(t,\hat{\boldsymbol{\beta}}_{n}^{*})^{-2}\Big[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}S_{n}^{(0)}(u,\hat{\boldsymbol{\beta}}_{n}^{*})^{-2}G_{i}^{2}dN_{i}(u)+\int_{0}^{t}(\mathbf{Z}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}^{*}))^{\top}d\hat{A}_{1;0,n}^{*}(u,\hat{\boldsymbol{\beta}}_{n}^{*})\Big(\frac{1}{n}\boldsymbol{I}_{n}^{*}(\tau,\hat{\boldsymbol{\beta}}_{n}^{*})\Big)^{-1}\cdot\int_{0}^{t}(\mathbf{Z}-\mathbf{E}_{n}(u,\hat{\boldsymbol{\beta}}_{n}^{*}))d\hat{A}_{1;0,n}^{*}(u,\hat{\boldsymbol{\beta}}_{n}^{*})\Big]\] (II.28) are consistent (wild bootstrap) estimators for the variance of \(W_{n,\phi,1}(t)\).

Proof.: See Appendix.

As before, we replace the unknown \((1-\alpha)\)-quantile corresponding to \(W_{n,\phi,g_{n}}\) by either one of the wild bootstrap-based quantiles \(\tilde{q}_{1-\alpha,n}^{*,j}\) corresponding to \(W_{n,\phi,g_{n}^{*}}^{*,j}\), where \(\tilde{q}_{1-\alpha,n}^{*,j}\) is the conditional \((1-\alpha)\)-quantile of \(\sup_{t\in[t_{1},t_{2}]}|W_{n,\phi,g_{n}^{*}}^{*,j}(t)|\) given the data, \(j=0,1,2\). From Theorem II.2.8, Lemma II.3.1 and the continuous mapping theorem, it follows analogously to the proof of Theorem II.2.10 that these wild bootstrap-based quantiles are asymptotically valid. The corresponding log-log-transformed time-simultaneous equal-precision \((1-\alpha)\) confidence bands for \(F_{1}(\cdot|\mathbf{Z})\), denoted by \(CB_{1,n,j}^{*,EP}\), are given by \[CB_{1,n,j}^{*,EP}(t|\mathbf{Z})=\phi^{-1}\big(\phi(\hat{F}_{1,n}(t|\mathbf{Z}))\mp\tilde{q}_{1-\alpha,n}^{*,j}/\{\sqrt{n}g_{n}(t)\}\big)=1-(1-\hat{F}_{1,n}(t|\mathbf{Z}))^{\exp(\mp\tilde{q}_{1-\alpha,n}^{*,j}\hat{\sigma}_{n}(t)/\sqrt{n})},\quad t\in[t_{1},t_{2}],\quad j=0,1,2,\] (II.29) where \(\phi^{-1}(y)=1-\exp(-e^{y})\).

### II.4 Simulation Study on Wild Bootstrap-Based Confidence Bands

#### II.4.1 Simulation Set-Up

Our simulation study is inspired by the _sir.adm_ data set of the mvna R-package and is conducted using R-3.5.1, cf. R Core Team (2016). The aim is to assess the reliability of the six types of wild bootstrap 95% confidence bands for \(F_{1}(\cdot|\mathbf{Z})\), as given in (II.26) and (II.29), in a non-asymptotic, real life setting. For this we evaluated 144 simulation settings; for each setting we simulated 5,000 studies, based on which the empirical coverage probability was calculated. Moreover, the wild bootstrap-based quantiles \(q_{0.95,n}^{*,j}\) and \(\tilde{q}_{0.95,n}^{*,j}\), \(j=0,1,2\), are based on 2,000 wild bootstrap iterations.
The simulation settings were chosen as follows:

* sample sizes: \(n=100,200,300\);
* multiplier distributions: \(\mathcal{N}(0,1)\), \(\mathrm{Exp}(1)-1\), or \(\mathrm{Pois}(1)-1\);
* censoring distributions: \(\mathcal{U}(0,c)\) with varying maximum parameters \(c\) resulting in censoring rates of about 20% to 25% (light censoring) or about 37% to 43% (strong censoring);
* covariates: univariate \(Z\sim\mathrm{Bernoulli}(0.2)\) or trivariate \((Z_{ij})_{j=1}^{3}\) with independent \(Z_{i1}\sim\mathcal{N}(0,1)\), \(Z_{i2}\sim\mathrm{Bernoulli}(0.15)\), \(Z_{i3}\sim\mathrm{Bernoulli}(0.4)\), which stand for the standardized age (\(j=1\)), the pneumonia status (\(j=2\)), and the gender (\(j=3\)) of a patient \(i\), \(i=1,\ldots,n\);
* time-constant _cause-specific_ baseline hazard rates of event type 1 and of event type 2: in the univariate covariate case, \(\alpha_{01;0}=0.5\) and \(\alpha_{02;0}\in\{0.05,0.5\}\); in the trivariate covariate case, \((\alpha_{01;0},\alpha_{02;0})\in\{(0.05,0.05),\,(0.08,0.008)\}\), the latter of which is motivated by the _sir.adm_ data set that will be introduced in Section II.5 below;
* parameter (vector): in the univariate covariate case, \(\beta_{0}\in\{-0.5,-0.25,0.25\}\); in the trivariate covariate case, \(\boldsymbol{\beta}_{0}\in\{(-0.05,-0.5,-0.05),(-0.05,-0.25,-0.05),(-0.05,0.25,-0.05)\}\);
* covariate choices for the confidence bands: in the univariate covariate case, \(Z\in\{0,1\}\); in the trivariate covariate case, \(\mathbf{Z}\in\{(-2/3,0,1),(2/3,1,0)\}\), i.e., a 45 years old female without pneumonia and a 70 years old male with pneumonia on hospital admission, respectively.

Based on the above parameter choices, we simulated survival times and event types according to the Fine-Gray model. For this we used the algorithms described in Beyersmann et al. (2009), in which it is suggested to simulate the corresponding survival data by exploiting the cause-specific hazards in the following way.

* Given time-constant cause-specific hazards \(\alpha_{01;0}\), \(\alpha_{02;0}\), the baseline subdistribution hazard of event type 1 is \(\alpha_{1;0}(t)=\dfrac{\alpha_{01;0}+\alpha_{02;0}}{1+\alpha_{02;0}/\alpha_{01;0}\cdot\exp\{(\alpha_{01;0}+\alpha_{02;0})t\}}\).
* For the cause-specific hazard of event type 1 we chose a time-constant Cox proportional hazards model, i.e., \(\alpha_{01|\mathbf{Z}}=\alpha_{01;0}\cdot\exp\{\mathbf{Z}^{\top}\boldsymbol{\beta}_{0}\}\). Recall from (II.2) that the subdistribution hazard of event type 1 is given by \(\alpha_{1}(t|\mathbf{Z})=\alpha_{1;0}(t)\cdot\exp\{\mathbf{Z}^{\top}\boldsymbol{\beta}_{0}\}\).
* Given \(\alpha_{01|\mathbf{Z}}\) and \(\alpha_{1}(t|\mathbf{Z})\), the cause-specific hazard rate of event type 2 is \[\alpha_{02}(t|\mathbf{Z})=\alpha_{1}(t|\mathbf{Z})-\alpha_{01|\mathbf{Z}}-\frac{d}{dt}\log(\alpha_{1}(t|\mathbf{Z}))\] with \(\dfrac{d}{dt}\log(\alpha_{1}(t|\mathbf{Z}))=-\dfrac{\alpha_{01;0}+\alpha_{02;0}}{1+\alpha_{01;0}/\alpha_{02;0}\cdot\exp\{-(\alpha_{01;0}+\alpha_{02;0})t\}}\).

The time intervals \([t_{1},t_{2}]\) with respect to which the confidence bands were determined correspond to the first and the last decile of the observed survival times of event type 1 across all simulated studies of a kind, where for each realized data set, \(t_{1}\) was also taken to be at least the first observed survival time of type 1. This has been done to avoid poor approximation due to the proximity of the band's boundary time points to the extremes of the event times, cf. Lin (1997).
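The data-generating step above can be sketched as follows. The cumulative all-cause hazard \(H(t|\mathbf{Z})\), obtained by integrating \(\alpha_{01|\mathbf{Z}}+\alpha_{02}(t|\mathbf{Z})=\alpha_{1}(t|\mathbf{Z})-\frac{d}{dt}\log(\alpha_{1}(t|\mathbf{Z}))\) in closed form, is inverted to draw a survival time, and the event type is then assigned with probability proportional to the two cause-specific hazards. The sketch hard-codes the univariate \(Z\sim\mathrm{Bernoulli}(0.2)\) setting, and all names are ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def simulate_fg(n, beta0, a01, a02, c_max, rng):
    """Censoring-complete competing-risks data via the cause-specific
    construction of Beyersmann et al. (2009) described above (sketch)."""
    a, r = a01 + a02, a02 / a01
    Z = rng.binomial(1, 0.2, n).astype(float)
    eta = np.exp(beta0 * Z)                                # exp(Z beta_0)

    def H(t, e):
        """Cumulative all-cause hazard: e * A_{1;0}(t) + log(alpha_{1;0}(0)/alpha_{1;0}(t))."""
        A10 = np.log(a) - np.log(a02 + a01 * np.exp(-a * t))
        return e * A10 + np.logaddexp(0.0, np.log(r) + a * t) - np.log1p(r)

    T = np.empty(n)
    eps = np.empty(n, dtype=int)
    for i in range(n):
        u = rng.exponential(1.0)                           # H(T) ~ Exp(1)
        T[i] = brentq(lambda t: H(t, eta[i]) - u, 0.0, 1e3) # generous bracket
        w = np.log(r) + a * T[i]
        h1 = a01 * eta[i]                                  # cause-specific type-1 hazard
        h_all = eta[i] * a * expit(-w) + a * expit(w)      # alpha_1(T|Z) - d/dt log alpha_1(T|Z)
        eps[i] = 1 if rng.uniform() < h1 / h_all else 2
    C = rng.uniform(0.0, c_max, n)                         # U(0, c) censoring
    return Z, np.minimum(T, C), np.where(T <= C, eps, 0), C
```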
#### II.4.2 Results of the Simulation Study

In our simulation study, we assessed the actual coverage probability of several wild bootstrap 95% confidence bands for \(F_{1}(\cdot|\mathbf{Z})\) based on the wild bootstrap 95%-quantiles \(q^{*,j}_{1-\alpha,n}\), \(\tilde{q}^{*,j}_{1-\alpha,n}\), \(j=0,1,2\), and three different distributions for the multipliers. The corresponding results are summarized in Table II.1. As described in Section II.4.1, we have simulated 144 settings with varying sample sizes, varying censoring rates and varying covariate effects, among others. The simulated coverage probabilities for each setting can be found in the appendix, see Tables II.2-II.13. In order to illustrate the results of all simulated settings at a glance, we calculated for every combination of multipliers and quantiles the percentages of settings with a coverage probability between 93.0% and 97.0% (Table II.1a), at most 92.0% (Table II.1b), and at least 98.0% (Table II.1c).

Overall, the combination of multiplier distribution and type of quantile seems to have a major impact on the reliability of the confidence bands. In particular, none of the tested distributions works well in combination with all types of quantiles, and vice versa. Several combinations turned out to be either too liberal or too conservative. In this respect, we only mention those combinations whose bands are too liberal or too conservative in at least 15% of the 144 settings. The combination of centered exponential multipliers and quantile \(\tilde{q}_{1-\alpha,n}^{*,0}\) resulted in too low coverage probabilities, as 25.7% of the 144 settings have a coverage probability between 0% and 92%. Too high coverage probabilities were found for the combinations of standard normal multipliers with quantile \(\tilde{q}_{1-\alpha,n}^{*,1}\), centered exponential multipliers with quantile \(q_{1-\alpha,n}^{*,1}\), centered Poisson multipliers with quantile \(\tilde{q}_{1-\alpha,n}^{*,1}\), and centered exponential multipliers with quantile \(\tilde{q}_{1-\alpha,n}^{*,2}\), as 38%, 30.6%, 22.9%, and 18.1%, respectively, of their 144 settings have a coverage probability between 98% and 100%.

We consider nominal 95% confidence bands with actual coverage probability between 93% and 97% as acceptable. There are four combinations of multipliers and quantiles such that in at least 90% of the 144 simulated settings coverage probabilities between 93% and 97% were achieved. The results of the following combinations are in this sense satisfactory: standard normal multipliers in combination with quantile \(\tilde{q}_{1-\alpha,n}^{*,0}\) (97.9%), standard normal multipliers with quantile \(q_{1-\alpha,n}^{*,1}\) (95.1%), centered Poisson multipliers with quantile \(q_{1-\alpha,n}^{*,1}\) (93.8%), and standard normal multipliers with quantile \(q_{1-\alpha,n}^{*,0}\) (91%). Note that for those combinations none of the 144 settings showed a too low coverage probability below 92%. Additionally, for the combination of standard normal multipliers with quantile \(\tilde{q}_{1-\alpha,n}^{*,0}\) none of the settings led to a too high coverage probability, i.e., above 98%.

In conclusion, we recommend to use the 95% equal-precision confidence band based on \(\tilde{q}_{1-\alpha,n}^{*,0}\) with standard normal multipliers, as in 97.9% of the simulated settings the coverage probability was between 93% and 97%, and 100% of the settings resulted in coverage probabilities between 92.1% and 97.9%.
Table II.1: _Percentage of the 144 simulated settings with simulated coverage probability between 93% - 97% (a), between 0% - 92% (b), between 98% - 100% (c). The simulated coverage probabilities refer to confidence bands for the cumulative incidence function calculated under the indicated distribution of the multipliers and the specified quantile._

### II.5 Real Data Example: Impact of Pneumonia on the CIF

In this section, we illustrate the wild bootstrap-based 95% confidence bands for a real data set. The data set was obtained by merging the _sir.adm_ data set from the R-package mvna with the _icu.pneu_ data set from the R-package kmi by matching the patient ID. These data sets are random subsamples of the data that originate from the SIR 3 cohort study conducted at the Charité university hospital in Berlin, Germany, during a period of 18 months from January 2000 until July 2001. The goal of that study was to determine the incidence of hospital-acquired infection in intensive care units (ICU). See Bärwolff et al. (2005) and Grundmann et al. (2005) for a detailed description of the study and the corresponding results. One may find further statistical analyses of the data in, e.g., Beyersmann et al. (2006) and Wolkewitz et al. (2008).

As described in Beyersmann et al. (2012), the _sir.adm_ data set contains 747 patients for whom their pneumonia status on admission to the ICU, age, and sex are given as baseline covariates. The data set _icu.pneu_ contains 1,313 patients for whom a nosocomial pneumonia indicator, their age, and sex are available as covariates. The nosocomial pneumonia indicator switches from zero to one at the time of infection. However, we have established the wild bootstrap only for the case of time-constant (i.e. baseline) covariates in this chapter. Thus, we exclude the nosocomial pneumonia indicator from our analysis. Practical guidelines for the inclusion of time-dependent covariates in Fine-Gray models are given by Beyersmann and Schumacher (2008).

By merging the two data sets, we obtained a data set of 524 patients for whom the covariates are comprised of their pneumonia status on admission to the ICU, age, and sex. For example, the merged data set contains 63 patients with pneumonia on admission, 221 female patients, and the average age of a patient was 57.62 years (with quartiles 46.55, 61.35, 70.95 years). Moreover, the outcome of the ICU-stay of each patient--alive discharge from hospital, death, or censoring--was recorded. Thus, we have discharge from hospital and death as the competing risks. In our study, we took the status death as the event of interest, i.e., as event of type 1. Note that censoring occurred only due to administrative loss to follow-up. In the data set at hand, 459 patients were discharged from hospital, 54 patients died and 11 were censored. Additionally, the data set contains for each patient the time in ICU until either the occurrence of an event or censoring. Furthermore, the data set includes the administrative censoring times for all patients that have been discharged alive from the hospital, but not for the deceased individuals. That is, the data set holds the censoring times for all individuals except for those who experienced the event of interest. We will call such data sets _partially-censoring-complete_. In contrast, a data set with censoring times for all individuals is called _censoring-complete_.
Nevertheless, from a practical point of view, partially-censoring-complete data are sufficient, because individuals are considered to be at risk until either they experience the event of interest or until they are censored. Thus, the at-risk indicator is computable for all individuals based on partially-censoring-complete data. From a theoretical point of view, the underlying \(\sigma\)-algebras for our martingale arguments have to be modified in order to be in line with partially-censoring-complete data. In particular, in (II.4) and (II.18) we replace \(\mathbb{1}\left\{C_{i}\geq u\right\}\) by \(\mathbb{1}\left\{C_{i}\geq u\right\}(1-N_{i}(u))\), where \(N_{i}\) counts the observed events of interest of individual \(i\) and \(C_{i}\) is the censoring time of individual \(i\). In this way, the censoring information is available unless the individual has experienced the event of interest.

In our present data example, we computed the wild bootstrap confidence band for the cumulative incidence function of event type 1, \(F_{1}(\cdot|\mathbf{Z})\), for two covariate vectors (\(\mathbf{Z}=\mathbf{z}_{1}\) and \(\mathbf{Z}=\mathbf{z}_{2}\)): first, for a female individual of average age _without_ pneumonia on admission (encoded by the covariate vector \(\mathbf{z}_{1}\)); second, for a female individual of average age _with_ pneumonia on admission (encoded by the covariate vector \(\mathbf{z}_{2}\)). In particular, we computed the log-log-transformed 95% equal-precision wild bootstrap confidence bands \(CB_{1,524,0}^{*,EP}(t|\mathbf{z}_{1})\) and \(CB_{1,524,0}^{*,EP}(t|\mathbf{z}_{2})\) on the interval \(t\in[t_{1},t_{2}]=[6,48]\) (time in days) with standard normal multipliers and quantile \(\tilde{q}_{0.95,524}^{*,0}\). As in Section II.4.1, the boundary values \(t_{1}\) and \(t_{2}\) correspond to the first and the last decile of the observed survival times of the event of interest. Note that no event of interest occurs during the time interval \((44,48]\) and therefore, the figures will be plotted with respect to the time interval \([6,44]\). Because we only consider this particular type of band for the present data example, we simplify the corresponding notation to \(CB_{1,524}^{*}(\cdot|\mathbf{z}_{j})\), \(j=1,2\). The choice of standard normal multipliers in combination with the quantile \(\tilde{q}^{*,0}_{0.95,524}\) has been made in accordance with the results of the simulation study of Section II.4. The wild bootstrap-based quantile has been calculated using 2,000 wild bootstrap iterations.

In Figure II.1 the estimated cumulative incidence function \(\hat{F}_{1}(\cdot|\mathbf{z}_{j})\) is plotted on the time interval \([6,44]\) for the individual without pneumonia on admission (\(\mathbf{z}_{1}\)) and for the individual with pneumonia on admission (\(\mathbf{z}_{2}\)), together with the lower bounds and upper bounds of the corresponding wild bootstrap confidence bands \(CB^{*}_{1,524}(\cdot|\mathbf{z}_{j})\), \(j=1,2\). The lower and upper bounds of \(CB^{*}_{1,524}(44|\mathbf{z}_{1})\) and \(CB^{*}_{1,524}(44|\mathbf{z}_{2})\) equal \((0.073,0.134)\) and \((0.092,0.323)\), respectively. Thus, the wild bootstrap confidence band after 44 days for the individual with pneumonia is considerably wider than the wild bootstrap confidence band after 44 days for the individual without pneumonia. This is most likely caused by a larger variance estimate due to the relatively few patients with pneumonia on admission to the hospital (63 out of 524 in the whole data set).
Figure II.1: _The estimated cumulative incidence function \(\hat{F}_{1,524}(\cdot|\mathbf{z}_{j})\) for a female individual of average age without pneumonia (CIF w/o pneu) and a female individual of average age with pneumonia (CIF w pneu) with lower and upper bounds of the corresponding wild bootstrap confidence bands \(CB^{*}_{1,524}(\cdot|\mathbf{z}_{j})\) (WB CB), \(j=1,2\)._

In other words, for a female individual of average age without pneumonia on admission, the predicted chances of dying in the ICU are not only lower but also more precisely estimated than the predicted chances of experiencing the event of interest for a female individual of average age with pneumonia on admission. Moreover, one can see from the figure that the two confidence bands overlap on the entire time interval.

In Figure II.2 we present the relationship between the estimated cumulative incidence function \(\hat{F}_{1,524}(\cdot|\mathbf{z}_{j})\), the resampled cumulative incidence functions \(\hat{F}^{*}_{1,524}(\cdot|\mathbf{z}_{j})\), and the equal-precision 95% wild bootstrap confidence band \(CB^{*}_{1,524}(\cdot|\mathbf{z}_{j})\) for an individual without pneumonia on admission (\(\mathbf{z}_{1}\)) and for an individual with pneumonia on admission (\(\mathbf{z}_{2}\)), \(j=1,2\). It can be seen that the resampled cumulative incidence functions fluctuate vertically around the estimated cumulative incidence function. This illustrates the randomness induced by the multipliers, which is supposed to mimic the randomness that one would observe if several data sets had been used for the estimation of the cumulative incidence function. Furthermore, the resampled cumulative incidence functions are asymmetrically distributed around the estimated cumulative incidence function. This is likely due to the complementary log-log transformation of the equal-precision wild bootstrap confidence band.

### II.6 Discussion

In the above, we have demonstrated in detail how the martingale-based theory of Part I can be applied to justify the wild bootstrap for the estimators involved in the Fine-Gray model under censoring-complete data. The key role in this is played by the asymptotic (wild bootstrap) martingale representation considered in Part I and the asymptotic results on the corresponding distribution derived there. In the present chapter we retrieved the representation for the MPLE, the Breslow estimator, and their wild bootstrap counterparts. We then used the results on the asymptotic distribution from Part I to infer the asymptotic distribution of the (wild bootstrap) estimators involved in the Fine-Gray model. Moreover, we extended the results to a functional of those estimators in order to justify the wild bootstrap for the cumulative incidence function, which is typically the function of interest in the context of this model. Based on these results, we presented two types of asymptotically valid time-simultaneous confidence bands that can be used to predict the cumulative incidence function for given covariate combinations.

We also conducted an extensive simulation study to evaluate the reliability of different resampling details for small sample sizes. We discovered that the coverage probability depends on both the chosen distribution of the multipliers and the type of wild bootstrap-based quantile.
In summary, standard normal multipliers in combination with either of the quantiles \(q_{1-\alpha,n}^{*,0}\) or \(q_{1-\alpha,n}^{*,1}\), as well as centered Poisson multipliers in combination with the quantile \(q_{1-\alpha,n}^{*,1}\), resulted in the most reliable bands based on the untransformed cumulative incidences. Additionally, for bands based on the complementary log-log transformation, which have the further advantage of taking only values between 0 and 1, standard normal multipliers in combination with \(\tilde{q}_{1-\alpha,n}^{*,0}\) resulted in the most reliable confidence bands of all.

Furthermore, for a real data set we illustrated the wild bootstrap confidence band corresponding to the best choice of multiplier distribution and type of quantile found via the simulation study. In particular, we predicted the band estimate of the cumulative incidence function for death as the event of interest for female individuals of average age with and without pneumonia on admission. Thereby, the chances of dying could be compared for those two covariate combinations.

We have introduced the Fine-Gray model for time-constant covariates only. A practical solution to the question of how to extend the Fine-Gray model to time-dependent covariates can be found in Beyersmann and Schumacher (2008). In that paper the authors suggested the usage of multistate models in combination with discrete covariates in order to treat time-dependent covariates in Fine-Gray models. Moreover, the general case of independently right-censored data is not covered by our theory developed in Part I. This is due to the fact that, for the general case, the score function does not exhibit a martingale property anymore (see Appendix A of Fine and Gray (1999)). In a forthcoming paper, we will develop a wild bootstrap-based confidence band for the cumulative incidence function which is adjusted to independently right-censored data via multiple imputation.

## Appendix B: Proofs and Remarks

Throughout the appendix, we will use a simplified version of the notation introduced in Section II.2. In particular, we will use the following notation:

* \(\mathbf{C}_{n}=\mathbf{C}_{n}^{(1)}\) and \(\mathbf{C}_{n}^{*}=\mathbf{C}_{n}^{*(1)}\);
* \(\mathbf{D}_{n,g}=\mathbf{D}_{n,g}^{(1)}\) and \(\mathbf{D}_{n,g}^{*}=\mathbf{D}_{n,g}^{*(1)}\) with \(g_{n,i}=g_{n,i}^{(1)}\);
* \(\mathbf{B}_{n}=\mathbf{B}_{n}^{(2)}\) and \(\mathbf{B}_{n}^{*}=\mathbf{B}_{n}^{*(2)}\);
* \(D_{n,k}=D_{n,k}^{(2)}\) and \(D_{n,k}^{*}=D_{n,k}^{*(2)}\) with \(k_{n,i}=k_{n,i}^{(2)}\).

### B.1 Proofs and Remarks of Section II.2.2

**Proof of Lemma II.2.3 (i).** First, we show that Assumption II.2.2 (i) - (iv) imply parts (i), (ii), and (iii) of Assumption I.2.1 for \(\mathbf{h}_{n,i}(t,\boldsymbol{\beta})=\big(k_{n}(t,\boldsymbol{\beta}),\mathbf{g}_{n,i}(t,\boldsymbol{\beta})^{\top}\big)^{\top}=\big(J_{n}(t)S_{n}^{(0)}(t,\boldsymbol{\beta})^{-1},(\mathbf{Z}_{i}-\mathbf{E}_{n}(t,\boldsymbol{\beta}))^{\top}\big)^{\top}\) and analogously for its limit in probability \(\tilde{\mathbf{h}}_{i}(t,\boldsymbol{\beta})=\big(\tilde{k}(t,\boldsymbol{\beta}),\tilde{\mathbf{g}}_{i}(t,\boldsymbol{\beta})^{\top}\big)^{\top}=\big(s^{(0)}(t,\boldsymbol{\beta})^{-1},(\mathbf{Z}_{i}-\mathbf{e}(t,\boldsymbol{\beta}))^{\top}\big)^{\top}\), \(t\in\mathcal{T}\) and \(\boldsymbol{\beta}\in\mathcal{B}\). Let \(\tilde{\boldsymbol{\beta}}_{n}\) be a consistent estimator of \(\boldsymbol{\beta}_{0}\). Since
\[\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\big{(}k_{n}(t,\tilde{ \boldsymbol{\beta}}_{n}),\mathbf{g}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})^{ \top}\big{)}^{\top}-\left(\tilde{k}(t,\boldsymbol{\beta}_{0}),\tilde{\mathbf{ g}}_{i}(t,\boldsymbol{\beta}_{0})^{\top}\right)^{\top}\|_{\infty}\] \[\leq\sup_{t\in\mathcal{T}}\|k_{n}(t,\tilde{\boldsymbol{\beta}}_{ n})-\tilde{k}(t,\boldsymbol{\beta}_{0})\|_{\infty}+\sup_{t\in\mathcal{T},i\in\{1, \ldots,n\}}\|\mathbf{g}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})-\tilde{ \mathbf{g}}_{i}(t,\boldsymbol{\beta}_{0})\|_{\infty},\] it suffices for proving part (i) of Assumption I.2.1 of Part I to consider the convergence of each of the two terms separately. Obviously, for proving the parts (ii) and (iii) we can also treat the two components of \(h_{n,i}(t,\boldsymbol{\beta})\) separately. Let us consider \(k_{n}(t,\boldsymbol{\beta})\) first. We have \[\sup_{t\in\mathcal{T}}|k_{n}(t,\tilde{\boldsymbol{\beta}}_{n})- \tilde{k}(t,\boldsymbol{\beta}_{0})|\] \[=\sup_{t\in\mathcal{T}}|J_{n}(t)S_{n}^{(0)}(t,\tilde{\boldsymbol {\beta}}_{n})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}|\] \[=\sup_{t\in\mathcal{T}}|(J_{n}(t)-1+1)\cdot(\frac{s^{(0)}(t, \boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-1+1) \cdot s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0} )^{-1}|\] \[=\sup_{t\in\mathcal{T}}\big{|}\big{[}(J_{n}(t)-1)\cdot(\frac{s^{( 0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}- 1)+(J_{n}(t)-1)+(\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t, \tilde{\boldsymbol{\beta}}_{n})}-1)+1\big{]}\] (II.30) \[\qquad\cdot s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}-s^{(0)}(t, \boldsymbol{\beta}_{0})^{-1}|\] \[=\sup_{t\in\mathcal{T}}\big{|}\big{[}(J_{n}(t)-1)\cdot(\frac{s^{( 0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}- 1)+(J_{n}(t)-1)+(\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t, \tilde{\boldsymbol{\beta}}_{n})}-1)\big{]}\] \[\qquad\cdot s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}|.\] Moreover, we know that \[\sup_{t\in\mathcal{T}}|\big{(}\frac{S_{n}^{(0)}(t,\tilde{ \boldsymbol{\beta}}_{n})}{s^{(0)}(t,\boldsymbol{\beta}_{0})}-1\big{)}\frac{s^{( 0)}(t,\boldsymbol{\beta}_{0})}{s^{(0)}(t,\boldsymbol{\beta}_{0})}|\] \[=\sup_{t\in\mathcal{T}}|\big{(}S_{n}^{(0)}(t,\tilde{\boldsymbol {\beta}}_{n})-s^{(0)}(t,\boldsymbol{\beta}_{0})\big{)}s^{(0)}(t,\boldsymbol{ \beta}_{0})^{-1}|\] \[=\sup_{t\in\mathcal{T}}\left\{|[S_{n}^{(0)}(t,\tilde{\mathbf{\beta}}_{n})-s ^{(0)}(t,\tilde{\mathbf{\beta}}_{n})+s^{(0)}(t,\tilde{\mathbf{\beta}}_{n})-S_{n}^{(0)}(t,\mathbf{\beta}_{0})+S_{n}^{(0)}(t,\mathbf{\beta}_{0})-s^{(0)}(t,\mathbf{\beta}_{0})]\right.\] \[\quad\cdot s^{(0)}(t,\mathbf{\beta}_{0})^{-1}|\right\}\] \[\leq\sup_{t\in\mathcal{T}}\left\{|S_{n}^{(0)}(t,\tilde{\mathbf{\beta} }_{n})-s^{(0)}(t,\tilde{\mathbf{\beta}}_{n})|+|S_{n}^{(0)}(t,\mathbf{\beta}_{0})-s^{(0 )}(t,\mathbf{\beta}_{0})+s^{(0)}(t,\mathbf{\beta}_{0})-s^{(0)}(t,\tilde{\mathbf{\beta}}_{n })|\right.\] \[\quad+|S_{n}^{(0)}(t,\mathbf{\beta}_{0})-s^{(0)}(t,\mathbf{\beta}_{0})| \right\}\cdot\sup_{t\in\mathcal{T}}\lvert s^{(0)}(t,\mathbf{\beta}_{0})^{-1}|\] \[\overset{\mathbb{P}}{\longrightarrow}0,\text{ as }n\to\infty.\] The above convergence in probability to zero, as \(n\to\infty\), holds for any consistent estimator \(\tilde{\mathbf{\beta}}_{n}\in\mathcal{B}\) of \(\mathbf{\beta}_{0}\) due to Assumption II.2.2 (i), the continuity of \(s^{(0)}(t,\cdot)\) in \(\mathbf{\beta}\in\mathcal{B}\) (Assumption II.2.2 (ii)), and 
the boundedness of \(s^{(0)}(\cdot,\mathbf{\beta}_{0})^{-1}\) for all \(t\in\mathcal{T}\) according to Assumption II.2.2 (iii) & (iv), see (II.6). Hence, it follows from the continuous mapping theorem that \[\sup_{t\in\mathcal{T}}\lvert\frac{s^{(0)}(t,\mathbf{\beta}_{0})}{S_{n}^{(0)}(t, \tilde{\mathbf{\beta}}_{n})}-1\rvert\overset{\mathbb{P}}{\longrightarrow}0,\text{ as }n\to\infty,\] (II.31) for any consistent estimator \(\tilde{\mathbf{\beta}}_{n}\in\mathcal{B}\) of \(\mathbf{\beta}_{0}\). Additionally, it holds that \[\sup_{t\in\mathcal{T}}\lvert J_{n}(t)-1\rvert\overset{\mathbb{P}}{ \longrightarrow}0,\text{ as }n\to\infty,\] (II.32) according to Assumption II.2.2 (iii). Based on (II.31), (II.32) and the boundedness of \(s^{(0)}(\cdot,\mathbf{\beta}_{0})^{-1}\) for all \(t\in\mathcal{T}\) according to Assumption II.2.2 (iii) & (iv), the right-hand side of the fourth equation of (II.30) converges to zero in probability, as \(n\to\infty\), i.e., \[\sup_{t\in\mathcal{T}}\lvert J_{n}(t)S_{n}^{(0)}(t,\tilde{\mathbf{\beta}}_{n})^{- 1}-s^{(0)}(t,\mathbf{\beta}_{0})^{-1}\rvert\overset{\mathbb{P}}{\longrightarrow}0,\text{ as }n\to\infty,\] (II.33) for any consistent estimator \(\tilde{\mathbf{\beta}}_{n}\in\mathcal{B}\) of \(\mathbf{\beta}_{0}\). Thus, Assumption I.2.1 (i) of Part I is fulfilled for \(k_{n}(t,\mathbf{\beta})\) under Assumption II.2.2 (i) - (iv). To see that Assumption I.2.1 (ii) of Part I holds, we note that \(\tilde{k}(t,\cdot)=s^{(0)}(t,\cdot)^{-1}\) is a continuous function in \(\mathbf{\beta}\in\mathcal{B}\), because \(s^{(0)}(t,\cdot)\) is a continuous function in \(\mathbf{\beta}\in\mathcal{B}\) according to Assumption II.2.2 (ii), and that the continuity is preserved under the inverse. Additionally, \(s^{(0)}(t,\mathbf{\beta})^{-1}\) is bounded on \(\mathcal{T}\times\mathcal{B}\), since \(s^{(0)}(t,\mathbf{\beta})\) is bounded away from zero on \(\mathcal{T}\times\mathcal{B}\) according to Assumption II.2.2 (iii) and (II.6), which holds due to Assumption II.2.2 (iv). Hence, Assumption II.2.2 (ii) - (iv) imply Assumption I.2.1 (ii) of Part I for \(k_{n}(t,\mathbf{\beta})\). With respect to part (iii) of Assumption I.2.1 of Part I, we remark that the couples \((\tilde{k}(t,\mathbf{\beta}_{0}),\lambda_{i}(t,\mathbf{\beta}_{0}))\), \(i=1,\ldots,n\), are pairwise independent and identically distributed for all \(t\in\mathcal{T}\), because \(\tilde{k}(t,\mathbf{\beta}_{0})=s^{(0)}(t,\mathbf{\beta}_{0})^{-1}\) is a deterministic function in \(t\in\mathcal{T}\) (see (II.6)) and \(\lambda_{1}(t,\mathbf{\beta}_{0}),\ldots,\lambda_{n}(t,\mathbf{\beta}_{0})\) with \(\lambda_{i}(t,\mathbf{\beta}_{0})=Y_{i}(t)\exp(\mathbf{Z}_{i}^{\top}\mathbf{\beta}_{0}) \alpha_{1;0}(t)\) are pairwise independent and identically distributed for all \(t\in\mathcal{T}\) according to Assumption II.2.2 (iv). In conclusion, Assumption I.2.1 of Part I is fulfilled for \(k_{n}(t,\mathbf{\beta})\) under Assumption II.2.2 (i) - (iv). Let us now consider \(\mathbf{g}_{n,i}(t,\mathbf{\beta})\). We first show under which conditions of Assumption II.2.2 Assumption I.2.1 (i) of Part I follows for \(\mathbf{g}_{n,i}(t,\mathbf{\beta})\), i.e., we have to prove that for any consistent estimator \(\tilde{\mathbf{\beta}}_{n}\) of \(\mathbf{\beta}_{0}\). 
\[\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{g}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{g}}_{i}(t,\boldsymbol{\beta}_{0})\|\stackrel{\mathbb{P}}{\longrightarrow}0,\text{ as }n\to\infty.\] (II.34)

Recall that we have \[\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{g}_{n,i}(t,\tilde{\boldsymbol{\beta}}_{n})-\tilde{\mathbf{g}}_{i}(t,\boldsymbol{\beta}_{0})\|=\sup_{t\in\mathcal{T},i\in\{1,\ldots,n\}}\|\mathbf{Z}_{i}-\mathbf{E}_{n}(t,\tilde{\boldsymbol{\beta}}_{n})-(\mathbf{Z}_{i}-\mathbf{e}(t,\boldsymbol{\beta}_{0}))\|=\sup_{t\in\mathcal{T}}\Big\|\frac{\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-\frac{\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})}{s^{(0)}(t,\boldsymbol{\beta}_{0})}\Big\|.\] It is straightforward to show that the term on the right-hand side of the second equation above converges to zero in probability as \(n\to\infty\) for any consistent estimator \(\tilde{\boldsymbol{\beta}}_{n}\in\mathcal{B}\) of \(\boldsymbol{\beta}_{0}\) according to Assumption II.2.2 (i) - (iv). In order to see this, one may rewrite \(\sup_{t\in\mathcal{T}}\|\frac{\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}\|\) as \[\sup_{t\in\mathcal{T}}\Big\|\Big[\big(\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})-\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\big)\Big(\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-1\Big)+\big(\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})-\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\big)+\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\Big(\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-1\Big)\Big]\cdot s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}+\frac{\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})}{s^{(0)}(t,\boldsymbol{\beta}_{0})}\Big\|\] \[\leq\sup_{t\in\mathcal{T}}\Big\{\Big[\|\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})-\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\|\cdot\Big|\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-1\Big|+\|\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})-\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\|+\|\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})\|\cdot\Big|\frac{s^{(0)}(t,\boldsymbol{\beta}_{0})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}-1\Big|\Big]\cdot|s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}|+\Big\|\frac{\mathbf{s}^{(1)}(t,\boldsymbol{\beta}_{0})}{s^{(0)}(t,\boldsymbol{\beta}_{0})}\Big\|\Big\}.\] Here, the term in square brackets converges in probability to zero as \(n\to\infty\) for any consistent estimator \(\tilde{\boldsymbol{\beta}}_{n}\in\mathcal{B}\) of \(\boldsymbol{\beta}_{0}\) according to Assumption II.2.2 (i), (II.31), which holds under Assumption II.2.2 (i) - (iv), and the boundedness of \(\mathbf{s}^{(1)}(\cdot,\boldsymbol{\beta}_{0})\) for all \(t\in\mathcal{T}\) according to Assumption II.2.2 (ii). Then, due to the boundedness of \(s^{(0)}(\cdot,\boldsymbol{\beta}_{0})^{-1}\) for all \(t\in\mathcal{T}\) according to Assumption II.2.2 (iii) & (iv), it holds that \(\sup_{t\in\mathcal{T}}\|\frac{\mathbf{S}_{n}^{(1)}(t,\tilde{\boldsymbol{\beta}}_{n})}{S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{n})}\|\) is asymptotically equivalent to
Furthermore, \(\tilde{g}_{i}(t,\cdot)=\left(\mathbf{Z}_{i}-\dfrac{\mathbf{s}^{(1)}(t,\cdot)} {s^{(0)}(t,\cdot)}\right)\) is a continuous function in \(\mathbf{\beta}\in\mathcal{B}\), because \(\mathbf{s}^{(1)}(t,\cdot)\) is a continuous function in \(\mathbf{\beta}\in\mathcal{B}\) according to Assumption II.2.2 (ii), and \(s^{(0)}(t,\cdot)^{-1}\) is a continuous function in \(\mathbf{\beta}\in\mathcal{B}\) according to Assumption II.2.2 (ii) as argued in the context of \(\tilde{k}(t,\cdot)\). Additionally, \(\tilde{g}_{i}\) is bounded on \(\mathcal{T}\times\mathcal{B}\) for all \(i\in\mathbb{N}\), since \(\mathbf{Z}_{i}\) is assumed to be bounded for \(i\in\mathbb{N}\) and \(\mathbf{e}=\dfrac{\mathbf{s}^{(1)}}{s^{(0)}}\) is bounded on \(\mathcal{T}\times\mathcal{B}\), because \(\mathbf{s}^{(1)}\) is bounded on \(\mathcal{T}\times\mathcal{B}\) according to Assumption II.2.2 (ii) and \(s^{(0)}(\cdot)^{-1}\) is bounded on \(\mathcal{T}\times\mathcal{B}\) according to Assumption II.2.2 (iii) & (iv) as argued in the context of \(\tilde{k}(t,\cdot)\). Thus we conclude that under Assumption II.2.2 (ii) - (iv), Assumption I.2.1 (ii) of Part I holds for \(g_{n,i}(t,\mathbf{\beta})\). Finally, with respect to part (iii) of Assumption I.2.1 of Part I we note that the couples \((\tilde{g}_{i}(t,\mathbf{\beta}_{0}),\lambda_{i}(t,\mathbf{\beta}_{0}))\), \(i=1,\ldots,n\), with \(\lambda_{i}(t,\mathbf{\beta}_{0})=Y_{i}(t)\exp(\mathbf{Z}_{i}^{\top}\mathbf{\beta}_{0} )\alpha_{1;0}(t)\) are pairwise independent and identically distributed for all \(t\in\mathcal{T}\), because \(\mathbf{e}(t,\mathbf{\beta}_{0})\) is a deterministic function in \(t\in\mathcal{T}\), and \((Y_{i},N_{i},\mathbf{Z}_{i})\), \(i=1,\ldots,n\), are pairwise independent and identically distributed according to Assumption II.2.2 (iv). In conclusion, Assumption I.2.1 of Part I is fulfilled for \(g_{n,i}(t,\mathbf{\beta})\) under Assumption II.2.2 (i) - (iv). Combining this with our results for \(k_{n}(t,\mathbf{\beta})\) above, it follows that under Assumption II.2.2 (i) - (iv) that Assumption I.2.1 of Part I holds for \(\mathbf{h}_{n,i}(t,\mathbf{\beta})=\left(k_{n}(t,\mathbf{\beta}),\mathbf{g}_{n,i}(t, \mathbf{\beta})^{\top}\right)^{\top}\). Next, we derive from which conditions of Assumption II.2.2 Assumption I.2.3 of Part I can be inferred. We start by considering Assumption I.2.3 (i), i.e., \[\sup\limits_{t\in\mathcal{T}}\lVert\nabla k_{n}(t,\tilde{\mathbf{\beta}}_{n})- \tilde{\mathbf{K}}(t,\mathbf{\beta}_{0})\rVert\overset{\mathbb{P}}{\longrightarrow }0,\text{ as }n\to\infty,\] (II.36) for any consistent estimator \(\tilde{\mathbf{\beta}}_{n}\) of \(\mathbf{\beta}_{0}\). According to Section II.2.2 the gradient \(\nabla k_{n}\) of \(k_{n}\) with respect to \(\mathbf{\beta}\) at \(\mathbf{\beta}=\tilde{\mathbf{\beta}}_{n}\) is given by \(\nabla k_{n}(t,\tilde{\mathbf{\beta}}_{n})=-J_{n}(u)\cdot\mathbf{E}_{n}(t,\tilde{ \mathbf{\beta}}_{n})^{\top}\cdot S^{(0)}_{n}(t,\tilde{\mathbf{\beta}}_{n})^{-1}\). We claim that (II.36) holds for \(\tilde{\mathbf{K}}(t,\mathbf{\beta}_{0})=\mathbf{e}(t,\mathbf{\beta}_{0})^{\top}\cdot s ^{(0)}(t,\mathbf{\beta}_{0})^{-1}\). 
For this \(\tilde{\mathbf{K}}\) we have \[\sup\limits_{t\in\mathcal{T}}\lVert\nabla k_{n}(t,\tilde{\mathbf{\beta }}_{n})-\tilde{\mathbf{K}}(t,\mathbf{\beta}_{0})\rVert\] \[=\sup\limits_{t\in\mathcal{T}}\lVert-J_{n}(u)\cdot\mathbf{E}_{n}( t,\tilde{\mathbf{\beta}}_{n})^{\top}\cdot S^{(0)}_{n}(t,\tilde{\mathbf{\beta}}_{n})^{-1}- \mathbf{e}(t,\mathbf{\beta}_{0})^{\top}\cdot s^{(0)}(t,\mathbf{\beta}_{0})^{-1}\rVert\] \[=\sup_{t\in\mathcal{T}}\big{\{}\|-J_{n}(u)\cdot S_{n}^{(0)}(t,\tilde{ \boldsymbol{\beta}}_{n})^{-1}\cdot\big{(}\mathbf{E}_{n}(t,\tilde{\boldsymbol{ \beta}}_{n})^{\top}-\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}+\mathbf{e}(t, \boldsymbol{\beta}_{0})^{\top}\big{)}\] \[\quad-\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}\cdot s^{(0)}(t, \boldsymbol{\beta}_{0})^{-1}\|\big{\}}\] \[\leq\sup_{t\in\mathcal{T}}\big{\{}|J_{n}(u)\cdot S_{n}^{(0)}(t, \tilde{\boldsymbol{\beta}}_{n})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}| \cdot\|\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}\|\] \[\quad+\|J_{n}(u)\cdot S_{n}^{(0)}(t,\tilde{\boldsymbol{\beta}}_{ n})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}+s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1 }\|\cdot\|\mathbf{E}_{n}(t,\tilde{\boldsymbol{\beta}}_{n})^{\top}-\mathbf{e} (t,\boldsymbol{\beta}_{0})^{\top}\|\big{\}}\] \[\leq\sup_{t\in\mathcal{T}}\big{\{}\|J_{n}(u)\cdot S_{n}^{(0)}(t, \tilde{\boldsymbol{\beta}}_{n})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\| \cdot\|\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}\|\] \[\quad+\big{[}\|J_{n}(u)\cdot S_{n}^{(0)}(t,\tilde{\boldsymbol{ \beta}}_{n})^{-1}-s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\|+\|s^{(0)}(t, \boldsymbol{\beta}_{0})^{-1}\|\big{]}\cdot\|\mathbf{E}_{n}(t,\tilde{ \boldsymbol{\beta}}_{n})^{\top}-\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}\| \big{\}}.\] Hence, (II.36) holds due to (II.33), (II.35), which hold under Assumption II.2.2 (i) - (iii), and the boundedness of \(\mathbf{e}(t,\boldsymbol{\beta}_{0})\) and \(s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\) on \(\mathcal{T}\) according to Assumption II.2.2 (ii) - (iv). We conclude that Assumption I.2.3 (i) of Part I holds under Assumption II.2.2 (i) - (iv). Moreover, because in view of (II.6), \(\mathbf{e}(t,\boldsymbol{\beta}_{0})^{\top}\) and \(s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\) are deterministic functions and thus, predictable with respect to \(\mathcal{F}_{1}\), we have that Assumption I.2.3 (ii) of Part I clearly is satisfied due to Assumption II.2.2 (ii) - (iv). Additionally, since \(\mathbf{e}(t,\boldsymbol{\beta}_{0})\) respectively \(s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\) are bounded on \(\mathcal{T}\) under Assumption II.2.2 (ii) - (iv) (see above), \(\tilde{\mathbf{K}}(t,\boldsymbol{\beta}_{0})=\mathbf{e}(t,\boldsymbol{\beta}_ {0})^{\top}\cdot s^{(0)}(t,\boldsymbol{\beta}_{0})^{-1}\) is bounded on \(\mathcal{T}\). Furthermore, \((\tilde{\mathbf{K}}(t,\boldsymbol{\beta}_{0}),\lambda_{i}(t,\boldsymbol{\beta }_{0}))\), \(i=1,\ldots,n\), are pairwise independent and identically distributed for all \(t\in\mathcal{T}\), because \(\tilde{\mathbf{K}}(t,\boldsymbol{\beta}_{0})\) is a deterministic function in \(t\in\mathcal{T}\), and \((Y_{i},N_{i},\mathbf{Z}_{i})\), \(i=1,\ldots,n\), are pairwise independent and identically distributed according to Assumption II.2.2 (iv). Thus, Assumption I.2.3 (iii) of Part I is fulfilled under Assumption II.2.2 (iv). To sum up, Assumption I.2.3 of Part I holds under Assumption II.2.2 (i) - (iv), and Assumption I.2.1 and Assumption I.2.3 of Part I are valid under Assumption II.2.2 (i) - (iv). 
**Proof of Lemma II.2.3(ii)**: We derive the limit in probability of \(\mathbf{C}_{n}\), as \(n\to\infty\). Note that \(\langle\mathbf{D}_{n,g}\rangle(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\big{(} \mathbf{Z}_{i}-\mathbf{E}_{n}(u,\boldsymbol{\beta}_{0})\big{)}^{\otimes 2}d \Lambda_{i}(u,\boldsymbol{\beta}_{0})=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t} \mathbf{R}_{n}(u,\boldsymbol{\beta}_{0})d\Lambda_{i}(u,\boldsymbol{\beta}_{0})\). Hence, we have \[\frac{1}{n}\mathbf{I}_{n}(t,\boldsymbol{\beta}_{0})-\langle\mathbf{D}_{n,g} \rangle(t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}\mathbf{R}_{n}(u,\boldsymbol{ \beta}_{0})dM_{i}(u),\] where the right-hand side of the equation above is a local square integrable martingale, according to Proposition II.4.1 of Andersen et al. (1993). Following the notation introduced in Part I, we denote this martingale by \(\frac{1}{\sqrt{n}}\mathbf{D}_{n,R}(t)\). Under Assumption II.2.2 (i)-(iv) it follows from Lemma I.2.2 of Part I that \(\langle\mathbf{D}_{n,R}\rangle(t)\stackrel{{\mathbb{P}}}{{ \longrightarrow}}\langle\mathbf{D}_{r}\rangle(t)\) for all \(t\in\mathcal{T}\), as \(n\to\infty\), where \(\langle\mathbf{D}_{r}\rangle(t)\) is some covariance function bounded for all \(t\in\mathcal{T}\). Thus, \(\frac{1}{n}\langle\mathbf{D}_{n,R}\rangle(\tau)\) and likewise the corresponding martingale \(\frac{1}{\sqrt{n}}\mathbf{D}_{n,R}\) converge to zero in probability, as \(n\to\infty\), according to Lenglart's Inequality. In other words, \(\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0})\) and \(\langle\mathbf{D}_{n,g}\rangle(\tau)\) are asymptotically equivalent and we get \[\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0})=\langle\mathbf{D}_{n,g} \rangle(\tau)+o_{p}(1)\stackrel{{\mathbb{P}}}{{\longrightarrow}} \langle\mathbf{D}_{\tilde{g}}\rangle(\tau)=\mathbf{V}_{\tilde{g}}(\tau),\ \text{as}\ n\to\infty,\] with \(\mathbf{V}_{\tilde{g}}(t)=\int_{0}^{t}\mathbf{r}(u,\boldsymbol{\beta}_{0})s^{ (0)}(u,\boldsymbol{\beta}_{0})dA_{1;0}(u)\). By the continuous mapping theorem and because \(\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0})\) is asymptotically invertible under Assumption II.2.2 (v), it follows from Assumption II.2.2 that \[\mathbf{C}_{n}=\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0}) \big{)}^{-1}\stackrel{{\mathbb{P}}}{{\longrightarrow}}\mathbf{V} _{\tilde{g}}(\tau)^{-1}=\mathbf{C},\ \text{as}\ n\to\infty.\] (II.37) Hence, Assumption I.2.5 of Part I is satisfied under Assumption II.2.2. Recall that the wild bootstrap counterpart \(\mathbf{C}_{n}^{*}\) of \(\mathbf{C}_{n}=\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\boldsymbol{\beta}_{0}) \big{)}^{-1}\) is defined through the optional covariation process \([\mathbf{D}_{n,g}^{*}](\tau)\) of \(\mathbf{D}_{n,g}^{*}\), in this case as \[\mathbf{C}_{n}^{*}=\big{(}[\mathbf{D}_{n,g}^{*}](\tau)\big{)}^{-1}=\big{(} \frac{1}{n}\sum_{i=1}^{n}\int_{0}^{\tau}(\boldsymbol{Z}_{i}-\boldsymbol{E}_{n} (u,\hat{\boldsymbol{\beta}}_{n}))^{\otimes 2}G_{i}^{2}dN_{i}(u)\big{)}^{-1};\] (II.38) cf. Lemma I.3.2 of Part I. The particular choice of \(\mathbf{C}_{n}^{*}\) is motivated by the fact that, under Assumption II.2.2 (i)-(iv) and conditionally on \(\mathcal{F}_{2}(0)\), we have \([\mathbf{D}_{n,g}^{*}](t)\stackrel{{\mathbb{P}}}{{\longrightarrow }}[\mathbf{D}_{\tilde{g}}](t)=\mathbf{V}_{\tilde{g}}(t)\) for all \(t\in\mathcal{T}\) as \(n\to\infty\), according to Corollary I.3.7 of Part I. 
Hence, from the continuous mapping theorem and because of the asymptotic invertibility of \(\mathbf{V}_{\tilde{g}}(\tau)\) according to Assumption II.2.2 (v) it follows under Assumption II.2.2 that \[\mathbf{C}_{n}^{*}\stackrel{{\mathbb{P}}}{{\longrightarrow}} \mathbf{V}_{\tilde{g}}(\tau)^{-1}=\mathbf{C},\ \text{as}\ n\to\infty.\] (II.39) From (II.37) and (II.39) we conclude that \[\|\mathbf{C}_{n}^{*}-\mathbf{C}_{n}\|\stackrel{{\mathbb{P}}}{{ \longrightarrow}}0,\quad n\to\infty,\] which is why Assumption I.3.9 of Part I is fulfilled under Assumption II.2.2. In conclusion, under Assumption II.2.2 both Assumption I.2.5 and Assumption I.3.9 of Part I are satisfied. **Proof of Lemma II.2.3(iii)**: We need to prove that under Assumption II.2.2, Condition VII.2.1 of Andersen et al. (1993) holds. It is easy to see that Assumption II.2.2 (i) - (iii) and Assumption II.2.2 (v) are identical to Condition VII.2.1 (a) - (c) and Condition VII.2.1 (e), respectively. Thus, it is only left to show that Assumption II.2.2 (iv) implies Condition VII.2.1 (d). In particular, we need to prove that under Assumption II.2.2 (iv) the following holds: \[\frac{\partial}{\partial\mathbf{\beta}}s^{(0)}(t,\mathbf{\beta})=\mathbf{s}^{(1)}(t,\mathbf{ \beta}),\quad\frac{\partial^{2}}{\partial\mathbf{\beta}^{2}}s^{(0)}(t,\mathbf{\beta})= \mathbf{s}^{(2)}(t,\mathbf{\beta}),\text{ for }\mathbf{\beta}\in\mathcal{B},t\in \mathcal{T}.\] (II.40) For this we recall (II.6), this is, under Assumption II.2.2 (iv) we have \[\mathbf{s}^{(m)}(t,\mathbf{\beta})=\mathbb{E}(Y_{1}(t)\mathbf{Z}_{1}^{\otimes m} \exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta})),\] for all fixed \(t\in\mathcal{T}\), \(m\in\{0,1,2\}\) (in non-bold-type for \(m=0\)), and \(\mathbf{\beta}\in\mathcal{B}\). Furthermore, we have \[\big{|}\frac{\partial}{\partial\mathbf{\beta}_{j}}Y_{1}(t)\exp(\mathbf{Z}_{1}^{ \top}\mathbf{\beta})\big{|}=|Y_{1}(t)Z_{1j}\exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta})| \leq|Z_{1j}|\exp(K)\] and \[|\frac{\partial^{2}}{\partial\mathbf{\beta}_{j}\partial\mathbf{\beta}_{l}}Y_{1}(t) \exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta})|=|Y_{1}(t)Z_{1j}Z_{1l}\exp(\mathbf{Z}_{ 1}^{\top}\mathbf{\beta})|\leq|Z_{1j}Z_{1l}|\exp(K),\] where \(Z_{1j}\) is the \(j\)-th component of \(\mathbf{Z}_{1}\), \(j,l=1,\ldots,q\). Note that \(K\) is bounded due to the boundedness of the covariates and the boundedness of \(\mathcal{B}\), so that the bounds on the right-hand side of the two formulas above are integrable random variables. According to Theorem 12.5 of Schilling (2005), it then follows that the integral and the differential operator can be interchanged, which yields \[\frac{\partial}{\partial\mathbf{\beta}_{j}}s^{(0)}(t,\mathbf{\beta}) =\mathbb{E}(\frac{\partial}{\partial\mathbf{\beta}_{j}}Y_{1}(t)\exp( \mathbf{Z}_{1}^{\top}\mathbf{\beta}))\] \[=\mathbb{E}(Y_{1}(t)Z_{1j}\exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta})),\] and \[\frac{\partial^{2}}{\partial\mathbf{\beta}_{j}\partial\mathbf{\beta}_{l}} s^{(0)}(t,\mathbf{\beta}) =\mathbb{E}(\frac{\partial^{2}}{\partial\mathbf{\beta}_{j}\partial \mathbf{\beta}_{l}}Y_{1}(t)\exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta}))\] \[=\mathbb{E}(Y_{1}(t)Z_{1j}Z_{1l}\exp(\mathbf{Z}_{1}^{\top}\mathbf{ \beta})),\] for \(j,l=1,\ldots,q\). 
Hence, the gradient and the Hessian matrix of \(s^{(0)}(t,\mathbf{\beta})\) are given by \[\mathbf{s}^{(1)}(t,\mathbf{\beta})=\mathbb{E}(Y_{1}(t)\mathbf{Z}_{1}\exp(\mathbf{Z }_{1}^{\top}\mathbf{\beta})),\quad\mathbf{s}^{(2)}(t,\mathbf{\beta})=\mathbb{E}(Y_{1} (t)\mathbf{Z}_{1}^{\otimes 2}\exp(\mathbf{Z}_{1}^{\top}\mathbf{\beta})),\] for all fixed \(t\in\mathcal{T}\) and \(\mathbf{\beta}\in\mathcal{B}\), respectively, so that (II.40) holds under Assumption II.2.2 (iv). Hence, Condition VII.2.1 of Andersen et al. (1993) follows from Assumption II.2.2.2. This completes the proof of Lemma II.2.3. \(\blacksquare\) **Remark II.6.1**.: As explained in Remark II.2.1 and mentioned in Fine and Gray (1999), the structures related to the Fine-Gray model coincide with those under the Cox model. In particular, this holds for the log Cox partial likelihood and the log partial likelihood under the Fine-Gray model. Thus, by means of Lemma II.2.3 (iii) we resort to Theorem VII.2.1 of Andersen et al. (1993) for the Cox model in which it is shown via the log Cox partial likelihood that \(\hat{\mathbf{\beta}}_{n}\) is unique with probability converging to 1 and that \(\hat{\mathbf{\beta}}_{n}\) is a consistent estimator for \(\mathbf{\beta}_{0}\). **Remark II.6.2**.: The score statistic \(\mathbf{U}_{n}(t,\mathbf{\beta}_{0})\) is a local square integrable martingale in \(t\in\mathcal{T}\). In order to see this, we point out the following two observations \[\sum_{i=1}^{n}\int_{0}^{t}\,\mathbf{Z}_{i}d\Lambda_{i}(u,\mathbf{ \beta}_{0}) =n\int_{0}^{t}\frac{1}{n}\sum_{i=1}^{n}\mathbf{Z}_{i}Y_{i}(u)\exp (\mathbf{Z}_{i}^{\top}\mathbf{\beta}_{0})dA_{1;0}(u)\] \[=n\int_{0}^{t}\mathbf{S}_{n}^{(1)}(u,\mathbf{\beta}_{0})dA_{1;0}(u)\] and \[\sum_{i=1}^{n}\int_{0}^{t}\mathbf{E}_{n}(u,\mathbf{\beta}_{0})d\Lambda _{i}(u,\mathbf{\beta}_{0}) =\int_{0}^{t}\frac{\mathbf{S}_{n}^{(1)}(u,\mathbf{\beta}_{0})}{S_{n}^ {(0)}(u,\mathbf{\beta}_{0})}nS_{n}^{(0)}(u,\mathbf{\beta}_{0})dA_{1;0}(u)\] \[=n\int_{0}^{t}\mathbf{S}_{n}^{(1)}(u,\mathbf{\beta}_{0})dA_{1;0}(u).\] Thus, \(\mathbf{U}_{n}(\cdot,\mathbf{\beta}_{0})\) can be expressed as integrals with respect to counting process martingales, i.e., \[\mathbf{U}_{n}(t,\mathbf{\beta}_{0}) =\sum_{i=1}^{n}\int_{0}^{t}\big{(}\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\mathbf{ \beta}_{0})\big{)}(dM_{i}(u)+d\Lambda_{i}(u,\mathbf{\beta}_{0}))\] \[=\sum_{i=1}^{n}\int_{0}^{t}\big{(}\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\mathbf{ \beta}_{0})\big{)}dM_{i}(u)\] with predictable and locally bounded integrands \(\mathbf{Z}_{i}-\mathbf{E}_{n}(u,\mathbf{\beta}_{0})\), \(i=1,\ldots,n\). It follows with Proposition II.4.1 of Andersen et al. (1993) that \(\mathbf{U}_{n}(\cdot,\mathbf{\beta}_{0})\) is a local square integrable martingale with respect to \(\mathcal{F}_{1}\). ### B.2 Proofs and Remarks of Section II.2.3 **Remark II.6.3**.: According to the facts below, all assumptions necessary for the asymptotic representation (I.11) of Part I to hold are satisfied for \(X_{n}^{(2)}=\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})\) and \(X^{(2)}=A_{1;0}\). * The integrand \(k_{n}(t,\mathbf{\beta})=J_{n}(t)S_{n}^{(0)}(t,\mathbf{\beta})^{-1}\) of \(X_{n}^{(2)}\) is almost surely continuously differentiable in \(\mathbf{\beta}\) by definition of \(J_{n}(t)\) and \(S_{n}^{(0)}(t,\mathbf{\beta})\). 
* The regularity assumption (I.5) of Part I holds, since \[\sqrt{n}\big{(}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}k_{n}(u,\mathbf{\beta}_{0})d \Lambda_{i}(u,\mathbf{\beta}_{0})-A_{1;0}(t)\big{)}=\sqrt{n}\big{(}\int_{0}^{t}(J _{n}(u)-1)dA_{1;0}\big{)}\xrightarrow{\mathbb{P}}0,\] as \(n\to\infty\). Here we have used \(\frac{1}{n}\sum_{i=1}^{n}d\Lambda_{i}(t,\mathbf{\beta}_{0})=S_{n}^{(0)}(t,\mathbf{ \beta}_{0})dA_{1;0}\), \(\sup_{t\in\mathcal{T}}\sqrt{n}|J_{n}(t)-1|=o_{p}(1)\), and \(A_{1;0}(\tau)<\infty\). * The asymptotic representation (I.8) of Part I is fulfilled because of (II.11), which has been derived under Assumption II.2.2 by means of Lemma II.2.3 (iii) and Theorem VII.2.1 of Andersen et al. (1993). * The consistency assumption (I.2) of Part I, i.e., \(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0}=O_{p}(n^{-1/2})\) holds under Assumption II.2.2 according to Lemma II.2.4. **Remark II.6.4**.: From the following facts we have that all assumptions necessary for (I.19) of Part I to hold are satisfied for \(X_{n}^{*(2)}=\hat{A}_{1;0,n}^{*}(\cdot,\hat{\mathbf{\beta}}_{n}^{*})\) and \(X_{n}^{(2)}=\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})\). * The integrand \(k_{n}(t,\mathbf{\beta})=J_{n}(t)S_{n}^{(0)}(t,\mathbf{\beta})^{-1}\) of \(X_{n}^{(2)}\) is almost surely continuously differentiable in \(\mathbf{\beta}\) by definition of \(J_{n}(t)\) and \(S_{n}^{(0)}(t,\mathbf{\beta})\). * We use the same wild bootstrap representation for \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}^{*}-\hat{\mathbf{\beta}})\) as in (I.13) of Part I, cf. (II.22). * \(\hat{\mathbf{\beta}}_{n}^{*}-\hat{\mathbf{\beta}}_{n}=O_{p}(n^{-1/2})\) holds under Assumption II.2.2 according to Lemma II.2.6. * The wild bootstrap estimator \(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\mathbf{\beta}}_{n}^{*})\) has been obtained by applying Replacement I.3.1 of Part I to \(\hat{A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})\), just like \(X_{n}^{*(2)}\) has been obtained based on \(X_{n}^{(2)}\). 
**Proof of Theorem II.2.8** We write \[\sqrt{n}(\hat{\mathbf{\theta}}_{n}-\mathbf{\theta}_{0})(\cdot) =\sqrt{n}(\hat{\mathbf{\beta}}_{n}^{\top}-\mathbf{\beta}_{0}^{\top},\hat {A}_{1;0,n}(\cdot,\hat{\mathbf{\beta}}_{n})-A_{1;0}(\cdot))^{\top}\] (II.41) \[=\begin{pmatrix}\mathbf{0}_{q\times 1}&+\mathbf{I}_{q\times q} \cdot\mathbf{C}_{n}\cdot\mathbf{D}_{n,g}(\tau)&+o_{p}(1)\\ D_{n,k}(\cdot)&+\mathbf{B}_{n}(\cdot)\cdot\mathbf{C}_{n}\cdot\mathbf{D}_{n,g}( \tau)&+o_{p}(1)\end{pmatrix}\] \[=\mathbf{D}_{n,\bar{k}}(\cdot)+\check{\mathbf{B}}_{n}(\cdot) \cdot\check{\mathbf{C}}_{n}\cdot\mathbf{D}_{n,\bar{g}}(\tau)+o_{p}(1),\] where \(\mathbf{D}_{n,\bar{k}}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\check{ \mathbf{k}}_{n}(u,\mathbf{\beta}_{0})dM_{i}(u)\) and \(\mathbf{D}_{n,\bar{g}}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\check{ \mathbf{g}}_{n,i}(u,\mathbf{\beta}_{0})dM_{i}(u)\) \(t\in\mathcal{T}\), with \[\check{\mathbf{k}}_{n}(t,\boldsymbol{\beta}_{0})=\begin{pmatrix}\mathbf{0}_{q \times 1}\\ k_{n}(t,\boldsymbol{\beta}_{0})\end{pmatrix}\quad\text{and}\quad\check{\mathbf{g}} _{n,i}(t,\boldsymbol{\beta}_{0})=\begin{pmatrix}\mathbf{g}_{n,i}(t,\boldsymbol {\beta}_{0})\\ \mathbf{g}_{n,i}(t,\boldsymbol{\beta}_{0})\end{pmatrix},\] and \[\check{\mathbf{B}}_{n}(t)=\begin{pmatrix}\mathbf{I}_{q\times q}&\mathbf{0}_{q \times q}\\ \mathbf{0}_{1\times q}&\mathbf{B}_{n}(t)\end{pmatrix}\quad\text{and}\quad \check{\mathbf{C}}_{n}=\begin{pmatrix}\mathbf{C}_{n}&\mathbf{0}_{q\times q}\\ \mathbf{0}_{q\times q}&\mathbf{C}_{n}\end{pmatrix},\] \(t\in\mathcal{T}\), where \(\mathbf{0}_{q\times 1}\) denotes the \(q\)-dimensional vector of zeros, \(\mathbf{0}_{q\times q}\) denotes the \(q\times q\)-dimensional matrix of zeros, \(\mathbf{I}_{q\times q}\) denotes the \(q\times q\)-dimensional identity matrix, and \(\mathbf{B}_{n}\) and \(\mathbf{C}_{n}\) as given in (II.15) and (II.10), respectively. The main consequence of (II.41) is that the particular structure of the asymptotic representation of \(\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}-\boldsymbol{\beta}_{0})\) and \(\sqrt{n}(\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n})-A_{1;0}(\cdot))\) carries over to the structure of the asymptotic representation of \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\). Additionally, the components \(\mathbf{D}_{n,\check{k}}\), \(\mathbf{D}_{n,\check{g}}\), \(\check{\mathbf{B}}_{n}\) and \(\check{\mathbf{C}}_{n}\) have the same properties as \(\mathbf{D}_{n,\check{k}}\), \(\mathbf{D}_{n,g}\), \(\mathbf{B}_{n}\) and \(\mathbf{C}_{n}\). 
Especially, \(\mathbf{D}_{n,\check{k}}\) and \(\mathbf{D}_{n,\check{g}}\) are square integrable martingales with respect to \(\mathcal{F}_{1}\), respectively, and under Assumption II.2.2 (i)-(iv) \((\mathbf{D}_{n,\check{k}},\mathbf{D}_{n,\check{g}})\) converges in law, as \(n\to\infty\), to the zero-mean Gaussian vector martingale \((\mathbf{D}_{\tilde{k}},\mathbf{D}_{\tilde{g}})\) with covariance function \[\mathbf{V}_{(\tilde{k},\tilde{g})}=\begin{pmatrix}\mathbf{V}_{\tilde{k}}& \mathbf{V}_{\tilde{k},\tilde{g}}\\ \mathbf{V}_{\tilde{g},\tilde{k}}&\mathbf{V}_{\tilde{g}}\end{pmatrix},\] where \[\mathbf{V}_{\tilde{k}}(t)=\langle\mathbf{D}_{\tilde{k}}\rangle(t) =\int_{0}^{t}\mathbb{E}(\tilde{\tilde{\mathbf{k}}}(u,\boldsymbol{ \beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du=\int_{0}^{t} \tilde{\tilde{\mathbf{k}}}(u,\boldsymbol{\beta}_{0})^{\otimes 2}s^{(0)}(u, \boldsymbol{\beta}_{0})dA_{1;0}(u),\] \[\mathbf{V}_{\tilde{g}}(t)=\langle\mathbf{D}_{\tilde{g}}\rangle(t) =\int_{0}^{t}\mathbb{E}(\tilde{\tilde{\mathbf{g}}}_{1}(u, \boldsymbol{\beta}_{0})^{\otimes 2}\lambda_{1}(u,\boldsymbol{\beta}_{0}))du=\int_{0}^{t} \tilde{\tilde{\mathbf{g}}}_{1}(u,\boldsymbol{\beta}_{0})^{\otimes 2}s^{(0)}(u, \boldsymbol{\beta}_{0})dA_{1;0}(u),\] with \(\tilde{\tilde{\mathbf{k}}}(t,\boldsymbol{\beta}_{0})=(\mathbf{0}_{q\times 1}^{ \top},\tilde{k}(t,\boldsymbol{\beta}_{0}))^{\top}\), \(\tilde{\tilde{\mathbf{g}}}_{1}(t,\boldsymbol{\beta}_{0})=(\tilde{\mathbf{g}}_ {1}(t,\boldsymbol{\beta}_{0})^{\top},\tilde{\mathbf{g}}_{1}(t,\boldsymbol{ \beta}_{0})^{\top})^{\top}\), and \[\mathbf{V}_{\tilde{k},\tilde{g}}(t)^{\top}=\mathbf{V}_{\tilde{g},\tilde{k}}(t) =\langle\mathbf{D}_{\tilde{g}},\mathbf{D}_{\tilde{k}}\rangle\int_{0 }^{t}\mathbb{E}(\tilde{\tilde{\mathbf{g}}}(u,\boldsymbol{\beta}_{0})\cdot\tilde {\tilde{\mathbf{k}}}(u,\boldsymbol{\beta}_{0})^{\top}\lambda_{1}(u,\boldsymbol {\beta}_{0}))du\] \[=\int_{0}^{t}\mathbb{E}(\begin{pmatrix}\mathbf{0}_{q\times q}& \tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})\cdot\tilde{k}(u,\boldsymbol{ \beta}_{0})\\ \mathbf{0}_{q\times q}&\tilde{\mathbf{g}}_{1}(u,\boldsymbol{\beta}_{0})\cdot \tilde{k}(u,\boldsymbol{\beta}_{0})\end{pmatrix}\lambda_{1}(u,\boldsymbol{ \beta}_{0}))du\] \[=\mathbf{0}_{2q\times(q+1)},\] as \(\mathbf{V}_{\tilde{g},\tilde{k}}(t)=\mathbf{0}_{q\times 1}\) by (II.17). In particular, the orthogonality of the Gaussian martingales \(D_{\tilde{k}}\) and \(\mathbf{D}_{\tilde{g}}\) carries over to \(\mathbf{D}_{\tilde{k}}\) and \(\mathbf{D}_{\tilde{g}}\). Moreover, under Assumption II.2.2, the limits in probability of \(\check{\mathbf{B}}_{n}\) and \(\check{\mathbf{C}}_{n}\) are given by \[\check{\mathbf{B}}(t)=\begin{pmatrix}\mathbf{I}_{q\times q}&\mathbf{0}_{q\times q }\\ \mathbf{0}_{1\times q}&\mathbf{B}(t)\end{pmatrix}\quad\text{and}\quad\check{ \mathbf{C}}=\begin{pmatrix}\mathbf{C}&\mathbf{0}_{q\times q}\\ \mathbf{0}_{q\times q}&\mathbf{C}\end{pmatrix},\] \(t\in\mathcal{T}\), because from \(\sup_{t\in\mathcal{T}}\lVert\mathbf{B}_{n}(t)-\mathbf{B}(t)\rVert=o_{p}(1)\) and \(\lVert\mathbf{C}_{n}-\mathbf{C}\rVert=o_{p}(1)\), it follows that \(\sup_{t\in\mathcal{T}}\lVert\check{\mathbf{B}}_{n}(t)-\check{\mathbf{B}}(t)\rVert =o_{p}(1)\) and \(\lVert\check{\mathbf{C}}_{n}-\check{\mathbf{C}}\rVert=o_{p}(1)\), respectively. 
Finally, under Assumption II.2.2 and due to (II.41) it follows with Theorem I.2.6 of Part I that \[\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\stackrel{{ \mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\check{k}}+\check{ \mathbf{B}}\cdot\check{\mathbf{C}}\cdot\mathbf{D}_{\check{\mathfrak{g}}}(\tau),\text{ in }(D(\mathcal{T}))^{(q+1)},\] (II.42) as \(n\to\infty\). Furthermore, the covariance function of \(\mathbf{D}_{\check{k}}+\check{\mathbf{B}}\cdot\check{\mathbf{C}}\cdot \mathbf{D}_{\check{\mathfrak{g}}}(\tau)\) is given by \[t\mapsto\mathbf{V}_{\check{k}}(t)+\check{\mathbf{B}}(t)\cdot\check{\mathbf{C }}\cdot\mathbf{V}_{\check{\mathfrak{g}}}(\tau)\cdot\check{\mathbf{C}}^{\top} \cdot\check{\mathbf{B}}(t)^{\top},\] as \(\mathbf{V}_{\check{k},\check{\mathfrak{g}}}(t)^{\top}=\mathbf{V}_{\check{ \mathfrak{g}},\check{k}}(t)=\mathbf{0}_{2q\times(q+1)}\). For the wild bootstrap counterpart \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})\) of \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\) we have \[\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{ \theta}}_{n})(\cdot) =\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{*\top}-\hat{\boldsymbol{ \beta}}_{n}^{\top},\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*}) -\hat{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))^{\top}\] (II.43) \[=\begin{pmatrix}\mathbf{0}_{q\times 1}&+\mathbf{I}_{q\times q} \cdot\mathbf{C}_{n}^{*}\cdot\mathbf{D}_{n,\mathfrak{g}}^{*}(\tau)+o_{p}(1)\\ D_{n,k}^{*}(\cdot)&+\mathbf{B}_{n}^{*}(\cdot)\cdot\mathbf{C}_{n}^{*}\cdot \mathbf{D}_{n,\mathfrak{g}}^{*}(\tau)+o_{p}(1)\end{pmatrix}\] \[=\mathbf{D}_{n,\check{k}}^{*}(\cdot)+\check{\mathbf{B}}_{n}^{*}( \cdot)\cdot\check{\mathbf{C}}_{n}^{*}\cdot\mathbf{D}_{n,\check{g}}^{*}(\tau)+ o_{p}(1),\] where \(\mathbf{D}_{n,\check{k}}^{*}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t} \check{\mathbf{k}}_{n}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}dN_{i}(u)\), \(\mathbf{D}_{n,\check{g}}^{*}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t} \check{\mathbf{g}}_{n,i}(u,\hat{\boldsymbol{\beta}}_{n})G_{i}dN_{i}(u)\), \(t\in\mathcal{T}\), with \[\check{\mathbf{k}}_{n}(t,\hat{\boldsymbol{\beta}}_{n})=\begin{pmatrix}\mathbf{ 0}_{q\times 1}\\ k_{n}(t,\hat{\boldsymbol{\beta}}_{n})\end{pmatrix}\quad\text{and}\quad\check{ \mathbf{g}}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})=\begin{pmatrix}\mathbf{g}_{ n,i}(t,\hat{\boldsymbol{\beta}}_{n})\\ \mathbf{g}_{n,i}(t,\hat{\boldsymbol{\beta}}_{n})\end{pmatrix}.\] Additionally, \[\check{\mathbf{B}}_{n}^{*}(t)=\begin{pmatrix}\mathbf{I}_{q\times q}&\mathbf{0}_ {q\times q}\\ \mathbf{0}_{q\times 1}^{\top}&\mathbf{B}_{n}^{*}(t)\end{pmatrix}\quad\text{and}\quad \check{\mathbf{C}}_{n}^{*}=\begin{pmatrix}\mathbf{C}_{n}^{*}&\mathbf{0}_{q \times q}\\ \mathbf{0}_{q\times q}&\mathbf{C}_{n}^{*}\end{pmatrix},\] \(t\in\mathcal{T}\), where \(\mathbf{B}_{n}^{*}\) and \(\mathbf{C}_{n}^{*}\) are defined in (II.24) and (II.38), respectively. 
Note that the structure of the asymptotic representation of \(\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}^{*}-\hat{\boldsymbol{\theta}}_{n})(\cdot) =\sqrt{n}(\hat{\boldsymbol{\beta}}_{n}^{*\top}-\hat{\boldsymbol{\beta}}_{n}^{ \top},\hat{A}_{1;0,n}^{*}(\cdot,\hat{\boldsymbol{\beta}}_{n}^{*})-\hat{A}_{1;0, n}(\cdot,\hat{\boldsymbol{\beta}}_{n}))^{\top}\) resembles the structure of the asymptotic representations of its components \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}^{*\top}-\hat{\mathbf{\beta}}_{n}^{\top})\) and \(\sqrt{n}(\hat{A}_{1;0,n}^{*}(\cdot,\hat{\mathbf{\beta}}_{n}^{*})-\hat{A}_{1;0,n}( \cdot,\hat{\mathbf{\beta}}_{n}))\). Moreover, just like for \(\mathbf{D}_{n,k}^{*}\) and \(\mathbf{D}_{n,g}^{*}\), it holds that \(\mathbf{D}_{n,k}^{*}\) and \(\mathbf{D}_{n,\hat{g}}^{*}\) are square integrable martingales with respect to \(\mathcal{F}_{2}\). Additionally, under Assumption II.2.2 (i)-(iv) and conditionally on \(\mathcal{F}_{2}(0)\), it follows with Lemma I.3.6 of Part I that \((\mathbf{D}_{n,\tilde{k}}^{*\top},\mathbf{D}_{n,\tilde{g}}^{*\top})^{\top}\) converge in law to \((\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top})^{\top}\), as \(n\to\infty\). Furthermore, under Assumption II.2.2, we have \[\sup_{t\in\mathcal{T}}\lVert\tilde{\mathbf{B}}_{n}^{*}(t)-\tilde{\mathbf{B}}( t)\rVert\stackrel{{\mathbb{P}}}{{\longrightarrow}}0\quad\text{ and }\quad\lVert\tilde{\mathbf{C}}_{n}^{*}-\tilde{\mathbf{C}}\rVert\stackrel{{ \mathbb{P}}}{{\longrightarrow}}0,\text{ as }n\to\infty,\] because \(\sup_{t\in\mathcal{T}}\lVert\mathbf{B}_{n}^{*}(t)-\mathbf{B}(t)\rVert=o_{p}(1)\) and \(\lVert\mathbf{C}_{n}^{*}-\mathbf{C}\rVert=o_{p}(1)\). From Assumption II.2.2 and (II.43) we conclude by means of Theorem I.3.10 of Part I that, conditionally on \(\mathcal{F}_{2}(0)\), \[\sqrt{n}(\hat{\mathbf{\theta}}_{n}^{*}-\hat{\mathbf{\theta}}_{n})\stackrel{{ \mathcal{L}}}{{\longrightarrow}}\mathbf{D}_{\tilde{k}}+\tilde{\mathbf{B}} \cdot\tilde{\mathbf{C}}\cdot\mathbf{D}_{\tilde{g}}(\tau),\text{ in }(D(\mathcal{T}))^{(q+1)},\] (II.44) in probability as \(n\to\infty\). Comparison of (II.42) with (II.44) leads to the final conclusion that the (conditional) distributions of \(\sqrt{n}(\hat{\mathbf{\theta}}_{n}^{*}-\hat{\mathbf{\theta}}_{n})\) and \(\sqrt{n}(\hat{\mathbf{\theta}}_{n}-\mathbf{\theta}_{0})\) are asymptotically equivalent, as \(n\to\infty\). This completes the proof of Theorem II.2.8. \(\blacksquare\) ### B.3 Proofs of Section II.2.4 **Proof of Lemma II.2.9** In order to derive the Hadamard derivative, we consider \(\Gamma\) as the composition of the following three functionals \[\varphi_{Z}:(\mathbf{x}^{\top},y)^{\top}(t)\mapsto(\exp(\mathbf{Z}^{ \top}\mathbf{x}),y(t))^{\top};\] \[\zeta:(x,y)(t)\mapsto x\cdot y(t);\] \[\psi:x(t)\mapsto 1-\exp(-x(t)).\] This yields \[\Gamma(\tilde{\mathbf{\theta}}^{(j)})(t)=1-\exp\big{\{}-\exp(\mathbf{Z}^{\top} \tilde{\mathbf{\beta}}^{(j)})\tilde{A}_{1;0}^{(j)}(t)\big{\}}=(\psi\circ\zeta\circ \varphi_{Z})(\tilde{\mathbf{\theta}}^{(j)})(t),\qquad j=0,1,2,\] where \(\tilde{\mathbf{\theta}}^{(j)}(t)=(\tilde{\mathbf{\beta}}^{(j)\top},\tilde{A}_{1;0}^{( j)}(t))^{\top}\) with \(\tilde{\mathbf{\beta}}^{(0)}=\mathbf{\beta}_{0}\), \(\tilde{\mathbf{\beta}}^{(1)}=\hat{\mathbf{\beta}}_{n}\), \(\tilde{\mathbf{\beta}}^{(2)}=\hat{\mathbf{\beta}}_{n}^{*}\) and \(\tilde{A}_{1;0}^{(0)}(t)=A_{1;0}(t)\), \(\tilde{A}_{1;0}^{(1)}(t)=\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})\), \(\tilde{A}_{1;0}^{(2)}(t)=\hat{A}_{1;0,n}^{*}(t,\hat{\mathbf{\beta}}_{n}^{*})\). 
Furthermore, with the chain rule, we obtain for \(j=1,2\), \[\begin{split}&\mathrm{d}\Gamma(\tilde{\mathbf{\theta}}^{(j-1)})\cdot \sqrt{n}(\tilde{\mathbf{\theta}}^{(j)}-\tilde{\mathbf{\theta}}^{(j-1)})(t)\\ &=\mathrm{d}(\psi\circ\zeta\circ\varphi_{Z})(\tilde{\mathbf{\theta}}^ {(j-1)})\cdot\sqrt{n}(\tilde{\mathbf{\theta}}^{(j)}-\tilde{\mathbf{\theta}}^{(j-1)})(t )\\ &=\mathrm{d}\psi(\zeta(\varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)}))) \cdot\mathrm{d}\zeta(\varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)}))\cdot\mathrm{d} \varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)})\cdot\sqrt{n}(\tilde{\mathbf{\theta}}^{(j) }-\tilde{\mathbf{\theta}}^{(j-1)})(t).\end{split}\] (II.45) Evaluating the last expression in (II.45) step by step, we first get \[\begin{split}&\mathrm{d}\varphi_{Z}(\mathbf{\theta})\cdot(\mathbf{x}^{ \top},y)^{\top}(t)=(\exp(\mathbf{Z}^{\top}\mathbf{\theta}_{1})\mathbf{Z}^{\top} \mathbf{x},y(t))^{\top}\\ &=(\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)\top})\mathbf{Z }^{\top}\sqrt{n}(\tilde{\mathbf{\beta}}^{(j)}-\tilde{\mathbf{\beta}}^{(j-1)}),\sqrt{n }(\tilde{A}^{(j)}_{1;0}(t)-\tilde{A}^{(j-1)}_{1;0}(t)))^{\top}\end{split}\] (II.46) with \(\mathbf{\theta}=(\mathbf{\theta}_{1}^{\top},\theta_{2})^{\top}=(\tilde{\mathbf{\beta}}^{( j-1)\top},\tilde{A}^{(j-1)}_{1;0}(t))^{\top}\), \(\mathbf{x}=\sqrt{n}(\tilde{\mathbf{\beta}}^{(j)}-\tilde{\mathbf{\beta}}^{(j-1)})\), \(y(t)=\sqrt{n}(\tilde{A}^{(j)}_{1;0}(t)-\tilde{A}^{(j-1)}_{1;0}(t))\). Then, with (II.46) we find \[\begin{split}&\mathrm{d}\zeta(\mathbf{\theta})\cdot(x,y)^{\top}(t)= \theta_{2}(t)\cdot x+\theta_{1}\cdot y(t)\\ &=\tilde{A}^{(j-1)}_{1;0}(t)\cdot\exp(\mathbf{Z}^{\top}\tilde{\bm {\beta}}^{(j-1)})\mathbf{Z}^{\top}\sqrt{n}(\tilde{\mathbf{\beta}}^{(j)}-\tilde{ \mathbf{\beta}}^{(j-1)})\\ &\quad+\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)})\cdot \sqrt{n}(\tilde{A}^{(j)}_{1;0}(t)-\tilde{A}^{(j-1)}_{1;0}(t))\end{split}\] (II.47) with \(\mathbf{\theta}=(\theta_{1},\theta_{2})^{\top}=\varphi_{Z}(\tilde{\mathbf{\theta}}^{( j-1)})=(\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)}),\tilde{A}^{(j-1)}_{1 ;0}(t))^{\top}\), \((x,y)^{\top}(t)=\mathrm{d}\varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)})\cdot\sqrt{n }(\tilde{\mathbf{\theta}}^{(j)}-\tilde{\mathbf{\theta}}^{(j-1)})(t)\). Finally, with (II.47) we obtain \[\begin{split}&\mathrm{d}\psi(\theta)\cdot x(t)=\exp(-\theta(t)) \cdot x(t)\\ &=\exp\{-\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)})\cdot \tilde{A}^{(j-1)}_{1;0}(t)\}\exp(\mathbf{Z}^{\top}\tilde{\mathbf{\beta}}^{(j-1)}) \\ &\quad\cdot\left[\tilde{A}^{(j-1)}_{1;0}(t)\cdot\mathbf{Z}^{\top }\sqrt{n}(\tilde{\mathbf{\beta}}^{(j)}-\tilde{\mathbf{\beta}}^{(j-1)})+\sqrt{n}( \tilde{A}^{(j)}_{1;0}(t)-\tilde{A}^{(j-1)}_{1;0}(t))\right]\end{split}\] (II.48) with \(\theta=\zeta(\varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)}))=\exp(\mathbf{Z}^{\top} \tilde{\mathbf{\beta}}^{(j-1)})\cdot\tilde{A}^{(j-1)}_{1;0}(t)\), \(x(t)=\mathrm{d}\zeta(\varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)}))\cdot\mathrm{d} \varphi_{Z}(\tilde{\mathbf{\theta}}^{(j-1)})\cdot\sqrt{n}(\tilde{\mathbf{\theta}}^{(j )}-\tilde{\mathbf{\theta}}^{(j-1)})(t)\). Combining (II.45) and (II.48) yields Lemma II.2.9. 
\(\blacksquare\) For the proof of Theorem II.2.10 we will use, like in Part I, that the probability space can be modelled as a product space \((\Omega,\mathcal{A},\mathbb{P})=(\Omega_{1}\times\Omega_{2},\mathcal{A}_{1} \otimes\mathcal{A}_{2},\mathbb{P}_{1}\otimes\mathbb{P}_{2})=(\Omega_{1}, \mathcal{A}_{1},\mathbb{P}_{1})\otimes(\Omega_{2},\mathcal{A}_{2},\mathbb{P}_{2})\). Where necessary, we will distinguish between the probability space \((\Omega_{1},\mathcal{A}_{1},\mathbb{P}_{1})\) underlying the data sets \(\{\mathbb{1}\{C_{i}\geq t\},N_{i}(t),Y_{i}(t),\mathbf{Z}_{i},t\in\mathcal{T},i =1,\ldots,n\}\), and the probability space \((\Omega_{2},\mathcal{A}_{2},\mathbb{P}_{2})\) underlying the multipliers \(G_{1},\ldots,G_{n}\). Additionally, we denote by \(\xrightarrow{\mathcal{L}_{p}}\) the convergence in law w.r.t. the probability measure \(\mathbb{P}_{2}\). Moreover, for some stochastic quantity \(\mathbf{H}_{n}\), we denote \(\mathbf{H}_{n}\) given the data as \(\mathbf{H}_{n}|\mathcal{F}_{2}(0)(\omega)\), \(\omega\in\Omega_{1}\). **Proof of Theorem II.2.10** We wish to show that the conditional limiting distribution of \(\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n}^{*})-\Gamma(\hat{\boldsymbol{ \theta}}_{n}))\) is asymptotically equivalent to the limiting distribution of \(\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n})-\Gamma(\boldsymbol{\theta}_{0}))\). For this we recall the asymptotic representation (II.25) of \(\sqrt{n}(\Gamma(\tilde{\boldsymbol{\theta}}^{(j)})-\Gamma(\tilde{\boldsymbol {\theta}}^{(j-1)}))(t)\). In the proof of Lemma II.2.9 we have introduced the functional \(\Gamma\) as a composition of the three functionals \(\varphi_{Z}\), \(\zeta\) and \(\psi\). For the present proof it is useful to consider the Hadamard derivatives \(\mathrm{d}\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})\), \(\mathrm{d}\zeta(\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)}))\) and \(\mathrm{d}\psi(\zeta(\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})))\) without directly multiplying them by \(\sqrt{n}(\tilde{\boldsymbol{\theta}}^{(j)}-\tilde{\boldsymbol{\theta}}^{(j-1 )})\) as we did in (II.45). In particular, we now identify the Hadamard-derivatives with \[\mathrm{d}\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)}) =\begin{pmatrix}\exp(\mathbf{Z}^{\top}\tilde{\boldsymbol{\beta}} ^{(j-1)})\mathbf{Z}^{\top}&0\\ \mathbf{0}_{1\times q}&1\end{pmatrix},\] \[\mathrm{d}\zeta(\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})) =\begin{pmatrix}\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)}) _{2},\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})_{1}\end{pmatrix}\] \[=\begin{pmatrix}\tilde{A}_{1;0}^{(j-1)}(\cdot),\exp(\mathbf{Z}^{ \top}\tilde{\boldsymbol{\beta}}^{(j-1)})\end{pmatrix},\] \[\mathrm{d}\psi(\zeta(\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j -1)}))) =\exp\{-\zeta(\varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})\}\] \[=\exp\{-\exp(\mathbf{Z}^{\top}\tilde{\boldsymbol{\beta}}^{(j-1) })\cdot\tilde{A}_{1;0}^{(j-1)}(\cdot)\}.\] In the above, \(\varphi_{Z}(\cdot)_{i}\) denotes the i-th component of \(\varphi_{Z}\), and \(\tilde{\boldsymbol{\theta}}^{(j-1)}=(\tilde{\boldsymbol{\theta}}_{1}^{(j-1) \top},\tilde{\theta}_{2}^{(j-1)})^{\top}=(\tilde{\boldsymbol{\beta}}^{(j-1) \top},\tilde{A}_{1;0}^{(j-1)}(\cdot))^{\top}\) with \(\tilde{\boldsymbol{\beta}}^{(0)}=\boldsymbol{\beta}_{0}\), \(\tilde{\boldsymbol{\beta}}^{(1)}=\hat{\boldsymbol{\beta}}_{n}\), \(\tilde{A}_{1;0}^{(0)}(\cdot)=A_{1;0}(\cdot)\) and \(\tilde{A}_{1;0}^{(1)}(\cdot)=\tilde{A}_{1;0,n}(\cdot,\hat{\boldsymbol{\beta} }_{n})\). 
With the chain rule, we can express the Hadamard derivative \(\mathrm{d}\Gamma\) of \(\Gamma\) as follows: \[\mathrm{d}\Gamma(\tilde{\boldsymbol{\theta}}^{(j-1)})=\mathrm{d}\psi(\zeta( \varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)})))\cdot\mathrm{d}\zeta( \varphi_{Z}(\tilde{\boldsymbol{\theta}}^{(j-1)}))\cdot\mathrm{d}\varphi_{Z}( \tilde{\boldsymbol{\theta}}^{(j-1)}).\] (II.49) We first consider the case \(j=1\). In this case, \(\tilde{\boldsymbol{\theta}}^{(j-1)}=\tilde{\boldsymbol{\theta}}^{(0)}= \boldsymbol{\theta}_{0}\) is a constant point in the space \(\mathbb{R}^{q}\times\mathcal{C}[0,\tau]\), where \(\mathcal{C}[0,\tau]^{x}\) is the set of all continuous functions mapping from \([0,\tau]\) to \(\mathbb{R}^{x}\), \(x\in\mathbb{N}\). Thus, \(\left(\mathrm{vec}(\mathrm{d}\varphi_{Z}(\boldsymbol{\theta}_{0}))^{\top}, \mathrm{d}\zeta(\varphi_{Z}(\boldsymbol{\theta}_{0})),\mathrm{d}\psi(\zeta( \varphi_{Z}(\boldsymbol{\theta}_{0})))\right)\) is a constant in the space \(\mathbb{R}^{2q+2}\times\mathcal{C}[0,\tau]\times\mathbb{R}\times\mathcal{C}[ 0,\tau]\subset\mathcal{C}[0,\tau]^{2q+5}\). We now turn to the second term of the expression on the right-hand side of (II.25). For \(j=1\) we have \(\sqrt{n}(\tilde{\boldsymbol{\theta}}^{(1)}-\tilde{\boldsymbol{\theta}}^{(0)})= \sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})\) and as formulated in the proof of Theorem II.2.8 it holds that \[\sqrt{n}(\hat{\boldsymbol{\theta}}_{n}-\boldsymbol{\theta}_{0})=\mathbf{D}_{n, \tilde{k}}+\tilde{\mathbf{B}}_{n}\cdot\tilde{\mathbf{C}}_{n}\cdot\mathbf{D}_{n,\tilde{g}}(\tau)+o_{p}(1).\] (II.50) From the proof of Theorem I.2.6 it follows that the convergence in distribution of this term is based on the joint convergence in distribution of \(\left(\mathbf{D}_{n,\tilde{k}}^{\top},\mathbf{D}_{n,\tilde{g}}^{\top},\mathrm{ vec}(\tilde{\mathbf{B}}_{n})^{\top},\mathrm{vec}(\tilde{\mathbf{C}}_{n})\right)\) to \(\left(\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top},\mathrm{ vec}(\tilde{\mathbf{B}})^{\top},\mathrm{vec}(\tilde{\mathbf{C}})^{\top}\right)\), as \(n\to\infty\), with \(\left(\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{\top},\mathrm{ vec}(\tilde{\mathbf{B}})^{\top},\mathrm{vec}(\tilde{\mathbf{C}})^{\top}\right) \in\mathcal{C}[0,\tau]^{10q+2}\). From the continuous mapping theorem and the maps \(f_{1},f_{2},\) and \(f_{3}\) defined in the proof of Theorem I.2.6 it follows that \[{\bf D}_{n,\tilde{k}}+\check{\bf B}_{n}\cdot\check{\bf C}_{n}\cdot{\bf D}_{n, \tilde{g}}(\tau)\stackrel{{\mathcal{L}}}{{\longrightarrow}}{\bf D} _{\tilde{k}}+\check{\bf B}\cdot\check{\bf C}\cdot{\bf D}_{\tilde{g}}(\tau),\ \mbox{in}\ {\mathcal{D}}[0,\tau]^{(q+1)},\ \mbox{as}\ n\to\infty.\] In order to derive the convergence in distribution of \({\rm d}\Gamma({\boldsymbol{\theta}}_{0})\cdot\sqrt{n}(\hat{\boldsymbol{ \theta}}_{n}-{\boldsymbol{\theta}}_{0})(t)\), we enlarge \[\big{(}{\bf D}_{n,\tilde{k}}^{\top},{\bf D}_{n,\tilde{g}}^{\top},{\rm vec}( \check{\bf B}_{n})^{\top},{\rm vec}(\check{\bf C}_{n})^{\top}\big{)}\] by \(\big{(}{\rm vec}({\rm d}\varphi_{Z}({\boldsymbol{\theta}}_{0}))^{\top},{\rm d }\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0})),{\rm d}\psi(\zeta(\varphi_{Z}( {\boldsymbol{\theta}}_{0})))\big{)}\). 
As the first vector converges in distribution to a limit that is continuous and thus separable, and the latter vector is a constant of the space \({\mathcal{C}}[0,\tau]^{2q+5}\), it holds according to Example 1.4.7 of van der Vaart and Wellner (1996) that \[\begin{split}&\big{(}{\bf D}_{n,\tilde{k}}^{\top},{\bf D}_{n, \tilde{g}}^{\top},{\rm vec}(\check{\bf B}_{n})^{\top},{\rm vec}(\check{\bf C} _{n})^{\top},{\rm vec}({\rm d}\varphi_{Z}({\boldsymbol{\theta}}_{0}))^{\top}, {\rm d}\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0})),{\rm d}\psi(\zeta( \varphi_{Z}({\boldsymbol{\theta}}_{0})))\big{)}\\ &\stackrel{{\mathcal{L}}}{{\longrightarrow}}\big{(}{ \bf D}_{\tilde{k}}^{\top},{\bf D}_{\tilde{g}}^{\top},{\rm vec}(\check{\bf B} )^{\top},{\rm vec}(\check{\bf C})^{\top},{\rm vec}({\rm d}\varphi_{Z}({ \boldsymbol{\theta}}_{0}))^{\top},{\rm d}\zeta(\varphi_{Z}({\boldsymbol{ \theta}}_{0})),{\rm d}\psi(\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0}))) \big{)},\end{split}\] (II.51) in \({\mathcal{D}}[0,\tau]^{12q+7}\), as \(n\to\infty\). Next, we make use of the continuous mapping theorem. For this we consider the following map \[\begin{split} f_{4}:&\big{(}[{\bf D}_{n,\tilde{k}}+ \check{\bf B}_{n}\cdot\check{\bf C}_{n}\cdot{\bf D}_{n,\tilde{g}}(\tau)]^{\top },{\rm vec}({\rm d}\varphi_{Z}({\boldsymbol{\theta}}_{0}))^{\top},{\rm d} \zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0})),{\rm d}\psi(\zeta(\varphi_{Z}({ \boldsymbol{\theta}}_{0})))\big{)}\\ &\mapsto\big{(}{\rm d}\psi(\zeta(\varphi_{Z}({\boldsymbol{\theta} }_{0})))\cdot{\rm d}\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0}))\cdot{\rm d} \varphi_{Z}({\boldsymbol{\theta}}_{0})\cdot[{\bf D}_{n,k}+{\bf B}_{n}\cdot{\bf C }_{n}\cdot{\bf D}_{n,g}(\tau)]\big{)}\end{split}\] Since \[\begin{split}\big{(}{\bf D}_{\tilde{k}}^{\top},{\bf D}_{\tilde{g }}^{\top},&{\rm vec}(\check{\bf B})^{\top},{\rm vec}(\check{\bf C })^{\top},{\rm vec}({\rm d}\varphi_{Z}({\boldsymbol{\theta}}_{0}))^{\top},{ \rm d}\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0})),{\rm d}\psi(\zeta(\varphi _{Z}({\boldsymbol{\theta}}_{0})))\big{)}\\ &\in{\mathcal{C}}[0,\tau]^{12q+7},\end{split}\] (II.52) it follows successively with the continuous mapping theorem and the maps \(f_{1}\), \(f_{2}\), \(f_{3}\), and \(f_{4}\) applied to (II.51) that \[\begin{split}&{\rm d}\psi(\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0}))) \cdot{\rm d}\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0}))\cdot{\rm d}\varphi_{ Z}({\boldsymbol{\theta}}_{0})\cdot[{\bf D}_{n,k}+{\bf B}_{n}\cdot{\bf C}_{n} \cdot{\bf D}_{n,g}(\tau)]\\ &\stackrel{{\mathcal{L}}}{{\longrightarrow}}{\rm d} \psi(\zeta(\varphi_{Z}({\boldsymbol{\theta}}_{0})))\cdot{\rm d}\zeta(\varphi_{Z}({ \boldsymbol{\theta}}_{0}))\cdot{\rm d}\varphi_{Z}({\boldsymbol{\theta}}_{0}) \cdot[{\bf D}_{\tilde{k}}+\check{\bf B}\cdot\check{\bf C}\cdot{\bf D}_{ \tilde{g}}(\tau)],\end{split}\] (II.53) in \(D[0,\tau]^{q+1}\), as \(n\to\infty\). In conclusion, (II.25), (II.49), (II.50), and (II.53) combined yield \[\sqrt{n}(\Gamma(\hat{\boldsymbol{\theta}}_{n})-\Gamma({\boldsymbol{\theta}}_{0}) )\stackrel{{\mathcal{L}}}{{\longrightarrow}}{\rm d}\Gamma({ \boldsymbol{\theta}}_{0})\cdot[{\bf D}_{\tilde{k}}+\check{\bf B}\cdot\check{\bf C }\cdot{\bf D}_{\tilde{g}}(\tau)],\ \mbox{in}\ {\mathcal{D}}[0,\tau]^{q+1},\ \mbox{as}\ n\to\infty.\] (II.54) This completes the proof for the case \(j=1\). 
For the case \(j=2\), we have \(\tilde{\mathbf{\theta}}^{(j-1)}=\tilde{\mathbf{\theta}}^{(1)}=\hat{\mathbf{\theta}}_{n}\) and \[\hat{\mathbf{\theta}}_{n}\stackrel{{\mathbb{P}}}{{\longrightarrow}} \mathbf{\theta}_{0},\text{ as }n\to\infty,\] follows from Theorem II.2.8. Recall that \(\mathbf{\theta}_{0}\in\mathbb{R}^{q}\times\mathcal{C}[0,\tau]\) holds. Thus, \(\hat{\mathbf{\theta}}_{n}\) is asymptotically degenerate. Furthermore, \(\mathrm{d}\varphi_{Z}(\cdot)\) is continuous at every point of the set \(\mathbb{R}^{q}\times\mathcal{C}[0,\tau]\). Hence, with the continuous mapping theorem as in, e.g., Theorem 1.3.6 of van der Vaart and Wellner (1996) we get \[\mathrm{d}\varphi_{Z}(\hat{\mathbf{\theta}}_{n})\stackrel{{\mathbb{P} }}{{\longrightarrow}}\mathrm{d}\varphi_{Z}(\mathbf{\theta}_{0}),\text{ as }n\to\infty.\] Moreover, \(\varphi_{Z}(\cdot)\) is continuous at all points of the space \(\mathbb{R}^{q}\times\mathcal{C}[0,\tau]\) mapping the space \(\mathbb{R}^{q}\times\mathcal{C}[0,\tau]\) to \(\mathbb{R}\times\mathcal{C}[0,\tau]\). Thus, by means of the continuous mapping theorem we have \[\varphi_{Z}(\hat{\mathbf{\theta}}_{n})\stackrel{{\mathbb{P}}}{{ \longrightarrow}}\varphi_{Z}(\mathbf{\theta}_{0}),\text{ as }n\to\infty.\] Furthermore, \(\mathrm{d}\zeta(\cdot)\) is a continuous at all points of the space \(\mathbb{R}\times\mathcal{C}[0,\tau]\). Hence, it follows again with the continuous mapping theorem that \[\mathrm{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n}))\stackrel{{ \mathbb{P}}}{{\longrightarrow}}\mathrm{d}\zeta(\varphi_{Z}(\mathbf{\theta}_{0})), \text{ as }n\to\infty.\] Additionally, \(\zeta(\cdot)\) is continuous at all points of the set \(\mathbb{R}\times\mathcal{C}[0,\tau]\) and maps the space \(\mathbb{R}\times\mathcal{C}[0,\tau]\) to \(\mathcal{C}[0,\tau]\). This yields \[\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n}))\stackrel{{\mathbb{P}}} {{\longrightarrow}}\zeta(\varphi_{Z}(\mathbf{\theta}_{0})),\text{ as }n\to\infty,\] according to the continuous mapping theorem. Finally, \(\mathrm{d}\psi(\cdot)\) is continuous at all points of the set \(\mathcal{C}[0,\tau]\). Hence, with the continuous mapping theorem we get \[\mathrm{d}\psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n})))\stackrel{{ \mathbb{P}}}{{\longrightarrow}}\mathrm{d}\psi(\zeta(\varphi_{Z}(\mathbf{\theta}_{ 0}))),\text{ as }n\to\infty.\] In conclusion, \(\mathrm{d}\varphi_{Z}(\hat{\mathbf{\theta}}_{n})\), \(\mathrm{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n}))\), and \(\mathrm{d}\psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n})))\) are asymptotically degenerate. It immediately follows that \[\begin{split}&\big{(}\mathrm{vec}(\mathrm{d}\varphi_{Z}(\hat{ \mathbf{\theta}}_{n}))^{\top},\mathrm{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n})), \mathrm{d}\psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n})))\big{)}\\ &\stackrel{{\mathbb{P}}}{{\longrightarrow}}\big{(} \mathrm{vec}(\mathrm{d}\varphi_{Z}(\mathbf{\theta}_{0}))^{\top},\mathrm{d}\zeta( \varphi_{Z}(\mathbf{\theta}_{0})),\mathrm{d}\psi(\zeta(\varphi_{Z}(\mathbf{\theta}_{0} )))\big{)},\text{ as }n\to\infty.\end{split}\] (II.55) By means of the notation introduced just outside the proof of Theorem II.2.10, by Fact 1 of the supplement of Dobler et al. 
(2019), which states that convergence in probability is equivalent to convergence in conditional probability, and by the subsequence principle, we can infer from (II.55) that for every subsequence \(n_{1}\) of \(n\) there exists a further subsequence \(n_{2}\) such that \[\begin{split}&\big{(}\text{vec}(\text{d}\varphi_{Z}(\hat{\mathbf{ \theta}}_{n_{2}}))^{\top},\text{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{2}})),\text{d}\psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{2}})))\big{)}|\mathcal{F} _{2}(0)(\omega)\\ &\longrightarrow\big{(}\text{vec}(\text{d}\varphi_{Z}(\mathbf{\theta} _{0}))^{\top},\text{d}\zeta(\varphi_{Z}(\mathbf{\theta}_{0})),\text{d}\psi(\zeta( \varphi_{Z}(\mathbf{\theta}_{0})))\big{)},\text{ as }n\to\infty,\end{split}\] (II.56) for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Moreover, for \(j=2\), we have \(\sqrt{n}(\tilde{\mathbf{\theta}}^{(2)}-\tilde{\mathbf{\theta}}^{(1)})=\sqrt{n}(\hat{ \mathbf{\theta}}_{n}^{*}-\hat{\mathbf{\theta}}_{n})\) for which it follows according to the proof of Theorem II.2.8 that \[\sqrt{n}(\hat{\mathbf{\theta}}_{n}^{*}-\hat{\mathbf{\theta}}_{n})=\mathbf{D}_{n,\tilde {k}}^{*}+\check{\mathbf{B}}_{n}^{*}\cdot\check{\mathbf{C}}_{n}^{*}\cdot \mathbf{D}_{n,\hat{g}}^{*}(\tau)+o_{p}(1).\] (II.57) According to the proof of Theorem I.3.10, we know that \((\mathbf{D}_{n_{6},\tilde{k}}^{*},\mathbf{D}_{n_{6},\tilde{g}}^{*},\check{ \mathbf{B}}_{n_{6}}^{*},\check{\mathbf{C}}_{n_{6}}^{*})|\mathcal{F}_{2}(0)(\omega)\) converges in \(\mathbb{P}_{2}\)-law to \((\mathbf{D}_{\tilde{k}},\mathbf{D}_{\tilde{g}},\check{\mathbf{B}},\check{ \mathbf{C}})\) for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\), as \(n\to\infty\). Additionally, by means of the continuous mapping theorem and the maps \(f_{1}\), \(f_{2}\), and \(f_{3}\), which are defined in that proof, it follows that \[\mathbf{D}_{n_{6},\tilde{k}}^{*}+\check{\mathbf{B}}_{n_{6}}^{*}\cdot\check{ \mathbf{C}}_{n_{6}}^{*}\cdot\mathbf{D}_{n_{6},\tilde{g}}^{*}(\tau)|\mathcal{ F}_{2}(0)(\omega)\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{ \longrightarrow}}\mathbf{D}_{\tilde{k}}+\check{\mathbf{B}}\cdot\check{ \mathbf{C}}\cdot\mathbf{D}_{\tilde{g}}(\tau),\text{ in }\mathcal{D}[0,\tau]^{(q+1)},\] (II.58) as \(n\to\infty\) and for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Clearly, the convergence in (II.56) and (II.58) holds along a joint subsequence \(n_{8}\) as well. We also have that the limit in law with respect to \(\mathbb{P}_{2}\) of \((\mathbf{D}_{n_{8},\tilde{k}}^{*\top},\mathbf{D}_{n_{8},\tilde{g}}^{*\top}, \text{vec}(\check{\mathbf{B}}_{n_{8}}^{*})^{\top},\text{vec}(\check{\mathbf{ C}}_{n_{8}}^{*})^{\top})|\mathcal{F}_{2}(0)(\omega)\) is separable for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega\) and \(\big{(}\text{vec}(\text{d}\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}}))^{\top}, \text{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}})),\text{d}\psi(\zeta( \varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}})))\big{)}|\mathcal{F}_{2}(0)(\omega)\) is asymptotically degenerate. 
Therefore, we can conclude based on Example 1.4.7 of van der Vaart and Wellner (1996) that, conditionally on \(\mathcal{F}_{2}(0)(\omega)\), \[\begin{split}&\big{(}\mathbf{D}_{n_{8},\tilde{k}}^{*\top}, \mathbf{D}_{n_{8},\tilde{g}}^{*\top},\text{vec}(\check{\mathbf{B}}_{n_{8}}^{* })^{\top},\text{vec}(\check{\mathbf{C}}_{n_{8}}^{*})^{\top},\text{vec}( \check{\mathbf{C}}_{n_{8}}^{*})^{\top},\text{vec}(\check{\mathbf{C}}_{n_{8}}^{ *}))^{\top},\text{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}})),\text{d} \psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}})))\big{)}\\ &\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{ \longrightarrow}}\big{(}\mathbf{D}_{\tilde{k}}^{\top},\mathbf{D}_{\tilde{g}}^{ \top},\text{vec}(\check{\mathbf{B}})^{\top},\text{vec}(\check{\mathbf{C}})^{ \top},\text{vec}(\text{d}\varphi_{Z}(\mathbf{\theta}_{0}))^{\top},\text{d}\zeta( \varphi_{Z}(\mathbf{\theta}_{0})),\text{d}\psi(\zeta(\varphi_{Z}(\mathbf{\theta}_{0}))) \big{)},\end{split}\] (II.59) in \(\mathcal{D}[0,\tau]^{12q+7}\), as \(n\to\infty\) and for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). From (II.52), the continuous mapping theorem, and application of the maps \(f_{1}\), \(f_{2}\), \(f_{3}\), and \(f_{4}\) to (II.59) it follows that \[\begin{split}&\text{d}\psi(\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}}))) \cdot\text{d}\zeta(\varphi_{Z}(\hat{\mathbf{\theta}}_{n_{8}}))\cdot\text{d}\varphi_ {Z}(\hat{\mathbf{\theta}}_{n_{8}})\cdot[\mathbf{D}_{n_{8},k}^{*}+\mathbf{B}_{n_{8}}^{ *}\cdot\mathbf{C}_{n_{8}}^{*}\cdot\mathbf{D}_{n_{8},g}^{*}(\tau)]|\mathcal{F}_ {2}(0)(\omega)\\ &\stackrel{{\mathcal{L}_{\mathbb{P}_{2}}}}{{ \longrightarrow}}\text{d}\psi(\zeta(\varphi_{Z}(\mathbf{\theta}_{0})))\cdot\text{d} \zeta(\varphi_{Z}(\mathbf{\theta}_{0}))\cdot\text{d}\varphi_{Z}(\mathbf{\theta}_{0}) \cdot[\mathbf{D}_{\tilde{k}}+\check{\mathbf{B}}\cdot\check{\mathbf{C}}\cdot \mathbf{D}_{\tilde{g}}(\tau)],\end{split}\] (II.60) in \(\mathcal{D}[0,\tau]^{(q+1)}\), as \(n\to\infty\) and for \(\mathbb{P}_{1}\)-almost all \(\omega\in\Omega_{1}\). Eventually, by invoking the subsequence principle again and combining (II.25), (II.49), (II.57), and (II.60), we find that, conditionally on \(\mathcal{F}_{2}(0)\), \[\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}_{n}^{*})-\Gamma(\hat{\mathbf{\theta}}_{n})) \xrightarrow{\mathcal{L}_{\mathcal{P}_{2}}}\mathrm{d}\Gamma(\mathbf{\theta}_{0}) \cdot[\mathbf{D}_{\tilde{k}}+\check{\mathbf{B}}\cdot\check{\mathbf{C}}\cdot \mathbf{D}_{\tilde{g}}(\tau)],\text{ in }\mathcal{D}[0,\tau]^{q+1},\text{ as }n \rightarrow\infty,\] (II.61) in \(\mathbb{P}_{1}\)-probability. This completes the proof for the case \(j=2\). As the (conditional) limits in distribution of \(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}_{n})-\Gamma(\mathbf{\theta}_{0}))\) and \(\sqrt{n}(\Gamma(\hat{\mathbf{\theta}}_{n}^{*})-\Gamma(\hat{\mathbf{\theta}}_{n}))\) in (II.54) and (II.61), respectively, are the same, we have proved Theorem II.2.10. \(\blacksquare\) ### B.4 Proofs of Section II.3 **Proof of Lemma II.3.1** We first show that under Assumption II.2.2, \(\hat{\sigma}_{n}^{2}(t)\) defined in (II.27) is a consistent estimator of the variance of \(W_{n,\phi,1}(t)\) for \(t\in\mathcal{T}\). 
For this, we point out that \[\begin{split} W_{n,\phi,1}(t)&=\sqrt{n}(\mathbf{Z} ^{\top}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})+\log(\hat{A}_{1;0,n}(t,\hat{\mathbf{ \beta}}_{n}))-\log(A_{1;0}(t)))\\ &=\sqrt{n}(\mathbf{Z}^{\top}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0}) +\log^{\prime}(A_{1;0}(t))(\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})-A_{1;0}(t)) )+o_{p}(1).\end{split}\] (II.62) From Lemma II.2.4 and (II.37) we see that under Assumption II.2.2, the asymptotic covariance matrix of \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})\) equals \(\mathbf{C}\). Moreover, in view of (II.37) we have that \(\|\mathbf{I}_{n}(\tau,\hat{\mathbf{\beta}}_{n})-\mathbf{I}_{n}(\tau,\mathbf{\beta}_{0} )\|=o_{p}(1)\), since \(\hat{\mathbf{\beta}}_{n}\) is a consistent estimator of \(\mathbf{\beta}_{0}\). Hence, \(\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\hat{\mathbf{\beta}}_{n})\big{)}^{-1}\) is a consistent estimator of the covariance of \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})\), cf. Corollary VII.2.4 of Andersen et al. (1993). Next, from Lemma II.2.5 it is easy to see that under Assumption II.2.2, \[\begin{split}&\int_{0}^{t}S_{n}^{(0)}(u,\hat{\mathbf{\beta}}_{n})^{-1}d \hat{A}_{1;0,n}(u,\hat{\mathbf{\beta}}_{n})\\ &+\int_{0}^{t}\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n})^{\top}d \hat{A}_{1;0,n}(u,\hat{\mathbf{\beta}}_{n})\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\hat{ \mathbf{\beta}}_{n})\big{)}^{-1}\int_{0}^{t}\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n} )d\hat{A}_{1;0,n}(u,\hat{\mathbf{\beta}}_{n}),\end{split}\] is a uniformly consistent estimator of the variance function of \(\sqrt{n}(\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})-A_{1;0}(t))\), \(t\in\mathcal{T}\). As according to Lemma II.2.4 and Lemma II.2.5, and due to the asymptotic orthogonality of \(D_{\tilde{k}}\) and \(\mathbf{D}_{\tilde{g}}\), it holds that under Assumption II.2.2, the covariance function of \(\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) and \(D_{\tilde{k}}+\mathbf{B}\cdot\mathbf{C}\cdot\mathbf{D}_{\tilde{g}}(\tau)\) equals \(\mathbf{C}\cdot\mathbf{B}^{\top}\). Hence, we find that \(-\Big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\hat{\mathbf{\beta}}_{n})\Big{)}^{-1}\int_{0}^{ t}\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n})d\hat{A}_{1;0,n}(u,\hat{\mathbf{\beta}}_{n})\), \(t\in\mathcal{T}\), is a uniformly consistent estimator of the covariance of \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\mathbf{\beta}_{0})\) and \(\sqrt{n}(\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})-A_{1;0}(t))\). Combining this with (II.62), we see that under Assumption II.2.2, \(\hat{\sigma}_{n}^{2}(t)\) defined in (II.27) is a consistent estimator of the variance of \(W_{n,\phi,1}(t)\), \(t\in\mathcal{T}\). We now consider the wild bootstrapped variance estimator \(\hat{\sigma}_{n}^{*2}(t)\), \(t\in\mathcal{T}\), from (II.28). According to Theorem II.2.8, under Assumption II.2.2 the (conditional) covariance functions of \(\sqrt{n}(\hat{\mathbf{\theta}}_{n}^{*}(t)-\hat{\mathbf{\theta}}_{n}(t))\) and \(\sqrt{n}(\hat{\mathbf{\theta}}_{n}(t)-\mathbf{\theta}_{0}(t))\) coincide asymptotically. Thus, we use the same general structure for \(\hat{\sigma}_{n}^{*2}(t)\) as given in (II.27) for \(\hat{\sigma}_{n}^{2}(t)\). 
Yet, we replace the basic estimator \(\big{(}\frac{1}{n}\mathbf{I}_{n}(\tau,\hat{\mathbf{\beta}}_{n})\big{)}^{-1}\) by the wild bootstrap counterpart \(\big{(}\frac{1}{n}\mathbf{I}_{n}^{*}(\tau,\hat{\mathbf{\beta}}_{n}^{*})\big{)}^{-1}\) with \[\mathbf{I}_{n}^{*}(\tau,\hat{\mathbf{\beta}}_{n}^{*})=\sum_{i=1}^{n}\int_{0}^{\tau}( \mathbf{Z}_{i}-\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n}^{*}))^{\otimes 2}G_{i}^{2}dN_{i}(u),\] which is the optional covariation process \[[\mathbf{D}_{n,g}^{*}](\tau)=\sum_{i=1}^{n}\int_{0}^{\tau}(\mathbf{Z}_{i}-\mathbf{E}_{ n}(u,\hat{\mathbf{\beta}}_{n}))^{\otimes 2}G_{i}^{2}dN_{i}(u)\] of \(\mathbf{D}_{n,g}^{*}(t)\) at \(t=\tau\) with \(\hat{\mathbf{\beta}}_{n}\) replaced by \(\hat{\mathbf{\beta}}_{n}^{*}\). We also replace the basic estimator \[\int_{0}^{t}S_{n}^{(0)}(u,\hat{\mathbf{\beta}}_{n})^{-1}d\hat{A}_{1;0,n}(u,\hat{ \mathbf{\beta}}_{n})\] by the wild bootstrap counterpart \[\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}S_{n}^{(0)}(u,\hat{\mathbf{\beta}}_{n}^{*})^ {-2}G_{i}^{2}dN_{i}(u),\quad t\in\mathcal{T},\] which originates from the optional covariation process \[[D_{n,k}^{*}](t)=\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{t}S_{n}^{(0)}(u,\hat{\mathbf{ \beta}}_{n})^{-2}G_{i}^{2}dN_{i}(u)\] of \(D_{n,k}^{*}(t)\), \(t\in\mathcal{T}\), with again \(\hat{\mathbf{\beta}}_{n}\) replaced by \(\hat{\mathbf{\beta}}_{n}^{*}\). Note that according to Lemma II.2.3 (i) in combination with Corollary I.3.7 of Part I, under Assumption II.2.2 the optional covariation processes of \(\mathbf{D}_{n,g}^{*}\) and \(D_{n,k}^{*}\) converge in probability to \(\mathbf{V}_{\tilde{g}}\) and \(V_{\tilde{k}}\), respectively. Therefore, the corresponding wild bootstrap estimators are consistent estimators. For the particular form of the respective optional covariation process we refer to Lemma I.3.2 of Part I. Additionally, we substitute \(\hat{A}_{1;0,n}(t,\hat{\mathbf{\beta}}_{n})\) and \(\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n})\) in (II.27) by \(\hat{A}_{1;0,n}^{*}(t,\hat{\mathbf{\beta}}_{n}^{*})\) and \(\mathbf{E}_{n}(u,\hat{\mathbf{\beta}}_{n}^{*})\), respectively. All in all, we have that under Assumption II.2.2, \(\hat{\sigma}_{n}^{*2}(t)\) defined in (II.28) is a consistent wild bootstrap estimator for the variance of \(W_{n,\phi,1}(t)\), \(t\in\mathcal{T}\). This completes the proof of Lemma II.3.1. \(\blacksquare\) \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 96.2 & 94.5 & 98.4 & 94.7 & 99 & 92.3 \\ & & & (0.5,0.5) & 95.3 & 94.7 & 96.5 & 95 & 99.1 & 93.8 \\ & & -0.25 & (0.5,0.05) & 95.5 & 93.8 & 98.4 & 93.7 & 98.8 & 91.7 \\ & & & (0.5,0.5) & 95.1 & 94.3 & 96.2 & 95.3 & 99.2 & 93.7 \\ & & 0.25 & (0.5,0.05) & 95.2 & 93.3 & 98.2 & 94.2 & 99.2 & 92.6 \\ & & & (0.5,0.5) & 94.5 & 94 & 95.5 & 94.9 & 99.3 & 93.9 \\ high & -0.5 & (0.5,0.05) & 95.9 & 94.2 & 98.5 & 94.9 & 99.2 & 92.8 \\ & & & (0.5,0.5) & 95.9 & 95.3 & 96.8 & 95.7 & 98.9 & 94.8 \\ & & -0.25 & (0.5,0.05) & 95.6 & 94.1 & 98.2 & 94.9 & 99.3 & 92.8 \\ & & & (0.5,0.5) & 95.3 & 94.6 & 96.1 & 94.9 & 99.1 & 94.2 \\ & & 0.25 & (0.5,0.05) & 95.1 & 93.8 & 97.5 & 94.3 & 99.1 & 92.6 \\ & & & (0.5,0.5) & 94.7 & 94.4 & 95.4 & 94.8 & 99 & 94.2 \\ 200 & low & -0.5 & (0.5,0.05) & 95.7 & 94.8 & 97.5 & 94.7 & 98.5 & 93.4 \\ & & & (0.5,0.5) & 95.2 & 95 & 95.9 & 95.7 & 99 & 93.6 \\ & & -0.25 & (0.5,0.05) & 95.5 & 94.6 & 97.4 & 94.8 & 98.6 & 93.5 \\ & & & (0.5,0.5) & 94.2 & 93.9 & 94.9 & 94.7 & 98.8 & 92.9 \\ & & 0.25 & (0.5,0.05) & 95.5 & 94.9 & 97 & 94.7 & 98.7 & 93.7 \\ & & & (0.5,0.5) & 94.8 & 94.6 & 95.3 & 94.3 & 98.8 & 93.2 \\ high & -0.5 & (0.5,0.05) & 95.3 & 94.3 & 97.3 & 94.6 & 98.7 & 93.1 \\ & & & (0.5,0.5) & 95.2 & 95 & 95.9 & 94.4 & 98.8 & 92.7 \\ & & -0.25 & (0.5,0.05) & 95.3 & 94.5 & 97.1 & 94.8 & 98.9 & 93.4 \\ & & & (0.5,0.5) & 94.3 & 94.1 & 94.8 & 94.5 & 98.8 & 93.1 \\ & & 0.25 & (0.5,0.05) & 95.3 & 94.5 & 97 & 94.3 & 98.6 & 93.3 \\ & & & (0.5,0.5) & 94.5 & 94.4 & 94.7 & 94.9 & 99.1 & 93.5 \\ 300 & low & -0.5 & (0.5,0.05) & 95.2 & 94.7 & 97 & 94.8 & 98.3 & 93.8 \\ & & & (0.5,0.5) & 95.2 & 94.8 & 95.5 & 94.7 & 98.6 & 93.1 \\ & & -0.25 & (0.5,0.05) & 95.7 & 94.9 & 96.8 & 95.2 & 98.5 & 94.2 \\ & & & (0.5,0.5) & 94.8 & 94.5 & 95 & 94.9 & 98.7 & 93.6 \\ & & 0.25 & (0.5,0.05) & 95.6 & 95.2 & 96.6 & 94.5 & 98.3 & 93.8 \\ & & & (0.5,0.5) & 94.2 & 94.2 & 94.5 & 94.5 & 98.5 & 93.2 \\ high & -0.5 & (0.5,0.05) & 95.2 & 94.5 & 96.8 & 95.1 & 98.5 & 93.6 \\ & & & (0.5,0.5) & 94.2 & 93.9 & 94.7 & 94.6 & 98.8 & 92.5 \\ & & -0.25 & (0.5,0.05) & 94.7 & 94.2 & 96.2 & 94.3 & 98.4 & 93.2 \\ & & & (0.5,0.5) & 94.8 & 94.7 & 95.2 & 94.8 & 98.8 & 93.5 \\ & & 0.25 & (0.5,0.05) & 94.6 & 94.2 & 96 & 93.7 & 98 & 92.8 \\ & & & (0.5,0.5) & 94.4 & 93.8 & 94.2 & 94.3 & 98.8 & 93.2 \\ \hline \end{tabular} \end{table} Table II.2: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual without pneumonia at time of hospital admission (univariate) for \(\mathcal{N}(0,1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 94.1 & 95.9 & 95.5 & 91.2 & 97.6 & 97.2 \\ & & & (0.5,0.5) & 94.6 & 95.4 & 94.4 & 90.8 & 97.8 & 96.9 \\ & & -0.25 & (0.5,0.05) & 93.5 & 95.2 & 94.9 & 90.3 & 97 & 96.7 \\ & & & (0.5,0.5) & 94.5 & 95.6 & 94 & 90.8 & 98.1 & 97 \\ & & 0.25 & (0.5,0.05) & 93.1 & 95.2 & 94.3 & 91 & 97.5 & 97 \\ & & & (0.5,0.5) & 94.1 & 95 & 93.6 & 91.1 & 98.1 & 96.9 \\ high & -0.5 & (0.5,0.05) & 93.8 & 95.7 & 94.3 & 90.8 & 97.8 & 97.1 \\ & & & (0.5,0.5) & 95.5 & 96.1 & 95.2 & 91.6 & 97.9 & 97.2 \\ & & -0.25 & (0.5,0.05) & 93.8 & 95.6 & 94.6 & 91 & 98.1 & 97.3 \\ & & & (0.5,0.5) & 94.8 & 95.7 & 94.3 & 90.7 & 98.2 & 96.7 \\ & & 0.25 & (0.5,0.05) & 93.2 & 95.1 & 94.3 & 90.3 & 97.8 & 96.8 \\ & & & (0.5,0.5) & 94.3 & 95.3 & 93.8 & 90.7 & 98.1 & 97.1 \\ 200 & low & -0.5 & (0.5,0.05) & 94.7 & 95.8 & 95.2 & 93.3 & 96.4 & 97.6 \\ & & & (0.5,0.5) & 94.8 & 95.5 & 94.3 & 92.7 & 97.8 & 97.5 \\ & & -0.25 & (0.5,0.05) & 94.6 & 95.8 & 95 & 93.6 & 96.5 & 97.6 \\ & & & (0.5,0.5) & 93.9 & 94.6 & 93.4 & 91.9 & 97.3 & 97.1 \\ & & 0.25 & (0.5,0.05) & 95 & 95.6 & 95.3 & 93.7 & 96.6 & 97.6 \\ & & & (0.5,0.5) & 94.9 & 95.2 & 94.5 & 92.2 & 97.2 & 97.2 \\ high & -0.5 & (0.5,0.05) & 94.1 & 95.5 & 94.3 & 92.9 & 96.8 & 97.4 \\ & & & (0.5,0.5) & 94.9 & 95.7 & 94.4 & 91.4 & 97.2 & 96.8 \\ & & -0.25 & (0.5,0.05) & 94.1 & 95.5 & 94.3 & 93.2 & 97 & 97.6 \\ & & & (0.5,0.5) & 94.2 & 94.7 & 93.7 & 91.7 & 97.5 & 97.1 \\ & & 0.25 & (0.5,0.05) & 94.3 & 95.3 & 94.4 & 92.9 & 96.9 & 97.3 \\ & & & (0.5,0.5) & 94.4 & 94.8 & 93.9 & 91.7 & 98 & 97.5 \\ 300 & low & -0.5 & (0.5,0.05) & 94.5 & 95.5 & 95 & 94 & 95.9 & 97.5 \\ & & & (0.5,0.5) & 94.9 & 95.6 & 94.4 & 92.9 & 96.5 & 97.3 \\ & & -0.25 & (0.5,0.05) & 95 & 95.5 & 95.3 & 94.2 & 96.2 & 97.6 \\ & & & (0.5,0.5) & 94.4 & 95.1 & 94.2 & 93.5 & 96.9 & 97.4 \\ & & 0.25 & (0.5,0.05) & 95 & 95.5 & 95.2 & 94 & 96.1 & 97.2 \\ & & & (0.5,0.5) & 94.1 & 94.6 & 93.9 & 93 & 96.8 & 97.5 \\ high & -0.5 & (0.5,0.05) & 94.4 & 95.5 & 94.4 & 93.7 & 96.4 & 97.6 \\ & & & (0.5,0.5) & 93.8 & 94.7 & 93.4 & 92.2 & 96.7 & 97.1 \\ & & -0.25 & (0.5,0.05) & 94.2 & 95 & 94.2 & 93.3 & 95.8 & 97.5 \\ & & & (0.5,0.5) & 94.8 & 95.3 & 94.6 & 93 & 97.2 & 97.5 \\ & & 0.25 & (0.5,0.05) & 94.2 & 94.8 & 94 & 92.7 & 95.6 & 97 \\ & & & (0.5,0.5) & 94.1 & 94.6 & 93.7 & 92.2 & 97 & 97.5 \\ \hline \end{tabular} \end{table} Table II.3: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual without pneumonia at time of hospital admission (univariate) for centered \(\text{Exp}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 94.8 & 94.9 & 97 & 91.8 & 98.3 & 95 \\ & & & (0.5,0.5) & 94.6 & 94.9 & 95.1 & 93.1 & 98.5 & 95.3 \\ & & -0.25 & (0.5,0.05) & 94 & 94.1 & 96.6 & 91 & 98 & 94.1 \\ & & & (0.5,0.5) & 94.3 & 94.7 & 94.6 & 93.2 & 98.7 & 95.5 \\ & & 0.25 & (0.5,0.05) & 93.6 & 93.9 & 96.3 & 91.8 & 98.5 & 94.8 \\ & & & (0.5,0.5) & 93.8 & 94.2 & 94.2 & 92.8 & 98.8 & 95.4 \\ high & -0.5 & (0.5,0.05) & 94.3 & 94.5 & 96.9 & 92.3 & 98.6 & 94.9 \\ & & & (0.5,0.5) & 95.6 & 95.7 & 96 & 93.9 & 98.4 & 96.4 \\ & & -0.25 & (0.5,0.05) & 94.2 & 94.4 & 96.5 & 92.2 & 98.7 & 95 \\ & & & (0.5,0.5) & 94.5 & 94.8 & 94.8 & 93.4 & 98.7 & 95.7 \\ & & 0.25 & (0.5,0.05) & 93.6 & 94 & 95.8 & 91.8 & 98.6 & 94.8 \\ & & & (0.5,0.5) & 94.1 & 94.5 & 94.2 & 92.7 & 98.6 & 95.7 \\ 200 & low & -0.5 & (0.5,0.05) & 94.7 & 94.9 & 96.2 & 92.8 & 97.5 & 95.2 \\ & & & (0.5,0.5) & 94.6 & 94.8 & 94.7 & 93.1 & 98.5 & 95.9 \\ & & -0.25 & (0.5,0.05) & 94.8 & 94.9 & 96.2 & 93.2 & 97.6 & 95.5 \\ & & & (0.5,0.5) & 93.4 & 93.8 & 93.6 & 92.4 & 98 & 95 \\ & & 0.25 & (0.5,0.05) & 95 & 95.1 & 96 & 93.5 & 97.8 & 95.7 \\ & & & (0.5,0.5) & 94.5 & 94.8 & 94.5 & 92.5 & 98 & 95.1 \\ high & -0.5 & (0.5,0.05) & 94.4 & 94.5 & 95.5 & 92.6 & 97.7 & 94.9 \\ & & & (0.5,0.5) & 94.7 & 94.8 & 94.7 & 92.3 & 98.2 & 94.8 \\ & & -0.25 & (0.5,0.05) & 94.4 & 94.7 & 95.8 & 93.2 & 98.1 & 95.5 \\ & & & (0.5,0.5) & 93.7 & 93.9 & 93.7 & 92.5 & 98.4 & 94.8 \\ & & 0.25 & (0.5,0.05) & 94.5 & 94.7 & 95.6 & 92.8 & 97.8 & 95.1 \\ & & & (0.5,0.5) & 94.1 & 94.3 & 93.9 & 92.9 & 98.6 & 95.4 \\ 300 & low & -0.5 & (0.5,0.05) & 94.7 & 94.8 & 95.8 & 93.4 & 97 & 95.3 \\ & & & (0.5,0.5) & 94.5 & 94.7 & 94.5 & 92.8 & 97.8 & 94.9 \\ & & -0.25 & (0.5,0.05) & 95 & 95.2 & 95.8 & 93.7 & 97.3 & 95.9 \\ & & & (0.5,0.5) & 94.2 & 94.5 & 94.1 & 93.3 & 97.8 & 95.5 \\ & & 0.25 & (0.5,0.05) & 95.2 & 95.3 & 95.8 & 93.7 & 97.2 & 95.6 \\ & & & (0.5,0.5) & 94.4 & 94.1 & 94 & 92.8 & 97.8 & 95.3 \\ high & -0.5 & (0.5,0.05) & 94.4 & 94.7 & 95.3 & 93.1 & 97.7 & 95.5 \\ & & & (0.5,0.5) & 93.6 & 93.8 & 93.5 & 92.3 & 97.8 & 94.7 \\ & & -0.25 & (0.5,0.05) & 94.2 & 94.3 & 95.1 & 93 & 97.2 & 95 \\ & & & (0.5,0.5) & 94.6 & 94.8 & 94.5 & 93.1 & 98 & 95.1 \\ & & 0.25 & (0.5,0.05) & 94.2 & 94.4 & 94.9 & 92.2 & 96.8 & 94.7 \\ & & & (0.5,0.5) & 93.7 & 93.9 & 93.8 & 92.5 & 98.2 & 95.1 \\ \hline \end{tabular} \end{table} Table II.4: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual without pneumonia at time of hospital admission (univariate) for centered \(\text{Pois}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 92.9 & 93.2 & 94.9 & 94.1 & 95.8 & 95.1 \\ & & & (0.5,0.5) & 97.4 & 97.6 & 97.3 & 95.8 & 96.9 & 96.1 \\ & & -0.25 & (0.5,0.05) & 94 & 94.5 & 94.6 & 93.4 & 95.1 & 94.1 \\ & & & (0.5,0.5) & 96.4 & 95.4 & 96.6 & 95.8 & 97.1 & 95.9 \\ & & 0.25 & (0.5,0.05) & 93.7 & 94 & 94.6 & 93.6 & 95.8 & 94.4 \\ & & & (0.5,0.5) & 93.4 & 93.6 & 95.7 & 94.7 & 96.7 & 95 \\ high & -0.5 & (0.5,0.05) & 95.3 & 95 & 96.3 & 94.8 & 96.4 & 95.3 \\ & & & (0.5,0.5) & 97.3 & 98.1 & 97.7 & 95.2 & 96.7 & 95.4 \\ & & -0.25 & (0.5,0.05) & 93.5 & 93.8 & 95.6 & 94.7 & 96.4 & 95.3 \\ & & & (0.5,0.5) & 97.3 & 97.1 & 97.4 & 96.3 & 97.6 & 96.4 \\ & & 0.25 & (0.5,0.05) & 93.4 & 93.8 & 94.6 & 93.3 & 95.5 & 93.9 \\ & & & (0.5,0.5) & 94.1 & 93.8 & 96.4 & 95.1 & 96.9 & 95.4 \\ 200 & low & -0.5 & (0.5,0.05) & 94.2 & 94.9 & 94.3 & 94.5 & 95.5 & 95.3 \\ & & & (0.5,0.5) & 96.2 & 94.5 & 97.5 & 94.7 & 96.1 & 95.3 \\ & & -0.25 & (0.5,0.05) & 93.6 & 93.9 & 93.4 & 93.8 & 95.3 & 94.7 \\ & & & (0.5,0.5) & 94.2 & 93.9 & 96.2 & 94.2 & 95.9 & 95 \\ & & 0.25 & (0.5,0.05) & 94.8 & 95 & 95.3 & 93.6 & 95 & 94.1 \\ & & & (0.5,0.5) & 93.4 & 93.8 & 94.1 & 94 & 96 & 94.6 \\ high & -0.5 & (0.5,0.05) & 93.1 & 93.5 & 93.9 & 93.6 & 95.2 & 94.7 \\ & & & (0.5,0.5) & 97.3 & 94.4 & 98.1 & 95 & 96.5 & 95.6 \\ & & -0.25 & (0.5,0.05) & 93.1 & 93.6 & 93.2 & 93.6 & 94.9 & 94.4 \\ & & & (0.5,0.5) & 95.4 & 93.8 & 97.6 & 94.9 & 96.4 & 95.5 \\ & & 0.25 & (0.5,0.05) & 94.1 & 94.6 & 94.3 & 94 & 96 & 94.9 \\ & & & (0.5,0.5) & 93.4 & 93.7 & 95 & 93.8 & 96.3 & 94.3 \\ 300 & low & -0.5 & (0.5,0.05) & 94.8 & 95 & 94.7 & 94.3 & 95.1 & 94.8 \\ & & & (0.5,0.5) & 95 & 94.3 & 97 & 93.8 & 95.1 & 94.8 \\ & & -0.25 & (0.5,0.05) & 94.3 & 94.5 & 94.2 & 93.9 & 94.9 & 94.7 \\ & & & (0.5,0.5) & 94.4 & 94.4 & 96 & 94 & 95.5 & 94.8 \\ & & 0.25 & (0.5,0.05) & 94.9 & 95 & 95.2 & 94 & 95.4 & 94.7 \\ & & & (0.5,0.5) & 93.1 & 93.7 & 93.1 & 93.5 & 95.5 & 94.1 \\ high & -0.5 & (0.5,0.05) & 94.3 & 94.6 & 94.2 & 93.8 & 94.9 & 94.8 \\ & & & (0.5,0.5) & 96.1 & 94.6 & 98.4 & 95.2 & 96.3 & 95.8 \\ & & -0.25 & (0.5,0.05) & 94 & 94.3 & 93.8 & 93.6 & 94.8 & 94.2 \\ & & & (0.5,0.5) & 94.2 & 93.6 & 96.6 & 93.6 & 95.2 & 94.3 \\ & & 0.25 & (0.5,0.05) & 94.3 & 94.5 & 94.4 & 93.8 & 95.2 & 94.3 \\ & & & (0.5,0.5) & 93.2 & 93.5 & 93.7 & 93.8 & 95.6 & 94.3 \\ \hline \end{tabular} \end{table} Table 5: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual with pneumonia at time of hospital admission (univariate) for \(\mathcal{N}(0,1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 95.3 & 98.3 & 89.8 & 92.4 & 94.2 & 94.4 \\ & & & (0.5,0.5) & 97 & 99.1 & 94 & 93 & 95.6 & 94.8 \\ & & -0.25 & (0.5,0.05) & 96.5 & 98.2 & 92.7 & 91.3 & 93.5 & 93.7 \\ & & & (0.5,0.5) & 93.9 & 97.6 & 93.4 & 92.8 & 95.9 & 94.7 \\ & & 0.25 & (0.5,0.05) & 97.3 & 98.3 & 95.8 & 91.2 & 93.9 & 94.3 \\ & & & (0.5,0.5) & 92.9 & 96.9 & 90.1 & 91 & 94.7 & 93.7 \\ high & -0.5 & (0.5,0.05) & 94.1 & 98.3 & 92 & 92.3 & 94.2 & 94.2 \\ & & & (0.5,0.5) & 97.9 & 99.5 & 94.3 & 92.8 & 95.4 & 94.1 \\ & & -0.25 & (0.5,0.05) & 94.5 & 98.2 & 89.6 & 91.9 & 94.2 & 94.2 \\ & & & (0.5,0.5) & 96.9 & 98.4 & 93.9 & 92.9 & 96.1 & 94.7 \\ & & 0.25 & (0.5,0.05) & 96.3 & 98.2 & 93.8 & 90.3 & 93.6 & 93.2 \\ & & & (0.5,0.5) & 91.6 & 96.7 & 91.8 & 91.3 & 95.3 & 93.9 \\ 200 & low & -0.5 & (0.5,0.05) & 97.6 & 98.8 & 94.6 & 94.3 & 95.3 & 96 \\ & & & (0.5,0.5) & 93.3 & 98.4 & 90.8 & 92.9 & 94.4 & 94.9 \\ & & -0.25 & (0.5,0.05) & 97.1 & 98.2 & 95.6 & 93.4 & 94.6 & 95.5 \\ & & & (0.5,0.5) & 94.3 & 98.1 & 89 & 92.5 & 94.3 & 94.9 \\ & & 0.25 & (0.5,0.05) & 97.5 & 97.9 & 97.3 & 92.7 & 93.9 & 94.8 \\ & & & (0.5,0.5) & 95.4 & 97.9 & 91 & 92.2 & 94.3 & 94.9 \\ high & -0.5 & (0.5,0.05) & 96.1 & 98.3 & 90.2 & 93.1 & 94.5 & 95.2 \\ & & & (0.5,0.5) & 92.1 & 97.4 & 92.6 & 92.7 & 94.6 & 94.6 \\ & & -0.25 & (0.5,0.05) & 96.2 & 98.1 & 93 & 92.9 & 94.1 & 94.6 \\ & & & (0.5,0.5) & 92.1 & 97.6 & 90.3 & 92.1 & 94.2 & 94.5 \\ & & 0.25 & (0.5,0.05) & 97.5 & 98.1 & 96.6 & 92.6 & 94.8 & 95.6 \\ & & & (0.5,0.5) & 94.2 & 98 & 89 & 91.7 & 94.3 & 94.3 \\ 300 & low & -0.5 & (0.5,0.05) & 97.3 & 98.3 & 95.7 & 94.2 & 94.9 & 95.7 \\ & & & (0.5,0.5) & 95.2 & 98.8 & 89.5 & 93.5 & 94.3 & 95 \\ & & -0.25 & (0.5,0.05) & 97.2 & 97.8 & 96.2 & 94.1 & 94.5 & 95.3 \\ & & & (0.5,0.5) & 95.9 & 98.8 & 90.4 & 93.6 & 94.8 & 95.3 \\ & & 0.25 & (0.5,0.05) & 97.1 & 97.3 & 97 & 93.8 & 94.8 & 95.7 \\ & & & (0.5,0.5) & 95.4 & 97.7 & 92.5 & 92.7 & 94.2 & 95 \\ high & -0.5 & (0.5,0.05) & 96.8 & 98.6 & 93.5 & 93.8 & 94.7 & 95.6 \\ & & & (0.5,0.5) & 93.9 & 98.4 & 90 & 93.9 & 94.8 & 95.6 \\ & & -0.25 & (0.5,0.05) & 97 & 98.1 & 95 & 93.4 & 94.2 & 95.1 \\ & & & (0.5,0.5) & 94.1 & 98.3 & 88.8 & 92.5 & 93.9 & 94.5 \\ & & 0.25 & (0.5,0.05) & 96.6 & 97.1 & 96.2 & 93.4 & 94.3 & 95.2 \\ & & & (0.5,0.5) & 95 & 97.8 & 90.8 & 92 & 94.3 & 95.1 \\ \hline \end{tabular} \end{table} Table 6: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual with pneumonia at time of hospital admission (univariate) for centered \(\text{Exp}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & -0.5 & (0.5,0.05) & 92.9 & 95.6 & 91.3 & 93.2 & 94.8 & 95.1 \\ & & & (0.5,0.5) & 96.8 & 98.1 & 95.5 & 94.7 & 96.2 & 95.8 \\ & & -0.25 & (0.5,0.05) & 94.7 & 96.4 & 92.2 & 92.4 & 94.2 & 93.9 \\ & & & (0.5,0.5) & 95.1 & 96.1 & 95.3 & 94.4 & 96.4 & 95.6 \\ & & 0.25 & (0.5,0.05) & 95.1 & 96.5 & 94.6 & 92.3 & 94.9 & 94.5 \\ & & & (0.5,0.5) & 91.9 & 94.9 & 93.1 & 92.7 & 95.9 & 94.7 \\ high & -0.5 & (0.5,0.05) & 94 & 95.9 & 95.2 & 93.6 & 95.5 & 95 \\ & & & (0.5,0.5) & 97 & 98.8 & 96 & 94.2 & 96.1 & 95 \\ & & -0.25 & (0.5,0.05) & 92.9 & 95.7 & 92.2 & 93.3 & 95.3 & 95.1 \\ & & & (0.5,0.5) & 96.5 & 97.9 & 95.6 & 94.7 & 96.9 & 95.9 \\ & & 0.25 & (0.5,0.05) & 94.4 & 96 & 93.1 & 91.5 & 94.5 & 93.8 \\ & & & (0.5,0.5) & 92.4 & 94.6 & 94.5 & 93.3 & 96.3 & 95.2 \\ 200 & low & -0.5 & (0.5,0.05) & 95.1 & 96.7 & 93.2 & 94 & 95.3 & 95.7 \\ & & & (0.5,0.5) & 93 & 95.9 & 95.2 & 93.7 & 95.1 & 95.2 \\ & & -0.25 & (0.5,0.05) & 94.8 & 95.9 & 93.8 & 93.4 & 94.7 & 95.2 \\ & & & (0.5,0.5) & 92.5 & 96.1 & 92.4 & 93.2 & 95 & 95.1 \\ & & 0.25 & (0.5,0.05) & 95.9 & 96.2 & 95.8 & 92.9 & 94.4 & 94.8 \\ & & & (0.5,0.5) & 93.8 & 95.9 & 91.3 & 92.8 & 95 & 94.9 \\ high & -0.5 & (0.5,0.05) & 93.2 & 96 & 90.2 & 93.1 & 94.6 & 95.1 \\ & & & (0.5,0.5) & 94.6 & 95.1 & 96.3 & 94 & 95.6 & 95.6 \\ & & -0.25 & (0.5,0.05) & 93.8 & 95.5 & 91.8 & 93.1 & 94.4 & 94.7 \\ & & & (0.5,0.5) & 92.4 & 94.9 & 94.9 & 93.3 & 95.2 & 95.1 \\ & & 0.25 & (0.5,0.05) & 95.5 & 96.1 & 94.8 & 93.2 & 95.4 & 95.4 \\ & & & (0.5,0.5) & 92.8 & 95.9 & 91.3 & 92.7 & 95.3 & 94.6 \\ 300 & low & -0.5 & (0.5,0.05) & 95.3 & 96.3 & 94.5 & 94.2 & 94.9 & 95.4 \\ & & & (0.5,0.5) & 93.6 & 96.9 & 92.5 & 93.2 & 94.7 & 95 \\ & & -0.25 & (0.5,0.05) & 95.2 & 95.9 & 94.5 & 93.8 & 94.6 & 95.1 \\ & & & (0.5,0.5) & 93.9 & 96.7 & 91.6 & 93.7 & 95 & 95.3 \\ & & 0.25 & (0.5,0.05) & 95.7 & 95.8 & 95.8 & 93.6 & 95 & 95.2 \\ & & & (0.5,0.5) & 93.5 & 95.5 & 91.5 & 92.8 & 94.9 & 94.6 \\ high & -0.5 & (0.5,0.05) & 94.7 & 96.3 & 92.3 & 93.6 & 94.7 & 95.2 \\ & & & (0.5,0.5) & 92.3 & 96.4 & 94.3 & 94.3 & 95.5 & 96 \\ & & -0.25 & (0.5,0.05) & 94.8 & 96 & 93.3 & 93.5 & 94.5 & 95 \\ & & & (0.5,0.5) & 92.4 & 96.5 & 91.9 & 93 & 94.5 & 94.6 \\ & & 0.25 & (0.5,0.05) & 95.2 & 95.6 & 94.9 & 93.2 & 94.5 & 94.8 \\ & & & (0.5,0.5) & 93.4 & 95.4 & 91.1 & 92.7 & 95 & 94.8 \\ \hline \end{tabular} \end{table} Table 7: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given an individual with pneumonia at time of hospital admission (univariate) for centered \(\text{Pois}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.6 & 95.2 & 98 & 93.4 & 98.6 & 97.7 \\ & & (0.05,0.05) & 94.2 & 94.6 & 95.1 & 95.9 & 98.9 & 98.9 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.4 & 95.1 & 97.7 & 92.9 & 98.5 & 97.4 \\ & & (0.05,0.05) & 94.6 & 95.1 & 95.5 & 95 & 98.5 & 98.4 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.4 & 95.1 & 97.5 & 93.4 & 98.6 & 97.5 \\ & & (0.05,0.05) & 93.4 & 94.2 & 94.5 & 94.1 & 98.7 & 98.5 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.8 & 94.4 & 96.7 & 93.3 & 98.5 & 98.1 \\ & & (0.05,0.05) & 94.1 & 95 & 95.1 & 96.1 & 98.6 & 99.4 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.4 & 94.4 & 96.1 & 93.5 & 98.6 & 98.3 \\ & & (0.05,0.05) & 94.1 & 94.6 & 95.4 & 95.8 & 98.5 & 98.9 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.3 & 94.5 & 96 & 93.9 & 98.6 & 98.2 \\ & & (0.05,0.05) & 93.6 & 94.4 & 94.6 & 95.6 & 98.9 & 99.1 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.3 & 95.1 & 97.1 & 93.2 & 97.7 & 95.7 \\ & & (0.05,0.05) & 94.5 & 94.8 & 95 & 94.2 & 98.1 & 97.3 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.9 & 94.4 & 97 & 92.9 & 97.8 & 95.7 \\ & & (0.05,0.05) & 93.8 & 94.2 & 94.1 & 93.5 & 97.9 & 96.7 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.8 & 94.6 & 96.7 & 93.6 & 98 & 96.2 \\ & & (0.05,0.05) & 93.7 & 94.3 & 94.2 & 93.3 & 98.1 & 97.1 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.3 & 94.2 & 95.9 & 93.3 & 97.8 & 96.6 \\ & & (0.05,0.05) & 94.3 & 94.6 & 94.8 & 93.4 & 98.2 & 97.8 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.4 & 93.6 & 95.4 & 92.9 & 97.6 & 96.2 \\ & & (0.05,0.05) & 93.5 & 94 & 93.9 & 93.2 & 98.1 & 97.6 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.2 & 94.4 & 95.4 & 93.1 & 97.6 & 96.3 \\ & & (0.05,0.05) & 93.9 & 94.2 & 94.1 & 93.6 & 98.4 & 97.9 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.8 & 94.8 & 96.5 & 93.5 & 97.1 & 95.2 \\ & & (0.05,0.05) & 93.7 & 94 & 93.9 & 93.5 & 97.8 & 96.2 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.3 & 95 & 96.6 & 94.1 & 97.6 & 95.6 \\ & & (0.05,0.05) & 94 & 94.2 & 94.3 & 93.8 & 97.8 & 96.2 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95 & 94.7 & 96.3 & 93.7 & 97 & 95.5 \\ & & (0.05,0.05) & 94.3 & 94.4 & 94.5 & 93.7 & 97.8 & 96.3 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.7 & 94.5 & 95.5 & 93.3 & 97.1 & 95.7 \\ & & (0.05,0.05) & 94.6 & 94.8 & 94.9 & 93.8 & 97.9 & 97 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.6 & 94.6 & 95.7 & 93.8 & 97.3 & 96 \\ & & (0.05,0.05) & 94.5 & 94.7 & 94.5 & 93.8 & 97.7 & 97 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.2 & 94.4 & 95.1 & 93.6 & 97.2 & 95.9 \\ & & (0.05,0.05) & 94.9 & 95 & 95 & 94 & 98 & 96.9 \\ \hline \end{tabular} \end{table} Table II.8: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 45 years old female individual without pneumonia at time of hospital admission (trivariate) for \(\mathcal{N}(0,1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 96.2 & 95.9 & 98.7 & 91.3 & 97.8 & 98.7 \\ & & & (0.05,0.05) & 94.7 & 94.8 & 95.8 & 94.6 & 98.2 & 99.5 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.8 & 95.6 & 98.2 & 90.8 & 97.2 & 98.6 \\ & & & (0.05,0.05) & 95.2 & 95.5 & 96.2 & 93.3 & 97.9 & 99 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 96 & 96 & 98.3 & 91.3 & 97.6 & 98.8 \\ & & (0.05,0.05) & 94.2 & 94.8 & 95.5 & 92.2 & 98 & 99.1 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.2 & 95 & 97.4 & 91.3 & 97.5 & 99 \\ & & (0.05,0.05) & 94.2 & 95 & 95.6 & 95.1 & 98.2 & 99.8 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.3 & 95.1 & 97.2 & 91.4 & 98 & 98.9 \\ & & (0.05,0.05) & 94.3 & 95 & 95.8 & 94.9 & 98 & 99.3 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.2 & 95.2 & 97 & 92.1 & 97.9 & 98.8 \\ & & (0.05,0.05) & 93.9 & 94.7 & 95.2 & 93.9 & 98.5 & 99.4 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.7 & 95.6 & 97.5 & 92.2 & 96.1 & 97.6 \\ & & (0.05,0.05) & 95.1 & 95.2 & 95.7 & 92.4 & 97.2 & 98.6 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.4 & 95.4 & 97.4 & 92 & 96.1 & 97.8 \\ & & (0.05,0.05) & 94.7 & 94.8 & 95.1 & 92 & 96.5 & 98 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.4 & 95.3 & 97.2 & 92.5 & 96.6 & 97.9 \\ & & (0.05,0.05) & 94.9 & 95.1 & 95.4 & 91.9 & 96.9 & 98.1 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95 & 95.1 & 96.7 & 92.2 & 96.3 & 98.1 \\ & & (0.05,0.05) & 95.1 & 95.2 & 95.4 & 92 & 97 & 98.6 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.2 & 94.4 & 96.1 & 91.6 & 96.1 & 97.7 \\ & & (0.05,0.05) & 94.3 & 94.5 & 94.6 & 91.9 & 96.9 & 98.5 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.9 & 95 & 96.2 & 92.5 & 96 & 97.8 \\ & & (0.05,0.05) & 94.6 & 94.9 & 94.9 & 92.2 & 97.4 & 98.7 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.3 & 95.4 & 96.8 & 92.9 & 95.3 & 97.1 \\ & & (0.05,0.05) & 94.4 & 94.6 & 94.8 & 92.5 & 96.3 & 98.1 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.6 & 95.6 & 96.9 & 93.6 & 95.6 & 97.8 \\ & & (0.05,0.05) & 95 & 95.1 & 95.3 & 92.8 & 96.3 & 98 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.4 & 95.4 & 96.5 & 93.5 & 95.6 & 97 \\ & & (0.05,0.05) & 95.1 & 95.2 & 95.3 & 92.9 & 96.5 & 98 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.1 & 95.2 & 96 & 92.9 & 95.5 & 97.3 \\ & & (0.05,0.05) & 95.1 & 95.3 & 95.5 & 92.8 & 96.7 & 98.2 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.1 & 95 & 96.1 & 93.2 & 95.6 & 97.5 \\ & & (0.05,0.05) & 95.2 & 95.2 & 95.3 & 92.4 & 96.6 & 98 \\ & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.9 & 95 & 95.8 & 93.2 & 95.8 & 97.2 \\ & & (0.05,0.05) & 95.6 & 95.7 & 95.8 & 93.3 & 96.8 & 98.1 \\ \hline \end{tabular} \end{table} Table 9: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 45 years old female individual without pneumonia at time of hospital admission (trivariate) for centered \(\text{Exp}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.8 & 95.3 & 98.2 & 92.6 & 98.2 & 98.4 \\ & & & (0.05,0.05) & 94.5 & 94.8 & 95.5 & 96 & 98.6 & 99.4 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.4 & 95 & 97.7 & 92 & 98 & 98.3 \\ & & & (0.05,0.05) & 94.8 & 95.1 & 95.6 & 94.8 & 98.2 & 98.9 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.2 & 95.1 & 97.6 & 92.3 & 98.2 & 98.3 \\ & & & (0.05,0.05) & 93.1 & 93.9 & 94.2 & 93.8 & 98.4 & 99 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.6 & 94.4 & 97 & 92.8 & 98 & 98.8 \\ & & & (0.05,0.05) & 94.1 & 95 & 95.3 & 96.7 & 98.5 & 99.8 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.3 & 94.3 & 96.4 & 92.8 & 98.4 & 98.9 \\ & & & (0.05,0.05) & 94.1 & 94.8 & 95.4 & 95.7 & 98.2 & 99.4 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.1 & 94.3 & 96 & 93.3 & 98.2 & 98.8 \\ & & & (0.05,0.05) & 93.2 & 94.4 & 94.6 & 95.2 & 98.6 & 99.4 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.1 & 95 & 97.1 & 92.5 & 97 & 97.1 \\ & & & (0.05,0.05) & 94.6 & 94.7 & 94.9 & 93.3 & 97.9 & 98.2 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.6 & 94.7 & 97 & 92.5 & 97.2 & 97.1 \\ & & (0.05,0.05) & 93.8 & 94.2 & 94.2 & 92.9 & 97.2 & 97.6 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.9 & 94.8 & 96.6 & 92.9 & 97.4 & 97.3 \\ & & & (0.05,0.05) & 93.8 & 94.2 & 94.3 & 92.6 & 97.5 & 97.8 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.4 & 94.3 & 96.1 & 92.9 & 97.3 & 97.6 \\ & & & (0.05,0.05) & 94.6 & 94.8 & 95 & 93 & 97.8 & 98.5 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.4 & 93.5 & 95.3 & 92.1 & 96.8 & 97.4 \\ & & & (0.05,0.05) & 93.6 & 94 & 94.1 & 92.9 & 97.6 & 98.3 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.2 & 94.4 & 95.3 & 92.8 & 96.9 & 97.3 \\ & & & (0.05,0.05) & 93.8 & 94.2 & 94.2 & 93.1 & 97.9 & 98.4 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95 & 94.9 & 96.5 & 93.2 & 96.3 & 96.4 \\ & & & (0.05,0.05) & 93.7 & 94 & 94 & 92.9 & 97.3 & 97.3 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.1 & 95.2 & 96.6 & 93.7 & 96.6 & 96.9 \\ & & & (0.05,0.05) & 94 & 94.1 & 94.3 & 93.3 & 97 & 97.2 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.9 & 94.9 & 96.2 & 93.6 & 96.2 & 96.5 \\ & & & (0.05,0.05) & 94.2 & 94.4 & 94.3 & 93.2 & 97.3 & 97.4 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.5 & 94.6 & 95.5 & 93.2 & 96.4 & 96.7 \\ & & & (0.05,0.05) & 94.7 & 94.8 & 94.8 & 93.5 & 97.4 & 98 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.4 & 94.4 & 95.5 & 93.4 & 96.5 & 96.8 \\ & & & (0.05,0.05) & 94.3 & 94.5 & 94.4 & 93 & 97.2 & 97.6 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94 & 94.3 & 95.1 & 93.3 & 96.5 & 96.9 \\ & & & (0.05,0.05) & 94.8 & 95.2 & 95.1 & 93.8 & 97.3 & 97.7 \\ \hline \end{tabular} \end{table} Table 10: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 45 years old female individual without pneumonia at time of hospital admission (trivariate) for centered \(\text{Pois}(1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.6 & 95.3 & 96.8 & 95.1 & 96.2 & 96.6 \\ & & & (0.05,0.05) & 98.7 & 98.5 & 98.2 & 96.4 & 97.4 & 97.8 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.1 & 95.1 & 96.4 & 94.2 & 95.2 & 95.8 \\ & & & (0.05,0.05) & 97.3 & 96.9 & 97.2 & 95.9 & 96.8 & 97.1 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.5 & 95.6 & 95.3 & 93.7 & 94.8 & 95.2 \\ & & & (0.05,0.05) & 95.5 & 95.6 & 97.7 & 95.2 & 96.5 & 97 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 96.5 & 96.9 & 96.1 & 95.9 & 96.3 & 96.7 \\ & & & (0.05,0.05) & 98.3 & 99 & 98.5 & 95.7 & 96.8 & 97.5 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.6 & 95.8 & 97.1 & 95.4 & 96.4 & 96.9 \\ & & & (0.05,0.05) & 98.1 & 98.2 & 98.3 & 96.5 & 97.5 & 98 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.3 & 95.4 & 96.7 & 94.6 & 95.7 & 96.1 \\ & & & (0.05,0.05) & 96.3 & 96.2 & 97.9 & 95.7 & 96.8 & 97.7 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.2 & 94.7 & 95.4 & 94.8 & 95.2 & 95.7 \\ & & & (0.05,0.05) & 97.7 & 95.6 & 98.2 & 95.6 & 96.5 & 96.8 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.3 & 94 & 93.4 & 93.5 & 94.4 & 94.8 \\ & & & (0.05,0.05) & 96.2 & 95.1 & 97.2 & 94.7 & 95.7 & 96 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95 & 95.7 & 94.8 & 93.5 & 94.3 & 94.6 \\ & & & (0.05,0.05) & 93.9 & 94.7 & 95.4 & 93.9 & 95.1 & 95.7 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.9 & 94.6 & 96 & 94.3 & 94.9 & 95.5 \\ & & & (0.05,0.05) & 98.6 & 97.3 & 98.6 & 96 & 96.7 & 97 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.9 & 94.5 & 95.2 & 93.8 & 94.3 & 95.1 \\ & & & (0.05,0.05) & 97.4 & 95.5 & 97.9 & 95.9 & 96.7 & 97 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.5 & 95 & 94.4 & 94.2 & 94.7 & 95.6 \\ & & & (0.05,0.05) & 94.3 & 94.4 & 95.8 & 94.3 & 95.4 & 96 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.2 & 94.8 & 94.5 & 93.9 & 94.4 & 94.9 \\ & & & (0.05,0.05) & 96 & 94.4 & 98.2 & 95.1 & 95.8 & 96.3 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.9 & 94.4 & 93.5 & 93.7 & 94.4 & 94.8 \\ & & & (0.05,0.05) & 94.8 & 94.5 & 97 & 94.2 & 95 & 95.6 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.4 & 95 & 94.4 & 94.4 & 94.6 & 95 \\ & & & (0.05,0.05) & 94.3 & 94.9 & 94.4 & 93.8 & 94.9 & 95.3 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.1 & 94.3 & 95 & 93.7 & 94.2 & 94.9 \\ & & & (0.05,0.05) & 97.5 & 95.1 & 98.5 & 95.5 & 96 & 96.5 \\ & & (0.05,0.05) & 93.4 & 94.1 & 93.8 & 93.8 & 94.2 & 94.9 \\ & & & (0.05,0.05) & 95.8 & 94.6 & 97.8 & 94.8 & 95.6 & 96.3 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 94.7 & 95.3 & 94.2 & 94.2 & 94.6 & 95.3 \\ & & & (0.05,0.05) & 93.7 & 94.3 & 94.9 & 94.2 & 95 & 95.5 \\ \hline \end{tabular} \end{table} Table 11: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 70 years old male individual with pneumonia at time of hospital admission (trivariate) for \(\mathcal{N}(0,1)\) multiplier distribution. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.4 & 97.9 & 97 & 93.3 & 94.8 & 95.8 \\ & & (0.05,0.05) & 99 & 99.2 & 97.8 & 95.6 & 96.8 & 97.5 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95 & 98 & 95 & 92 & 94 & 95.2 \\ & & (0.05,0.05) & 98.2 & 98.1 & 96.6 & 94.6 & 96 & 97.1 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 96.5 & 98.5 & 94.8 & 92 & 93.7 & 95 \\ & & (0.05,0.05) & 95.5 & 97.3 & 97.2 & 93.4 & 95.8 & 96.9 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 97.7 & 98.5 & 96.1 & 94.2 & 95.7 & 96.2 \\ & & (0.05,0.05) & 98.8 & 99.4 & 98.4 & 94.7 & 96.4 & 97.6 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 96.1 & 97.9 & 97.3 & 93.7 & 95.2 & 96.3 \\ & & (0.05,0.05) & 98.6 & 99 & 98.1 & 95.4 & 97.1 & 98.1 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95 & 97.8 & 94.9 & 92.9 & 94.6 & 95.6 \\ & & (0.05,0.05) & 96.9 & 97.4 & 97.7 & 94.4 & 96.1 & 97.7 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 96.1 & 98.6 & 92.3 & 94.4 & 95 & 95.8 \\ & & (0.05,0.05) & 95.7 & 97.7 & 95.8 & 94.1 & 95.2 & 96.3 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 96.6 & 98.6 & 92.9 & 93.5 & 94 & 95.1 \\ & & (0.05,0.05) & 94.2 & 97.6 & 95.1 & 93.1 & 94.2 & 95.6 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 97.7 & 98.1 & 96.9 & 93 & 93.8 & 95 \\ & & (0.05,0.05) & 94.8 & 97.6 & 92.8 & 92.6 & 93.9 & 95.5 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.1 & 98.1 & 93.6 & 93.1 & 93.7 & 94.9 \\ & & (0.05,0.05) & 98.2 & 98.4 & 97.8 & 94.9 & 95.9 & 96.6 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.2 & 98 & 92.8 & 93 & 93.7 & 94.7 \\ & & (0.05,0.05) & 96 & 97.6 & 96.4 & 94.4 & 95.6 & 96.4 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 96.8 & 98.2 & 94 & 93.6 & 94.2 & 95.1 \\ & & (0.05,0.05) & 94 & 97.4 & 94.2 & 92.7 & 93.9 & 95.4 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 97.2 & 98.5 & 93.9 & 94.2 & 94.6 & 95.4 \\ & & (0.05,0.05) & 93.4 & 98.1 & 93.3 & 94.1 & 94.6 & 95.7 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 97.2 & 98.1 & 95 & 93.9 & 94.3 & 95.4 \\ & & (0.05,0.05) & 94.5 & 98.5 & 92.7 & 93.7 & 94.4 & 95.5 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 97.2 & 97.6 & 97 & 94 & 94.6 & 95.3 \\ & & (0.05,0.05) & 95.9 & 97.9 & 92.8 & 93.5 & 94.4 & 95.5 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.7 & 98.5 & 90.9 & 94 & 94.2 & 94.9 \\ & & (0.05,0.05) & 94.3 & 97.7 & 95.7 & 94.2 & 94.8 & 95.5 \\ & & (0.05,0.05) & 95.9 & 98.1 & 91.9 & 93.6 & 94 & 94.8 \\ & & (0.05,0.05) & 93.3 & 97.8 & 93.8 & 93.7 & 94.4 & 95.2 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 97 & 97.8 & 95.5 & 94.1 & 94.4 & 95.1 \\ & & (0.05,0.05) & 94.7 & 97.7 & 92.4 & 93.2 & 94.2 & 95.1 \\ \hline \end{tabular} \end{table} Table II.12: _Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 70 years old male individual with pneumonia at time of hospital admission (trivariate) for centered \(\text{Exp}(1)\) multiplier distribution._ \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & & & \(q_{0}^{*}\) & \(q_{1}^{*}\) & \(q_{2}^{*}\) & \(\tilde{q}_{0}^{*}\) & \(\tilde{q}_{1}^{*}\) & \(\tilde{q}_{2}^{*}\) \\ n & cens. 
& \(\beta_{0}\) & & \((\alpha_{010},\alpha_{020})\) & & & & & \\ 100 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95.6 & 96.3 & 96.4 & 94.5 & 95.7 & 96.8 \\ & & (0.05,0.05) & 98.6 & 99.2 & 97.7 & 96.3 & 97.1 & 97.9 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.1 & 95.9 & 95.2 & 93.2 & 94.6 & 96.1 \\ & & (0.05,0.05) & 97.6 & 97.9 & 96.6 & 95.3 & 96.6 & 97.4 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95 & 96.6 & 94.4 & 93 & 94.3 & 95.6 \\ & & (0.05,0.05) & 95.5 & 96.2 & 97.1 & 94.7 & 96.2 & 97.3 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 97.1 & 97.4 & 95.7 & 95.1 & 96 & 96.7 \\ & & (0.05,0.05) & 98.3 & 99.2 & 98.2 & 95.1 & 96.5 & 97.7 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 95.9 & 96.5 & 96.9 & 95 & 96 & 96.9 \\ & & (0.05,0.05) & 98.2 & 98.9 & 98 & 96.1 & 97.2 & 98.2 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 93.8 & 96 & 95.3 & 93.8 & 95.4 & 96.4 \\ & & (0.05,0.05) & 96.6 & 97 & 97.4 & 95.3 & 96.6 & 97.9 \\ 200 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 94.3 & 96.5 & 92.6 & 94.2 & 95 & 96 \\ & & (0.05,0.05) & 96.8 & 96.1 & 96.9 & 94.9 & 96.1 & 96.9 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94 & 96.1 & 92 & 93.1 & 94.1 & 95.5 \\ & & (0.05,0.05) & 94.8 & 96 & 96 & 93.9 & 95 & 96.4 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 96.1 & 96.8 & 95.5 & 93.3 & 94 & 95 \\ & & (0.05,0.05) & 93 & 95.8 & 93.1 & 93.2 & 94.5 & 96 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 93.3 & 95.8 & 94.5 & 93.9 & 94.5 & 95.7 \\ & & (0.05,0.05) & 98.4 & 98.1 & 98.1 & 95.6 & 96.4 & 97.1 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 93.3 & 96 & 93 & 93.5 & 94.2 & 95.4 \\ & & (0.05,0.05) & 96.9 & 96.1 & 96.9 & 95.5 & 96.3 & 97.1 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.2 & 96.5 & 93.2 & 93.9 & 94.5 & 95.7 \\ & & (0.05,0.05) & 93.4 & 95.4 & 94.6 & 93.6 & 94.8 & 96.2 \\ 300 & low & (-0.05,-0.5,-0.05) & (0.08,0.008) & 95 & 96.4 & 92.7 & 93.9 & 94.3 & 95.2 \\ & & (0.05,0.05) & 93.4 & 95.5 & 95.3 & 94.6 & 95.1 & 96.2 \\ & & (-0.05,-0.25,-0.05) & (0.08,0.008) & 94.7 & 96 & 93.4 & 93.8 & 94.3 & 95.3 \\ & & (0.05,0.05) & 93.2 & 96.2 & 93.7 & 94 & 94.5 & 95.8 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.5 & 95.9 & 95.4 & 93.8 & 94.5 & 95.3 \\ & & (0.05,0.05) & 94.3 & 96 & 92.8 & 93.8 & 94.7 & 95.7 \\ high & (-0.05,-0.5,-0.05) & (0.08,0.008) & 93.3 & 96.1 & 91.5 & 93.9 & 94.1 & 95.2 \\ & & (0.05,0.05) & 96.2 & 95.7 & 97.2 & 94.8 & 95.3 & 96.4 \\ & & (0.05,0.05) & 93.8 & 95.7 & 91.7 & 93.7 & 94.3 & 95 \\ & & (0.05,0.05) & 93.3 & 95.7 & 95.8 & 94.2 & 94.8 & 96.2 \\ & & (-0.05,0.25,-0.05) & (0.08,0.008) & 95.4 & 96.1 & 94.2 & 94.1 & 94.6 & 95.3 \\ & & (0.05,0.05) & 93.1 & 95.8 & 92.8 & 93.7 & 94.5 & 95.8 \\ \hline \end{tabular} \end{table} Table II.13: Simulated coverage probabilities (in %) of various 95% confidence bands for the cumulative incidence function given a 70 years old male individual with pneumonia at time of hospital admission (trivariate) for centered \(\text{Pois}(1)\) multiplier distribution.
2306.12596
A Hierarchical Approach to exploiting Multiple Datasets from TalkBank
TalkBank is an online database that facilitates the sharing of linguistics research data. However, TalkBank's existing API has limited data filtering and batch processing capabilities. To overcome these limitations, this paper introduces a pipeline framework that employs a hierarchical search approach, enabling efficient complex data selection. This approach involves a quick preliminary screening of the relevant corpora that a researcher may need, followed by an in-depth search for target data based on specific criteria. The identified files are then indexed, providing easier access for future analysis. Furthermore, the paper demonstrates how data from different studies curated with the framework can be integrated by standardizing and cleaning metadata, allowing researchers to extract insights from a large, integrated dataset. While designed for TalkBank, the framework can also be adapted to process data from other open-science platforms.
Man Ho Wong
2023-06-21T22:37:51Z
http://arxiv.org/abs/2306.12596v1
# A Hierarchical Approach to exploiting Multiple Datasets from TalkBank

###### Abstract

TalkBank is an online database that facilitates the sharing of linguistics research data. However, TalkBank's existing API has limited data filtering and batch processing capabilities. To overcome these limitations, this paper introduces a pipeline framework that employs a hierarchical search approach, enabling efficient complex data selection. This approach involves a quick preliminary screening of the relevant corpora that a researcher may need, followed by an in-depth search for target data based on specific criteria. The identified files are then indexed, providing easier access for future analysis. Furthermore, the paper demonstrates how data from different studies curated with the framework can be integrated by standardizing and cleaning metadata, allowing researchers to extract insights from a large, integrated dataset. While designed for TalkBank, the framework can also be adapted to process data from other open-science platforms.

Data pipeline · Data mining · TalkBank · CHILDES · Large dataset · Applied linguistics · Corpus linguistics · Natural language processing · Open science

## 1 Introduction

Recent developments in open-science platforms have not only greatly increased the transparency of scientific research but also enabled global collaboration. These platforms provide access to vast, publicly available datasets, offering the opportunity to gain valuable insights that can only be extracted from a larger sample size. TalkBank [21], a well-known example of such platforms, is a publicly accessible database consisting of datasets contributed by researchers from around the world. The platform covers a diverse range of linguistics topics, including language acquisition, speech-language pathology, and sociolinguistics. While most datasets in TalkBank have been used in published studies, there is likely a substantial amount of hidden information yet to be uncovered from the data. In addition, data from different studies can potentially be integrated into one large dataset to provide a larger sample size, enhancing data reliability and facilitating more robust analyses.

Currently, TalkBank's API [18] allows researchers to access the database using custom code. However, the API's data filtering capabilities are limited, hindering comprehensive data exploration. Moreover, the API only allows downloading individual files separately, posing challenges for efficient batch processing of large datasets (Table 1). To overcome the aforementioned limitations of the API, we developed a scalable framework for building pipelines that bypass the API. In the following sections, we will explain the architecture of the framework by building a pipeline to process child speech data from the Child Language Data Exchange System (CHILDES) [17] in TalkBank. The pipeline built in this paper is available at the following GitHub repository (talkbank_pipeline_tutorial.ipynb): github.com/manhowong/talkbank-pipeline

For those interested, a ready-to-use pipeline is also available for adaptation to other datasets in TalkBank. (See talkbank_pipeline.ipynb in the above repository.)

## 2 Requirements

### Data sourcing

**Source.** The pipeline in this paper retrieves data from the Child Language Data Exchange System (CHILDES) [Serratrice, 2000] in TalkBank. Human-annotated recording transcripts in CHAT format were downloaded from CHILDES using the pipeline.
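To make the data sourcing concrete, here is a minimal sketch of loading one such corpus with PyLangAcq and peeking at the header metadata of its CHAT files. The corpus URL is just an example, and the exact layout of the header dictionaries is an assumption based on PyLangAcq's documented behavior, not code from the pipeline itself:

```python
import pylangacq

# Read a CHILDES corpus directly from its zip URL (example corpus).
reader = pylangacq.read_chat("https://childes.talkbank.org/data/Eng-NA/Brown.zip")

# Each CHAT file contributes one header dict with participant metadata.
for header in reader.headers()[:3]:
    chi = header.get("Participants", {}).get("CHI", {})
    print(chi.get("age"), chi.get("group"), chi.get("ses"))
```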
**Corpora.** The framework was demonstrated using North American English as the target language due to its extensive, publicly accessible data. There are 47 North American English corpora in CHILDES. Transcripts of child speech were collected from the following 13 corpora in CHILDES: Bates, Bernstein, Brown, Clark, Demetras2, Gleason, HSLLD, Hall, Hicks, Nelson, Newman-Ratner, Post, and VanHouten. These corpora were selected by the pipeline based on a set of criteria (see talkbank_pipeline_tutorial.ipynb for details).

### System

The initial steps of the framework (scanning download URLs, screening datasets, and downloading data) require a stable internet connection. After the initial steps, data processing is done locally, and sufficient storage is required to download the data.

### Software

The code was written in Python 3.9.7. For easier demonstration, the scripts are organized into Jupyter notebooks. To run the code, a Jupyter Notebook interface is required. You can also run the code on Google Colab.

**Python packages.** PyLangAcq (0.16.0) [Lee et al., 2016] is required to read the CHAT files. In addition, the following packages are also required: Pandas, bs4, NumPy, Requests, Urllib, Zipfile, and tqdm (optional, for showing a progress bar while running).

## 3 Framework overview

The framework starts with defining the potential datasets and identifying the source URLs (Figure 1). Users need to specify the datasets they want to include in the search for the data they will need. For more efficient data processing, the search space is narrowed down progressively using a hierarchical searching strategy where each level of the hierarchy employs more stringent criteria.

At the first level of the hierarchy, the framework performs a preliminary screening of relevant datasets that may contain the desired data. This initial screening neither employs stringent criteria nor inspects every file (see the next section for implementation), as the goal here is just to reduce the search space. The identified corpora are then downloaded as a single dataset to the local drive. This compiled dataset facilitates data access during subsequent stages of the framework and data analysis, eliminating the need to repeatedly access the remote data source.

At the second level of the hierarchy, the framework performs an in-depth search with more specific criteria to further refine the search space. In addition, every file is inspected throughout the search. Files that match the specified criteria are indexed and stored in a table. This index table serves as a reference for locating the target data within the large dataset compiled in the previous step. Using the index table, researchers can quickly access the target data required for analysis without repeating the search on the remote data source.

## 4 Demonstration: Building a pipeline to retrieve child speech data from TalkBank

In this demonstration, our goal is to retrieve the data containing child speech or child-directed speech recorded from typically developing children aged 0 to 72 months. In addition, we are only interested in files that contain information about the participant's socio-economic status. Such complex data selection criteria cannot be implemented with TalkBank's API. To implement the above criteria, we will build a Python pipeline using the framework presented in this paper.
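To give a flavor of how such criteria translate into plain Python once a file's header metadata is in hand, here is a minimal sketch. The header layout, the parsed field age_in_months, and the helper name are assumptions for exposition, not the pipeline's actual code (see the repository for that):

```python
def matches_criteria(header):
    """Hypothetical predicate applied to one CHAT file's header metadata."""
    participants = header.get("Participants", {})
    chi = participants.get("CHI")
    if chi is None:                              # must involve a child participant
        return False
    group = chi.get("group") or ""               # label conventions vary by corpus
    if group not in ("", "TD", "typical"):       # typically developing children only
        return False
    age = chi.get("age_in_months")               # assumed pre-parsed from e.g. '1;06.00'
    if age is None or not 0 <= age <= 72:        # age 0 to 72 months
        return False
    mot = participants.get("MOT", {})            # require some socio-economic info
    return bool(chi.get("ses") or mot.get("ses") or mot.get("education"))
```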
We will use the pipeline to search for the files we need and compile a large dataset by integrating data from multiple North American English (Eng-NA) corpora in the CHILDES collection of TalkBank. The compiled dataset will be indexed with metadata from file headers. The pipeline includes the following major steps:

1. Define potential dataset(s) and identify source URLs
2. Screen for relevant corpora
3. Download relevant corpora to the local drive
4. Search for target files and index them
5. Standardize header labels
6. Add participant identifier (optional)

\begin{table} \begin{tabular}{l l l} \hline \hline & Framework in this paper & TalkBank’s API \\ \hline Data filtering & Fully customizable & Available query terms only \\ File download & Entire corpus/ dataset & Single CHAT file \\ Data extraction from files & Supported via PyLangAcq package & Supported \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between data retrieval using TalkBank’s API and the framework presented in this paper.

### Define potential dataset(s) and identify source URLs

TalkBank maintains a well-organized directory structure, where each collection, such as CHILDES or ASDBank, has its dedicated directory. Within each collection, the datasets are stored under the "data" subdirectory:

    TalkBank
    |---- Collection 1
    |---- Collection 2
    |---- Collection 3 (e.g. CHILDES)
          |---- Data
                |---- dataset 1
                |---- dataset 2
                |---- dataset 3 (e.g. Eng-NA)
                      |---- Corpus1.zip
                      |---- Corpus2.zip
                      |---- Corpus3.zip

For example, the North American English dataset (Eng-NA) in the CHILDES collection can be accessed through the following URL, where the subdomain corresponds to the collection's name (i.e. CHILDES): [https://childes.talkbank.org/data/Eng-NA](https://childes.talkbank.org/data/Eng-NA)

Corpus data are stored as zip files within each dataset. To get the download URLs for the dataset(s), you will first need to specify the TalkBank collection you are interested in, such as CHILDES. You may also provide the name(s) of the specific dataset(s) you want to download. You can visit the TalkBank Browser to look up the directory name of a collection or a dataset: select the collection in the pull-down menu and navigate to the dataset you are interested in. The name of the collection and/or dataset is indicated in the directory path under the pull-down menu. Alternatively, go to the collection's data page, e.g. [https://childes.talkbank.org/data/](https://childes.talkbank.org/data/) for CHILDES.

The pipeline scans the collection's data subdirectory and automatically fetches the URLs for all the downloadable zip files you will need for the next step. Note that this pipeline retrieves download URLs for public collections only. For password-protected collections, please follow the instructions in TalkBank's documentation.

### Screen for relevant corpora

For cases where not all the corpora found in the potential datasets are needed, an optional step can be taken to screen for relevant corpora based on the CHAT files required. This step not only reduces the total download size but also narrows the search space for target CHAT files down the line. This step is especially important if you have a large set of potential datasets.

Figure 1: Framework architecture. A hierarchical approach is implemented to narrow down the search space progressively with increasingly stringent criteria at each level. Relevant corpora are downloaded after preliminary screening. Target files are indexed at the last level.
The compiled dataset can be accessed offline via the index table.

The pipeline employs the PyLangAcq package (Lee et al., 2016) to scan the metadata in the header of each CHAT file to check whether the file satisfies the screening criteria. For efficient screening, no stringent criteria will be used in this step, but rather simple criteria that are sufficient to narrow down the search space. In addition, the pipeline does not inspect every CHAT file in a corpus thoroughly but instead moves on to the next corpus as soon as it finds a file satisfying the screening criteria. This approach significantly improves screening efficiency. The download URLs for the corpora containing at least one file matching the screening criteria will be returned.

Screening criteria can contain any metadata available in the file header. Listing 1 shows an example of an if-conditional statement screening for CHAT files that involve a child participant ('CHI') and contain info about either the child's socio-economic status (SES), the mother's ('MOT') SES, or the mother's education. In this example, the pipeline will return a list of download URLs for the corpora containing at least one file that satisfies the if-condition.

### Download relevant corpora to the local drive

The zip file of every corpus found in the previous step will be downloaded and extracted to the local drive. Each zip file contains all the CHAT files in a corpus. Instead of downloading the entire corpus, individual CHAT files can be accessed through TalkBank's official API so that one can download only the CHAT files they need. However, in most cases, sending a request and downloading one CHAT file at a time takes much longer than downloading the entire corpus.

### Search for target files and index them

After downloading the data, the pipeline will inspect the metadata of every CHAT file to search for files that meet the user-defined criteria. Similar to Step 4.2, the criteria can contain any metadata available in the file header. The output of the pipeline, a table indexing the target CHAT files based on their metadata, is generated in this step (e.g. Table 2). This index table will be used to select CHAT files in future data analysis. Each row of the index table corresponds to a CHAT file (i.e. an entry), and each column represents a file header field. Available file header fields are documented in the CHAT manual. Note that not all files contain the same set of header fields. To determine what header fields are available in a file, you can either view the header of the file using the TalkBank Browser or download the file and view it on your computer.

### Standardize header labels

Since the target CHAT files may come from different studies, CHAT files recorded under the same experimental conditions might be indexed differently with different header values (i.e. "header labels"). For example, either TD or typical could be used to label files from typically developing children (Table 3). In addition, since metadata might be entered by researchers manually, human errors such as typos or missing data could be found. To ensure data integrity and consistency, the index table undergoes a cleaning process before being saved as a Python pickle file for future use. The cleaning process involves standardizing header values, removing any incorrect values, handling missing values, and resolving any inconsistencies or discrepancies within the table. To clean the index table, you may want to start by inspecting the header labels first.
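As a concrete illustration of this inspection step, here is a minimal sketch assuming the index table is a pandas DataFrame; the get_labels helper shown is a plausible stand-in for the function referenced below, not necessarily its actual implementation, and the 'group' column is assumed for this example:

```python
import pandas as pd

def get_labels(index_table: pd.DataFrame, column: str):
    """Return every distinct header label used in one column of the index table."""
    return sorted(index_table[column].dropna().unique())

# Toy index table standing in for the output of step 4.4.
index_table = pd.DataFrame({"group": ["TD", "typical", None, "TD"]})
print(get_labels(index_table, "group"))              # ['TD', 'typical']

# Standardize equivalent labels to one canonical value.
index_table["group"] = index_table["group"].replace({"typical": "TD"})
```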
To get all the labels used for a header field (corresponding to a column in the table), you can call the function get_labels(<COLUMNNAME>). If you are not sure what each of the labels represents, you can find the information about the experimental conditions as well as the study design on the homepage of the corpus to which the CHAT file belongs. The Jupyter Notebook talkbank_pipeline_tutorial.ipynb demonstrates how header labels can be standardized and how missing values can be handled according to the documentation of the corpus involved.

### Add participant identifier (optional)

In each corpus, participants can be identified by their names or IDs within the corpus. However, in situations where data from different corpora are pooled together, there is no unique identifier available for participants across multiple corpora. For example, a participant named "Adam" can be found in two corpora (Brown and VanHouten) in CHILDES, though they are clearly different people. In addition, some participants may be associated with multiple CHAT files, such as those involved in longitudinal studies where repeated observations are made with the same participants over a period of time. A unique identifier becomes essential when conducting individual-based analyses or when tracking participant-related information across different datasets or studies. One straightforward approach to creating a participant identifier is to combine the participant's name with the name of the corpus to which the CHAT file belongs. By concatenating these two pieces of information, a unique identifier can be generated for each participant. This combined identifier ensures that participants with the same name in different corpora or with multiple CHAT files can be distinguished and analyzed individually.

## 5 Summary

This paper presents a novel framework that addresses the limitations of TalkBank's API in data selection and batch processing. The framework enables customizable data filtering, allowing researchers to use complex criteria for retrieving relevant data from multiple corpora/datasets. Using a hierarchical search approach, the framework reduces the search space sequentially and makes data processing more efficient. Moreover, the framework generates an index table that facilitates future data analysis by eliminating the need to search the same datasets repeatedly. While the framework is demonstrated using the CHILDES data in TalkBank as an example, it is applicable to other remote databases as well. Written in Python, the framework can readily be adapted to the RAPIDS ecosystem for high-performance GPU processing of big data. The framework presented in this paper opens up possibilities for researchers to efficiently exploit multiple datasets from open-science platforms like TalkBank and similar resources.

## 6 Acknowledgement

I would like to thank Prof. Na-Rae Han (Robert Henderson Language Media Center / Department of Linguistics, University of Pittsburgh) for providing valuable input during the development of this project.
2302.00465
The Spectral Gap and Low-Energy Spectrum in Mean-Field Quantum Spin Systems
A semiclassical analysis based on spin-coherent states is used to establish a classification and formulae for the spectral gap of mean-field spin Hamiltonians. For gapped systems we provide a full description of the low-energy spectra based on a second-order approximation to the semiclassical Hamiltonian hence justifying fluctuation theory at zero temperature for this case. We also point out a shift caused by the spherical geometry in these second-order approximations.
Chokri Manai, Simone Warzel
2023-02-01T14:25:39Z
http://arxiv.org/abs/2302.00465v4
# The Spectral Gap and Low-Energy Spectrum in Mean-Field Quantum Spin Systems

###### Abstract

A semiclassical analysis based on spin-coherent states is used to establish a classification and formulae for the spectral gap of mean-field spin Hamiltonians. For gapped systems we provide a full description of the low-energy spectra based on a second-order approximation to the semiclassical Hamiltonian, hence justifying fluctuation theory at zero temperature for this case. We also point out a shift caused by the spherical geometry in these second-order approximations.

Keywords: semiclassical analysis, spin coherent states, spectral analysis, mean-field models

###### Contents

* 1 Introduction
  * 1.1 Semiclassics for the free energy
  * 1.2 Spectral gap from semiclassics plus fluctuations
* 2 Low energy spectra for operators with regular symbols
  * 2.1 Assumptions
  * 2.2 The case of a unique minimum
  * 2.3 The case of a finite number of minima
* 3 Semiclassical analysis
  * 3.1 Limit of fluctuation operators
  * 3.2 Proof of Theorem 2.3
  * 3.3 Proof of Theorem 2.4
  * 3.4 Proof of Theorem 2.5
* A Miscellaneous on spin-coherent states
  * A.1 Semiclassical estimates for the states
  * A.2 Semiclassical estimates for the symbols
  * A.3 Semiclassics for the free energy

## 1 Introduction

Mean-field quantum spin systems are ubiquitous in effective descriptions of a variety of phenomena. A popular example is the family of Lipkin-Meshkov-Glick (LMG) Hamiltonians, which were originally conceived in [21, 33, 35] to explain shape transitions in nuclei, but also feature in descriptions of Bosons in a double well and quantum-spin tunnelling in molecular magnets [5, 15]. This family includes the quantum Curie-Weiss (CW) Hamiltonian, whose simplicity continues to draw the attention of many communities [9, 11, 12, 14, 30, 45, 50]. In particular, in [4] such models were used to test conjectures related to quantum annealing, for which information about the spectral gap is crucial. Most mean-field Hamiltonians discussed in the literature are defined in terms of a symmetric polynomial \(P:\mathbb{R}^{3}\to\mathbb{R}\) of fixed degree, evaluated at the three components of the total spin-vector \(\mathbf{S}=\sum_{n=1}^{N}\mathbf{S}(n)\): \[H=N\ \mathrm{P}\Big{(}\tfrac{2}{N}\mathbf{S}\Big{)}. \tag{1.1}\] For a system of \(N\) interacting qubits, the Hilbert space on which these operators act is the tensor product \(\mathcal{H}_{N}=\bigotimes_{n=1}^{N}\mathbb{C}^{2}\). The vectors \(\mathbf{S}(n)=\mathbbm{1}\otimes\cdots\otimes\mathbf{S}\otimes\cdots\otimes\mathbbm{1}\) stand for the natural lift of the spin vector \(\mathbf{S}=(S_{x},S_{y},S_{z})\) to the \(n\)-th component of the tensor product. On each copy of \(\mathbb{C}^{2}\) the spin vector coincides with the three generators of \(SU(2)\): \[S_{x}=\frac{1}{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad S_{y}=\frac{1}{2}\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\quad S_{z}=\frac{1}{2}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.\] The scaling of the spin in (1.1) ensures that the operator is norm-bounded by one, \(\big{\|}\tfrac{2}{N}S_{\xi}\big{\|}\leq 1\) for all \(\xi=x,y,z\). Moreover, the prefactor \(N\) forces the energy \(H\) to be extensive. For example, the LMG model is given by \(P(\mathbf{m})=-\alpha m_{2}^{2}-\beta m_{3}^{2}-\gamma m_{1}\) with \(\alpha,\beta,\gamma\in\mathbb{R}\). The special case \(\alpha=0\), \(\beta=1\) corresponds to the quantum CW model with \(\gamma\) playing the role of the transversal, external magnetic field.
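For small \(N\), these objects are easy to realize numerically. The following Python sketch (an illustration added here, not code from the paper) builds the lifted total-spin components by Kronecker products, checks the norm bound \(\big\|\tfrac{2}{N}S_{\xi}\big\|\leq 1\), and assembles the quantum CW Hamiltonian in the rotated convention \(P(\mathbf{m})=-m_{1}^{2}-\gamma m_{3}\) that is used in Section 1.2 below.

```python
import numpy as np
from functools import reduce

# The three generators of SU(2) on C^2 listed above.
Sx = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2
Sy = np.array([[0.0, -1.0j], [1.0j, 0.0]]) / 2
Sz = np.array([[1.0, 0.0], [0.0, -1.0]]) / 2

def total_spin(op, N):
    """S_xi = sum over n of 1 x ... x S_xi(n) x ... x 1 on (C^2)^{(x) N}."""
    summands = []
    for n in range(N):
        factors = [np.eye(2)] * N
        factors[n] = op
        summands.append(reduce(np.kron, factors))
    return sum(summands)

N, gamma = 8, 3.0
SX, SZ = total_spin(Sx, N), total_spin(Sz, N)
print(np.linalg.norm(2 * SZ / N, 2))  # the norm bound: ||2 S_z / N|| = 1

# Quantum CW Hamiltonian H = N P(2S/N) with P(m) = -m_1^2 - gamma m_3:
H = -4 * SX @ SX / N - 2 * gamma * SZ
print(np.linalg.eigvalsh(H)[0] / N)   # ~ min P = -gamma for large N
```

Already for \(N=8\), the ground-state energy per qubit lies close to the classical minimum \(\min_{B_{1}}P=-\gamma\), in line with the semiclassical analysis discussed next.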
Since \(H\) is a function of the total spin \(\mathbf{S}\), it is block diagonal with respect to the decomposition of the tensor-product Hilbert space according to the irreducible representations of the total spin: \[\mathcal{H}_{N}\equiv\bigoplus_{J=\frac{N}{2}-\lfloor\frac{N}{2}\rfloor}^{N/2}\bigoplus_{\alpha=1}^{M_{N,J}}\ \mathbb{C}^{2J+1},\qquad M_{N,J}=\frac{2J+1}{N+1}\binom{N+1}{\frac{N}{2}+J+1}. \tag{1.2}\] The total spin \(J\) of \(N\) qubits can take any value from \(N/2\) down in integers to either \(1/2\) if \(N\) is odd or \(0\) if \(N\) is even. The degeneracy of the representation of spin \(J\) in this decomposition is \(M_{N,J}\) [38]. On each block \((J,\alpha)\), the Hamiltonian (1.1) then acts as the given polynomial of the generators of the irreducible representation of \(SU(2)\) on \(\mathbb{C}^{2J+1}\). The analysis of such systems in the limit of large spin quantum number \(J\) is known [7, 26, 32, 36, 37] to be facilitated by Bloch coherent states on the Hilbert space \(\mathbb{C}^{2J+1}\). They are parametrized by an angle \(\Omega=(\theta,\varphi)\) on the unit sphere \(S^{2}\) with \(0\leq\theta\leq\pi,\ 0\leq\varphi\leq 2\pi\). In bra-ket-notation, which we will use in this paper, the Bloch-coherent states are given by \[\big{|}\Omega,J\rangle:=U(\theta,\varphi)\ \big{|}J\rangle,\qquad U(\theta,\varphi):=\exp\left(\frac{\theta}{2}\left(e^{i\varphi}S_{-}-e^{-i\varphi}S_{+}\right)\right). \tag{1.3}\] The reference vector \(\big{|}J\rangle\in\mathbb{C}^{2J+1}\) is the normalized eigenstate of the \(z\)-component of the spin corresponding to the (maximal) eigenvalue \(J\) on the Hilbert space \(\mathbb{C}^{2J+1}\). The operators \(S_{\pm}=S_{x}\pm iS_{y}\) are the spin raising and lowering operators of the irreducible representation of \(SU(2)\) on \(\mathbb{C}^{2J+1}\). Bloch coherent states have many remarkable features. First and foremost, they form an overcomplete set of vectors as expressed through the resolution of unity on \(\mathbb{C}^{2J+1}\): \[\frac{2J+1}{4\pi}\int\big{|}\Omega,J\big{\rangle}\langle\Omega,J\big{|}\ d\Omega=\mathbbm{1}_{\mathbb{C}^{2J+1}}. \tag{1.4}\] Every linear operator \(G\) on \(\mathbb{C}^{2J+1}\) is associated with a lower and upper symbol. The lower symbol is \(G(\Omega,J):=\langle\Omega,J\big{|}G|\Omega,J\rangle\), and the upper symbol is characterized through the property \[G=\frac{2J+1}{4\pi}\int g(\Omega,J)\,\big{|}\Omega,J\big{\rangle}\langle\Omega,J\big{|}\,d\Omega. \tag{1.5}\] The choice of \(g\) is not unique. E.g. through an explicit expression [29], one sees that there is always an arbitrarily often differentiable choice, \(g(\cdot,J)\in C^{\infty}(S^{2})\). More properties of coherent states are collected in Appendix A; see also [3, 16, 20, 41].

### Semiclassics for the free energy

The lower and upper symbol feature prominently in Berezin and Lieb's semiclassical bounds [6, 32] on the partition function associated with a self-adjoint Hamiltonian \(G\) on \(\mathbb{C}^{2J+1}\): \[\frac{2J+1}{4\pi}\int e^{-\beta G(\Omega,J)}d\Omega\leq\operatorname{Tr}_{\mathbb{C}^{2J+1}}e^{-\beta G}\leq\frac{2J+1}{4\pi}\int e^{-\beta g(\Omega,J)}d\Omega. \tag{1.6}\] In the semiclassical limit of large spin quantum number \(J\), these bounds are known to asymptotically coincide [17, 32].
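Both the resolution of unity (1.4) and the lower bound in (1.6) lend themselves to a quick numerical check. The sketch below (an added illustration under the stated conventions, not code from the paper) constructs \(|\Omega,J\rangle\) from (1.3) for \(J=5\) and a test Hamiltonian \(G=S_{x}^{2}/J+S_{z}\), evaluating the spherical integrals by Gauss-Legendre quadrature in \(\cos\theta\) and a uniform grid in \(\varphi\).

```python
import numpy as np
from scipy.linalg import expm
from numpy.polynomial.legendre import leggauss

J, beta = 5, 0.3
dim = 2 * J + 1
m = np.arange(J, -J - 1, -1.0)           # S_z eigenvalues |J>, ..., |-J>
Sp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), 1)  # raising op S_+
Sm, Sz = Sp.T.copy(), np.diag(m)
Sx = (Sp + Sm) / 2
G = Sx @ Sx / J + Sz                     # a self-adjoint test Hamiltonian

def coherent_state(theta, phi):
    """Bloch coherent state |Omega, J> = U(theta, phi)|J>, cf. (1.3)."""
    U = expm(theta / 2 * (np.exp(1j * phi) * Sm - np.exp(-1j * phi) * Sp))
    return U[:, 0]                       # column of the reference vector |J>

# Quadrature over S^2: Gauss-Legendre in u = cos(theta), uniform in phi.
u, w = leggauss(40)
phis = np.linspace(0, 2 * np.pi, 81)[:-1]
resolution = np.zeros((dim, dim), dtype=complex)
lower = 0.0
for ui, wi in zip(u, w):
    for phi in phis:
        v = coherent_state(np.arccos(ui), phi)
        dmu = wi * 2 * np.pi / len(phis)
        resolution += dmu * np.outer(v, v.conj())
        lower += dmu * np.exp(-beta * (v.conj() @ G @ v).real)
pref = (2 * J + 1) / (4 * np.pi)
print(np.max(np.abs(pref * resolution - np.eye(dim))))  # ~ 1e-15, cf. (1.4)
print(pref * lower, np.trace(expm(-beta * G)).real)     # lower bound in (1.6)
```

Since the matrix elements of \(|\Omega,J\rangle\langle\Omega,J|\) are trigonometric polynomials of degree at most \(2J\), this quadrature reproduces (1.4) to machine precision, while the first printed number stays below the trace, as the Berezin-Lieb bound requires.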
In the same spirit, for any polynomial of the spin operator as in (1.1) restricted to \(\mathbb{C}^{2J+1}\), both the upper and lower symbols agree to leading order in \(N\) with the corresponding classical polynomial function on the unit ball \(B_{1}\), which parametrises the Hilbert space (1.2) semiclassically. Using spherical coordinates \(\mathbf{e}(\Omega)=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)\in S^{2}\), one has \[\sup_{0\leq J\leq N/2}\left\|\,\mathrm{P}\Big{(}\tfrac{2}{N}\mathbf{S}\Big{)}\big{|}_{\mathbb{C}^{2J+1}}-\frac{2J+1}{4\pi}\int\mathrm{P}\Big{(}\tfrac{2J}{N}\mathbf{e}(\Omega)\Big{)}\big{|}\Omega,J\big{\rangle}\langle\Omega,J\big{|}\,d\Omega\right\|\leq\mathcal{O}(N^{-1}) \tag{1.7}\] for the operator norm \(\|\cdot\|\) on \(\mathbb{C}^{2J+1}\). We use Landau \(\mathcal{O}\)-notation, i.e., the error on the right is bounded by \(CN^{-1}\) with a constant \(C\) which only depends on the coefficients of the polynomial. This statement is a quantitative version of Duffield's theorem [17]. Since it is hard to locate general quantitative error estimates, we include a proof as Proposition A.4 in the Appendix. As is recalled in Proposition A.3, the lower symbol then shares the same classical asymptotics \[\sup_{0\leq J\leq N/2}\,\sup_{\Omega}\,\Big{|}\langle\Omega,J\big{|}\mathrm{P}\big{(}\tfrac{2}{N}\mathbf{S}\big{)}\big{|}_{\mathbb{C}^{2J+1}}\big{|}\Omega,J\rangle-\mathrm{P}\Big{(}\tfrac{2J}{N}\mathbf{e}(\Omega)\Big{)}\Big{|}\leq\mathcal{O}(N^{-1}).\] The Berezin-Lieb inequalities (1.6) immediately imply that the free energy of (1.1) is determined by minimizing a variational functional involving the classical energy \(\mathrm{P}\) on the unit ball and the (shifted) binary entropy \[I(r):=\begin{cases}-\tfrac{1+r}{2}\ln\tfrac{1+r}{2}-\tfrac{1-r}{2}\ln\tfrac{1-r}{2},&r\in[0,1),\\ 0,&r=1.\end{cases}\] A straightforward, rigorous saddle-point evaluation, which we spell out in the proof of Proposition A.7 - a generalisation to mean-field models with regular symbols - yields the pressure as a min-max variational principle on \(B_{1}\).

**Proposition 1.1**.: _For a mean-field Hamiltonian \(H=N\,\mathrm{P}\left(\frac{2}{N}\mathbf{S}\right)\) with a symmetric polynomial \(\mathrm{P}\), the pressure for any \(\beta>0\) is given by:_ \[p(\beta):=\lim_{N\to\infty}N^{-1}\ln\mathrm{Tr}\,\exp\left(-\beta NP\Big{(}\tfrac{2}{N}\mathbf{S}\Big{)}\right)=\max_{r\in[0,1]}\left\{I(r)-\beta\min_{\Omega\in S^{2}}P\left(r\mathbf{e}(\Omega)\right)\right\}. \tag{1.8}\]

As a special case with constant field \(P(\mathbf{m})=-\lambda m_{3}\), \(\lambda\geq 0\), one obtains the Legendre relation \[\ln 2\cosh\left(\beta\lambda\right)=\max_{r\in[0,1]}\left[I(r)+\beta\lambda r\right].\] By inverting this Legendre transform, one may rewrite (1.8) in the slightly more familiar form \[p(\beta)=\max_{r\in[0,1]}\min_{\lambda\geq 0}\left\{\ln 2\cosh\left(\beta\lambda\right)-\beta\left(\min_{\Omega\in S^{2}}\mathrm{P}\left(r\mathbf{e}(\Omega)\right)+\lambda r\right)\right\}.\] Investigations of the free energy have been of great interest over many decades [11, 13, 14, 17, 18, 32, 43, 49]. They are however not the main focus of this paper. We therefore conclude this topic with only one brief comment on the literature. Among the numerous results, it is worth mentioning that, alternatively to the sketched approach via the Berezin-Lieb inequalities, the formula (1.8) may be derived exploiting the exchange symmetry of the mean-field Hamiltonian using a version of the quantum de Finetti theorem.
This road was taken by Fannes, Spohn and Verbeure [18]. Their result essentially covers all Hamiltonians \(H=\sum_{p=1}^{m}A^{(p)}\) on \(\mathcal{H}_{N}\) with exchange-symmetric \(p\)-spin interactions \(A^{(p)}\) and yields \[p(\beta)=\sup_{\varrho}\left[E(\varrho)-\beta\sum_{p=1}^{m}\mathrm{Tr}\,A^{(p)}\varrho^{\otimes p}\right].\] For a mean-field system of \(N\) spin-\(1/2\), the supremum is over states which are parametrized by the Bloch sphere, \(\varrho=\frac{1}{2}\mathbbm{1}_{\mathbb{C}^{2}}-r\mathbf{e}(\Omega)\cdot\mathbf{S}\), and whose entropy is \(E(\varrho)=-\mathrm{Tr}\,\varrho\ln\varrho=I(r)\). As an aside, we note that this powerful de-Finetti-approach generalizes from spin-\(1/2\) to spin-\(s\) [18], and then covers the results on the free energy obtained in [8, 10]. E.g. for the exchange Hamiltonian \(T(\psi\otimes\phi)\coloneqq\phi\otimes\psi\) on \(\mathbb{C}^{d}\), for which \(\mathrm{Tr}\,T\varrho\otimes\varrho=\sum_{i,j=1}^{d}\lambda_{i}\lambda_{j}\mathrm{Tr}\,T|u_{i}\rangle\langle u_{i}|\otimes|u_{j}\rangle\langle u_{j}|=\sum_{i=1}^{d}\lambda_{i}^{2}\), one immediately gets \(p(\beta)=\sup_{\lambda\in\Delta^{d}}\sum_{i=1}^{d}(-\lambda_{i}\ln\lambda_{i}-\beta\lambda_{i}^{2})\), where \(\lambda=(\lambda_{1},\ldots,\lambda_{d})\in\Delta^{d}\) is the vector of eigenvalues of the state \(\varrho\) on \(\mathbb{C}^{d}\).

### Spectral gap from semiclassics plus fluctuations

The main result of this paper is a simple quasi-classical explanation and formulae for the low-energy part of the spectrum of a self-adjoint mean-field operator \(H\) as in (1.1) in the limit \(N\to\infty\). We will denote by \(E_{0}(H)\leq E_{1}(H)\leq E_{2}(H)\leq\dots\) the ordered sequence of its eigenvalues counted with multiplicities. In particular, the existence and leading asymptotic value of the spectral gap \[\operatorname{gap}H=E_{1}(H)-E_{0}(H)\] can be read off from the location of the minimum \(\mathbf{m}_{0}\) of the polynomial \(P:\mathbb{R}^{3}\to\mathbb{R}\) restricted to \(B_{1}\). In case the minimum is unique and located on the surface \(S^{2}\), the operator (1.1) generically has a spectral gap. To leading order in \(N\) the value of this gap is in fact completely determined by the coefficients of the quadratic polynomial which is uniquely associated with \(P\). In view of the notorious difficulty of determining the spectral gap in quantum lattice systems [1, 2, 23, 28], this simplicity might be somewhat surprising. Broadly speaking, our results are in accordance with the general belief of fluctuation theory that the second-order approximation to \(P\), which involves the gradient \(\nabla P(\mathbf{m}_{0})\) and the Hessian \(D_{P}(\mathbf{m}_{0})=(\partial_{j}\partial_{k}P(\mathbf{m}_{0}))_{j,k=1}^{3}\), yields the description of the low-energy spectra. Related statements have been proven in the context of mean-field Bose systems (see e.g. [22, 31]). For the precise formulation of such a result for quantum spin systems and in order to point out a subtlety caused by the geometry, we need some basic geometric facts on functions on \(B_{1}\). If \(\mathbf{m}_{0}\in S^{2}\) is a minimum of \(P\) on \(B_{1}\), the gradient either vanishes or points towards the center of the ball, \(\nabla P(\mathbf{m}_{0})=-|\nabla P(\mathbf{m}_{0})|\ \mathbf{m}_{0}\). The quadratic approximation of the polynomial is then given by \(D_{P}(\mathbf{m}_{0})\) projected on the directions perpendicular to \(\mathbf{m}_{0}\).
In terms of the normalized directional vector \[\mathbf{e}_{\mathbf{m}_{0}}=\frac{\mathbf{m}_{0}}{|\mathbf{m}_{0}|},\qquad\text{we set}\qquad Q_{\perp}:=\mathbbm{1}_{\mathbb{R}^{3}}-\mathbf{e}_{\mathbf{m}_{0}}^{T}\mathbf{e}_{\mathbf{m}_{0}},\] which is understood as a linear projection map on \(\mathbb{R}^{3}\). Introducing a local chart \(\Phi:\mathbb{R}^{2}\to S^{2}\) around \(\mathbf{m}_{0}\), whose differential identifies \(\mathbb{R}^{2}\) with \(\operatorname{ran}Q_{\perp}\equiv T_{\mathbf{m}_{0}}S^{2}\), the linear map on \(\operatorname{ran}Q_{\perp}\) given by \[D_{P}^{\perp}(\mathbf{m}_{0}):=Q_{\perp}D_{P}(\mathbf{m}_{0})Q_{\perp}+|\nabla P(\mathbf{m}_{0})|\ Q_{\perp} \tag{1.9}\] is then the quadratic approximation to \(P\circ\Phi\) at \(\mathbf{m}_{0}\). The shift of the Hessian in cartesian coordinates by the norm of the gradient \(|\nabla P(\mathbf{m}_{0})|\) is thus an effect of the constraint due to the spherical geometry.

**Theorem 1.2**.: _Let \(H\) be a self-adjoint operator on \(\mathcal{H}_{N}\) of the form (1.1) with a symmetric polynomial \(P:\mathbb{R}^{3}\to\mathbb{R}\) of fixed degree. Suppose that the minimum of \(P\) restricted to the unit ball \(B_{1}\) is unique and located at a point \(\mathbf{m}_{0}\in S^{2}\) on the unit sphere. Then,_ \[\operatorname{gap}H=2\min\left\{|\nabla P(\mathbf{m}_{0})|,\sqrt{\det D_{P}^{\perp}(\mathbf{m}_{0})}\right\}+o(1) \tag{1.10}\] _is the spectral gap above the unique ground state in case the rhs is strictly positive._

This theorem is a special case of Theorem 2.3, which deals with mean-field Hamiltonians with more general regular symbols than just polynomials. Theorem 2.3 also has the corresponding description of the leading asymptotics of the entire low-energy spectrum in terms of fluctuation operators in directions \(\operatorname{ran}Q_{\perp}\), which also applies to the polynomial case (by Theorem 2.1). To demonstrate the applicability, we consider the quantum CW Hamiltonian \(H=-\frac{4}{N}S_{x}^{2}-2\gamma S_{z}\), which corresponds to the choice \(P(\mathbf{m})=-m_{1}^{2}-\gamma m_{3}\). The gradient and Hessian in cartesian coordinates are given by \[\nabla P(\mathbf{m})=\begin{bmatrix}-2m_{1}\\ 0\\ -\gamma\end{bmatrix},\quad D_{P}(\mathbf{m})=\begin{bmatrix}-2&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}.\] In the paramagnetic phase \(\gamma>2\), \(P\) has only one minimum on \(B_{1}\) at \(\mathbf{m}_{0}=(0,0,1)^{T}\). The eigenvalues of the orthogonal part \(D_{P}^{\perp}(\mathbf{m}_{0})\) of the Hessian are \(\omega_{1}=-2+\gamma\), \(\omega_{2}=\gamma\) and \(|\nabla P(\mathbf{m}_{0})|=\gamma\). Theorem 1.2 then yields: \[\operatorname{gap}H=2\sqrt{\gamma(\gamma-2)}+o(1).\] The gap closes like a square root close to the critical point \(\gamma=2\). This is the end-point (at \(\beta=\infty\)) of the critical line which separates the ferromagnetic phase of the quantum CW model at low temperatures from the paramagnetic phase at high temperature or large transversal field (cf. [4]). In the ferromagnetic phase \(|\gamma|<2\), minima of \(P\) are found at \(\mathbf{m}_{0}^{\pm}=(\pm\sqrt{1-\gamma^{2}/4},0,\gamma/2)^{T}\). The eigenvalues of the orthogonal part \(D_{P}^{\perp}(\mathbf{m}_{0}^{\pm})\) of the Hessian are \(\omega_{1}=2,\omega_{2}=2(1-\gamma^{2}/4)\) and \(|\nabla P(\mathbf{m}_{0}^{\pm})|=2\). If we ignore the degeneracy of these two minima for the moment and pretend that only the positive solution \(\mathbf{m}_{0}^{+}\) exists, this leads to the expression \(4\sqrt{1-\gamma^{2}/4}\) in the right side of (1.10).
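Both regimes are straightforward to probe numerically, since only the blocks with the largest total spin matter for the low-energy levels. The following sketch (a minimal numpy check based on the block decomposition (1.2), not code from the paper) diagonalizes the quantum CW Hamiltonian on the blocks \(J=N/2\) and \(J=N/2-1\): for \(\gamma=3\) it reproduces the paramagnetic gap \(2\sqrt{\gamma(\gamma-2)}\), and for \(\gamma=1\) it exhibits the nearly degenerate ground-state doublet of the ferromagnetic phase.

```python
import numpy as np

def spin_xz(J):
    """S_x and S_z on C^{2J+1} in the eigenbasis |J>, ..., |-J> of S_z."""
    m = np.arange(J, -J - 1, -1.0)
    Sp = np.diag(np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1)), 1)  # raising op
    return (Sp + Sp.T) / 2, np.diag(m)

def cw_low_levels(N, gamma, n_blocks=2, n_ev=4):
    """Lowest eigenvalues of H = -(4/N) S_x^2 - 2 gamma S_z, pooled over
    the blocks with the largest total spin J = N/2, N/2 - 1, ..."""
    evs = []
    for k in range(n_blocks):
        Sx, Sz = spin_xz(N / 2 - k)
        HJ = -4 * Sx @ Sx / N - 2 * gamma * Sz
        evs.extend(np.linalg.eigvalsh(HJ)[:n_ev])
    return np.sort(np.array(evs))

N = 400
E = cw_low_levels(N, gamma=3.0)                    # paramagnetic phase
print(E[1] - E[0], 2 * np.sqrt(3.0 * (3.0 - 2)))   # ~3.46 vs 3.4641...
F = cw_low_levels(N, gamma=1.0)                    # ferromagnetic phase
print(F[1] - F[0], F[2] - F[0])                    # ~0 (doublet), ~2*sqrt(3)
```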
As will be explained below Theorem 2.5, the gap vanishes in this phase due to the degeneracy of the two minima. What is calculated here is in fact the gap between the second excited state and the ground state.

## 2 Low energy spectra for operators with regular symbols

Our result on the gap and the low-energy spectrum applies to a more general class of mean-field quantum spin Hamiltonians than just polynomials of the total spin. Next, we describe this class.

### Assumptions

**A0**: We assume that \(H\) is block diagonal with respect to the orthogonal decomposition (1.2) of \(\mathcal{H}_{N}\), \[H=\bigoplus_{J=\frac{N}{2}-\lfloor\frac{N}{2}\rfloor}^{N/2}\bigoplus_{\alpha=1}^{M_{N,J}}H_{J,\alpha} \tag{2.1}\] with self-adjoint blocks \(H_{J,\alpha}\) acting on a copy of \(\mathbb{C}^{2J+1}\). Moreover, there is a twice continuously differentiable symbol \(h:B_{1}\to\mathbb{R}\) such that all blocks are uniformly approximable in operator norm on \(\mathbb{C}^{2J+1}\) to order one as \(N\to\infty\): \[\max_{J,\alpha}\left\|H_{J,\alpha}-\frac{2J+1}{4\pi}\int Nh\Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}\left|\Omega,J\right\rangle\!\left\langle\Omega,J\right|d\Omega\right\|\leq\mathcal{O}(1), \tag{2.2}\] where the maximum runs over \(\alpha\in\{1,\dots,M_{N,J}\}\) and \(J\in\{\frac{N}{2}-\lfloor\frac{N}{2}\rfloor,\dots,N/2\}\). For our semiclassical analysis, we introduce subspaces associated with a fixed block \((J,\alpha)\) and any direction defined by \(\mathbf{0}\neq\mathbf{m}_{0}\in B_{1}\): \[\mathcal{H}_{J}^{K}(\mathbf{m}_{0})=\mathrm{span}\left\{\psi\in\mathbb{C}^{2J+1}\ |\ \mathbf{m}_{0}\cdot\mathbf{S}\ \psi=|\mathbf{m}_{0}|\ (J-k)\psi\ \text{for some}\ k\in\{0,1,\dots,K\}\right\}.\] The associated orthogonal projection on \(\mathbb{C}^{2J+1}\) will be denoted by \(P_{J}^{K}(\mathbf{m}_{0})\). We will work with quadratic approximations at \(\mathbf{m}_{0}\) defined in terms of the matrix-valued second-order Taylor polynomial associated with \(h\) and the total spin \(\mathbf{S}\) on \(\mathbb{C}^{2J+1}\): \[Q(\mathbf{m}_{0}):=Nh(\mathbf{m}_{0})\mathbb{1}+(2\mathbf{S}-N\mathbf{m}_{0})\cdot\nabla h(\mathbf{m}_{0})+\frac{2}{N}(\mathbf{S}-N\mathbf{m}_{0}/2)\cdot D_{h}(\mathbf{m}_{0})(\mathbf{S}-N\mathbf{m}_{0}/2). \tag{2.3}\] The operators \((\mathbf{S}-N\mathbf{m}_{0}/2)/\sqrt{J_{N}(\mathbf{m}_{0})}\) with \(J_{N}(\mathbf{m}_{0})\coloneqq|\mathbf{m}_{0}|N/2\) are the fluctuation operators (cf. [36, 37]) with respect to the coherent state \(|\Omega_{0},J_{N}(\mathbf{m}_{0})\rangle\), where \(\Omega_{0}\) is the spherical angle of \(\mathbf{e}_{\mathbf{m}_{0}}\). We assume the approximability of \(H_{J,\alpha}\) with \(J\) close to \(J_{N}(\mathbf{m}_{0})\) either solely for the set of minima \(\mathcal{M}\subset B_{1}\) or globally at every point.

**A1**: We assume that there is a continuously differentiable \(\kappa:B_{1}\to\mathbb{R}\) and diverging sequences \(K_{N}\), \(\overline{K}_{N}\in\mathbb{N}\) such that for all minima \(\mathbf{m}_{0}\in\mathcal{M}\): \[\max_{|J-J_{N}(\mathbf{m}_{0})|\leq\overline{K}_{N}}\,\max_{\alpha}\ \left\|\left[\kappa\left(\mathbf{m}_{0}\right)\mathbb{1}+Q\left(\mathbf{m}_{0}\right)-H_{J,\alpha}\right]P_{J}^{K_{N}}(\mathbf{m}_{0})\right\|=o(1), \tag{2.4}\] where Landau's \(o(1)\) stands for a null sequence as \(N\to\infty\).
**A1'**: We assume that there is a continuously differentiable \(\kappa:B_{1}\to\mathbb{R}\) and a diverging sequence \(K_{N}\in\mathbb{N}\) such that: \[\max_{\mathbf{m}_{0}\in B_{1}}\max_{|J-J_{N}(\mathbf{m}_{0})|\leq 1}\,\max_{\alpha}\ \left\|\left[\kappa\left(\mathbf{m}_{0}\right)\mathbb{1}+Q\left(\mathbf{m}_{0}\right)-H_{J,\alpha}\right]P_{J}^{K_{N}}(\mathbf{m}_{0})\right\|=o(1). \tag{2.5}\]

Before moving on to the main result, let us add a few comments.

1. The reason for including an off-set function \(\kappa\) in the quadratic approximations (2.4) and (2.5) is that the symbol \(h\) is only assumed to approximate \(H\) up to order one, cf. (2.2). However, our main results address the spectrum exactly to this order.
2. In case the minimum \(\mathbf{m}_{0}\in\mathcal{M}\) is in the interior of the ball, \(|\mathbf{m}_{0}|<1\), then \(\nabla h(\mathbf{m}_{0})=\mathbf{0}\). The second term in the quadratic approximation (2.3) is hence absent. In case \(|\mathbf{m}_{0}|=1\), the gradient either vanishes or points to the center of the ball, \(\nabla h(\mathbf{m}_{0})=-|\nabla h(\mathbf{m}_{0})|\ \mathbf{e}_{\mathbf{m}_{0}}\). The second term in (2.3) then equals \(2|\nabla h(\mathbf{m}_{0})|\left(N/2-\mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{S}\right)\).
3. If **A1** or **A1'** hold for diverging sequences \(K_{N},\overline{K}_{N}\), then they also hold for any such sequences which are upper bounded by \(K_{N},\overline{K}_{N}\).

The projections corresponding to the subspaces \(\mathcal{H}_{J}^{K}(\mathbf{m}_{0})\) are chosen such that \[\left\|\left(\mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{S}-J_{N}(\mathbf{m}_{0})\right)P_{J}^{K}(\mathbf{m}_{0})\right\|=\max_{k\in\{0,\ldots,K\}}\left|J-k-J_{N}(\mathbf{m}_{0})\right|\leq K+\left|J-J_{N}(\mathbf{m}_{0})\right|. \tag{2.6}\] To estimate the norm of the spin operator \(Q_{\perp}\mathbf{S}\) projected to the directions \(Q_{\perp}=\mathbb{1}_{\mathbb{R}^{3}}-\mathbf{e}_{\mathbf{m}_{0}}^{T}\mathbf{e}_{\mathbf{m}_{0}}\) perpendicular to \(\mathbf{m}_{0}\), it is convenient to introduce a coordinate system. When focusing on a patch around one point, we may always assume without loss of generality that \(\mathbf{m}_{0}=(0,0,|\mathbf{m}_{0}|)^{T}\). This can be accomplished by the unitary rotation in (1.3) for which \(U(\Omega_{0})^{*}\ \mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{S}\ U(\Omega_{0})=S_{z}\) if \(\Omega_{0}\) denotes the spherical coordinates of \(\mathbf{e}_{\mathbf{m}_{0}}\). In this coordinate system, the spin operators in the two orthogonal directions are given by \(S_{x}\) and \(S_{y}\), and the range of the projections \(P_{J}^{K}(\mathbf{m}_{0})\) is spanned by the canonical orthonormal eigenbasis of \(S_{z}\) on \(\mathbb{C}^{2J+1}\), i.e. \(S_{z}|J-k\rangle=(J-k)|J-k\rangle\) for all \(k\in\{0,1,\ldots,2J\}\). We recall that both \(S_{x}\) and \(S_{y}\) are tridiagonal matrices in terms of this basis: \[\langle J-k|S_{x}|J-k^{\prime}\rangle=i^{k^{\prime}-k}\langle J-k|S_{y}|J-k^{\prime}\rangle=\sqrt{\frac{2J\max\{k,k^{\prime}\}-kk^{\prime}}{4}}\delta_{|k^{\prime}-k|,1}. \tag{2.7}\] Therefore, for any \(d\in\mathbb{N}\) there is some \(C_{d}<\infty\) such that for both \(\xi\in\{x,y\}\): \[\max_{J\leq N/2}\max_{\min\{k,k^{\prime}\}\in\{0,\ldots,K\}}\left|\langle J-k|S_{\xi}^{d}|J-k^{\prime}\rangle\right|\leq C_{d}\left(NK\right)^{\frac{d}{2}}1[|k^{\prime}-k|\leq d].
\tag{2.8}\] This renders evident that on \(\mathcal{H}_{J}^{K}(\mathbf{m}_{0})\) the scaled operators \(N(2S_{\xi}/N)^{d}\) are negligible if \(d\geq 3\) and \(K=o(N^{1/3})\). The above arguments also show that our assumptions are tailored to apply to (1.1) with an arbitrary fixed polynomial \(P\).

**Theorem 2.1**.: _For \(H=NP\big{(}\frac{2}{N}\mathbf{S}\big{)}\) on \(\mathcal{H}_{N}\) with any symmetric polynomial \(P\), all assumptions **A0**, **A1**, **A1'** are satisfied with \(h=P\) and \(\overline{K}_{N}=o(N^{2/3})\), \(K_{N}=o(N^{1/3})\)._

Proof.: Thanks to (1.7), an approximate symbol of the Hamiltonian is indeed the polynomial, \(h=P\). Let \(\mathbf{m}_{0}\in B_{1}\) be arbitrary, not necessarily a minimum of \(P\). Without loss of generality we may assume \(\mathbf{m}_{0}=(0,0,|\mathbf{m}_{0}|)^{T}\) and that the Hamiltonian is of the form \(H=N\mathrm{P}\left(s_{N}(1),s_{N}(2),s_{N}(3)\right)\) with \[s_{N}(1):=\frac{2}{N}S_{x},\quad s_{N}(2):=\frac{2}{N}S_{y},\quad s_{N}(3):=\frac{2}{N}(S_{z}-J_{N}(\mathbf{m}_{0})).\] The polynomial \(\mathrm{P}\) is a linear combination of monomials of a fixed order \(d\in\mathbb{N}\). Due to non-commutativity of matrix multiplication, such monomials include all products of the form \[\Pi_{N}(\mathbf{i}):=s_{N}(i_{1})s_{N}(i_{2})\ldots s_{N}(i_{d})\] with an arbitrary choice of ordered indices \(\mathbf{i}=(i_{1},\ldots,i_{d})\in\{1,2,3\}^{d}\). Up to order \(d\leq 2\), they coincide with the terms in the definition (2.3) of \(Q(\mathbf{m}_{0})\). Each of the above matrices is block diagonal on \(\mathcal{H}_{N}\) and, when restricted to a copy of \(\mathbb{C}^{2J+1}\), at most \((2d+1)\)-diagonal. We may therefore estimate similarly as in (2.8) for all \(J\leq N/2\) and any choice of indices \(\mathbf{i}\in\{1,2,3\}^{d}\): \[\max_{\min\{k,k^{\prime}\}\in\{0,\ldots,K_{N}\}}\big{|}\langle J-k|\ N\Pi_{N}(\mathbf{i})\ |J-k^{\prime}\rangle\big{|}\] \[\leq C_{d}\ N^{1-d}\max_{m\in\{0,\ldots,d\}}\Big{\{}\sqrt{N(K_{N}+d)}^{d-m},\left(|J-J_{N}(\mathbf{m}_{0})|+K_{N}+d\right)^{m}\Big{\}}\,1[|k^{\prime}-k|\leq d]\] \[\leq C_{d}\left[\frac{\sqrt{K_{N}}^{d}}{\sqrt{N}^{d-2}}+\frac{|J-J_{N}(\mathbf{m}_{0})|^{d}+K_{N}^{d}}{N^{d-1}}\right]1[|k^{\prime}-k|\leq d], \tag{2.9}\] with a constant \(C_{d}<\infty\), which changes from line to line, and which is independent of \(J\). For any \(d\geq 3\) the right side vanishes if \(K_{N}=o(N^{1/3})\) and \(|J-J_{N}(\mathbf{m}_{0})|=o(N^{2/3})\). Since \[\max_{\alpha}\Big{\|}\Pi_{N}(\mathbf{i})P_{J}^{K_{N}}(\mathbf{m}_{0})\Big{\|}\leq(2d+1)\max_{\min\{k,k^{\prime}\}\in\{0,\ldots,K_{N}\}}\bigl{|}\langle J-k|\Pi_{N}(\mathbf{i})|J-k^{\prime}\rangle\bigr{|}\,, \tag{2.10}\] any monomial of order \(d\geq 3\) indeed does not contribute in (2.4).

Using similar estimates and under some more restrictive assumptions on \(K_{N}\) and \(\overline{K}_{N}\), one may replace \(Q(\mathbf{m}_{0})\) in assumption (2.4) by the second-order polynomial \[\widehat{Q}(\mathbf{m}_{0}):=Nh(\mathbf{m}_{0})\mathbb{1}+(2\mathbf{S}-N\mathbf{m}_{0})\cdot\nabla h(\mathbf{m}_{0})+\frac{2}{N}(Q_{\perp}\mathbf{S})\cdot D_{h}(\mathbf{m}_{0})(Q_{\perp}\mathbf{S}), \tag{2.11}\] which only involves the projection \(Q_{\perp}\) of the Hessian. This means that the fluctuation operator in the radial direction of \(\mathbf{m}_{0}\) is negligible.
**Lemma 2.2**.: _If \(K_{N}=o(N^{1/3})\) and \(\overline{K}_{N}^{2}K_{N}=o(N)\), then_ \[\sup_{|J-J_{N}(\mathbf{m}_{0})|\leq\overline{K}_{N}}\ \sup_{\alpha}\Big{\|}\Big{[}Q(\mathbf{m}_{0})-\widehat{Q}(\mathbf{m}_{0})\Big{]}\,P_{J}^{K_{N}}(\mathbf{m}_{0})\Big{\|}=o(1). \tag{2.12}\]

Proof.: Using the same coordinate system and notation as in the proof of Theorem 2.1, the difference \(Q(\mathbf{m}_{0})-\widehat{Q}(\mathbf{m}_{0})\) is a linear combination of the five monomials of the form \(N\Pi_{N}(\mathbf{i})\) with \(\mathbf{i}=(i_{1},i_{2})\in\{(1,3),(3,1),(2,3),(3,2),(3,3)\}\). By (2.6) we have \(\left\|N\Pi_{N}(3,3)P_{J}^{K_{N}}(\mathbf{m}_{0})\right\|\leq 8\ (|J-J_{N}(\mathbf{m}_{0})|^{2}+K_{N}^{2})/N=o(1)\). In all other cases, we estimate similarly as in (2.9). For example: \[\max_{k^{\prime}\in\{0,\ldots,K_{N}\}}\left|\langle J-k|N\Pi_{N}(3,1)|J-k^{\prime}\rangle\right|\leq\sqrt{\frac{K_{N}+1}{N}}\,(|J-J_{N}(\mathbf{m}_{0})|+K_{N}+1)\,1[|k^{\prime}-k|\leq 1],\] which by (2.10) leads to \(\left\|N\Pi_{N}(3,1)P_{J}^{K_{N}}(\mathbf{m}_{0})\right\|=o(1)\). The remaining terms are estimated similarly.

### The case of a unique minimum

The following is our main result for mean-field Hamiltonians whose approximate symbol has a unique minimum, which is at the surface of the unit ball.

**Theorem 2.3**.: _Assuming **A0** and **A1** and that the symbol \(h\) has a unique global minimum at \(\mathbf{m}_{0}\in S^{2}\) where \(\nabla h(\mathbf{m}_{0})\neq\mathbf{0}\) and \(\det D_{h}^{\perp}(\mathbf{m}_{0})>0\):_

1. _the ground state_ \(\psi_{0}\) _of_ \(H\) _on_ \(\mathcal{H}_{N}\) _is unique (up to phase) and contained in the subspace with maximal total spin_ \(J=N/2\)_. In terms of the eigenstates_ \(|N/2-m\rangle\) _of_ \(S_{z}\) _in that subspace, we have for any_ \(m\in\mathbb{N}_{0}\)_:_ \[\langle J-m|\psi_{0}\rangle=\omega^{1/4}\sqrt{\frac{2}{(\omega+1)m!}}\left(\sqrt{\frac{\omega-1}{2(\omega+1)}}\right)^{m}H_{m}(0)+o(1),\] (2.13) _where_ \(H_{m}\) _denotes the_ \(m\)_-th Hermite polynomial and_ \(\omega\geq 1\) _is the frequency ratio fixed in (3.17) below, and the ground state energy is given by_ \[E_{0}(H)=Nh(\mathbf{m}_{0})+\kappa(\mathbf{m}_{0})-|\nabla h(\mathbf{m}_{0})|+\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})}+o(1).\] (2.14)
2. _for any energy below_ \(E_{0}(H)+\Delta\) _with_ \(\Delta>0\) _fixed but arbitrary, the eigenvalues of_ \(H\) _stem from blocks_ \((J,\alpha)\) _with_ \(k=N/2-J\in\mathbb{N}_{0}\) _fixed. When counted with multiplicity, these low-energy eigenvalues of_ \(H_{N/2-k,\alpha}\) _for_ \(k\in\mathbb{N}_{0}\) _asymptotically coincide up to_ \(o(1)\) _with the points in_ \[Nh(\mathbf{m}_{0})+\kappa(\mathbf{m}_{0})+(2k-1)\ |\nabla h(\mathbf{m}_{0})|+(2m+1)\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})},\] (2.15) _with_ \(m\in\mathbb{N}_{0}\)_. The spectral gap of_ \(H\) _is_ \[\operatorname{gap}H=2\min\left\{|\nabla h(\mathbf{m}_{0})|,\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})}\right\}+o(1).\] (2.16)

The proof of Theorem 2.3 is the topic of Section 3.2. As can be inferred from there, the error estimates hiding in the \(o(1)\)-terms are made up from two sources: first, the accuracy of the assumed quadratic approximation (2.4), and second, the cumulative subsequent error estimates. Moreover, an asymptotic form of all eigenfunctions of \(H_{J,\alpha}\) on the outer blocks and not only the ground state (2.13) is found there.
Although the ground state is simple and the same applies to the oscillator spectrum (2.15) of \(H_{J,\alpha}\) for fixed \((J,\alpha)\), the multiplicity \(M_{N,J}\) in (2.1) will cause the eigenvalues of \(H\) to occur approximately, i.e. to order \(o(1)\), with these multiplicities. Before moving on to the general case, let us put Theorem 2.3 in the context of available results.

1. The spectra of quadratic mean-field Hamiltonians such as the LMG model can be described exactly through Bethe-Ansatz-type equations [39, 40, 48]. Since the latter are in the most general case hard to solve, much attention has been given to finding approximate semiclassical solutions [12, 45]. In view of this, it is worth emphasizing that the above theorem is (through Theorem 2.1) applicable to all \(H\) of polynomial form (1.1). Their low-energy spectra are proven to agree with that of the associated quadratic term. The latter turns out to produce the harmonic oscillator spectrum (2.15), in which the frequency is determined by the Hessian in the spherical geometry. The spectrum of such a general mean-field Hamiltonian has so far only been determined to a coarser order \(N\) in [49] and not on the fine scale \(o(1)\).
2. Expressions for the spectral gap of certain polynomial mean-field quantum-spin Hamiltonians have been derived in [4, 45]. These works assume that the low-energy spectrum of the relevant blocks \(H_{J,\alpha}\) is equally spaced, and argue that the gap is proportional to its inverse density of states at the ground state. A-posteriori and thanks to the proven equal spacing of the low-energy eigenvalues of \(H_{N/2-k,\alpha}\), this is correct in case the minimum in (2.16) is attained at \(\big{(}\det D_{h}^{\perp}(\mathbf{m}_{0})\big{)}^{1/2}\). It hence works in the case of the quantum CW model in the paramagnetic phase. However, in case the minimum in (2.16) is found at \(|\nabla h(\mathbf{m}_{0})|\) and hence stems from the second-most outer shell, this strategy fails.

Theorem 2.3 should be contrasted with the case that the minimum of the symbol is found strictly inside the unit ball. In this case, the spectral gap vanishes.

**Theorem 2.4**.: _Assuming **A0** and **A1'** and that the symbol \(h\in C^{3}\) has a unique global minimum at \(\mathbf{m}_{0}\in B_{1}\) with \(0<|\mathbf{m}_{0}|<1\) and \(D_{h}(\mathbf{m}_{0})>0\). Then:_

1. _the ground state is contained in a subspace with total spin_ \(J\) _with_ \(|J-J_{N}(\mathbf{m}_{0})|\leq\mathcal{O}(\sqrt{N})\) _and_ \[E_{0}(H)=E_{0}(H_{J,\alpha})=Nh(\mathbf{m}_{0})+\kappa(\mathbf{m}_{0})+|\mathbf{m}_{0}|\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})}+o(1).\] (2.17)
2. _for any_ \(J\) _with_ \(|J-J_{N}(\mathbf{m}_{0})|\leq o(\sqrt{N})\) _the ground-state energy_ \(E_{0}(H_{J,\alpha})\) _is still given by (_2.17_)._

The proof largely builds on the techniques of the proof of Theorem 2.3 and is found in Section 3.3. In fact, the techniques allow one to determine the whole low-energy spectrum of every block \(H_{J,\alpha}\) with \(|J-J_{N}(\mathbf{m}_{0})|\leq o(\sqrt{N})\).

### The case of a finite number of minima

Our last main result concerns the case of symbols with finitely many minima on the surface of the quantum sphere.

**Theorem 2.5**.: _Assuming **A0** and **A1** and that the symbol \(h\) has \(L\) global minima at \(\{\mathbf{m}_{1},\ldots,\mathbf{m}_{L}\}\subset S^{2}\) where at each minimum \(\nabla h(\mathbf{m}_{l})\neq\mathbf{0}\) and \(\det D_{h}^{\perp}(\mathbf{m}_{l})>0\):_
1. _the ground state of_ \(H\) _on_ \(\mathcal{H}_{N}\) _is contained in the subspace with maximal total spin_ \(J=N/2\)_. Its energy is_ \[E_{0}(H)=\min_{l\in\{1,\ldots,L\}}\left[Nh(\mathbf{m}_{l})+\kappa(\mathbf{m}_{l})-|\nabla h(\mathbf{m}_{l})|+\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{l})}\right]+o(1).\]
2. _for any energy below_ \(E_{0}(H)+\Delta\) _with_ \(\Delta>0\) _fixed but arbitrary, the eigenvalues of_ \(H\) _stem from blocks_ \((J,\alpha)\) _with_ \(k=N/2-J\in\mathbb{N}_{0}\) _fixed but arbitrary. When counted with multiplicity, these low-energy eigenvalues of_ \(H_{N/2-k,\alpha}\) _for_ \(k\in\mathbb{N}_{0}\) _asymptotically coincide up to_ \(o(1)\) _with the points in_ \[Nh(\mathbf{m}_{l})+\kappa(\mathbf{m}_{l})+(2k-1)\ |\nabla h(\mathbf{m}_{l})|+(2m+1)\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{l})},\] (2.18) _with_ \(m\in\mathbb{N}_{0}\) _and_ \(l\in\{1,\ldots,L\}\)_._

The proof, which in comparison to Theorem 2.3 poses the additional difficulty of controlling the tunnelling between minima, is found in Section 3.4. The theorem allows for degeneracies in the spectrum already at the level of the ground state. The quantum CW model \(P(\mathbf{m})=-m_{1}^{2}-\gamma m_{3}\) in the ferromagnetic phase \(|\gamma|<2\) is an example. The gradient's norm \(|\nabla P(\mathbf{m}_{0}^{\pm})|=2\) and \(\det D_{P}^{\perp}(\mathbf{m}_{0}^{\pm})=4-\gamma^{2}\) agree for the two minima \(\mathbf{m}_{0}^{\pm}\in S^{2}\). Therefore, all the low-energy eigenvalues described by (2.18) are approximately doubly degenerate. In this situation the formula (1.10) yields the gap of the nearly \(L\)-fold degenerate ground state (\(L=2\) for CW) to the next energy levels. Our proof enables us to show that the level splitting due to tunnelling through a macroscopic barrier from one minimum to the other is of order \(o(N^{-\infty})\). As demonstrated numerically in [50], the quantum CW's ferromagnetic phase exhibits the 'flea on the elephant phenomenon' [27], i.e., the sensitivity of the ground-state function to perturbations. It might be interesting to combine the techniques in this paper with [25, 46] for a proof of this.

Let us conclude the main part of the paper with a general outlook. Having derived precise low-energy asymptotics of eigenvalues in terms of quadratic approximations, one should be able to extend our techniques, using methods as in [34, Sec. 5], to derive the fluctuations of the free energy not only at \(\beta=\infty\), but also at finite temperature. It would also be interesting to investigate the dynamical properties of the mean-field Hamiltonians. Coherent wavepackets evolve semiclassically [19]. The latter work also connects to the question of whether the description of the low-energy spectra in the present paper is helpful in the analysis of models which become semiclassical only in a Kac-type scaling limit (cf. [42]).

## 3 Semiclassical analysis

This section is dedicated to the proofs of Theorems 2.3-2.5. We combine projection techniques in a Schur-complement analysis [47, 51, 52], through which we focus on patches around the classical minima, with a detailed analysis of a limit operator.

### Limit of fluctuation operators

We start our proof by introducing two operators \(L_{x}\) and \(L_{y}\) on the Hilbert space \(\ell^{2}(\mathbb{N}_{0})\), which turn out to be unitarily equivalent to the position and momentum operator on \(L^{2}(\mathbb{R})\).
This enables us to determine the spectrum and eigenfunctions of the operator \[D=\omega^{2}L_{x}^{2}+L_{y}^{2},\] which is equivalent to a harmonic oscillator with frequency \(\omega\). In the following subsections, we show that the Hamiltonians described in Section 2.1 indeed converge, locally and in a sense to be specified, to an operator of the form \(D\). This is the key to determining their low energy spectra explicitly. To set the stage, we define \(L_{x}\) and \(L_{y}\) via their matrix elements in terms of the canonical orthonormal basis in \(\ell^{2}(\mathbb{N}_{0})\): \[\langle k|L_{x}|k^{\prime}\rangle=i^{k^{\prime}-k}\langle k|L_{y}|k^{\prime}\rangle=\sqrt{\frac{\max\{k,k^{\prime}\}}{2}}\delta_{|k-k^{\prime}|,1}. \tag{3.1}\] They give rise to essentially self-adjoint operators on the dense domain \[c_{00}\coloneqq\{(x_{n})_{n\in\mathbb{N}}\,|\,\exists\,N\in\mathbb{N}\text{ such that }x_{n}=0\ \forall\,n\geq N\}.\] Moreover, by an elementary calculation \[[L_{x},L_{y}]|k\rangle=(L_{x}L_{y}-L_{y}L_{x})|k\rangle=i|k\rangle, \tag{3.2}\] i.e., \(L_{x}\) and \(L_{y}\) satisfy the canonical commutation relations. The following proposition is essentially a consequence of this observation.

**Proposition 3.1**.: _Let \(\omega\geq 1\). Then \(D=\omega^{2}L_{x}^{2}+L_{y}^{2}\) is a positive, essentially self-adjoint operator on the domain \(c_{00}\subset\ell^{2}(\mathbb{N}_{0})\) with spectrum_ \[\operatorname{spec}D=\{(2k+1)\omega\,|\,k\in\mathbb{N}_{0}\}. \tag{3.3}\] _Every point in the spectrum is a non-degenerate eigenvalue. The ground-state \(\psi_{0}\) is_ \[\langle n|\psi_{0}\rangle=\omega^{1/4}\sqrt{\frac{2}{(\omega+1)n!}}\left(\sqrt{\frac{\omega-1}{2(\omega+1)}}\right)^{n}H_{n}(0). \tag{3.4}\] _The \(k\)-th excited state is given by \(\psi_{k}=(a^{\dagger})^{k}\psi_{0}/\sqrt{k!}\) with the raising operator \(a^{\dagger}\coloneqq\sqrt{\frac{\omega}{2}}\left(L_{x}-\frac{i}{\omega}L_{y}\right)\). In particular, \(\langle n|\psi_{k}\rangle=0\) unless \(k-n\) is even, in which case we have the exponential decay estimates_ \[|\langle n|\psi_{k}\rangle|^{2}\leq\sqrt{\frac{\omega}{\pi}}\frac{2^{2k+1}}{\omega+1}\frac{n^{k-\frac{1}{2}}}{k!}\left(\frac{\omega-1}{\omega+1}\right)^{n-k},\quad\text{if }n\geq 2k, \tag{3.5}\] \[|\langle n|\psi_{k}\rangle|^{2}\leq\sqrt{\frac{\omega}{\pi}}\frac{2^{2n+1}}{\omega+1}\frac{k^{n-\frac{1}{2}}}{n!}\left(\frac{\omega-1}{\omega+1}\right)^{k-n},\quad\text{if }k\geq 2n. \tag{3.6}\]

The value \(H_{n}(0)\) is explicit: \[H_{n}(0)=\begin{cases}(-1)^{n/2}\frac{n!}{(n/2)!}&\text{if $n$ even}\\ 0&\text{if $n$ odd}.\end{cases} \tag{3.7}\] In case \(\omega=1\) the eigenbasis turns out to agree with the canonical orthonormal basis, \(|\psi_{k}\rangle=|k\rangle\) for all \(k\in\mathbb{N}_{0}\). The exponential decay estimates (3.5) and (3.6) will play a crucial role in subsequent approximation results.

Proof of Proposition 3.1.: The commutation relation (3.2) implies that there is a unitary \(U\colon\ell^{2}(\mathbb{N}_{0})\to L^{2}(\mathbb{R})\) such that \(UL_{x}U^{*}=\hat{x}\) and \(UL_{y}U^{*}=\hat{p}\), where \(\hat{x}\) and \(\hat{p}\) denote the position and momentum operator. In particular, \(UDU^{*}\) is the standard harmonic oscillator with frequency \(\omega\), which yields the basic assertion on \(\operatorname{spec}D\).
The eigenfunctions of \(UDU^{*}\) are known to be given by \[\varphi_{n}^{\omega}(x)=\frac{1}{\sqrt{2^{n}n!}}\left(\frac{\omega}{\pi}\right)^{1/4}e^{-\omega x^{2}/2}H_{n}(\sqrt{\omega}x).\] This unitary equivalence also proves the ladder operator representation for the excited states (cf. [24]). Since \(U|n\rangle=|\varphi_{n}^{1}\rangle\), we seek a representation of \(\varphi_{m}^{\omega}\) and, in particular \(m=0\), in terms of the \(\omega=1\) eigenfunctions: \[\langle\varphi_{n}^{1}|\varphi_{m}^{\omega}\rangle=\frac{\omega^{1/4}}{\sqrt{2^{n+m}\pi n!m!}}\int e^{-(\omega+1)x^{2}/2}H_{n}\left(x\right)H_{m}\left(\sqrt{\omega}x\right)dx.\] Using a change of variables and subsequently the multiplication theorem for Hermite polynomials, \[H_{n}(\alpha x)=\sum_{l=0}^{\lfloor\frac{n}{2}\rfloor}\alpha^{n-2l}(\alpha^{2}-1)^{l}\frac{n!}{(n-2l)!l!}H_{n-2l}(x),\quad\alpha\in\mathbb{R},\] we arrive after some elementary algebra at \[\langle n|\psi_{k}\rangle=\omega^{1/4}\sqrt{\frac{2\;n!k!}{(\omega+1)}}\;\sum_{l=0}^{\lfloor\frac{n}{2}\rfloor}(-1)^{l}\frac{1\left[l+\frac{k-n}{2}\in\left\{0,1,\ldots,\lfloor\frac{k}{2}\rfloor\right\}\right]}{(n-2l)!\;l!\;(l+\frac{k-n}{2})!}\left(\frac{2\sqrt{\omega}}{\omega+1}\right)^{n-2l}\left(\frac{\omega-1}{2(\omega+1)}\right)^{2l+\frac{k-n}{2}}. \tag{3.8}\] The above sum is only non-zero if \(n-k\) is even. In particular, due to the indicator function requiring \(l=n/2\), for \(k=0\) only one term in the sum survives, yielding (3.4) with \(H_{n}(0)\) replaced by (3.7). For a proof of the exponential decay estimates (3.5) and (3.6), we start from (3.8) with the triangle inequality. Since \(\frac{2\sqrt{\omega}}{\omega+1}\leq 1\), this term can be upper bounded by one. Furthermore, since also \(\frac{\omega-1}{2(\omega+1)}\leq 1\), we may use the lower bound \(2l+\frac{k-n}{2}\geq\frac{|k-n|}{2}\), since \(l\geq 0\) in case \(k\geq n\) and \(l\geq(n-k)/2\) in case \(n\geq k\). Therefore, in case \(k\geq n\) it remains to estimate \[\sum_{l=0}^{\lfloor\frac{n}{2}\rfloor}\frac{\sqrt{n!k!}}{(n-2l)!}\frac{1}{l!(l+\frac{k-n}{2})!}\leq\sum_{l=0}^{n}\binom{n}{l}\frac{\sqrt{k!}}{\sqrt{n!}\;(\frac{k-n}{2})!}=\frac{2^{n}}{\sqrt{n!}}\frac{\sqrt{k!}}{(\frac{k-n}{2})!}.\] The last ratio is then estimated with the help of standard Stirling bounds, \[\sqrt{2\pi m}\left(\frac{m}{e}\right)^{m}\exp\left(\frac{1}{12m+1}\right)\leq m!\leq\sqrt{2\pi m}\left(\frac{m}{e}\right)^{m}\exp\left(\frac{1}{12m}\right),\quad m\in\mathbb{N},\] together with the elementary bound \(\exp\left(-(k-n)\ln\left(1-\frac{n}{k}\right)\right)\leq 2^{n}\) valid for all \(k\geq 2n\). This yields (3.6) after some algebra. In case \(n\geq k\) we proceed similarly; the only difference is that the sum in (3.8) starts from \(\frac{|k-n|}{2}\), and one obtains (3.5).

The connection of \(D\) to our models will be through its approximations defined on \(\mathbb{C}^{2J+1}\): \[D_{N}:=\omega^{2}\big{(}L_{x}^{(N)}\big{)}^{2}+\big{(}L_{y}^{(N)}\big{)}^{2}\qquad\text{with}\quad L_{\xi}^{(N)}\coloneqq\frac{S_{\xi}}{\sqrt{J_{N}}},\quad J_{N}:=\frac{N}{2}|\mathbf{m}_{0}|. \tag{3.9}\] In this subsection \(|\mathbf{m}_{0}|\in(0,1]\) is treated as a scaling parameter.
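Proposition 3.1 is also easy to confirm numerically by truncating \(L_{x}\) and \(L_{y}\) to a large finite block. The following sketch (an added check, not code from the paper) reproduces the spectrum (3.3) and the ground-state components (3.4) for \(\omega=1.7\).

```python
import numpy as np
from math import factorial

def Lx_Ly(K):
    """Truncations of L_x and L_y from (3.1) to span{|0>, ..., |K-1>}."""
    off = np.sqrt(np.arange(1, K) / 2)  # <k|L_x|k+1> = sqrt((k+1)/2)
    Lx = np.diag(off, 1) + np.diag(off, -1)
    Ly = 1j * np.diag(off, -1) - 1j * np.diag(off, 1)
    return Lx, Ly

K, omega = 400, 1.7
Lx, Ly = Lx_Ly(K)
D = omega**2 * Lx @ Lx + Ly @ Ly
evals, evecs = np.linalg.eigh(D)
print(evals[:4] / omega)  # ~ [1, 3, 5, 7], the oscillator spectrum (3.3)

# Ground-state components against formula (3.4); odd entries vanish.
n = np.arange(0, 12, 2)
Hn0 = np.array([(-1) ** (k // 2) * factorial(k) / factorial(k // 2) for k in n])
fact = np.array([float(factorial(k)) for k in n])
psi0 = (omega ** 0.25 * np.sqrt(2 / ((omega + 1) * fact))
        * np.sqrt((omega - 1) / (2 * (omega + 1))) ** n * Hn0)
print(np.max(np.abs(np.abs(evecs[:12:2, 0]) - np.abs(psi0))))  # ~ 1e-15
```

The truncation error for the lowest few eigenpairs is negligible here, which is in line with the quantitative convergence statements of Lemma 3.2 below.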
When restricting the commutation relation, \(\big{[}L_{x}^{(N)},L_{y}^{(N)}\big{]}=iS_{z}/J_{N}\), to the subspace \[\mathcal{H}_{J}^{K_{N}}:=\text{span}\left\{|J-k\rangle\,\big{|}\,\,k\in\{0,\ldots,K_{N}\}\right\}\] with \(|J-J_{N}|\leq\overline{K}_{N}\), the commutator asymptotically agrees as \(N\to\infty\) with the canonical one as long as both \(\overline{K}_{N}\) and \(K_{N}\) grow only suitably slowly with \(N\). Hence \(D_{N}\), when restricted to \(\mathcal{H}_{J}^{K_{N}}\), is expected to agree with the harmonic oscillator \(D\) in the limit \(N\to\infty\). This observation is at the heart of many works on quantum spin systems in the large \(J\) limit [26, 36, 37]. To turn it into a mathematical argument in the present context, we note that through the identification \(|J-k\rangle\equiv|k\rangle\) of their canonical orthonormal basis, the Hilbert spaces \(\mathcal{H}_{J}^{K_{N}}\subset\mathbb{C}^{2J+1}\) are all canonically embedded into \(\ell^{2}(\mathbb{N}_{0})\). This embedding will be denoted by \[I_{J}^{(N)}:\mathcal{H}_{J}^{K_{N}}\to\ell^{2}(\mathbb{N}_{0}),\quad\text{and}\quad\overline{I}_{J}^{(N)}:\ell^{2}(\mathbb{N}_{0})\to\mathcal{H}_{J}^{K_{N}}\] stands for its corresponding projection. Thus \[\overline{D}_{J,N}\coloneqq\overline{I}_{J}^{(N)}DI_{J}^{(N)}\] when restricted to \(\mathcal{H}_{J}^{K_{N}}\) is unitarily equivalent to \(\overline{D}_{N}\coloneqq P_{K_{N}}DP_{K_{N}}\) when restricted to \[I_{J}^{(N)}\mathcal{H}_{J}^{K_{N}}=\text{span}\left\{|k\rangle\in\ell^{2}(\mathbb{N}_{0})\,\,|\,\,k\in\{0,\ldots,K_{N}\}\right\}\subset c_{00}\] with \(P_{K_{N}}\) denoting the corresponding orthogonal projection in \(\ell^{2}(\mathbb{N}_{0})\).

**Lemma 3.2**.:

1. _For any choice of sequences_ \(K_{N}=o(N^{1/2})\) _and_ \(K_{N}\overline{K}_{N}^{3}=o(N^{2})\)_:_ \[\max_{|J-J_{N}|\leq\overline{K}_{N}}\left\|\left(\overline{D}_{J,N}-D_{N}\right)P_{J}^{K_{N}^{-}}\right\|=o(1),\] (3.10) _where_ \(K_{N}^{-}\coloneqq K_{N}-2\) _and_ \(P_{J}^{K_{N}^{-}}\) _denotes the orthogonal projection onto_ \(\mathcal{H}_{J}^{K_{N}^{-}}\subset\mathcal{H}_{J}^{K_{N}}\)_._
2. _For any sequence_ \(K_{N}\to\infty\)_, the operators_ \(\overline{D}_{N}=P_{K_{N}}DP_{K_{N}}\) _converge as_ \(N\to\infty\) _in strong-resolvent sense to_ \(D\)_. Moreover, we also have the pointwise convergence of eigenvalues:_ \[E_{k}\left(\overline{D}_{N}\right)=E_{k}\left(D\right)+o(K_{N}^{-\infty})\] (3.11) _for any_ \(k\in\mathbb{N}_{0}\)_. If_ \(E_{K}\) _and_ \(E_{K}^{(N)}\) _denote the spectral projections onto the_ \(K\in\mathbb{N}\) _lowest eigenvalues of_ \(D\) _and_ \(\overline{D}_{N}\) _on_ \(P_{K_{N}}\ell^{2}(\mathbb{N}_{0})\)_, then_ \[\left\|\left(E_{K}^{(N)}-E_{K}\right)E_{K}^{(N)}\right\|=o(K_{N}^{-\infty}),\] (3.12) _where_ \(o(K_{N}^{-\infty})\) _denotes a sequence which, when multiplied by an arbitrary but fixed power of_ \(K_{N}\)_, goes to zero._

Proof.: 1. For any \(k,k^{\prime}\leq K_{N}\) the matrix-elements of the difference in the canonical basis of \(I_{J}^{(N)}\mathcal{H}_{J}^{K_{N}}\) are given by \[\langle k|D-I_{J}^{(N)}D_{N}\overline{I}_{J}^{(N)}|k^{\prime}\rangle=\langle k|L_{y}^{2}|k^{\prime}\rangle-J_{N}^{-1}\langle J-k|S_{y}^{2}|J-k^{\prime}\rangle+\omega^{2}\left[\langle k|L_{x}^{2}|k^{\prime}\rangle-J_{N}^{-1}\langle J-k|S_{x}^{2}|J-k^{\prime}\rangle\right].\] Each of the two differences on the right side is explicit thanks to (3.1) and (2.7).
If \(k^{\prime}\leq K_{N}-2\), their difference can be expressed in terms of \[\langle m|L_{x}|m^{\prime}\rangle-J_{N}^{-1/2}\langle J-m|S_{x}|J-m^{\prime}\rangle=\delta_{|m-m^{\prime}|,1}\left[\sqrt{\frac{\max\{m,m^{\prime}\}}{2}}-\sqrt{\frac{2J\max\{m,m^{\prime}\}-mm^{\prime}}{4J_{N}}}\right]\] with \(m,m^{\prime}\leq K_{N}\), and analogously for \(y\) instead of \(x\), for which the expression differs only by an overall complex phase. By an explicit calculation, the modulus of the last term is upper bounded according to \[\max_{|J-J_{N}|\leq\overline{K}_{N}}\left|\langle m|L_{\xi}|m^{\prime}\rangle-J_{N}^{-1/2}\langle J-m|S_{\xi}|J-m^{\prime}\rangle\right|\leq\frac{\max\{m,m^{\prime},\overline{K}_{N}\}^{3/2}}{J_{N}\sqrt{1-\max\{m,m^{\prime},\overline{K}_{N}\}/J_{N}}}\ \delta_{|m-m^{\prime}|,1}\] for both \(\xi\in\{x,y\}\). Since also \[\left|\langle m|L_{\xi}|m^{\prime}\rangle\right|\leq\sqrt{\max\{m,m^{\prime}\}/2}\leq\sqrt{K_{N}},\] \[J_{N}^{-1/2}\left|\langle J-m|S_{\xi}|J-m^{\prime}\rangle\right|\leq\sqrt{K_{N}/|\mathbf{m}_{0}|},\] we conclude that for some constant \(C<\infty\) and all \(k\leq K_{N}\), \(k^{\prime}\leq K_{N}-2\): \[\max_{|J-J_{N}|\leq\overline{K}_{N}}\left|\langle k|D-I_{J}^{(N)}D_{N}\overline{I}_{J}^{(N)}|k^{\prime}\rangle\right|\leq\frac{C}{N}\max\left\{K_{N}^{2},\sqrt{K_{N}\overline{K}_{N}^{3}}\right\}\ 1[|k-k^{\prime}|\leq 2]. \tag{3.13}\] By an analogous estimate as in (2.10), we thus arrive at the claimed norm estimate in (3.10).

2. Since \(c_{00}\) is a common core for \(D\) and \(\overline{D}_{N}=P_{K_{N}}DP_{K_{N}}\), the claimed strong-resolvent convergence is immediate from the fact that the matrix elements \(\langle k|D|k^{\prime}\rangle\) vanish unless \(|k-k^{\prime}|\leq 2\) (cf. [44]). To boost this strong convergence to convergence of eigenvalues, we use the block approximation \[\widehat{D}_{N}\coloneqq P_{K_{N}}DP_{K_{N}}+Q_{K_{N}}DQ_{K_{N}}\qquad\text{on }\ell^{2}(\mathbb{N}_{0}), \tag{3.14}\] with \(Q_{K_{N}}=1-P_{K_{N}}\). For \(E_{K}=\sum_{k=0}^{K-1}|\psi_{k}\rangle\langle\psi_{k}|\), we have \[\|E_{K}Q_{K_{N}}\|^{2}=\|E_{K}Q_{K_{N}}E_{K}\|\leq\sum_{k=0}^{K-1}\sum_{n=K_{N}}^{\infty}|\langle n|\psi_{k}\rangle|^{2}=o(K_{N}^{-\infty}).\] The error estimate as \(N\to\infty\) follows from the exponential decay estimate (3.5). The matrix elements \(\langle n|D-\widehat{D}_{N}|m\rangle\) are non-vanishing only for \(|n-K_{N}|,|m-K_{N}|\leq 2\). Moreover, \(|\langle n|D-\widehat{D}_{N}|m\rangle|\leq\mathcal{O}(K_{N})\) such that again by the exponential decay estimate (3.5): \[\left\|(D-\widehat{D}_{N})E_{K}\right\|=o(K_{N}^{-\infty}). \tag{3.15}\] Denoting by \(F_{K}=1-E_{K}\), this implies that \(\left\|E_{K}(D-\widehat{D}_{N})E_{K}\right\|=o(K_{N}^{-\infty})\) and \(\left\|F_{K}\widehat{D}_{N}E_{K}\right\|=o(K_{N}^{-\infty})\). Since also \[F_{K}\widehat{D}_{N}F_{K}\geq F_{K}P_{K_{N}}DP_{K_{N}}F_{K}+E_{K}(D)\,F_{K}Q_{K_{N}}F_{K}Q_{K_{N}}F_{K}\] \[=F_{K}P_{K_{N}}\left(D-E_{K}(D)\right)P_{K_{N}}F_{K}+E_{K}(D)\,\left(F_{K}-F_{K}Q_{K_{N}}E_{K}Q_{K_{N}}F_{K}\right)\] \[\geq E_{K}(D)\,\,F_{K}\left(1-2\|Q_{K_{N}}E_{K}\|^{2}\right)=E_{K}(D)\,\,F_{K}\left(1-o(K_{N}^{-\infty})\right),\] the Schur-complement method [47, 51, 52] proves that the eigenvalues of \(\widehat{D}_{N}\) strictly below \(E_{K}(D)\,(1-o(1))\) asymptotically coincide with those of \(D\) up to an error of order \(o(K_{N}^{-\infty})\).
A similar estimate also shows that \(Q_{K_{N}}DQ_{K_{N}}\geq E_{K}(D)Q_{K_{N}}\,(1-o(1))\) for any \(K\in\mathbb{N}\), so that the low-energy spectrum of \(\widehat{D}_{N}\) entirely coincides with that of its first block \(\overline{D}_{N}\). This finishes the proof of (3.11). The assertion (3.12) then follows from (3.15), the discrete nature of the spectrum of \(D\) and a standard perturbation theory bound based on the representation of the spectral projection as a contour integral involving resolvents (cf. [44]).

### Proof of Theorem 2.3

For a proof of Theorem 2.3, we analyse all the blocks \(H_{J,\alpha}\) in the decomposition (2.1) of the Hamiltonian separately. The main contribution to the lowest energies of \(H\) will come from the largest \(J\) and the vicinity of the minimum. Projection techniques will help to focus on this patch. To facilitate notation, by a unitary rotation we subsequently assume without loss of generality that the minimum of \(h\) is at \(\mathbf{m}_{0}=(0,0,1)^{T}\), and that the projected Hessian \(Q_{\perp}D_{h}(\mathbf{m}_{0})Q_{\perp}\) as a matrix on \(\operatorname{ran}Q_{\perp}\) has its eigenvectors aligned with the \(x\)- and \(y\)-direction with \(\omega_{x}\) and \(\omega_{y}\) the corresponding eigenvalues. This entails that \[\widehat{Q}_{N}(\mathbf{m}_{0})=Nh(\mathbf{m}_{0})\mathbbm{1}+(N-2S_{z})\,|\nabla h(\mathbf{m}_{0})|+\frac{2}{N}\left(\omega_{x}S_{x}^{2}+\omega_{y}S_{y}^{2}\right), \tag{3.16}\] cf. (2.11). We will also assume without loss of generality that the positive eigenvalues of \(D_{h}^{\perp}(\mathbf{m}_{0})\) are ordered according to \(0<\omega_{y}^{\perp}=\omega_{y}+|\nabla h(\mathbf{m}_{0})|\leq\omega_{x}+|\nabla h(\mathbf{m}_{0})|=\omega_{x}^{\perp}\), and we note that \[\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})}=\sqrt{\omega_{x}^{\perp}\omega_{y}^{\perp}}=\omega_{y}^{\perp}\,\omega,\quad\text{with }\omega^{2}:=\frac{\omega_{x}^{\perp}}{\omega_{y}^{\perp}}\geq 1. \tag{3.17}\] Moreover, we will use throughout the whole proof \(K_{N}=\overline{K}_{N}=o(N^{1/3})\) diverging as \(N\to\infty\).

#### 3.2.1 Limiting operator for \(J\geq N/2-K_{N}\)

By assumption (2.4) and Lemma 2.2, on the increasing subspaces \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{0})\) with \(J\geq N/2-K_{N}\), the shifted quadratic polynomial \(\kappa(\mathbf{m}_{0})\mathbbm{1}+\widehat{Q}_{N}(\mathbf{m}_{0})\) approximates \(H_{J,\alpha}\) uniformly in norm to order \(o(1)\). Next, we show that this quadratic polynomial is well approximated by \[H_{J}^{(N)}:=Nh(\mathbf{m}_{0})+\kappa(\mathbf{m}_{0})+|\nabla h(\mathbf{m}_{0})|(N-2J-1)+\omega_{y}^{\perp}\,\,\overline{I}_{J}^{(N)}DI_{J}^{(N)}\quad\text{on }\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{0})\] with \(D=\omega^{2}L_{x}^{2}+L_{y}^{2}\) on \(\ell^{2}(\mathbb{N}_{0})\) and \(\omega\) from (3.17).

**Lemma 3.3**.: _In the situation of Theorem 2.3, if \(K_{N}=o(N^{1/3})\):_ \[\max_{J\geq N/2-K_{N}}\max_{\alpha}\left\|\left(H_{J,\alpha}-H_{J}^{(N)}\right)P_{J}^{K_{N}^{-}}(\mathbf{m}_{0})\right\|=o(1). \tag{3.18}\]

Proof.: We use assumption (2.4) and Lemma 2.2 to establish the claim (3.18) with \(H_{J}^{(N)}\) replaced by \(\kappa(\mathbf{m}_{0})\mathbb{1}+\widehat{Q}_{N}(\mathbf{m}_{0})\).
In order to simplify the term in (3.16) proportional to \(|\nabla h(\mathbf{m}_{0})|\), we rewrite \[N-2S_{z}=\frac{N}{2}-\frac{2}{N}\left(\mathbf{S}^{2}-S_{x}^{2}-S_{y}^{2}\right)+\frac{(N-2S_{z})^{2}}{2N}.\] On \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{0})\) with \(J\geq N/2-K_{N}\), the norm of the last term is bounded by \((4K_{N})^{2}/(2N)=o(1)\). To that order, we can therefore replace \(H_{J}^{(N)}\) by \[\widehat{H}_{J}^{(N)}:=Nh(\mathbf{m}_{0})+\kappa(\mathbf{m}_{0})+\left|\nabla h(\mathbf{m}_{0})\right|(N-2J-1)+\omega_{y}^{\perp}D_{N},\] \[\text{with}\quad D_{N}=\frac{\omega^{2}}{J_{N}}S_{x}^{2}+\frac{1}{J_{N}}S_{y}^{2}\quad\text{and}\quad J_{N}=\frac{N}{2}.\] Lemma 3.2 then yields \[\max_{J\geq N/2-K_{N}}\left\|(H_{J}^{(N)}-\widehat{H}_{J}^{(N)})P_{J}^{K_{N}^{-}}(\mathbf{m}_{0})\right\|=o(1),\] which completes the proof.

#### 3.2.2 Truncations

To control the blocks \(H_{J,\alpha}\) for values of \(J\) other than the ones of the previous paragraphs, we use the following lemma.

**Lemma 3.4**.: _In the situation of Theorem 2.3, there are constants \(c,C\in(0,\infty)\) such that for all \((J,\alpha)\):_ \[H_{J,\alpha}\geq Nh(\mathbf{m}_{0})-C+c\left(N-2\ \mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{S}\right). \tag{3.19}\]

Proof.: From assumption (2.2) we infer that for some \(C<\infty\): \[H_{J,\alpha}\geq-C+\frac{2J+1}{4\pi}\int Nh\Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}\left|\Omega,J\right\rangle\!\left\langle\Omega,J\right|\,d\Omega. \tag{3.20}\] Since \(\mathbf{m}_{0}\in S^{2}\) was assumed to be the unique minimum of \(h:B_{1}\to\mathbb{R}\) and \(|\nabla h(\mathbf{m}_{0})|>0\), there is some \(c>0\) such that for all \(\mathbf{m}\in B_{1}\): \[h(\mathbf{m})\geq h(\mathbf{m}_{0})+c(1-\mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{m}). \tag{3.21}\] Plugging this estimate into the above operator inequality and using the fact that by Lemma A.5 \[\frac{2J+1}{4\pi}\int J\,\mathbf{m}_{0}\cdot\mathbf{e}(\Omega)\,\big{|}\Omega,J\rangle\!\left\langle\Omega,J\right|\,d\Omega\geq-C+\mathbf{e}_{\mathbf{m}_{0}}\cdot\mathbf{S}\] with some constant \(C\), which does not depend on \(N,J\), we arrive at the claimed matrix inequality.

#### 3.2.3 Finishing the proof

With the above preparations, we are ready to spell out the proof of Theorem 2.3. Aside from exploiting the block decomposition (2.1), the argument essentially relies on a two-step approximation procedure and a Schur-complement analysis [47, 51, 52] in the main blocks corresponding to \(J\geq N/2-K_{N}\).

Proof of Theorem 2.3.: We investigate the spectrum of each block \(H_{J,\alpha}\) of the Hamiltonian in (2.1) separately, and distinguish cases. In case \(J\leq N/2-K_{N}\), we use Lemma 3.4 to conclude that for all \(\alpha\): \[H_{J,\alpha}\geq Nh(\mathbf{m}_{0})-C+2cK_{N}. \tag{3.22}\] Since \(K_{N}\to\infty\) as \(N\to\infty\), these blocks clearly do not contribute to the asserted low-energy spectrum below \(E_{0}(H)+\Delta\) with arbitrary \(\Delta>0\).
In case \(J\geq N/2-K_{N}\), we consider the approximating Hamiltonian \(H_{J}^{(N)}\) on \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{0})\) from Lemma 3.3, and define its projection \[\widetilde{H}_{J}^{(N)}:=P_{J}^{K_{N}^{-}}(\mathbf{m}_{0})H_{J}^{(N)}P_{J}^{ K_{N}^{-}}(\mathbf{m}_{0})\quad\text{on }\mathcal{H}_{J}^{K_{N}^{-}}(\mathbf{m}_{0}).\] Note that this operator is unitarily equivalent to \[\left(Nh(\mathbf{m}_{0})\mathbbm{1}+\kappa(\mathbf{m}_{0})+|\nabla h(\mathbf{ m}_{0})|(N-2J-1)\right)P_{K_{N}^{-}}+\omega_{y}^{\perp}P_{K_{N}^{-}}DP_{K_{N}^{-}} \text{ on }P_{K_{N}^{-}}\ell^{2}(\mathbb{N}_{0}). \tag{3.23}\] We fix \(K\in\mathbb{N}\) arbitrary, and let \(E_{K}^{(N)}\) stand for the orthogonal projection onto the \(K\)-dimensional subspace of \(\mathcal{H}_{J}^{K_{N}^{-}}(\mathbf{m}_{0})\) spanned by eigenvectors of the \(K\) lowest eigenvalues of \(\widetilde{H}_{J}^{(N)}\). Its orthogonal complement in \(\mathbb{C}^{2J+1}\) will be denoted by \(F_{K}^{(N)}=\mathbbm{1}_{\mathbb{C}^{2J+1}}-E_{K}^{(N)}\). Using Lemma 3.3 we conclude \[\max_{J\geq N/2-K_{N}}\max_{\alpha}\left\|\widetilde{H}_{J}^{(N)} -E_{K}^{(N)}H_{J,\alpha}E_{K}^{(N)}\right\|=o(1), \tag{3.24}\] \[\max_{J\geq N/2-K_{N}}\max_{\alpha}\left\|F_{K}^{(N)}H_{J,\alpha }E_{K}^{(N)}\right\|\leq\max_{J\geq N/2-K_{N}}\left\|F_{K}^{(N)}\overline{D}_ {J,N}E_{K}^{(N)}\right\|+o(1).\] The term in the right side equals \[\left\|F_{K}^{(N)}\overline{I}_{J}^{(N)}(D-P_{K_{N}^{-}}DP_{K_{N}^{-}})I_{J}^ {(N)}E_{K}^{(N)}\right\|\leq\left\|(D-P_{K_{N}^{-}}DP_{K_{N}^{-}})I_{J}^{(N)} E_{K}^{(N)}\right\|=o(1), \tag{3.25}\] and its convergence follows from (3.12) and (3.15). For the Schur-complement analysis it thus remains to control the lowest eigenvalue of the block \(F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)}\). For this task, we again employ Lemma 3.4, which yields the matrix inequality \[F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)} \geq\left(Nh(\mathbf{m}_{0})-C\right)F_{K}^{(N)}+c\ F_{K}^{(N)} \left(N-2S_{z}\right)F_{K}^{(N)}\] \[\geq\left(Nh(\mathbf{m}_{0})-C\right)F_{K}^{(N)}+2c\,M\left(1- \left\|F_{K}^{(N)}P_{J}^{M}F_{K}^{(N)}\right\|\right)F_{K}^{(N)} \tag{3.26}\] where \(P_{J}^{M}\) is the orthogonal projection in \(\mathbb{C}^{2J+1}\) onto the subspace spanned by the eigenvectors \(|J\rangle,|J-1\rangle,\ldots,|J-M\rangle\) of \(S_{z}\) corresponding to the highest \(M+1\) eigenvalues. We establish a lower bound on the last term in (3.26) with the help of an upper bound on \[\|F_{K}^{(N)}P_{J}^{M}F_{K}^{(N)}\|=\|P_{J}^{M}F_{K}^{(N)}P_{J}^{M}\|\leq\sum_{ m=0}^{M}\langle J-m|\ F_{K}^{(N)}|J-m\rangle. \tag{3.27}\] By Lemma 3.2, as \(N\to\infty\) and for any sequence \(J\geq N/2-K_{N}\), the matrices \(P_{K_{N}^{-}}DP_{K_{N}^{-}}\) converge in strong-resolvent sense to \(D\). This implies the weak convergence of the corresponding spectral projections. In particular, for each fixed \(m\in\mathbb{N}_{0}\) and any sequence \(J\geq N/2-K_{N}\): \[\langle J-m|\ F_{K}^{(N)}|J-m\rangle=\sum_{k>K}|\langle m|\psi_{k}\rangle|^{2} +o(1), \tag{3.28}\] where \((\psi_{k})_{k\in\mathbb{N}_{0}}\) denotes the orthonormal eigenbasis of \(D\). The latter are estimated in Proposition 3.1. Thanks to the exponential decay estimate (3.6), the right side is exponentially small in \(K-M\) for all \(m\in\{0,1,\ldots,M\}\) provided \(K\geq 2M\). Hence, if we choose \(M\) large enough and subsequently \(K\) suitably larger, the prefactor in the last term in (3.26) exceeds any prescribed constant. The block \(F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)}\) then does not contribute to the low-energy spectrum of \(H_{J,\alpha}\). 
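The Schur-complement mechanism just used — the low eigenvalues of a Hermitian block matrix agree with those of its low-energy corner once the complementary block is pushed to high energies and the coupling is small — can be isolated in a toy computation. All matrices below are synthetic stand-ins for the blocks \(E_{K}^{(N)}H E_{K}^{(N)}\), \(F_{K}^{(N)}H F_{K}^{(N)}\), and \(E_{K}^{(N)}H F_{K}^{(N)}\); the sizes and scales are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 60
A = np.diag([1.0, 2.0, 3.0, 4.0])                 # low-energy corner ("E H E")
C = 80.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))
C = (C + C.T) / 2                                 # high-energy block ("F H F")
B = 0.1 * rng.standard_normal((k, n))             # weak coupling ("E H F")
H = np.block([[A, B], [B.T, C]])

low = np.linalg.eigvalsh(H)[:k]                   # true lowest eigenvalues
print(np.max(np.abs(low - np.diag(A))))           # already O(|B|^2 / gap)
# the Schur complement A - B C^{-1} B^T matches even more closely
schur = np.linalg.eigvalsh(A - B @ np.linalg.solve(C, B.T))
print(np.max(np.abs(low - schur)))
```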
By (3.24) the low-energy spectrum of \(H_{J,\alpha}\) hence asymptotically agrees with the lowest eigenvalues of \(\widetilde{H}_{J}^{(N)}\). In turn, they agree with those of the operator in (3.23). By the convergence (3.11), the eigenvalues of the last term in (3.23) asymptotically agree with those of \(\omega_{y}^{\perp}D\), which are given by the oscillator values (3.3). For \(k=N/2-J\in\mathbb{N}_{0}\) fixed, this yields the expression (2.15) for the eigenvalues including their multiplicities. The minimal eigenvalue \(E_{0}(H)\) asymptotically corresponds to the choice \(k=m=0\), which establishes (2.14). The corresponding weak convergence of the eigenvector (2.13) is a consequence of the Schur-complement analysis, Lemma 3.2 and the explicit expression (3.4) for the ground-state of \(D\). Finally, the expression (2.16) for the spectral gap is immediate from (2.15), since the first excited state corresponds either to \(k=0\) and \(m=1\) or vice versa. ### Proof of Theorem 2.4 In case the unique minimum is found at \(\mathbf{m}_{0}\in B_{1}\) with \(0<|\mathbf{m}_{0}|<1\), the low-energy spectrum of \(H\) stems again from blocks \(H_{J,\alpha}\) with \(J\)-values in the vicinity of \(J_{N}(\mathbf{m}_{0})=N|\mathbf{m}_{0}|/2\). This is evident from the following lower bound, which substitutes for Lemma 3.4 in the present case. **Lemma 3.5**.: _In the situation of Theorem 2.4, there are constants \(c,C\in(0,\infty)\) such that for all \((J,\alpha)\) and all \(\mathbf{m}_{1}\in B_{1}\):_ \[H_{J,\alpha}\geq Nh(\mathbf{m}_{0})-C-\frac{cN}{4}|\mathbf{m}_{1}-\mathbf{m}_ {0}|^{2}+\frac{c}{N}\left(J-J_{N}(\mathbf{m}_{1})\right)^{2}+c|\mathbf{m}_{1}| \left(J-\mathbf{e}_{\mathbf{m}_{1}}\cdot\mathbf{S}\right). \tag{3.29}\] Proof.: We use that \(\mathbf{m}_{0}\in B_{1}\) is the unique minimum of \(h:B_{1}\to\mathbb{R}\) at which \(D_{h}(\mathbf{m}_{0})>0\). Hence there is some \(c>0\) such that: \[h(\mathbf{m})\geq h(\mathbf{m}_{0})+\frac{c}{4}\left|\mathbf{m}-\mathbf{m}_{0 }\right|^{2}\geq h(\mathbf{m}_{0})+\frac{c}{8}\left|\mathbf{m}-\mathbf{m}_{1} \right|^{2}-\frac{c}{2}\left|\mathbf{m}_{1}-\mathbf{m}_{0}\right|^{2}. \tag{3.30}\] The last inequality was obtained using Cauchy-Schwarz and holds for all \(\mathbf{m}\in B_{1}\). Plugging the estimate into (3.20) yields the claim using the same arguments as in the proof of Lemma 3.4. Proof of Theorem 2.4.: A Rayleigh-Ritz bound with the variational state \(\left|\Omega_{0},J_{N}(\mathbf{m}_{0})\right\rangle\), with \(\Omega_{0}\) the spherical angle of \(\mathbf{e}_{\mathbf{m}_{0}}\), yields the upper bound \[E_{0}(H)\leq E_{0}\big{(}H_{J_{N}(\mathbf{m}_{0}),\alpha}\big{)}\leq\langle \Omega_{0},J_{N}(\mathbf{m}_{0})|H_{J_{N}(\mathbf{m}_{0}),\alpha}|\Omega_{0},J _{N}(\mathbf{m}_{0})\rangle\leq Nh(\mathbf{m}_{0})+C\] by Proposition A.3. Combining this with Lemma 3.5 with \(\mathbf{m}_{1}=\mathbf{m}_{0}\), we immediately conclude that the blocks \(H_{J,\alpha}\) corresponding to \(|J-J_{N}(\mathbf{m}_{0})|\geq C\sqrt{N}\) with \(C>0\) suitably large, but fixed, do not contribute to the spectrum near the ground state. It thus remains to analyse the blocks corresponding to \(|J-J_{N}(\mathbf{m}_{0})|\leq C\sqrt{N}\). For any \(J\) in this regime, we associate a radius \(r_{J,N}:=2J/N\), which by construction satisfies \(|r_{J,N}-|\mathbf{m}_{0}||\leq CN^{-1/2}\). 
The value of \(h\) at the point on this sphere in the direction of \(\mathbf{m}_{0}\) is controlled by a second-order Taylor estimate using \(|\nabla h(\mathbf{m}_{0})|=0\): \[\left|h\left(r_{J,N}\mathbf{e}_{\mathbf{m}_{0}}\right)-h(\mathbf{m}_{0}) \right|\leq C\left|r_{J,N}-\left|\mathbf{m}_{0}\right|\right|^{2}\leq\frac{C}{ N}, \tag{3.31}\] where here and in the following the constant \(C\) changes from line to line, but remains independent of \(J\) and \(N\). Hence, the minimum \(\mathbf{m}_{J,N}:=\arg\min\{h(\mathbf{m})\,|\,|\mathbf{m}|=r_{J,N}\}\) on the sphere of radius \(r_{J,N}\) has distance to \(\mathbf{m}_{0}\) at most \[\left|\mathbf{m}_{J,N}-\mathbf{m}_{0}\right|\leq\frac{C}{\sqrt{N}}, \tag{3.32}\] since otherwise the first inequality in (3.30) together with the above estimate of \(h\left(r_{J,N}\mathbf{e}_{\mathbf{m}_{0}}\right)\) would yield a contradiction to the minimality of \(h(\mathbf{m}_{J,N})\). Using Taylor-estimates for \(h\in C^{3}\), this also implies: \[\left|\nabla h(\mathbf{m}_{J,N})\right|=\left|\nabla h(\mathbf{m} _{J,N})-\nabla h(\mathbf{m}_{0})\right|\leq C\left|\mathbf{m}_{J,N}-\mathbf{m} _{0}\right|\leq\frac{C}{\sqrt{N}},\] \[\left|\kappa(\mathbf{m}_{J,N})-\kappa(\mathbf{m}_{0})\right|\leq C \left|\mathbf{m}_{J,N}-\mathbf{m}_{0}\right|\leq\frac{C}{\sqrt{N}},\] \[\left\|D_{h}(\mathbf{m}_{J,N})-D_{h}(\mathbf{m}_{0})\right\|\leq C \left|\mathbf{m}_{J,N}-\mathbf{m}_{0}\right|\leq\frac{C}{\sqrt{N}}. \tag{3.33}\] Having singled out \(\mathbf{m}_{J,N}\), we now use assumption **A1** to approximate \(H_{J,\alpha}\) on the subspace \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{J,N})\), where we pick \(K_{N}=o(N^{1/6})\) diverging to infinity. This enables us to use Lemma 2.2 (with \(\overline{K}_{N}=1\)) and the second-order polynomial \(\widehat{Q}(\mathbf{m}_{J,N})\) in this approximation. The above estimate on the gradient implies that the first-order term is negligible (note that \(\nabla h(\mathbf{m}_{J,N})\) is radial by the minimality of \(\mathbf{m}_{J,N}\) on its sphere, so only the component of \(2\mathbf{S}-N\mathbf{m}_{J,N}\) along \(\mathbf{m}_{J,N}\) contributes): \[\left\|\nabla h(\mathbf{m}_{J,N})\cdot\left(2\mathbf{S}-N\mathbf{m}_{J,N} \right)P_{J}^{K_{N}}(\mathbf{m}_{J,N})\right\|\leq 2\left|\nabla h(\mathbf{m}_{J,N })\right|K_{N}\leq C\frac{K_{N}}{\sqrt{N}}=o(1).\] Changing coordinates by a unitary rotation such that \(\mathbf{m}_{J,N}=(0,0,r_{J,N})\), we thus conclude that on \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{J,N})\) we may approximate \(H_{J,\alpha}\) in terms of \[Nh(\mathbf{m}_{J,N})+\kappa(\mathbf{m}_{J,N})+\frac{2\ r_{J,N}}{N}\left( \omega_{x}^{(N)}S_{x}^{2}+\omega_{y}^{(N)}S_{y}^{2}\right),\] where \(\omega_{x}^{(N)},\omega_{y}^{(N)}\) are the two eigenvalues of \(D_{h}^{\perp}(\mathbf{m}_{J,N})\), i.e., the Hessian projected onto the directions perpendicular to \(\mathbf{m}_{J,N}\). Since \(D_{h}(\mathbf{m}_{0})>0\), these eigenvalues are uniformly bounded away from zero for all \(|J-J_{N}(\mathbf{m}_{0})|\leq C\sqrt{N}\) by (3.33). Moreover, they uniformly converge to the eigenvalues \(0<\omega_{y}\leq\omega_{x}\) of \(D_{h}^{\perp}(\mathbf{m}_{0})\) as \(N\to\infty\). In fact, from (3.33) we have the estimates \(|\omega_{\xi}^{(N)}-\omega_{\xi}|\leq CN^{-1/4}\) for both \(\xi\in\{x,y\}\). Since also \(\left\|S_{\xi}^{2}P_{J}^{K_{N}}(\mathbf{m}_{J,N})\right\|\leq CNK_{N}\) (cf. 
(2.8)), we may thus use \[H_{J}^{(N)}:=Nh(\mathbf{m}_{J,N})+\kappa(\mathbf{m}_{0})+|\mathbf{m}_{0}|\, \omega_{y}\ \overline{I}_{J}^{(N)}\left(\omega^{2}L_{x}^{2}+L_{y}^{2}\right)I_{J}^{(N)} \quad\text{on }\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{J,N})\] to show the following substitute for Lemma 3.3: \[\max_{|J-J_{N}(\mathbf{m}_{0})|\leq C\sqrt{N}}\left\|\left(H_{J,\alpha}-H_{J}^ {(N)}\right)P_{J}^{K_{N}}(\mathbf{m}_{J,N})\right\|=o(1).\] We then proceed with the Schur-complement analysis as in the proof of Theorem 2.3 with only one modification. To control the last block, we use Lemma 3.5 with \(\mathbf{m}_{1}=\mathbf{m}_{J,N}\). For this choice, the quadratic difference term in its right side is bounded by a constant thanks to (3.32). This proves that \(E_{0}(H_{J,\alpha})=Nh(\mathbf{m}_{J,N})+\kappa(\mathbf{m}_{0})+|\mathbf{m}_{0 }|\sqrt{\det D_{h}^{\perp}(\mathbf{m}_{0})}+o(1)\) for all blocks \((J,\alpha)\) with \(|J-J_{N}(\mathbf{m}_{0})|\leq C\sqrt{N}\). Clearly for \(J=J_{N}(\mathbf{m}_{0})\), we have \(\mathbf{m}_{J,N}=\mathbf{m}_{0}\), which concludes the proof of (2.17). The assertion concerning the regime \(|J-J_{N}(\mathbf{m}_{0})|=o(\sqrt{N})\) follows by a Taylor estimate as in (3.31). ### Proof of Theorem 2.5 Compared to the case of one minimum, the case of several minima \(\mathcal{M}=\{\mathbf{m}_{1},\ldots,\mathbf{m}_{L}\}\subset S^{2}\) of \(h\) poses the additional problem of separating the patches around them. In the next subsection, we use semiclassical analysis based on refined projection techniques to tackle this issue and to focus on the subspaces from which the low-energy spectrum arises. The part going beyond this semiclassical analysis largely parallels the proof in the previous section. #### 3.4.1 Semiclassics for subspace decompositions In each of the subspaces \(\mathcal{H}_{J}^{K}(\mathbf{m}_{l})\subset\mathbb{C}^{2J+1}\), which are associated to the minimizing directions \(\mathbf{m}_{l}\), we may choose the canonical orthonormal basis consisting of normalized eigenvectors of \(\mathbf{m}_{l}\cdot\mathbf{S}\), i.e. \[\mathbf{m}_{l}\cdot\mathbf{S}\ |k;\mathbf{m}_{l}\rangle=k\ |k;\mathbf{m}_{l} \rangle,\quad k\in\{-J,\ldots,J\},\] and hence \(\mathcal{H}_{J}^{K}(\mathbf{m}_{l})=\operatorname{span}\left\{|J-k;\mathbf{m}_ {l}\rangle\ |\ k\in\{0,1,\ldots,K\}\right\}\). **Lemma 3.6**.: _Let \(\mathcal{M}:=\{\mathbf{m}_{1},\ldots,\mathbf{m}_{L}\}\subset S^{2}\) be a finite set, and let \(J\in\mathbb{N}/2\) and \(K\in\mathbb{N}\) be fixed. Then the spectrum of the Gram matrix_ \[G:=\left(\langle J-k^{\prime};\mathbf{m}_{l^{\prime}}|J-k;\mathbf{m}_{l} \rangle\right)_{\begin{subarray}{c}l,l^{\prime}\in\{1,\ldots,L\}\\ k,k^{\prime}\in\{0,1,\ldots,K\}\end{subarray}} \tag{3.34}\] _is contained in the set \([1-R_{J}^{K},1+R_{J}^{K}]\) with_ \[R_{J}^{K}=(L-1)(K+1)(4KJ)^{K}\exp\left(-(J-2K)\gamma\right)\quad\text{and} \quad\gamma=\min_{l\neq l^{\prime}}\ln\left(\cos\frac{\sphericalangle(\mathbf{m}_ {l},\mathbf{m}_{l^{\prime}})}{2}\right)^{-1}>0,\] _with \(\sphericalangle(\mathbf{m}_{l},\mathbf{m}_{l^{\prime}})\) the spherical angle between the two points._ Proof.: From Lemma A.2 we infer that for all \(k,k^{\prime}\leq K\): \[\big{|}\langle J-k;\mathbf{m}_{l}\big{|}J-k^{\prime};\mathbf{m}_{l^{\prime}} \rangle\big{|}\leq(2J)^{(k+k^{\prime})/2}(2\max\{k,k^{\prime}\})^{\min\{k,k^{ \prime}\}}\,\bigg{|}\!\cos\frac{\sphericalangle(\mathbf{m}_{l},\mathbf{m}_{l^{\prime}}) }{2}\bigg{|}^{2J-(k+k^{\prime})}\,. 
\tag{3.35}\] Since \(G\) is a Hermitian block matrix with \(L\times L\) blocks of size \(K+1\) and the diagonal blocks all equal to the unit matrix, the assertion follows straightforwardly from Gershgorin's circle theorem. The last lemma ensures that for \(J\geq N/2-K_{N}\) and \(K_{N}=o(N/\ln N)\): \[R_{J}^{K_{N}}=o(N^{-\infty}). \tag{3.36}\] The union of the individual basis vectors of \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\) for any finite number of directions hence still forms a set of linearly independent vectors and thus a basis of the joint subspace \[\mathcal{H}_{J}^{K_{N}}:=\bigvee_{l=1}^{L}\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_ {l}).\] In this situation, we may construct an orthonormal basis using the square-root of the inverse, \(G^{-1/2}\), of the Gram matrix defined as in (3.34): \[\big{|}(k,l)\rangle:=\sum_{l^{\prime}=1}^{L}\sum_{k^{\prime}=0}^{K_{N}}G_{(kl),(k^{\prime}l^{\prime})}^{-1/2}\big{|}J-k^{\prime};\mathbf{m}_{l^{\prime}} \rangle,\qquad(k,l)\in\{0,\ldots,K_{N}\}\times\{1,\ldots,L\}. \tag{3.37}\] Clearly, \(\langle(k^{\prime},l^{\prime})\big{|}(k,l)\rangle=\delta_{l,l^{\prime}} \delta_{k,k^{\prime}}\). The spectral projections onto \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\) and its cousins after orthogonalization, \[P_{J}^{K_{N}}(\mathbf{m}_{l}):=\sum_{k=0}^{K_{N}}\big{|}J-k;\mathbf{m}_{l} \rangle\langle J-k;\mathbf{m}_{l}\big{|},\qquad P_{J}^{K_{N}}(l):=\sum _{k=0}^{K_{N}}\big{|}(k,l)\rangle\langle(k,l)\big{|}, \tag{3.38}\] are then norm-close to each other. **Theorem 3.7**.: _For \(N/2\geq J\geq N/2-K_{N}\) and \(K_{N}=o(N/\ln N)\):_ 1. \(\big{\|}|(k,l)\rangle-\big{|}J-k;\mathbf{m}_{l}\rangle\big{\|}=o(N^{-\infty})\) _for all_ \((k,l)\in\{0,\ldots,K_{N}\}\times\{1,\ldots,L\}\)_._ 2. \(\max_{l}\Big{\|}P_{J}^{K_{N}}(\mathbf{m}_{l})-P_{J}^{K_{N}}(l)\Big{\|}=o(N^{- \infty})\)_, and the projection_ \(P_{J}^{K_{N}}:=\sum_{l=1}^{L}P_{J}^{K_{N}}(l)\) _onto_ \(\mathcal{H}_{J}^{K_{N}}\) _is approximated according to_ \[\left\|P_{J}^{K_{N}}-\sum_{l=1}^{L}P_{J}^{K_{N}}(\mathbf{m}_{l})\right\|=o(N^ {-\infty}).\] (3.39) Proof.: 1. By definition we have \[\big{|}J-k;\mathbf{m}_{l}\rangle=\sum_{l^{\prime}=1}^{L}\sum_{k^{\prime}=0}^{ K_{N}}G_{(kl),(k^{\prime}l^{\prime})}^{1/2}\big{|}(k^{\prime},l^{\prime})\rangle.\] The norm of the difference vector is hence bounded according to \[\left\|\left|(k,l)\right>-\left|J-k;\mathbf{m}_{l}\right>\right\|\leq\left\|G^{1/ 2}-\mathbb{1}\right\|\leq\max_{|\lambda-1|\leq R_{J}^{K_{N}}}\left|\sqrt{ \lambda}-1\right|\leq\frac{R_{J}^{K_{N}}}{2\sqrt{1-R_{J}^{K_{N}}}}.\] The last inequality follows from the bounds on the eigenvalues of \(G\) established in Lemma 3.6. The assertion is therefore a simple consequence of (3.36). 2. The assertions follow immediately from the first item with the help of the triangle inequality. In order to be able to control the relation of vectors in \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\) to a slightly fattened version of \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l^{\prime}})\) with \(l\neq l^{\prime}\), we also need the following lemma. It will play a crucial role in the truncation procedure below. **Lemma 3.8**.: _Let_ \[0<\kappa<\frac{1}{2}\left(\sin\frac{\sphericalangle(\mathbf{m}_{l},\mathbf{m}_{l^{ \prime}})}{2}\right)^{2}, \tag{3.40}\] _and suppose that \(N/2\geq J\geq N/2-K_{N}\) and \(K_{N}=o(N/\ln N)\). Then:_ \[\sum_{k=0}^{K_{N}}\sum_{k^{\prime}=0}^{\kappa N}\left|\left<J-k^{\prime}; \mathbf{m}_{l^{\prime}}|J-k;\mathbf{m}_{l}\right>\right|^{2}=o(N^{-\infty}). 
\tag{3.41}\] Proof.: We split the \(k^{\prime}\)-sum into two terms. The first sum, which runs up to \(K_{N}\), is estimated with the help of (3.35), from which the claim follows by the same lines of reasoning as above. For the second sum, we have \(k\leq k^{\prime}\). Abbreviating \(\theta:=\sphericalangle(\mathbf{m}_{l},\mathbf{m}_{l^{\prime}})\), we estimate this part with the help of (A.4) by \[\sum_{k=0}^{K_{N}}\binom{2J}{k}\left(\kappa N\right)^{2k}\left(\frac{2+2(\sin \frac{\theta}{2})^{2}}{\sin\theta}\right)^{2k}\left(\cos\frac{\theta}{2} \right)^{4J}\sum_{k^{\prime}=0}^{\kappa N}\binom{2J}{k^{\prime}}\left(\tan \frac{\theta}{2}\right)^{2k^{\prime}}.\] The truncated binomial is bounded by a standard Chernoff bound for any \(t>0\) \[\left(\cos\frac{\theta}{2}\right)^{4J}\sum_{k^{\prime}=0}^{\kappa N}\binom{2 J}{k^{\prime}}\left(\tan\frac{\theta}{2}\right)^{2k^{\prime}}\leq e^{t \kappa N}\left(1-(1-e^{-t})\ \left(\sin\frac{\theta}{2}\right)^{2}\right)^{N}. \tag{3.42}\] Choosing \(t(\theta)=\ln\big{[}\frac{1-\kappa}{\kappa}\left(\tan\frac{\theta}{2}\right)^{2}\big{]}>0\), the right side is of the form \(\exp\left(-N\alpha(\theta)\right)\) with \(\alpha(\theta)=-\kappa t(\theta)-\ln\left[\cos^{2}\left(\frac{\theta}{2}\right) /(1-\kappa)\right]>0\). Since the remaining summation is estimated trivially by the number of terms \(K_{N}\) times the maximal term, which occurs at \(k=K_{N}\), with the binomial also trivially bounded, \(\binom{2J}{k}\leq N^{k}\leq N^{K_{N}}\), the result follows. #### 3.4.2 Truncation By assumption, the minima of \(h\) have the property \(h(\mathbf{m})\geq h(\mathbf{m}_{1})=\cdots=h(\mathbf{m}_{L})\) for all \(\mathbf{m}\in B_{1}\). A substitute for (3.21) is provided by a bound of the form \[h(\mathbf{m})\geq h(\mathbf{m}_{1})+f_{0}+\sum_{l=1}^{L}f\left(\mathbf{m}_{l} \cdot\mathbf{m}\right) \tag{3.43}\] with \(f_{0}>0\) and a monotone decreasing \(C^{2}\)-function \(f:[0,1]\to[-f_{0},0]\) of the form \[f(x)=\begin{cases}0&\text{if}\quad 0\leq x\leq\xi,\\ c(1-x)-f_{0}&\text{if}\,\frac{1+\xi}{2}\leq x\leq 1,\end{cases}\] with some \(c>0\). The parameter \(\xi\in(0,1)\) is chosen close enough to one such that the supports corresponding to distinct \(l\neq l^{\prime}\) do not overlap, i.e., \(f(\mathbf{m}_{l}\cdot\mathbf{m})f(\mathbf{m}_{l^{\prime}}\cdot\mathbf{m})=0\). We will choose \[0<1-\xi<\min_{l\neq l^{\prime}}\left(\sin\frac{\sphericalangle(\mathbf{m}_{l},\mathbf{ m}_{l^{\prime}})}{2}\right)^{2}. \tag{3.44}\] The same strategy as in the proof of Lemma 3.4 immediately yields that for some constant \(C_{L}\in(0,\infty)\) and all \((J,\alpha)\): \[H_{J,\alpha}\geq Nh(\mathbf{m}_{1})-C_{L}+Nf_{0}+\sum_{l=1}^{L}Nf\left(\frac{2 }{N}\mathbf{m}_{l}\cdot\mathbf{S}\right). \tag{3.45}\] This would enable us to discard all blocks \(J<N/2-K_{N}\) as far as the low-energy spectrum is concerned. For the other blocks, we however need a slightly more refined lower bound which distinguishes the patches corresponding to the subspace decomposition in the previous subsection. **Lemma 3.9**.: _In the situation of Theorem 2.5, there are constants \(C,c\in(0,\infty)\) such that for all \((J,\alpha)\) with \(J\geq N/2-K_{N}\) and \(K_{N}=o(N/\ln N)\):_ \[H_{J,\alpha}\geq Nh(\mathbf{m}_{1})-C+cK_{N}Q_{J}^{K_{N}}+\sum_{l=1}^{L}c\left( N-2\mathbf{m}_{l}\cdot\mathbf{S}\right)P_{J}^{K_{N}}(\mathbf{m}_{l}), \tag{3.46}\] _where \(Q_{J}^{K_{N}}:=\mathbbm{1}_{\mathbb{C}^{2J+1}}-P_{J}^{K_{N}}\) is the orthogonal projection to the complement of \(\mathcal{H}_{J}^{K_{N}}\)._ Proof.: We start from (3.45). 
In order to ease the notation, in this proof we abbreviate \(\hat{f}_{l}:=Nf\left(\frac{2}{N}\mathbf{m}_{l}\cdot\mathbf{S}\right)\) and we will drop the super-/subscripts on the projection, e.g. \(P:=P_{J}^{K_{N}}\). We will estimate the blocks of this operator in the decomposition \(P+Q=\mathbbm{1}\) separately. For the blocks involving \(P\), we use \[\left\|\hat{f}_{l}P-\hat{f}_{l}P(\mathbf{m}_{l})\right\|\leq\|\hat{f}_{l}\| \left\|P-\sum_{l=1}^{L}P(\mathbf{m}_{l})\right\|+\sum_{l^{\prime}\neq l}\left\| \hat{f}_{l}P(\mathbf{m}_{l^{\prime}})\right\| \tag{3.47}\] The operator \(\hat{f}_{l}\) is diagonal in the eigenbasis of \(\mathbf{m}_{l}\cdot\mathbf{S}\), i.e. \[\hat{f}_{l}=\sum_{k=0}^{\lfloor\frac{N}{2}(1-\xi)\rfloor}Nf\left(\frac{2}{N}(J -k)\right)\ \big{|}J-k;\mathbf{m}_{l}\rangle\langle J-k;\mathbf{m}_{l}\big{|}, \tag{3.48}\] where the truncation of the \(k\)-sum results from the bounds on the support of \(f\). Evidently \(\|\hat{f}_{l}\|\leq Nf_{0}\). Therefore, for any \(l^{\prime}\neq l\): \[\left\|\hat{f}_{l}P(\mathbf{m}_{l^{\prime}})\right\|\leq Nf_{0}\left(\sum_{k= 0}^{\lfloor\frac{N}{2}(1-\xi)\rfloor}\sum_{k^{\prime}=0}^{K_{N}}\big{|} \langle J-k;\mathbf{m}_{l}\big{|}J-k^{\prime};\mathbf{m}_{l^{\prime}}\rangle \big{|}^{2}\right)^{1/2}=o(N^{-\infty})\] where the last step is Lemma 3.8. Its applicability is ensured by the choice (3.44) of \(\xi\). From (3.47) and Theorem 3.7, we thus conclude \(\left\|\hat{f}_{l}P-\hat{f}_{l}P(\mathbf{m}_{l})\right\|=o(N^{-\infty})\). By the spectral representation (3.48) we also have for all sufficiently large \(N\): \[\hat{f}_{l}P(\mathbf{m}_{l})=\left[cN\left(1-\frac{2}{N}\mathbf{m}_{l}\cdot \mathbf{S}\right)-Nf_{0}\right]P(\mathbf{m}_{l}).\] Upon summation over \(l\in\{1,\ldots,L\}\) and adding \(Nf_{0}P\), this term produces the last term in the right side of (3.46) up to another norm-error of order \(o(N^{-\infty})\) due to (3.39). These error terms are absorbed in the constant in (3.46). It thus remains to investigate the block \(Q\hat{f}_{l}Q+N(f_{0}+h(\mathbf{m}_{1}))Q\). To do so, it is most convenient to switch back to the representation using coherent states. Since \(f\in C^{2}\) this may be done at the expense of another constant thanks to (A.10). We then lower bound \[Q\left[\frac{2J+1}{4\pi}\int Nh\left(\frac{2J}{N}\mathbf{e}(\Omega)\right)\ |\Omega,J\rangle\langle\Omega,J|\ d\Omega\right]Q\geq\frac{2J+1}{4\pi}\int_{C_{N }^{c}}Nh\left(\frac{2J}{N}\mathbf{e}(\Omega)\right)Q|\Omega,J\rangle\langle \Omega,J|Q\ d\Omega,\] where \(C_{N}^{c}=S^{2}\backslash\bigcup_{l=1}^{L}C_{N}(\mathbf{m}_{l})\) is the complement of the union of the spherical caps \[C_{N}(\mathbf{m}_{l}):=\left\{\Omega\ |\ \mathbf{e}(\Omega)\cdot\mathbf{m}_{l} \geq 1-\frac{K_{N}}{2N}\right\}.\] These are chosen such that on \(C_{N}^{c}\) we have the lower bound \(Nh(\mathbf{m})\geq Nh(\mathbf{m}_{1})+cK_{N}\). Thanks to the decomposition of unity (1.4), to complete the proof, it remains to establish an upper bound for all \(l\) on: \[\left\|\frac{2J+1}{4\pi}\int_{C_{N}(\mathbf{m}_{l})}\!\!\!\!\!Q|\Omega,J \rangle\langle\Omega,J|Q\ d\Omega\right\|\leq\frac{N+1}{4\pi}\int_{C_{N}( \mathbf{m}_{l})}\!\!\!\!\!\|Q|\Omega,J\rangle\|^{2}\,d\Omega.\] The spherical volume of \(C_{N}(\mathbf{m}_{l})\) is \(\pi K_{N}/N\). 
To estimate the norm in the integrand, we fix \(\Omega\in C_{N}(\mathbf{m}_{l})\) and employ the approximate decomposition of unity as expressed in (3.39): \[\|Q|\Omega,J\rangle\|\leq\|(\mathbb{1}-P(\mathbf{m}_{l}))|\Omega,J\rangle\|+ \sum_{l^{\prime}\neq l}\|P(\mathbf{m}_{l^{\prime}})|\Omega,J\rangle\|+o(N^{- \infty}).\] By a unitary rotation, we may assume without loss of generality that \(\mathbf{m}_{l}=(0,0,1)^{T}\). In this case, an estimate on the first term is contained in Proposition A.1 in the appendix. For its application we note that \(2J\sin^{2}(\frac{\theta}{2})=J(1-\cos\theta)\leq K_{N}/4\). Choosing \(\delta=K_{N}/[8J\sin^{2}(\frac{\theta}{2})]\geq 1\), we hence conclude: \[\|(\mathbb{1}-P(\mathbf{m}_{l}))|\Omega,J\rangle\|^{2}\leq\sum_{k\geq K_{N}/2 }|\langle J-k|\Omega,J\rangle|^{2}\leq\exp\left(-\frac{K_{N}}{12}\right)=o(K_{ N}^{-\infty}).\] As a consequence of this, we also have for all \(l^{\prime}\neq l\) \[\|P(\mathbf{m}_{l^{\prime}})|\Omega,J\rangle\|\leq\|(\mathbb{1}-P(\mathbf{m}_ {l}))|\Omega,J\rangle\|+\|P(\mathbf{m}_{l^{\prime}})P(\mathbf{m}_{l})\|\leq o (K_{N}^{-\infty})+o(N^{-\infty}),\] where the last estimate is due to Theorem 3.7. This completes the proof. #### 3.4.3 Finishing the proof Once the above semiclassical reduction to the relevant subspaces in the vicinity of the minima is accomplished, the proof largely follows the strategy of the case of one minimum. In the following, we will therefore only highlight the differences. We start by setting some notation. By assumption the projection \(D_{h}^{\perp}(\mathbf{m}_{l})\) of the Hessian of \(h\) at each minimum onto the plane perpendicular to \(\mathbf{m}_{l}\) has two strictly positive eigenvalues, \[0<\omega_{y,l}^{\perp}=\omega_{y,l}+|\nabla h(\mathbf{m}_{l})|\leq\omega_{x,l}^{ \perp}=\omega_{x,l}+|\nabla h(\mathbf{m}_{l})|,\qquad\text{and we set}\quad\omega_{l}^{2}:= \frac{\omega_{x,l}^{\perp}}{\omega_{y,l}^{\perp}}\geq 1.\] Throughout the proof we again use \(K_{N}=\overline{K}_{N}=o(N^{1/3})\) diverging as \(N\to\infty\). By assumption (2.4) and Lemma 2.2, on the increasing subspaces \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\), we approximate \(H_{J,\alpha}\) in terms of the quadratic term \(\widehat{Q}(\mathbf{m}_{l})\), which involves \(2\left(\omega_{x,l}S_{x}^{2}+\omega_{y,l}S_{y}^{2}\right)/N\). Proceeding as in Lemma 3.3, we therefore arrive at \[\left\|\left(H_{J,\alpha}-H_{J,l}^{(N)}\right)P_{J}^{K_{N}}(\mathbf{m}_{l}) \right\|=o(1) \tag{3.49}\] where \[H_{J,l}^{(N)}:=Nh(\mathbf{m}_{l})+\kappa(\mathbf{m}_{l})+|\nabla h(\mathbf{m }_{l})|(N-2J-1)+\overline{I}_{J,l}^{(N)}\omega_{y,l}^{\perp}D(l)I_{J,l}^{(N)} \quad\text{on }\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l}).\] The operator \(D(l):=\omega_{l}^{2}L_{x}^{2}+L_{y}^{2}\) acts in \(\ell^{2}(\mathbb{N}_{0})\) and \[I_{J,l}^{(N)}:\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\to\ell^{2}(\mathbb{N}_{0 }),\qquad\overline{I}_{J,l}^{(N)}:\ell^{2}(\mathbb{N}_{0})\to\mathcal{H}_{J}^ {K_{N}}(\mathbf{m}_{l})\] are the natural injection and projection, respectively, with respect to the \(z\)-basis in the \(l\)-direction, i.e. \(I_{J,l}^{(N)}|J-k;\mathbf{m}_{l}\rangle=|k\rangle\) for all \(k\in\{0,\ldots,K_{N}\}\). Theorem 3.7 ensures the proximity of these \(z\)-basis vectors to the orthonormalized basis \(|(k,l)\rangle\), which form an orthonormal basis for the joint subspace \(\mathcal{H}_{J}^{K_{N}}=\bigvee_{l=1}^{L}\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_ {l}\)); a small numerical sketch of this symmetric orthogonalization follows. 
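As an aside, the symmetric (Löwdin-type) orthogonalization via \(G^{-1/2}\) in (3.37) can be sketched numerically: if the Gram matrix of a family of vectors is close to the identity, the orthonormalized family stays close to the original one, exactly as quantified in Theorem 3.7. In the toy below, the vectors, their dimension, and the perturbation size \(10^{-3}\) are arbitrary illustrative stand-ins for the almost-orthogonal states \(|J-k;\mathbf{m}_{l}\rangle\).

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(2)
V = np.linalg.qr(rng.standard_normal((12, 3)))[0]   # orthonormal columns
V = V + 1e-3 * rng.standard_normal(V.shape)         # nearly orthonormal family
G = V.T @ V                                         # Gram matrix, close to identity
U = V @ fractional_matrix_power(G, -0.5)            # analogue of (3.37)
print(np.linalg.norm(U.T @ U - np.eye(3)))          # exactly orthonormal now
print(np.linalg.norm(U - V))                        # still close to the originals
```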
We therefore replace the above isometric embeddings and projections by \[I_{J}^{(N)}:\ \mathcal{H}_{J}^{K_{N}}\to\bigoplus_{l=1}^{L}\ell^{2}(\mathbb{N} _{0}),\quad\text{and}\quad\overline{I}_{J}^{(N)}:\bigoplus_{l=1}^{L}\ell^{2}( \mathbb{N}_{0})\to\mathcal{H}_{J}^{K_{N}},\] where \(I_{J}^{(N)}|(k,l)\rangle=|k\rangle_{l}\). These embeddings are direct sums, \(I_{J}^{(N)}=\bigoplus_{l=1}^{L}I_{J}^{(N)}(l)\), of embeddings of \(\mathcal{H}_{J}^{K_{N}}(l)\coloneqq P_{J}^{K_{N}}(l)\mathbb{C}^{2J+1}\) into the \(l\)th copy of \(\ell^{2}(\mathbb{N}_{0})\), cf. (3.38). Theorem 3.7 then allows us to replace \(H_{J,l}^{(N)}\) on \(\mathcal{H}_{J}^{K_{N}}(\mathbf{m}_{l})\) by \[H_{J}^{(N)}(l)\coloneqq Nh(\mathbf{m}_{l})+\kappa(\mathbf{m}_{l})+|\nabla h( \mathbf{m}_{l})|(N-2J-1)+\overline{I}_{J}^{(N)}(l)\omega_{y,l}^{\perp}D(l)I_{ J}^{(N)}(l)\quad\text{on }\mathcal{H}_{J}^{K_{N}}(l).\] These operators can be lifted to the direct sum \[H_{J}^{(N)}\coloneqq\bigoplus_{l=1}^{L}H_{J}^{(N)}(l)\qquad\text{on}\quad \mathcal{H}_{J}^{K_{N}}=\bigoplus_{l=1}^{L}\mathcal{H}_{J}^{K_{N}}(l).\] The above argument then yields the following modification of Lemma 3.3. **Lemma 3.10**.: _In the situation of Theorem 2.5 if \(K_{N}=o(N^{1/3})\):_ \[\max_{J\geq N/2-K_{N}}\max_{\alpha}\left\|\left(H_{J,\alpha}-H_{J}^{(N)}\right)P _{J}^{K_{N}^{-}}\right\|=o(1), \tag{3.50}\] _where \(P_{J}^{K_{N}^{-}}\coloneqq\sum_{l=1}^{L}\sum_{k=0}^{K_{N}^{-}}|(k,l) \rangle\langle(k,l)|\) with \(K_{N}^{-}=K_{N}-2\)._ The proof is straightforward from (3.49) and Theorem 3.7. Equipped with this, we then proceed with the proof of Theorem 2.5 in the same way as for the case of one minimum. Proof of Theorem 2.5.: In case \(J\leq N/2-K_{N}\) we use (3.45) to conclude that the ground-state of \(H_{J,\alpha}\) is found above \(Nh(\mathbf{m}_{1})-C+cK_{N}\) and hence does not contribute at the energies considered. In case \(J>N/2-K_{N}\) we consider the projections, \[\widetilde{H}_{J}^{(N)}\coloneqq P_{J}^{K_{N}^{-}}H_{J}^{(N)}P_{J}^{K_{N}^{-} }\qquad\text{on }P_{J}^{K_{N}^{-}}\mathcal{H}_{J}^{K_{N}}.\] Note that this matrix is still a direct sum of matrices associated with the subspaces corresponding to \(P_{J}^{K_{N}^{-}}(l)=\sum_{k=0}^{K_{N}^{-}}|(k,l)\rangle\langle(k,l)|\). The matrices forming the direct sum are unitarily equivalent to \[(Nh(\mathbf{m}_{l})+\kappa(\mathbf{m}_{l})+|\nabla h(\mathbf{m}_{l})|(N-2J-1) )\,P_{K_{N}^{-}}+\omega_{y,l}^{\perp}\ P_{K_{N}^{-}}D(l)P_{K_{N}^{-}}\] on \(P_{K_{N}^{-}}\ell^{2}(\mathbb{N}_{0})=\operatorname{span}\left\{|k\rangle\,|\,k \in\{0,\dots,K_{N}^{-}\}\right\}\). In turn, these operators have been described in detail in Section 3.1. We now fix \(K\in\mathbb{N}\) arbitrary, and let \(E_{K}^{(N)}\) stand for the orthogonal projection onto the subspace of \(P_{J}^{K_{N}^{-}}\mathcal{H}_{J}^{K_{N}}\) spanned by eigenvectors of the \(K\) lowest eigenvalues of \(\widetilde{H}_{J}^{(N)}\). Its orthogonal complement in \(\mathbb{C}^{2J+1}\) will be denoted by \(F_{K}^{(N)}=\mathbbm{1}_{\mathbb{C}^{2J+1}}-E_{K}^{(N)}\). Lemma 3.10 ensures the validity of the estimates (3.24)-(3.25) (with minor modifications in the notation). It thus remains to again control the block \(F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)}\). To do so, we modify the argument in (3.26). 
With the help of Lemma 3.9, we arrive at: \[F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)}\geq(Nh(\mathbf{m}_{1})-C)\,F_{K}^{(N)}+c \min\{2M,K_{N}\}F_{K}^{(N)}-2cMF_{K}^{(N)}\sum_{l=1}^{L}\left\|F_{K}^{(N)}P_{J }^{M}(\mathbf{m}_{l})F_{K}^{(N)}\right\|, \tag{3.51}\] where \(M\in\mathbb{N}\) is arbitrary. Using Theorem 3.7 we can replace the projection \(P_{J}^{M}(\mathbf{m}_{l})\) by \(P_{J}^{M}(l)\) at the expense of a term which is \(o(N^{-\infty})\). The latter can be added to the order one term proportional to \(C\). Proceeding as in (3.27), it thus remains to estimate \[\sum_{l=1}^{L}\left\|P_{J}^{M}(l)F_{K}^{(N)}P_{J}^{M}(l)\right\|\leq\sum_{l=1} ^{L}\sum_{m=0}^{M}\langle(m,l)|F_{K}^{(N)}|(m,l)\rangle.\] Since \(\widetilde{H}_{J}^{(N)}\) is a direct sum, for each of the terms in the \(l\)-sum we are therefore back to (3.28) with \(K\) changed depending on how the \(L\) harmonic oscillator levels interlace. Since (3.28) was identified to be exponentially small in case \(K\) is chosen much larger than \(M\), this still shows that the last term in the right side of (3.51) is bounded independent of \(M\). Hence choosing \(M\) large enough and subsequently \(K\) larger, the ground state energy of the block \(F_{K}^{(N)}H_{J,\alpha}F_{K}^{(N)}\) is seen to be much larger than the energies of interest. By a Schur-complement analysis the low-energy spectrum of \(H_{J,\alpha}\) agrees with that of \(\widetilde{H}_{J}^{(N)}\). In the limit of \(N\to\infty\) and using Lemma 3.2, the spectrum of \(\widetilde{H}_{J}^{(N)}\) is a direct sum of \(L\) harmonic oscillator spectra as described in Proposition 3.1. ## Appendix A Miscellaneous on spin-coherent states In this appendix, we collect properties of the spin coherent states as defined in (1.3). We restrict attention to the semiclassical properties, which were essential for the analysis in this paper. We refer to the textbooks [16, 20, 41] and [3, 32] for further information and references. ### Semiclassical estimates for the states The spin coherent states (1.3) on \(\mathbb{C}^{2J+1}\) are parametrised by a spherical angle \(\Omega=(\theta,\varphi)\) on the unit sphere. Their scalar product \[\langle\Omega^{\prime},J|\Omega,J\rangle=\left[\cos\frac{\theta}{2}\cos\frac {\theta^{\prime}}{2}+e^{i(\varphi-\varphi^{\prime})}\sin\frac{\theta}{2}\sin \frac{\theta^{\prime}}{2}\right]^{2J}\] shows that for large values of \(J\in\mathbb{N}/2\), they are sharply localised. Denoting by \(\sphericalangle(\Omega,\Omega^{\prime})\) the spherical angle between two points on the unit sphere, one has the Gaussian-type localisation \[\big{|}\langle\Omega^{\prime},J|\Omega,J\rangle\big{|}^{2}=\left[\cos\frac{\sphericalangle (\Omega,\Omega^{\prime})}{2}\right]^{4J}\] with width proportional to \(J^{-1/2}\). With respect to the orthonormal eigenbasis of \(S_{z}\) on \(\mathbb{C}^{2J+1}\), the spin coherent states are linear combinations with coefficients given by \[\langle J-k|\Omega,J\rangle=\binom{2J}{k}^{1/2}\left(\cos\frac{\theta}{2} \right)^{2J-k}\ \left(\sin\frac{\theta}{2}\right)^{k}e^{ik\varphi},\] (A.1) for any \(k\in\{0,1,\ldots,2J\}\) and \(\Omega=(\theta,\varphi)\). Measurement of \(S_{z}\) will therefore result in a binomial distribution of \(2J\) independent Bernoulli variables with parameter \(p=\sin^{2}\left(\frac{\theta}{2}\right)\). The following proposition records the standard upper-tail Chernoff estimate for the binomial distribution. 
**Proposition A.1**.: _For any \(J\in\mathbb{N}/2\), any \(\Omega=(\theta,\varphi)\), and any \(\delta>0\):_ \[\sum_{k\geq 2(1+\delta)J\sin^{2}(\frac{\theta}{2})}|\langle J-k|\Omega,J \rangle|^{2}\leq\exp\left(-\frac{\delta^{2}}{2+\delta}2J\sin^{2}\left(\frac{ \theta}{2}\right)\right).\] (A.2) We will also need the following generalisation of the identity (A.1). **Lemma A.2**.: _For any \(k,k^{\prime}\in\{0,1,\ldots,2J\}\) and any \(\Omega=(\theta,\varphi)\):_ \[\langle J-k^{\prime}|U(\Omega)|J-k\rangle=\sum_{m=\max\{0,k-k^{ \prime}\}}^{k}(-1)^{k} \binom{2J+m-k}{m}^{1/2}\binom{2J+m-k}{k^{\prime}-k+m}^{1/2} \binom{k}{m}^{1/2}\binom{k^{\prime}}{k^{\prime}-k+m}^{1/2}\] \[\times\left(\cos\frac{\theta}{2}\right)^{2J-k-k^{\prime}}\ \left(\sin\frac{\theta}{2}\right)^{2m+k^{\prime}-k}e^{i(k^{\prime}-k)\varphi}.\] (A.3) _Moreover, if \(k\leq k^{\prime}\) (with the convention that \(0^{0}=1\)):_ \[\left|\langle J-k^{\prime}|U(\Omega)|J-k\rangle\right|\leq\binom{2J}{k}^{1/2} \binom{2J}{k^{\prime}}^{1/2}(k^{\prime})^{k}\left(\cos\frac{\theta}{2}\right) ^{2J-k-k^{\prime}}\left(\sin\frac{\theta}{2}\right)^{k^{\prime}-k}\left(1+ \left(\sin\frac{\theta}{2}\right)^{2}\right)^{k}\] (A.4) Proof.: We use the decomposition of the unitary [41, Eq. (4.3.14)] (see also [3]): \[U(\Omega)=\exp\left(\tan\frac{\theta}{2}\ e^{i\varphi}S_{-}\right)\exp\left(2 \ln\left(\cos\frac{\theta}{2}\right)S_{z}\right)\exp\left(-\tan\frac{\theta}{2 }e^{-i\varphi}S_{+}\right).\] The formula (A.3) follows from a straightforward, but tedious calculation which uses that \[\frac{1}{m!}\left(S_{+}\right)^{m}|J-k\rangle=\binom{k}{m}^{1/2}\binom{2J+m-k }{m}^{1/2}|J-k+m\rangle\] and similarly for \(S_{-}\) (and the usual convention that the binomial is zero if the upper integer is smaller than the lower one), cf. [41, Eq. (4.2.3)]. The bound (A.4) then follows by estimating \[\binom{2J+m-k}{k^{\prime}-k+m}\binom{k^{\prime}}{k^{\prime}-k+m} =\binom{2J}{k^{\prime}}\frac{(2J+m-k)!}{(2J)!(k-m)!}\left(\frac{ (k^{\prime})!}{(k^{\prime}+m-k)!}\right)^{2}\] \[\leq\binom{2J}{k^{\prime}}\frac{(k^{\prime})^{2k}}{(k-m)!},\] and similarly \[\binom{2J+m-k}{m}\leq\binom{2J}{k}\frac{k!}{m!}.\] This yields the claim by the binomial formula. ### Semiclassical estimates for the symbols Associated to any linear operator on \(\mathbb{C}^{2J+1}\) are two semiclassical symbols: the lower and upper one. We recall from [32] the lower symbol of the spin operator \[\langle\Omega,J|\mathbf{S}|\Omega,J\rangle=J\mathbf{e}(\Omega).\] (A.5) as well as its upper symbol, alongside the upper symbol of the square of the \(z\)-component, \[\mathbf{S} =\frac{2J+1}{4\pi}\int d\Omega\left(J+1\right)\mathbf{ e}(\Omega)\left|\Omega,J\right\rangle\!\langle\Omega,J\big{|}\] (A.6) \[S_{z}^{2} =\frac{2J+1}{4\pi}\int d\Omega\left((J+1)(J+3/2)\mathbf{e}_{z}( \Omega)^{2}-(J+1)/2\right)\big{|}\Omega,J\rangle\!\langle\Omega,J\big{|}.\] (A.7) For operators with a smooth upper symbol, the lower symbol is known to agree with the upper symbol in the semiclassical limit [17, 32]. Since we need quantitative error estimates, we include the following result, which is tailored to the applications considered here (see also [30, Prop. 4.2]). 
**Proposition A.3**.: _For any_ \[H=\frac{2J+1}{4\pi}\int Nf\Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}| \Omega,J\rangle\langle\Omega,J|\,d\Omega\] _on \(\mathbb{C}^{2J+1}\) with some \(f\in C^{2}\), there is some \(C<\infty\) such that for all \(J\leq N/2\):_ \[\sup_{\Omega}\left|\langle\Omega,J\big{|}H\big{|}\Omega,J\rangle-Nf\big{(} \frac{2J}{N}\mathbf{e}(\Omega)\big{)}\right|\leq C.\] (A.8) Proof.: The proof is based on the standard Taylor estimate \[\left|f\Big{(}\frac{2J}{N}\mathbf{e}(\Omega^{\prime})\Big{)}-f \Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}-\frac{2J}{N}\nabla f\Big{(}\frac{ 2J}{N}\mathbf{e}(\Omega)\Big{)}\cdot\big{(}\mathbf{e}(\Omega^{\prime})- \mathbf{e}(\Omega)\big{)}\right|\] \[\quad\leq\|f^{\prime\prime}\|_{\infty}\left(\frac{2J}{N}\right)^ {2}\left(1-\mathbf{e}(\Omega^{\prime})\cdot\mathbf{e}(\Omega)\right).\] Plugging this into the integral expression and using the representations (A.6) and (A.5), we arrive at \[\left|\langle\Omega,J\big{|}H\big{|}\Omega,J\rangle-Nf\big{(} \frac{2J}{N}\mathbf{e}(\Omega)\big{)}-2J\ \nabla f\Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}\cdot\mathbf{e}(\Omega) \left(\frac{J}{J+1}-1\right)\right|\] \[\quad\leq\|f^{\prime\prime}\|_{\infty}\frac{4J^{2}}{N}\left(1- \frac{J}{J+1}\right)\] from which the claim follows. The last proposition shows the consistency of the two symbols in the semiclassical limit. We also need the following quantitative version of Duffield's theorem [17] on the consistency of the quantisation of symmetric polynomials with the help of the lower symbol. **Proposition A.4**.: _If \(H=N\)\(P\Big{(}\frac{2}{N}\mathbf{S}\Big{)}\) on \(\mathbb{C}^{2J+1}\) with a symmetric polynomial \(P\), then there is some \(C<\infty\), which is independent of \(J\leq N/2\), such that_ \[\left\|H-\frac{2J+1}{4\pi}\int d\Omega\,NP\Big{(}\frac{2J}{N}\mathbf{e}( \Omega)\Big{)}\left|\Omega,J\right\rangle\!\langle\Omega,J\big{|}\right\|\leq C.\] (A.9) The proof, which is spelled out at the end of this subsection, will rest on two preparatory lemmas. The first lemma deals with operators of the type \(f(2S_{z}/N)\). **Lemma A.5**.: _If \(H=Nf(2\mathbf{v}\cdot\mathbf{S}/N)\) on \(\mathbb{C}^{2J+1}\) for some \(f\in C^{2}\), then there is some \(C<\infty\), independent of \(N\) and \(J\leq N/2\), such that for all unit vectors \(\mathbf{v}\in S^{2}\):_ \[\left\|H-\frac{2J+1}{4\pi}\int Nf\Big{(}\frac{2J}{N}\mathbf{v}\cdot\mathbf{e} (\Omega)\Big{)}\left|\Omega,J\right\rangle\!\langle\Omega,J\big{|}\,d\Omega \right\|\leq C.\] (A.10) Proof.: We first reduce the assertion to the case \(\mathbf{v}=\mathbf{e}_{z}\). If \(\Omega_{0}\) stands for the spherical angles of \(\mathbf{v}=\mathbf{e}(\Omega_{0})\), we have \(U(\Omega_{0})^{*}(\mathbf{v}\cdot\mathbf{S})U(\Omega_{0})=S_{z}\) with the unitary from (1.3). Similarly, by the definition of the coherent states, one easily arrives at \[U(\Omega_{0})^{*}\left(\int Nf\Big{(}\frac{2J}{N}\mathbf{v}\cdot\mathbf{e}( \Omega)\Big{)}\,\big{|}\Omega,J\big{\rangle}\langle\Omega,J\big{|}\,d\Omega \right)U(\Omega_{0})=\int\,Nf\Big{(}\frac{2J}{N}\mathbf{e}_{z}(\Omega)\Big{)} \,\big{|}\Omega,J\big{\rangle}\langle\Omega,J\big{|}d\Omega\] for any continuous function \(f\). 
In order to establish the claim in case \(\mathbf{v}=\mathbf{e}_{z}\), we first show that \[H^{\prime}\coloneqq\frac{2J+1}{4\pi}\int d\Omega\,Nf\Big{(}\frac{2J}{N} \mathbf{e}_{z}(\Omega)\Big{)}\,\big{|}\Omega,J\big{\rangle}\langle\Omega,J \big{|}\] is diagonal in the orthonormal basis \(|k\rangle\) with \(k\in\{-J,\ldots,J\}\) for which the operator \(S_{z}\) is diagonal. Inserting the explicit expression (A.1) into \[\langle k|H^{\prime}|k^{\prime}\rangle=\frac{2J+1}{4\pi}\int d\Omega\,Nf\Big{(} \frac{2J}{N}\mathbf{e}_{z}(\Omega)\Big{)}\,\langle k,J\big{|}\Omega,J\rangle \langle\Omega,J\big{|}k^{\prime},J\rangle,\] the \(\varphi\)-integration over the spherical angle \(\Omega=(\theta,\varphi)\) immediately yields \(\langle k|H^{\prime}|k^{\prime}\rangle=0\) if \(k\neq k^{\prime}\). It remains to control the diagonal elements of \(H^{\prime}\) for which we fix \(k=k^{\prime}\) in the above integral. By a standard Taylor approximation we have \[\left|f\Big{(}\frac{2J}{N}\mathbf{e}_{z}(\Omega)\Big{)}-f\Big{(}\frac{2k}{N} \Big{)}-\frac{2J}{N}f^{\prime}(2k/N)(\mathbf{e}_{z}(\Omega)-k/J)\right|\leq \frac{\|f^{\prime\prime}\|_{\infty}}{2}\left(\frac{2J}{N}\right)^{2}( \mathbf{e}_{z}(\Omega)-k/J)^{2}\] In particular we have \[|\langle k|H^{\prime}-H|k\rangle| \leq 2J\|f^{\prime}\|_{\infty}\left|\frac{2J+1}{4\pi}\int d\Omega \ (\mathbf{e}_{z}(\Omega)-k/J)|\langle\Omega,J\big{|}k\rangle|^{2}\right|\] \[+\frac{\|f^{\prime\prime}\|_{\infty}N}{2}\left(\frac{2J}{N} \right)^{2}\frac{2J+1}{4\pi}\int d\Omega\ (\mathbf{e}_{z}(\Omega)-k/J)^{2}|\langle \Omega,J\big{|}k\rangle|^{2}.\] To estimate the right side, we make use of the explicit operator representations (A.6) for \(\mathbf{S}\) and (A.7) for \(S_{z}^{2}\), which immediately yield \[|\langle k|H^{\prime}-H|k\rangle|\leq C(\|f^{\prime}\|_{\infty}+\|f^{\prime \prime}\|_{\infty}),\] with some numerical constant \(C\). Our second ingredient is an algebraic result, which allows to write any homogeneous polynomial of degree \(d\) as a sum of linear forms to the power \(d\): **Lemma A.6**.: _Let \(Q^{d}_{\text{hom}}(x_{1},\ldots,x_{k})\) be the real vector space of homogeneous polynomials of degree \(d\) in the \(k\) variables \(x_{1},\ldots,x_{k}\). Then,_ \[Q^{d}_{\text{hom}}(x_{1},\ldots,x_{k})=\text{span}\{(\alpha_{1}x_{1}+\cdots+ \alpha_{k}x_{k})^{d}\,|\,\alpha_{1},\ldots,\alpha_{k}\in\mathbb{R}\}\] We remark that this is a commutative result in the sense that we do not distinguish the order of the variables. For example, the polynomials \(x_{1}x_{2}\) and \(x_{2}x_{1}\) are considered to be the same. Proof.: It is clear that \[W^{d}(x_{1},\ldots,x_{k})\coloneqq\operatorname{span}\{(\alpha_{1}x_{1}+\cdots+ \alpha_{k}x_{k})^{d}\,|\,\alpha_{1},\ldots,\alpha_{k}\in\mathbb{R}\}\] is a closed subspace of \(Q^{d}_{\hom}(x_{1},\ldots,x_{k})\). Let \(f_{\alpha}\coloneqq(\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})^{d}\). Note that \(f_{\alpha}-f_{\alpha^{\prime}}\in W^{d}(x_{1},\ldots,x_{k})\) and since \(W^{d}(x_{1},\ldots,x_{k})\) is closed, we see that \[\partial_{\alpha_{i}}f_{\alpha}\in W^{d}(x_{1},\ldots,x_{k}).\] Similarly, partial derivatives of higher order are elements of \(W^{d}(x_{1},\ldots,x_{k})\); in particular the \(d\)-th order derivatives, which agree with the monomials of degree \(d\) up to the constant factor \(d!\). Thus, \(W^{d}(x_{1},\ldots,x_{k})\) contains all monomials of degree \(d\) and, hence, coincides with \(Q^{d}_{\hom}(x_{1},\ldots,x_{k})\). 
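Lemma A.6 also lends itself to a quick numerical confirmation: expanding random powers of linear forms \((\alpha\cdot x)^{d}\) in the monomial basis, the resulting coefficient vectors should span the full space of homogeneous degree-\(d\) polynomials, i.e., attain rank \(\binom{k+d-1}{d}\). A minimal sketch; the parameters \(d=3\), \(k=3\) and the sample count are our illustrative choices:

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

d, k = 3, 3
monos = list(combinations_with_replacement(range(k), d))   # degree-d monomials
rng = np.random.default_rng(1)
rows = []
for _ in range(4 * len(monos)):
    a = rng.standard_normal(k)                 # random linear form a . x
    row = []
    for mono in monos:                         # multinomial expansion of (a.x)^d
        mult = factorial(d)
        for i in set(mono):
            mult //= factorial(mono.count(i))
        row.append(mult * np.prod([a[i] for i in mono]))
    rows.append(row)
# full rank <=> the powers (a.x)^d span all homogeneous degree-d polynomials
print(np.linalg.matrix_rank(np.array(rows)), "=", len(monos))
```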
We are finally ready to spell out the proof. Proof of Proposition A.4.: Due to linearity and Lemma A.6 it is enough to prove the assertion for an operator of the form \(H=(\mathbf{v}\cdot\mathbf{S})^{d}\), where \(\mathbf{v}\) is some unit vector in \(\mathbb{R}^{3}\), in which case the claim follows from Lemma A.5. ### Semiclassics for the free energy **Proposition A.7**.: _For a Hamiltonian \(H\) of the form (2.1) on \(\mathcal{H}_{N}\) with a regular symbol \(h\in C^{2}\) satisfying **A0**, the pressure at inverse temperature \(\beta>0\) is:_ \[p(\beta)\coloneqq\lim_{N\to\infty}N^{-1}\ln\operatorname{Tr}\,\exp\left(- \beta H\right)=\max_{r\in[0,1]}\left\{I(r)-\beta\min_{\Omega\in S^{2}}h\left( r\mathbf{e}(\Omega)\right)\right\}.\] (A.11) Proof.: We first use the block decomposition (2.1) in the trace to write \[\operatorname{Tr}\,\exp\left(-\beta H\right)=\sum_{J=\frac{N}{2}-\lfloor\frac {N}{2}\rfloor}^{N/2}\sum_{\alpha=1}^{M_{N,J}}\operatorname{Tr}_{\mathbb{C}^{2 J+1}}\exp\left(-\beta H_{J,\alpha}\right)\] By (2.2) the logarithm of the trace in the right side is bounded according to \[\sup_{J,\alpha}\left|\ln\operatorname{Tr}_{\mathbb{C}^{2J+1}}\exp\left(-\beta H _{J,\alpha}\right)-\ln\operatorname{Tr}_{\mathbb{C}^{2J+1}}\exp\left(-\beta \frac{2J+1}{4\pi}\int Nh\Big{(}\frac{2J}{N}\mathbf{e}(\Omega)\Big{)}\,\big{|} \Omega,J\big{>}\langle\Omega,J\big{|}\,d\Omega\right)\right|=O(1).\] We then use the Berezin-Lieb inequalities (1.6) and Proposition A.3, which ensures that the upper and lower symbols agree up to order one, to replace the last trace by an integral, \[\sup_{J,\alpha}\left|\ln\operatorname{Tr}_{\mathbb{C}^{2J+1}}\exp\left(-\beta H _{J,\alpha}\right)-\ln\frac{2J+1}{4\pi}\int\exp\left(-\beta Nh\Big{(}\frac{2J }{N}\mathbf{e}(\Omega)\Big{)}\right)d\Omega\right|=O(1).\] In order to use a standard Laplace evaluation for the above integral in the limit \(N\to\infty\), we set \(J=rN/2\) with fixed \(r\in(0,1]\). Since \(h\in C^{2}\), we arrive at \[\frac{1}{N}\ln\frac{2J+1}{4\pi}\int\exp\left(-\beta Nh\Big{(}\frac{2J}{N} \mathbf{e}(\Omega)\Big{)}\right)d\Omega=-\beta\min_{\Omega}h\left(r\mathbf{e} (\Omega)\right)+o(1).\] The assertion then follows from the known asymptotics of the binomial coefficient in (1.2) for \(M_{N,J}\), i.e. \(\frac{1}{N}\ln M_{N,J}=I(r)+o(1)\). #### Acknowledgments This work was supported by the DFG under EXC-2111 - 390814868.
2308.15755
Density Stabilization Strategies for Nonholonomic Agents on Compact Manifolds
In this article, we consider the problem of stabilizing stochastic processes, which are constrained to a bounded Euclidean domain or a compact smooth manifold, to a given target probability density. Most existing works on modeling and control of robotic swarms that use PDE models assume that the robots' dynamics are holonomic, and hence, the associated stochastic processes have generators that are elliptic. We relax this assumption on the ellipticity of the generator of the stochastic processes, and consider the more practical case of the stabilization problem for a swarm of agents whose dynamics are given by a controllable driftless control-affine system. We construct state-feedback control laws that exponentially stabilize a swarm of nonholonomic agents to a target probability density that is sufficiently regular. State-feedback laws can stabilize a swarm only to target probability densities that are positive everywhere. To stabilize the swarm to probability densities that possibly have disconnected supports, we introduce a semilinear PDE model of a collection of interacting agents governed by a hybrid switching diffusion process. The interaction between the agents is modeled using a (mean-field) feedback law that is a function of the local density of the swarm, with the switching parameters as the control inputs. We show that the semilinear PDE system is globally asymptotically stable about the given target probability density. The stabilization strategy without inter-agent interactions is verified numerically for agents that evolve according to the Brockett integrator and a nonholonomic system on the special orthogonal group of 3-dimensional rotations $SO(3)$. The stabilization strategy with inter-agent interactions is verified numerically for agents that evolve according to the Brockett integrator and a holonomic system on the sphere $S^2$.
Karthik Elamvazhuthi, Spring Berman
2023-08-30T04:45:42Z
http://arxiv.org/abs/2308.15755v2
# Density Stabilization Strategies for Nonholonomic Agents on Compact Manifolds ###### Abstract In this article, we consider the problem of stabilizing a class of degenerate stochastic processes, which are constrained to a bounded Euclidean domain or a compact smooth manifold, to a given target probability density. This stabilization problem arises in the field of swarm robotics, for example in applications where a swarm of robots is required to cover an area according to a target probability density. Most existing works on modeling and control of robotic swarms that use partial differential equation (PDE) models assume that the robots' dynamics are holonomic, and hence, the associated stochastic processes have generators that are elliptic. We relax this assumption on the ellipticity of the generator of the stochastic processes, and consider the more practical case of the stabilization problem for a swarm of agents whose dynamics are given by a controllable driftless control-affine system. We construct state-feedback control laws that exponentially stabilize a swarm of nonholonomic agents to a target probability density that is sufficiently regular. State-feedback laws can stabilize a swarm only to target probability densities that are positive everywhere. To stabilize the swarm to probability densities that possibly have disconnected supports, we introduce a semilinear PDE model of a collection of interacting agents governed by a hybrid switching diffusion process. The interaction between the agents is modeled using a (mean-field) feedback law that is a function of the local density of the swarm, with the switching parameters as the control inputs. We show that under the action of this feedback law, the semilinear PDE system is globally asymptotically stable about the given target probability density. The stabilization strategies with and without agent interactions are verified numerically for agents that evolve according to the Brockett integrator; the strategy with interactions is additionally verified for agents that evolve according to an underactuated system on the sphere \(S^{2}\). Mean-field control, Hypoelliptic operators, Nonholonomic systems, Multi-agent systems, Swarms, Semilinear PDE ## I Introduction In recent years, there has been much work on the construction of decentralized control laws for multi-robot systems using _mean-field models_[21], in which a large collective of agents is treated as a continuum. Mean-field based approaches for modeling and control have been used in the multi-agent control and swarm robotics literature for problems such as consensus [59], flocking [30], task allocation [8, 45], and cooperative transport [61]. Similar problems have also been considered over the last two decades in the control and mathematics literature in the context of mean-field games [38, 31], mean-field control [28, 51, 12], and optimal transport [7]. One advantage of mean-field based approaches for decentralized control is that the constructed control laws are _identity-free_, that is, the control laws do not depend on the agents' identities. The identity-free nature of the control laws simplifies some aspects of their implementation on large swarms of homogeneous agents when compared to control laws that are identity-dependent. For instance, suppose that a central supervisor observes the states of the agents via an overhead camera and uses these measurements to update state-feedback control laws, which it periodically broadcasts to the agents. 
If the control laws are identity-free, then the supervisor does not need to expend computational power to distinguish between individual agents. Another advantage of mean-field control approaches, from a theoretical point of view, is that as the number of agents tends to infinity, the mean-field behavior of the swarm is governed by a deterministic differential equation or difference equation model, even though each agent might exhibit stochasticity in its dynamics. Such models are more analytically tractable than models describing the dynamics of a large number of individual agents. In this article, we consider a mean-field stabilization problem motivated by coverage problems in multi-agent control. A classical approach to multi-agent coverage is described in [17], which presents a distributed method for implementing Lloyd's algorithm for positioning multiple agents in a domain according to a given probability density function. The stochastic task allocation problem considered in [8], where the goal is to stabilize a swarm of agents evolving according to a continuous-time Markov chain to a target distribution among a set of states (e.g., physical locations), can be viewed as a mean-field version of this coverage problem. Stochastic task allocation approaches have also been developed for swarms of agents that evolve according to discrete-time Markov chains [1] and that follow control laws which depend on the local density of agents [44, 22]. A drawback of Markov chain-based approaches to this problem is that the state space of the agents needs to be discretized beforehand. In [48, 47], the authors consider the problem of stabilizing a swarm of agents with general controllable dynamics to a target distribution. This approach has been extended to the case of holonomic agents on bounded domains [19, 24] and compact manifolds without boundary [20]. In these extensions, the control is either a diffusion coefficient or a velocity field in the Fokker-Planck partial differential equation (PDE) [53] that determines the spatio-temporal evolution of the probability density of the agents, each of which is governed by a reflected stochastic differential equation [57]. In [10], similar stochastic control laws are constructed for agents evolving according to a discrete-time deterministic nonlinear control system on a bounded Euclidean domain. An advantage of the stochastic coverage approaches in [8, 1, 48, 47, 19, 24, 20, 10] over the classical coverage strategy in [17] is that they only require each agent's control action to depend on its own state. The stochasticity designed into the system ensures that the swarm reaches a target probability density. A disadvantage of these approaches is that agents do not stop switching between states even after the swarm has reached the target probability density, resulting in an unnecessary expenditure of energy. Moreover, a large number of agents is required for the swarm density to stabilize close to the target density. One way to resolve these issues is to design control laws that are functions of the local swarm density. Such density-dependent or _mean-field_ feedback laws have been proposed for agents that evolve according to Markov chains on discrete state spaces [44, 22], for agents that evolve according to ordinary differential equations on Euclidean domains [26, 37], and for agents that evolve according to stochastic or ordinary differential equations on compact manifolds without boundary [20]. 
The works [26, 37, 20] assume that the agents are holonomic, and in order to achieve global asymptotic stability, the target distribution is required to be strictly positive everywhere on the domain. In the context of these previous works, the **main contributions** of this article are the following: 1. Extension of the stochastic multi-agent coverage approach developed by the authors in [19, 24] to non-interacting agents with nonholonomic dynamics that evolve on a bounded subset of \(\mathbb{R}^{d}\) or a compact manifold without boundary, given a target probability density for the swarm that is bounded from below by a positive constant. 2. Development of a stochastic multi-agent coverage approach for interacting agents governed by the hybrid switching diffusion model introduced by the authors in [23]. In this approach, a mean-field feedback law (i.e., a control law that depends on the local swarm density) is constructed to globally asymptotically stabilize the swarm to any given target probability density, which is not necessarily positive everywhere. Contribution 1 is partially motivated by the fact that extension of multi-agent control strategies designed for Euclidean state spaces to general manifolds [56, 9] is important, given that many mechanical systems are naturally modeled on manifolds [15]. While the work [47], like Contribution 1, considers agents with general nonholonomic dynamics, it assumes that the domain is unbounded and therefore requires assumptions on the behavior of the target probability density at infinity. The extension of the coverage strategy presented in [19, 24] to the case of nonholonomic agents evolving on manifolds is complicated by the fact that the associated PDEs are not elliptic. Contribution 2 improves over existing work on mean-field feedback laws [26, 37], since our approach does not make strong assumptions on the regularity of solutions of the associated PDEs. Instead, we prove all the regularity required to enable the stability analysis, which makes the analysis much more technically involved. Moreover, we are able to stabilize a larger class of probability densities than those considered in [19, 24, 20, 26, 37], which require that the target probability density is strictly bounded from below by a positive constant everywhere on the domain. A control law similar to the one described in Contribution 2 was constructed by the authors in [11] for a discrete-time control system. However, it is assumed in [11] that the system is locally controllable within one time-step, and hence is fully actuated. Moreover, in contrast to the mean-field feedback laws constructed in [26, 37, 20] in which the control input is the agents' velocity field, the control inputs that we design in Contribution 2 are the transition rates of the hybrid switching diffusion process that describes the agents' dynamics. This article is organized as follows. In Section II, we establish notation and provide some definitions that are used throughout the article. In Section III, we present and analyze the properties of the degenerate PDEs that describe the mean-field model in the case where the agents do not interact with one another. In Section IV, we present a semilinear PDE mean-field model for stabilizing the density of a swarm of interacting agents and establish global asymptotic stability properties of the model. In Section V, we validate the control strategies presented in Sections III and IV with numerical simulations. 
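To fix ideas before the formal development, the following is a minimal, hypothetical simulation of a hybrid switching diffusion of the kind underlying Contribution 2: a continuous state that diffuses with mode-dependent drift while a discrete mode switches as a continuous-time Markov chain. The one-dimensional dynamics, rates, and parameters below are illustrative stand-ins only, not the model analyzed in Section IV:

```python
import numpy as np

rng = np.random.default_rng(3)
T, dt, sigma = 10.0, 1e-3, 0.3
drift = {0: -1.0, 1: +1.0}        # mode-dependent drift of the diffusion
rates = {0: 0.5, 1: 0.5}          # switching rates out of each discrete mode

x, mode, traj = 0.0, 0, []
for _ in range(int(T / dt)):
    if rng.random() < rates[mode] * dt:    # Markov jump of the discrete mode
        mode = 1 - mode
    # Euler-Maruyama step for the continuous (diffusion) component
    x += drift[mode] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    traj.append(x)
print(f"mean = {np.mean(traj):.3f}, std = {np.std(traj):.3f}")
```

In the mean-field control problem of Section IV, it is the switching rates — here the fixed numbers in `rates` — that become the (density-dependent) control inputs.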
## II Notation

We denote the \(n\)-dimensional Euclidean space by \(\mathbb{R}^{n}\). \(\mathbb{R}^{n\times m}\) refers to the space of \(n\times m\) matrices, and \(\mathbb{R}_{+}\) refers to the set of non-negative real numbers. Given a vector \(\mathbf{x}\in\mathbb{R}^{n}\), \(x_{i}\) denotes the \(i^{th}\) coordinate value of \(\mathbf{x}\). For a matrix \(\mathbf{A}\in\mathbb{R}^{n\times m}\), \(A_{ij}\) refers to the element in the \(i^{th}\) row and \(j^{th}\) column of \(\mathbf{A}\). For a subset \(B\subset\mathbb{R}^{M}\), \(\mathrm{int}(B)\) refers to the interior of the set \(B\). \(\mathbb{C}\), \(\mathbb{C}_{-}\), and \(\bar{\mathbb{C}}_{-}\) denote the set of complex numbers, the set of complex numbers with negative real parts, and the set of complex numbers with non-positive real parts, respectively. \(\mathbb{Z}_{+}\) refers to the set of positive integers.

We denote by \(\Omega\) an open, bounded, and connected subset of an \(N\)-dimensional smooth Riemannian manifold \(M\) [39, 40] with a Riemannian volume measure \(d\mathbf{x}\). The boundary of \(\Omega\) is denoted by \(\partial\Omega\). We denote by \(\int_{\Omega}f(\mathbf{x})d\mathbf{x}\) the integral of a function \(f:\Omega\rightarrow\mathbb{R}\) with respect to the Riemannian volume. For each \(1\leq p<\infty\), we define \(L^{p}(\Omega)\) as the Banach space of complex-valued measurable functions on \(\Omega\) whose absolute value raised to the \(p^{\mathrm{th}}\) power has a finite integral. The space \(L^{1}(\Omega)\) is equipped with the norm \(\|f\|_{1}:=\int_{\Omega}|f(\mathbf{x})|d\mathbf{x}\). We define \(L^{\infty}(\Omega)\) as the space of essentially bounded measurable functions on \(\Omega\), equipped with the norm \(\|f\|_{\infty}=\text{ess sup}_{\mathbf{x}\in\Omega}|f(\mathbf{x})|\), where \(\text{ess sup}_{\mathbf{x}\in\Omega}(\cdot)\) denotes the _essential supremum_ attained by its argument over the domain \(\Omega\).

For a given real-valued function \(a\in L^{\infty}(\Omega)\), \(L^{2}_{a}(\Omega)\) refers to the set of all functions \(f\) for which \(\|f\|_{a}:=(\int_{\Omega}|f(\mathbf{x})|^{2}a(\mathbf{x})d\mathbf{x})^{1/2}<\infty\). We will always assume that the associated function \(a\) is uniformly bounded from below by a positive constant, in which case the space \(L^{2}_{a}(\Omega)\) is a Hilbert space with respect to the weighted inner product \(\langle\cdot,\cdot\rangle_{a}:L^{2}_{a}(\Omega)\times L^{2}_{a}(\Omega)\to\mathbb{R}\), given by \(\langle f,g\rangle_{a}=\int_{\Omega}f(\mathbf{x})\bar{g}(\mathbf{x})a(\mathbf{x})d\mathbf{x}\) for each \(f,g\in L^{2}_{a}(\Omega)\), where \(\bar{g}\) is the complex conjugate of the function \(g\). When \(a=\mathbf{1}\), where \(\mathbf{1}\) is the function that takes the value \(1\) almost everywhere on \(\Omega\), the space \(L^{2}_{a}(\Omega)\) coincides with the space \(L^{2}(\Omega)\). For a function \(f\in L^{2}(\Omega)\) and a given constant \(c\), we write \(f\geq c\) to mean that \(f\) is real-valued and \(f(\mathbf{x})\geq c\) for almost every (a.e.) \(\mathbf{x}\in\Omega\).

Suppose \(e^{tX}\) is the flow generated by a vector field \(X\). Then \(X\) defines a differential operator on the set of smooth functions \(C^{\infty}(M)\) through the following action:

\[(Xf)(\mathbf{x})=\lim_{t\to 0}\frac{f(e^{tX}(\mathbf{x}))-f(\mathbf{x})}{t} \tag{1}\]

for all \(\mathbf{x}\in\Omega\) and all \(f\in C^{\infty}(\Omega)\).
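The action (1) is straightforward to check numerically. The following fragment is a minimal sketch, assuming the unicycle "drive" field \(X_{1}=(\cos\theta,\sin\theta,0)\) on \(\mathbb{R}^{2}\times S^{1}\) and an arbitrary smooth test function (both are illustrative choices, not objects prescribed by this article); it compares the difference quotient in (1) with the directional derivative computed by hand.

```python
import numpy as np
from scipy.integrate import solve_ivp

def X1(p):                      # unicycle drive field at p = (x, y, theta)
    return np.array([np.cos(p[2]), np.sin(p[2]), 0.0])

def f(p):                       # an arbitrary smooth test function
    return np.sin(p[0]) * p[1] + np.cos(p[2])

def flow(X, p, t):              # e^{tX}(p), computed by numerical integration
    return solve_ivp(lambda s, q: X(q), (0.0, t), p,
                     rtol=1e-12, atol=1e-12).y[:, -1]

p, t = np.array([0.3, -0.7, 1.1]), 1e-6
lhs = (f(flow(X1, p, t)) - f(p)) / t                 # difference quotient in (1)
grad_f = np.array([np.cos(p[0]) * p[1], np.sin(p[0]), -np.sin(p[2])])
rhs = X1(p) @ grad_f                                 # (X1 f)(p), computed by hand
print(lhs, rhs)                                      # the two values agree to O(t)
```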
Equation (1) is the differential geometric definition from [39] of a vector field \(X\) as an associated differential operator acting on the space of smooth functions. Let \(\mathcal{V}=\{X_{1},...,X_{m}\}\), \(m\leq N\), be a collection of smooth vector fields \(X_{i}\), each defined as in (1). Let \([X,Y]\) denote the Lie bracket of the vector fields \(X\) and \(Y\). We define \(\mathcal{V}^{0}=\mathcal{V}\). For each \(i\in\mathbb{Z}_{+}\), we define in an iterative manner the set of vector fields \(\mathcal{V}^{i}=\{[X,Y];\ X\in\mathcal{V},\ Y\in\mathcal{V}^{j-1},\ j=1,...,i\}\). We will assume that the collection of vector fields \(\mathcal{V}\) satisfies the _Chow-Rashevsky_ condition [4] (also known as _Hormander's condition_ [13]); that is, the Lie algebra generated by the vector fields \(\mathcal{V}\), given by \(\cup_{i=0}^{r}\mathcal{V}^{i}\), has rank \(N\) for sufficiently large \(r\).

A _horizontal curve_ \(\boldsymbol{\gamma}:[0,1]\to\Omega\) connecting two points \(\mathbf{x},\mathbf{y}\in\Omega\) is a Lipschitz curve in \(\Omega\) for which there exist essentially bounded functions \(a_{i}(t)\) such that

\[\dot{\boldsymbol{\gamma}}(t)=\sum_{i=1}^{m}a_{i}(t)X_{i}(\boldsymbol{\gamma}(t)) \tag{2}\]

for almost every \(t\in[0,1]\), where \(X_{i}\in\mathcal{V}\), \(\boldsymbol{\gamma}(0)=\mathbf{x}\), and \(\boldsymbol{\gamma}(1)=\mathbf{y}\). Then \(\mathcal{V}\) defines a distance \(d:\Omega\times\Omega\to\mathbb{R}_{+}\) on \(\Omega\) as

\[d(\mathbf{x},\mathbf{y})=\inf\ \Big\{\int_{0}^{1}|\dot{\boldsymbol{\gamma}}(t)|dt;\ \boldsymbol{\gamma}\text{ is a horizontal curve connecting }\mathbf{x}\text{ and }\mathbf{y}\Big\}\]

**Definition II.1**.: _The domain \(\Omega\subset M\) is said to be \(\epsilon\)-\(\delta\) if there exist \(\delta>0\) and \(0<\epsilon\leq 1\) such that for any pair of points \(\mathbf{p},\mathbf{q}\in\Omega\) with \(d(\mathbf{p},\mathbf{q})\leq\delta\), there exists a continuous curve \(\boldsymbol{\gamma}:[0,T]\to\Omega\) such that \(\boldsymbol{\gamma}(0)=\mathbf{p}\), \(\boldsymbol{\gamma}(T)=\mathbf{q}\), and_

\[\int_{0}^{T}|\dot{\boldsymbol{\gamma}}(t)|dt\ \leq\ \frac{1}{\epsilon}d(\mathbf{p},\mathbf{q}),\]
\[d(\mathbf{z},\partial\Omega)\ \geq\ \epsilon\min(d(\mathbf{p},\mathbf{z}),d(\mathbf{z},\mathbf{q}))\ \ \forall\mathbf{z}\in\{\boldsymbol{\gamma}(t):t\in[0,T]\}\]

The metric \(d\) on \(\Omega\) is known as the _sub-Riemannian_ or _Carnot-Caratheodory_ metric [2, 13]. The topology induced on \(\Omega\) by this metric coincides with the topology induced by the usual Riemannian metric. We will assume that the radius \(r(\Omega)\) of \(\Omega\), given by \(r(\Omega)=\sup\{d(\mathbf{x},\mathbf{y});\ \mathbf{x},\mathbf{y}\in\Omega\}\), is finite.

Given \(a\in L^{\infty}(\Omega)\), with \(a\geq c\) for a positive constant \(c>0\), we define the _weighted horizontal Sobolev space_ \(WH^{1}_{a}(\Omega)=\big\{f\in L^{2}(\Omega):X_{i}(af)\in L^{2}(\Omega)\text{ for }1\leq i\leq m\big\}\). We equip this space with the weighted horizontal Sobolev norm \(\|\cdot\|_{WH^{1}_{a}}\), given by \(\|f\|_{WH^{1}_{a}}=\big(\|f\|_{2}^{2}+\sum_{i=1}^{m}\|X_{i}(af)\|_{2}^{2}\big)^{1/2}\) for each \(f\in WH^{1}_{a}(\Omega)\). Here, the derivative action of \(X_{i}\) on a function \(f\) is to be understood in the distributional sense. When \(a=\mathbf{1}\), where \(\mathbf{1}\) is the constant function that is equal to \(1\) everywhere, we will denote \(WH^{1}_{a}(\Omega)\) by \(WH^{1}(\Omega)\).
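As a concrete illustration of the bracket-generating condition defined above, the following symbolic sketch (an illustrative example under assumed unicycle fields, not part of the formal development) checks that \(X_{1}=(\cos\theta,\sin\theta,0)\) and \(X_{2}=(0,0,1)\) on \(\mathbb{R}^{2}\times S^{1}\) satisfy the Chow-Rashevsky condition after a single bracket.

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
coords = sp.Matrix([x, y, th])
X1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # unicycle drive field
X2 = sp.Matrix([0, 0, 1])                     # unicycle steering field

def lie_bracket(X, Y):
    # [X, Y] = (DY) X - (DX) Y in coordinates, with DX the Jacobian of X
    return Y.jacobian(coords) * X - X.jacobian(coords) * Y

X3 = lie_bracket(X1, X2)                      # an element of V^1
span = sp.Matrix.hstack(X1, X2, X3)
print(X3.T, sp.simplify(span.det()))          # det = 1, so the rank is N = 3
```

Here \([X_{1},X_{2}]=(\sin\theta,-\cos\theta,0)\), so \(\mathcal{V}^{0}\cup\mathcal{V}^{1}\) already spans the tangent space at every point.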
Let \(X\) be a Hilbert space with the norm \(\|\cdot\|_{X}\). The space \(C([0,T];X)\) consists of all continuous functions \(u:[0,T]\to X\) for which \(\|u\|_{C([0,T];X)}:=\max_{0\leq t\leq T}\|u(t)\|_{X}<\infty\). If \(Y\) is a Hilbert space, then \(\mathcal{L}(X,Y)\) will denote the space of bounded linear operators from \(X\) to \(Y\). We will also use the multiplication operator \(\mathcal{M}_{a}:L^{2}(\Omega)\to L^{2}(\Omega)\), defined as \((\mathcal{M}_{a}u)(\mathbf{x})=a(\mathbf{x})u(\mathbf{x})\) for a.e. \(\mathbf{x}\in\Omega\) and each \(u\in L^{2}(\Omega)\).

We will need an appropriate notion of a solution of the PDEs considered in this paper. Toward this end, let \(A\) be a closed linear operator that is densely defined on a subset \(\mathcal{D}(A)\), the domain of the operator, of a Hilbert space \(H\). We will define \(\operatorname{spec}(A)\) as the set \(\{\lambda\in\mathbb{C}:\lambda\mathbb{I}-A\text{ is not invertible}\}\), where \(\mathbb{I}\) is the identity map on \(H\). If \(A\) is a bounded operator, then \(\|A\|_{op}\) will denote the operator norm induced by the norm defined on \(H\). From [25], we have the following definition.

**Definition II.2**.: _For a given time \(T>0\), a **mild solution** of the ODE_

\[\dot{u}(t)=Au(t);\ \ u(0)=u_{0}\in H \tag{3}\]

_is a function \(u\in C([0,T];H)\) such that \(u(t)=u_{0}+A\int_{0}^{t}u(s)ds\) for each \(t\in[0,T]\)._

Under appropriate conditions satisfied by \(A\), the mild solution of a PDE is given by a _strongly continuous semigroup_ of linear operators, \((\mathcal{T}(t))_{t\geq 0}\), that are _generated_ by the operator \(A\) [25].

**Definition II.3**.: _A strongly continuous semigroup of linear operators \((\mathcal{T}(t))_{t\geq 0}\) on a Hilbert space \(X\) is called **positive** if \(u\in X\) such that \(u\geq 0\) implies that \(\mathcal{T}(t)u\geq 0\) for all \(t\geq 0\)._

## III Stabilization without Agent Interactions

Given the definitions in Section II, consider the following _reflected_ stochastic differential equation (SDE) [52] constrained to a domain \(\Omega\subseteq M\):

\[d\mathbf{Z}(t)=\sum_{i=1}^{m}u_{i}(\mathbf{Z}(t))X_{i}dt+\sqrt{2}\sum_{i=1}^{m}v_{i}(\mathbf{Z}(t))X_{i}\circ dW_{i}+\mathbf{n}(\mathbf{Z}(t))d\psi(t),\]
\[\mathbf{Z}(0)=\mathbf{Z}_{0}, \tag{4}\]

where \(\psi(t)\in\mathbb{R}\) is called the _reflecting function_ or _local time_ [52], a stochastic process that constrains \(\mathbf{Z}(t)\) to the domain \(\Omega\); \(\mathbf{n}(\mathbf{x})\) is the normal to the boundary at \(\mathbf{x}\in\partial\Omega\); \(W_{i}\) are \(m\) copies of the one-dimensional Wiener process; and \(u_{i}\) and \(v_{i}\) are \(m\) feedback laws. In the above SDE (4), the notation \(\circ\) indicates that the SDE is to be interpreted in the _sense of Stratonovich_ [34]. Let \(y(\mathbf{x},t)\) denote the probability density of the random variable \(\mathbf{Z}(t)\), defined by the relation \(\mathbb{P}(\mathbf{Z}(t)\in\Gamma)=\int_{\Gamma}y(\mathbf{x},t)d\mathbf{x}\) for every measurable set \(\Gamma\subseteq\Omega\). In this section, we consider the following control problem:

**Problem III.1**.: _Given a target probability density \(f\) on \(\Omega\), design control laws \(u_{i}(\mathbf{x})\) and \(v_{i}(\mathbf{x})\) in Eq. (4) such that the probability density \(y(\mathbf{x},t)\) of the stochastic process \(\mathbf{Z}(t)\), which evolves according to Eq. (4), converges asymptotically to \(f\)._

The motivation for this problem comes from stochastic coverage applications in swarm robotics that are framed as follows.
Let the random variable \(\mathbf{Z}_{j}(t)\), \(j\in\{1,...,N_{p}\}\), denote the position of the \(j^{th}\) robot in a swarm of \(N_{p}\) robots at time \(t\). This position evolves according to Eq. (4), in which \(u_{i}\) and \(v_{i}\) are control laws that govern each robot's motion. Since each robot follows the same control laws \(u_{i}\) and \(v_{i}\), the random variables \(\mathbf{Z}_{j}(t)\) are independent and identically distributed. Then, denoting by \(\delta_{\mathbf{x}}\) the delta distribution at \(\mathbf{x}\in M\), the _empirical distribution_ \(\frac{1}{N_{p}}\sum_{j=1}^{N_{p}}\delta_{\mathbf{Z}_{j}(t)}\), which represents the distribution of the robots in space, converges to the density \(y(\mathbf{x},t)\) as \(N_{p}\rightarrow\infty\) due to the _law of large numbers_.

The stabilization problem III.1 has been considered by the authors in [19] for the case where the system is _holonomic_ and the vector fields \(X_{i}=\frac{\partial}{\partial x_{i}}\) are the standard coordinate vector fields. The goal in this section is to extend the results in [19] to the general case where the number of vector fields \(X_{i}\) is possibly less than the dimension \(N\) of the state space \(M\). Such density stabilization problems were first considered for the case where the domain \(\Omega\) is the whole of the Euclidean space \(\mathbb{R}^{n}\) in [48, 47]. When time is discrete and the system is controllable in one time step, this problem has been considered in [10].

The main difficulty in extending the results from [19] is that when the number of control vector fields, \(m\), is less than the dimension of the state space \(M\), the _generator_ \(\sum_{i=1}^{m}\big((v_{i}X_{i})^{2}+u_{i}X_{i}\big)\) of the stochastic process \(\mathbf{Z}(t)\) is not _elliptic_, which makes standard results in the literature on parabolic PDEs inapplicable. In particular, let \(A=\sum_{i=1}^{m}(v_{i}X_{i})^{2}\). The associated probability density \(y(\mathbf{x},t)\) of the process \(\mathbf{Z}(t)\) evolves according to the PDE

\[y_{t}=A^{*}y-\nabla\cdot\Big(\sum_{i=1}^{m}u_{i}(\mathbf{x})X_{i}y\Big)\quad\text{in}\quad\Omega\times[0,T]\]
\[y(\cdot,0)=y^{0}\quad\text{in}\quad\Omega \tag{5}\]

with _zero flux_ boundary conditions, where \(\nabla\cdot\) denotes the divergence operation with respect to the measure \(d\mathbf{x}\), and \(A^{*}\) is the adjoint of the operator \(A\). The stabilization problem III.1 is an open-loop control problem for the PDE (5), in which the goal is to stabilize the solution \(y(\mathbf{x},t)\) of (5) to a target function \(f\). The operator \(A\) is not elliptic in general, but only _hypoelliptic_. In particular, if \(f\in C_{0}^{\infty}(\Omega)\) has compact support \(K\), then, due to the Chow-Rashevsky Lie rank condition, any function \(u\) on \(\Omega\) that satisfies \(Au=f\) is smooth on \(K\) [13]. Using this property of \(A\), we will extend the stabilization results of [19, 24] to the case where the agents have nonholonomic dynamics. First, formally, we will provide a number of candidate control laws that are solutions to Problem III.1. Given these control laws, in Section III-A we will present a stability analysis of a class of PDEs that coincide with (5) when the operators \(X_{i}\) are formally skew-adjoint with respect to the volume form \(d\mathbf{x}\), that is, \(X_{i}^{*}=-X_{i}\).
Suppose that the operators \(X_{i}\) are formally skew-adjoint, and let \(f\in W^{1,\infty}(\Omega)\) be a positive function that is bounded from below by a positive number and for which \(\int_{\Omega}f(\mathbf{x})d\mathbf{x}=1\). If we set \(u_{i}(\cdot)=X_{i}f/f\) and \(v_{i}(\cdot)=1\) for each \(i\in\{1,...,m\}\) and all \(t\geq 0\), then the PDE (5) becomes

\[y_{t}=\sum_{i=1}^{m}X_{i}^{2}y-\nabla\cdot\Big(\sum_{i=1}^{m}\frac{X_{i}f}{f}X_{i}y\Big)\quad\text{in}\quad\Omega\times[0,T]\]
\[y(\cdot,0)=y^{0}\quad\text{in}\quad\Omega \tag{6}\]

Let \(\nabla_{H}\) be the _horizontal gradient_ operator, which maps functions to vector fields and is defined as \(\nabla_{H}g=\sum_{i=1}^{m}(X_{i}g)X_{i}\) for a function \(g\). When the manifold \(M\) is a _Lie group_ \(G\) that is _unimodular_, i.e., the left- and right-Haar measures [39] coincide, we have that \(\nabla\cdot\nabla_{H}(\cdot)=\sum_{i=1}^{m}X_{i}^{2}\) [3]. Hence, if we set \(y=f\), then

\[\sum_{i=1}^{m}X_{i}^{2}y-\nabla\cdot\Big(\sum_{i=1}^{m}\frac{X_{i}f}{f}X_{i}y\Big)=\sum_{i=1}^{m}X_{i}^{2}f-\nabla\cdot(\nabla_{H}f)=0 \tag{7}\]

Thus, \(f\) is an equilibrium solution of the PDE (6). Our goal in Section III-A will be to show that \(f\) is the globally exponentially stable equilibrium solution of the PDE (6) on the set of square-integrable probability densities. Let \(a=\frac{1}{f}\). Then the operator \(\sum_{i=1}^{m}X_{i}^{2}-\nabla\cdot(\sum_{i=1}^{m}\frac{X_{i}f}{f}X_{i})\) can alternatively be expressed as \(\nabla\cdot(\frac{1}{a(\mathbf{x})}\nabla_{H}(a(\mathbf{x})\cdot))\). Similarly, one can also consider the feedback laws \(u_{i}=0\) and \(v_{i}=\frac{1}{f}\). In this case, the corresponding operator of interest is given by

\[A^{*}=\nabla\cdot(a(\mathbf{x})\nabla_{H}(a(\mathbf{x})\cdot)) \tag{8}\]

This control law is similar to the one presented in [19] for the case of holonomic agents, where, instead of the Stratonovich integral, we considered the Ito integral, and the resulting generator was of the form \(\nabla\cdot(\nabla_{H}(a(\mathbf{x})\cdot))\). A particle-level illustration of the first of these control laws is sketched below.
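The following sketch simulates a swarm of particles under the control law \(u_{i}=X_{i}f/f\), \(v_{i}=1\), on the flat \(3\)-torus with the unicycle fields from Section II. The torus is a compact manifold without boundary, so the reflection term in (4) is not needed; the target density and the Heun-type (Stratonovich) time stepper are illustrative assumptions, not prescriptions from this article.

```python
import numpy as np

rng = np.random.default_rng(0)
TWO_PI = 2.0 * np.pi

def f(z):                                    # assumed target density (unnormalized)
    return 1.0 + 0.5 * np.cos(z[..., 0])

def fields(z):                               # X1, X2 evaluated at each particle
    th = z[..., 2]
    X1 = np.stack([np.cos(th), np.sin(th), np.zeros_like(th)], axis=-1)
    X2 = np.stack([np.zeros_like(th), np.zeros_like(th), np.ones_like(th)], axis=-1)
    return X1, X2

def drift(z):                                # u1 X1 + u2 X2 with u_i = X_i f / f
    X1, _ = fields(z)
    u1 = np.cos(z[..., 2]) * (-0.5 * np.sin(z[..., 0])) / f(z)   # here u2 = 0
    return u1[..., None] * X1

def heun_step(z, dt):                        # Euler-Heun step for Stratonovich noise
    dW = rng.normal(scale=np.sqrt(dt), size=z.shape[:-1] + (2,))
    X1, X2 = fields(z)
    b = np.sqrt(2.0) * (dW[..., :1] * X1 + dW[..., 1:] * X2)
    X1p, X2p = fields(z + b)                 # predictor state
    bp = np.sqrt(2.0) * (dW[..., :1] * X1p + dW[..., 1:] * X2p)
    return np.mod(z + drift(z) * dt + 0.5 * (b + bp), TWO_PI)

z = rng.uniform(0.0, TWO_PI, size=(20000, 3))   # 20000 agents, state (x, y, theta)
for _ in range(2000):
    z = heun_step(z, dt=5e-3)
hist, _ = np.histogram(z[:, 0], bins=24, range=(0.0, TWO_PI), density=True)
print(np.round(hist * TWO_PI, 2))            # should approximate 1 + 0.5 cos(x)
```

On the torus the fields \(X_{1},X_{2}\) are divergence-free, hence formally skew-adjoint, and the group is unimodular, so \(f\) is indeed stationary for the simulated process, as in (7).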
### _Stability Analysis_

The preceding discussion motivates us to study stability properties of PDEs associated with a class of hypoelliptic operators that have a given probability density as their equilibrium solution. In this section, we will provide a semigroup theoretic analysis of a class of such PDEs. There have been a number of works on semigroups generated by hypoelliptic operators on manifolds without boundary [33], or on manifolds with boundary under Dirichlet boundary conditions [60]. Due to the term \(a(\mathbf{x})\), the operators that we consider are more general than those in [60]. There has also been work on the long-term convergence of hypoelliptic diffusions to uniform distributions for the special case of Carnot groups [54], and to more general equilibrium distributions on more general state spaces for certain examples of hypoelliptic diffusions, using log-Sobolev inequalities [6]. In comparison, our results hold for a more general class of degenerate diffusions by establishing a spectral gap for the generator, which can be guaranteed to exist for equilibrium distributions that are bounded uniformly from above and below by positive numbers. Before we present our stability analysis, we give some more preliminary definitions.

Given \(a,b\in L^{\infty}(\Omega)\) such that \(a\geq c\) and \(b\geq c\) for some positive constant \(c\), and \(\mathcal{D}(\omega_{a}^{b})=WH_{a}^{1}(\Omega)\), we define the sesquilinear form \(\omega_{a}^{b}:\mathcal{D}(\omega_{a}^{b})\times\mathcal{D}(\omega_{a}^{b})\rightarrow\mathbb{C}\) as

\[\omega_{a}^{b}(u,v)=\sum_{i=1}^{m}\int_{\Omega}b(\mathbf{x})X_{i}(a(\mathbf{x})u(\mathbf{x}))\cdot X_{i}(a(\mathbf{x})\bar{v}(\mathbf{x}))d\mathbf{x} \tag{9}\]

for each \(u,v\in\mathcal{D}(\omega_{a}^{b})\). We associate with the form \(\omega_{a}^{b}\) an operator \(A_{a}^{b}:\mathcal{D}(A_{a}^{b})\to L_{a}^{2}(\Omega)\), defined as \(A_{a}^{b}u=v\) if \(\omega_{a}^{b}(u,\phi)=\langle v,\phi\rangle_{a}\) for all \(\phi\in\mathcal{D}(\omega_{a}^{b})\) and for all \(u\in\mathcal{D}(A_{a}^{b})=\{g\in\mathcal{D}(\omega_{a}^{b}):\ \exists f\in L_{a}^{2}(\Omega)\ \text{s.t.}\ \omega_{a}^{b}(g,\phi)=\langle f,\phi\rangle_{a}\ \forall\phi\in\mathcal{D}(\omega_{a}^{b})\}\). When the \(X_{i}\) are formally skew-adjoint, the operator \(A_{a}^{b}\) is a weak formulation of the second-order partial differential operator \(\sum_{i=1}^{m}X_{i}^{*}(b(\mathbf{x})X_{i}(a(\mathbf{x})\,\cdot\,))\). An advantage of using this weak formulation of the operator, rather than a strong formulation, is that one does not need to establish that the domain \(\mathcal{D}(A_{a}^{b})\) contains twice (weakly) differentiable functions, which might not be true in general, given the very weak regularity assumed on the boundary of the domain \(\Omega\) [32]. Formally, we will be studying properties of the following PDE:

\[y_{t}=-\sum_{i=1}^{m}X_{i}^{*}(b(\mathbf{x})X_{i}(a(\mathbf{x})y))\ \ \text{in}\ \ \Omega\times[0,T]. \tag{10}\]

Note that for \(b=1/a\), \(b=a\), and \(b=\mathbf{1}\), we recover the operators introduced in the previous subsection. The proofs of the results in this section follow closely, almost verbatim, those for the elliptic case considered by the authors in [24]. Therefore, due to space limitations, we only include the proof of the stability result, which is stated in Theorem III.6. These results will be used extensively in Section IV, where we consider the case in which the agents have local interactions with one another. The main technical difference between the proofs in [24] and the proofs of the results presented in this section is that here, we use the horizontal Sobolev spaces \(WH^{1}(\Omega)\) to establish semigroup generation properties of the generator \(A\), instead of the classical Sobolev space \(H^{1}(\Omega)\). Due to the bracket generating property of the vector fields \(\mathcal{V}\), it is known that the space \(WH^{1}(\Omega)\) has many properties similar to the classical Sobolev space \(H^{1}(\Omega)\) [29, 49]. We will need the following assumption for the results in this section to hold true.

**Assumption III.2**.: _At least one of the following conditions holds:_

1. _The domain_ \(\Omega\) _is_ \(\epsilon\)-\(\delta\) _and_ \(M=\mathbb{R}^{n}\)_._
2. _The manifold_ \(M\) _is compact and without a boundary._
3. _The boundary_ \(\partial\Omega\) _of the domain_ \(\Omega\) _is_ \(C^{1}\) _and_ \(\mathrm{span}\{\mathcal{V}\}=T_{\mathbf{x}}M\) _for all_ \(\mathbf{x}\in\Omega\)_._

**Lemma III.3**.:

1. _The operator_ \(A_{a}^{b}:\mathcal{D}(A_{a}^{b})\to L_{a}^{2}(\Omega)\) _is closed, densely defined, and self-adjoint._
2. _The operator_ \(A_{a}^{b}:\mathcal{D}(A_{a}^{b})\to L_{a}^{2}(\Omega)\) _has a purely discrete spectrum._
3. \(-A_{a}^{b}\) _generates a semigroup of operators_ \((\mathcal{T}_{a}^{b}(t))_{t\geq 0}\)_. The semigroup_ \((\mathcal{T}_{a}^{b}(t))_{t\geq 0}\) _is positive and a contraction._

That the operator \(A_{a}^{b}\) is densely defined follows from the fact that the domain of the form \(\omega_{a}^{b}\), which is \(WH_{a}^{1}(\Omega)\), is dense in \(L_{a}^{2}(\Omega)\) when equipped with the norm \((\omega_{a}^{b}(u,u)+\|u\|_{2}^{2})^{1/2}\). The discreteness of the spectrum of the operator \(A_{a}^{b}\) follows from the compactness of the embedding of \(WH^{1}(\Omega)\) in \(L^{2}(\Omega)\). To establish the positivity properties of the semigroup \(\mathcal{T}_{a}^{b}(t)\), we need the fact that if \(f\in WH^{1}(\Omega)\), then \(|f|\in WH^{1}(\Omega)\). This result is known for the case where \(\Omega\) is a subset of \(\mathbb{R}^{n}\). The proof for the general case where \(\Omega\) is a manifold follows verbatim the results of [29, 49], since the proof only requires the density of the space \(C^{\infty}(M)\) in \(WH^{1}(\Omega)\), which can be verified. Using these facts, the proof of the previous lemma follows very closely the proof of [24][Lemma IV.1]. The contractivity of the semigroup follows from [50][Proposition 1.51], due to the fact that the form \(\omega_{a}^{b}\) is _accretive_; that is, \(\omega_{a}^{b}(u,u)\geq 0\) for all \(u\in\mathcal{D}(\omega_{a}^{b})\).

**Proposition III.4**.: _[29, 49] Given \(f\in WH^{1}(\Omega)\), we have that \(|f|\in WH^{1}(\Omega)\)._

Using the above proposition and the established properties of the operator \(A_{a}^{b}\), we can prove the following results on the positivity and mass-conservation properties of the semigroup generated by the operator \(-A_{a}^{b}\). These results will be used in the analysis presented in Section IV.

**Corollary III.5**.: _The operator \(-A_{a}^{b}\) generates a semigroup of operators \((\mathcal{T}_{a}^{b}(t))_{t\geq 0}\) acting on \(L_{a}^{2}(\Omega)\). The semigroup \((\mathcal{T}_{a}^{b}(t))_{t\geq 0}\) is positive. Furthermore, if \(a=b=\mathbf{1}\) is the constant function equal to \(1\) almost everywhere, then \(\|y^{0}\|_{\infty}\leq 1\) implies that \(\|\mathcal{T}_{a}^{b}(t)y^{0}\|_{\infty}\leq 1\) for all \(t\geq 0\)._

_Additionally, this semigroup has the following mass conservation property: if \(y^{0}\in L_{a}^{2}(\Omega)\), \(y^{0}\geq 0\), and \(\int_{\Omega}y^{0}(\mathbf{x})d\mathbf{x}=1\), then \(\int_{\Omega}(\mathcal{T}_{a}^{b}(t)y^{0})(\mathbf{x})d\mathbf{x}=1\) for all \(t\geq 0\)._

Lastly, we establish the following important result on the long-term stability properties of the semigroups associated with the operators \(\sum_{i=1}^{m}X_{i}^{*}(b(\mathbf{x})X_{i}(a(\mathbf{x})\,\cdot\,))\).

**Theorem III.6**.: _(Exponential stability of semigroup) The semigroup \((\mathcal{T}_{a}^{b}(t))_{t\geq 0}\) generated by the operator \(-A_{a}^{b}\) is analytic. Moreover, \(0\) is a simple eigenvalue of \(-A_{a}^{b}\), corresponding to the eigenvector \(f=1/a\)._
_Hence, if \(y^{0}\geq 0\) and \(\int_{\Omega}y^{0}(\mathbf{x})d\mathbf{x}=\int_{\Omega}f(\mathbf{x})d\mathbf{x}=1\), then the following estimate holds for some positive constants \(M_{0},\lambda\) and all \(t\geq 0\):_

\[\|\mathcal{T}_{a}^{b}(t)y^{0}-f\|_{a}\quad\leq\quad M_{0}e^{-\lambda t}\|y^{0}-f\|_{a} \tag{11}\]

Proof.: We only prove the stability result, since the analyticity of the semigroup and the conservation of mass properties can be established as in [24]. The semigroup is compact due to the compactness of the embedding of \(\mathcal{D}(A_{a}^{b})\subset WH^{1}(\Omega)\) in \(L^{2}(\Omega)\). Since the semigroup is analytic and compact, in order to establish its stability properties, it is sufficient to identify the eigenvectors associated with the eigenvalue \(0\). In the proof of the corresponding result in [24], we used the Poincare inequality to establish that the eigenspace of the Laplacian \(\Delta\) corresponding to the eigenvalue \(0\) consists only of constant functions. It is not clear whether the Poincare inequality holds for the operator \(-A_{a}^{b}\) under condition 2 in Assumption III.2. Hence, instead of using a Poincare inequality, we will prove directly that the kernel of the operator \(A:=A_{\mathbf{1}}^{1}\) consists only of constant functions.

Suppose \(u\in\mathcal{D}(A)\) is such that \(Au=\mathbf{0}\). This implies that \(\langle Au,u\rangle=\sum_{i=1}^{m}\int_{\Omega}|X_{i}u|^{2}d\mathbf{x}=0\). Since the operator \(A\) satisfies the Lie rank condition, from regularity results due to Hormander [13], we can infer that \(u\) is locally smooth everywhere in \(\Omega\). Then we know that, for a given horizontal curve \(\boldsymbol{\gamma}:[0,1]\to\Omega\), \(u(\boldsymbol{\gamma}(1))=u(\boldsymbol{\gamma}(0))+\int_{0}^{1}\sum_{i=1}^{m}a_{i}(t)X_{i}u(\boldsymbol{\gamma}(t))dt\), since \(u(\boldsymbol{\gamma}(t))\) satisfies the differential equation \(\frac{d}{dt}u(\boldsymbol{\gamma}(t))=\sum_{i=1}^{m}a_{i}(t)X_{i}u(\boldsymbol{\gamma}(t))\), where \(a_{i}(t)\) are the essentially bounded functions associated with the curve \(\boldsymbol{\gamma}(t)\) according to (2). Hence, \(u(\boldsymbol{\gamma}(1))-u(\boldsymbol{\gamma}(0))=\int_{0}^{1}\sum_{i=1}^{m}a_{i}(t)X_{i}u(\boldsymbol{\gamma}(t))dt=0\), because \(\sum_{i=1}^{m}(X_{i}u)^{2}=0\). Note that we require the local smoothness of \(u\) in order to make sense of the term \(\int_{0}^{1}\sum_{i=1}^{m}a_{i}(t)X_{i}u(\boldsymbol{\gamma}(t))dt\). Since \(\mathcal{V}\) is bracket generating, we can choose \(\boldsymbol{\gamma}(t)\) such that \(\boldsymbol{\gamma}(0)\) and \(\boldsymbol{\gamma}(1)\) are any two given points in \(\Omega\). Hence, \(u\) is constant everywhere on \(\Omega\). This implies that \(A\mathbf{1}=\mathbf{0}\), and hence \(A_{a}^{b}f=\mathbf{0}\), due to the assumption that \(a,b\) are uniformly bounded from below by a positive constant.

## IV Stabilization with Local Agent Interactions

In Section III, the probability densities that we stabilized were assumed to be uniformly bounded from below by a positive number. Without this assumption, the semigroups that were constructed would not be globally asymptotically stable. In this section, we will introduce a semilinear PDE model for stabilizing a swarm to probability densities whose supports are possibly disconnected. As in Section III, \(\Omega\) will denote an open bounded subset of a manifold, and we consider a collection of vector fields \(\mathcal{V}=\{X_{1},...,X_{m}\}\) satisfying the Chow-Rashevsky condition.
Let \(A:=A_{\mathbf{1}}^{1}\) be the operator defined in Section III, where \(\mathbf{1}\) denotes the function that is equal to \(1\) almost everywhere on \(\Omega\). We will also need the spaces \(\mathbf{L}^{2}(\Omega)=L^{2}(\Omega)\times L^{2}(\Omega)\) and \(\mathbf{L}^{\infty}(\Omega)=L^{\infty}(\Omega)\times L^{\infty}(\Omega)\) with the standard norms inherited from the spaces \(L^{2}(\Omega)\) and \(L^{\infty}(\Omega)\). We will consider the following PDE model:

\[(y_{1})_{t}=-Ay_{1}-q_{1}(\mathbf{x},t)y_{1}+q_{2}(\mathbf{x},t)y_{2}\quad\text{in}\ \ \Omega\times[0,T]\]
\[(y_{2})_{t}=q_{1}(\mathbf{x},t)y_{1}-q_{2}(\mathbf{x},t)y_{2}\quad\text{in}\ \ \Omega\times[0,T]\]
\[\mathbf{y}(\cdot,0)=\mathbf{y}^{0}\quad\text{in}\ \ \Omega\]
\[\mathbf{n}\cdot\nabla y_{1}=0\quad\text{on}\ \ \partial\Omega\times[0,T], \tag{12}\]

where \(y_{1}\) and \(y_{2}\) are non-negative functions and \(q_{i}\) are reaction parameters. This PDE model is the forward equation of a hybrid switching diffusion process (HSDP) [62]. In addition to a continuous spatial state \(\mathbf{Z}(t)\), each agent is associated with a discrete state \(Y(t)\in\{0,1\}\) at each time \(t\). The hybrid switching diffusion process \((\mathbf{Z}(t),Y(t))\) can be represented as a system of SDEs of the form

\[d\mathbf{Z}(t)=\sqrt{2}(1-Y(t))\sum_{i=1}^{m}X_{i}\circ dW_{i}\;+\;\mathbf{n}(\mathbf{Z}(t))d\psi(t),\]
\[\mathbf{Z}(0)=\mathbf{Z}_{0}. \tag{13}\]

The PDE (12) is related to the SDE (13), for each \(k\in\{0,1\}\), through the relation \(\mathbb{P}(Y(t)=k,\ \mathbf{Z}(t)\in\Gamma)=\int_{\Gamma}y_{k+1}(\mathbf{x},t)d\mathbf{x}\) for all \(t\in[0,T]\) and all measurable \(\Gamma\subset\Omega\). The transitions of the variable \(Y(t)\) from one discrete state to another are determined by two functions \(q_{i}:\Omega\times[0,T]\to[0,\infty)\) in the following way:

\[\mathbb{P}(Y(t+h)=1|Y(t)=0)=q_{1}(\mathbf{Z}(t),t)h+o(h) \tag{14}\]
\[\mathbb{P}(Y(t+h)=0|Y(t)=1)=q_{2}(\mathbf{Z}(t),t)h+o(h) \tag{15}\]

The state \(Y(t)=0\) corresponds to the state in which agents diffuse in space according to the reflected SDE, and the state \(Y(t)=1\) corresponds to a state in which they are motionless. Therefore, unlike the process considered in Section III, each agent has two discrete states, between which it jumps according to the _transition rates_ \(q_{i}(\mathbf{x},t)\) (also called _reaction parameters_). We will treat the transition rates \(q_{i}(\mathbf{x},t)\) as the control inputs, instead of the velocity and diffusion parameters \((u_{i},v_{i})\). Since we will allow the control inputs to be functions of the density of the random variables \((\mathbf{Z}(t),Y(t))\), i.e., the density of agents in each state at time \(t\), this reaction-based control mechanism depends on _interactions_ among agents that enable them to estimate these densities, e.g., via local sensing, wireless communication, or physical encounters. Due to the density-dependent transition rates, the forward equation is a semilinear PDE. We will consider the following problem in this section.

**Problem IV.1**.: _Let \(y^{d}\in L^{\infty}(\Omega)\) be a target probability density._
_Construct a mean-field feedback law \(K_{i}:\mathbf{L}^{2}(\Omega)\to L^{\infty}(\Omega)\) such that if \(q_{i}(\cdot,t)=K_{i}(\mathbf{y}(t))\) for all \(i\in\{1,2\}\) and all \(t\geq 0\), then the system (12) is globally asymptotically stable about the equilibrium \(\mathbf{y}^{d}=[\mathbf{0}\;\;y^{d}]^{T}\)._

Before we address this problem, we make some additional assumptions on the domain \(\Omega\) and the operator \(A\). Toward this end, we present the following definitions.

**Definition IV.2**.: _We will say that \(\Omega\) is a \(C^{1,1}\) **domain** if each point \(\mathbf{x}\in\partial\Omega\) has a neighborhood \(\mathcal{N}\) such that \(\Omega\cap\mathcal{N}\) is represented by the inequality \(x_{n}<\gamma(x_{1},...,x_{n-1})\) in some Cartesian coordinate system, for some function \(\gamma:\mathbb{R}^{n-1}\to\mathbb{R}\) that is at least once differentiable and has derivatives of order \(1\) that are Lipschitz continuous._

**Definition IV.3**.: _The domain \(\Omega\) will be said to satisfy the **chain condition** if there exists a constant \(C>0\) such that for every \(\mathbf{x},\bar{\mathbf{x}}\in\Omega\) and every positive \(j\in\mathbb{Z}_{+}\), there exists a sequence of points \(\mathbf{x}_{i}\in\Omega\), \(0\leq i\leq j\), such that \(\mathbf{x}_{0}=\mathbf{x}\), \(\mathbf{x}_{j}=\bar{\mathbf{x}}\), and \(|\mathbf{x}_{i}-\mathbf{x}_{i+1}|\leq\frac{C}{j}|\mathbf{x}-\bar{\mathbf{x}}|\) for all \(i=0,...,j-1\). Here \(|\cdot|\) denotes the standard Euclidean norm._

Note that every convex domain satisfies the chain condition. In this section, we will make some stronger assumptions on the generator and the domain \(\Omega\) than those made in the previous section.

**Assumption IV.4**.: _At least one of the following conditions holds:_

1. _If_ \(\Omega\neq M\)_, then_ \(\Omega\) _is a bounded subset of_ \(\mathbb{R}^{n}\)_,_ \(-A=\sum_{i=1}^{n}\partial_{x_{i}}^{2}=\Delta\) _is the Laplacian, and_ \(\Omega\) _is a_ \(C^{1,1}\) _domain in the sense of Definition IV.2 that satisfies the chain condition in Definition IV.3._
2. _The set_ \(\Omega\) _is a compact manifold_ \(M\) _without a boundary and_ \(A=\sum_{i=1}^{m}X_{i}^{*}X_{i}\)_._

Given these assumptions, we have the following result, due to Gaussian estimates proved in [16] for the Laplacian \(\Delta\) and in [33] for sub-Laplacians. We will use this result in the subsequent analysis.

**Theorem IV.5**.: _Let \((\mathcal{T}(t))_{t\geq 0}\) be the semigroup generated by the operator \(-A\). Let \(y^{0}\in L^{2}(\Omega)\) be non-negative. Then there exist a constant \(C>0\) and a time \(T>0\), independent of \(y^{0}\), such that \(\mathcal{T}(t)y^{0}\geq C\|y^{0}\|_{1}\) for all \(t\geq T\)._

In order to address Problem IV.1, we define the following maps \(F_{i}:L^{2}(\Omega)\to L^{\infty}(\Omega)\), \(i\in\{1,2\}\):

\[(F_{i}(f))(\mathbf{x})=r_{i}(f(\mathbf{x})-y^{d}(\mathbf{x})) \tag{16}\]

for almost every \(\mathbf{x}\in\Omega\) and all \(f\in L^{2}(\Omega)\), where \(r_{i}:\mathbb{R}\to[0,c]\) are globally Lipschitz functions, for some positive number \(c\), such that the functions \(r_{1}\) and \(r_{2}\) have supports equal to the intervals \((-\infty,0]\) and \([0,\infty)\), respectively. Our candidate mean-field feedback law \(K_{i}\) for addressing Problem IV.1 will be \(K_{i}(\mathbf{y})=F_{i}(y_{2})\) for each \(i\in\{1,2\}\); one concrete choice of the rate functions \(r_{i}\) is sketched below.
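The following sketch gives one admissible pair \(r_{1},r_{2}\), saturating a linear gain at the level \(c\). The saturation level and the gain \(\beta\) are assumed constants chosen for illustration; any bounded, globally Lipschitz pair with the supports required in (16) would serve equally well.

```python
import numpy as np

c, beta = 1.0, 2.0   # assumed saturation level and gain

def r1(s):
    # supported on (-inf, 0]: positive only where y_2 < y^d, so diffusing
    # agents stop preferentially where the stopped density is deficient
    return np.minimum(c, beta * np.maximum(-s, 0.0))

def r2(s):
    # supported on [0, inf): positive only where y_2 > y^d, so stopped
    # agents resume diffusing where the stopped density is in excess
    return np.minimum(c, beta * np.maximum(s, 0.0))

# e.g., the rates an agent at x would apply: q1 = r1(y2_est(x) - yd(x)), etc.
```

Both functions are bounded by \(c\) and globally Lipschitz with constant \(\beta\), as required of the \(r_{i}\).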
With the feedback law \(K_{i}(\mathbf{y})=F_{i}(y_{2})\), the resulting _closed-loop_ PDE is given by

\[(y_{1})_{t}=-Ay_{1}-F_{1}(y_{2})y_{1}+F_{2}(y_{2})y_{2}\quad\text{in}\ \ \Omega\times[0,T]\]
\[(y_{2})_{t}=F_{1}(y_{2})y_{1}-F_{2}(y_{2})y_{2}\quad\text{in}\ \ \Omega\times[0,T]\]
\[\mathbf{y}(\cdot,0)=\mathbf{y}^{0}\quad\text{in}\ \ \Omega\]
\[\mathbf{n}\cdot\nabla y_{1}=0\quad\text{on}\ \ \partial\Omega\times[0,T], \tag{17}\]

where the Neumann boundary condition in the last equation is specified only for the case where the boundary \(\partial\Omega\) is nonempty. Since the transition rates are functions of the distribution of the random variable, the relation between the system of SDEs (13) and the PDE (17) is no longer straightforward. For this choice of control law \(F_{i}\), the SDE becomes a stochastic process of _McKean-Vlasov_ type [46, 36], and further analysis is required to establish a rigorous connection between the two systems (13) and (17). Such an analysis is beyond the scope of this article and is left for future work. Our main goal in this section will be to establish the asymptotic stability of the PDE (17) under Assumption IV.4.

Before we begin the stability analysis of the above PDE model, we point out that standard approaches to stability analysis, such as linearization-based approaches or Lyapunov functional arguments, are not immediately applicable, as we demonstrate in the following two remarks.

**Remark IV.6**.: _(**Lack of exponential stability**) Consider the linearization of the PDE (17) about the target equilibrium density \(\mathbf{y}^{d}=[\mathbf{0}\quad y^{d}]^{T}\). It can be verified that the (Frechet) derivative of the nonlinear operators \(F_{i}\) about \(\mathbf{y}^{d}\) is the \(\mathbf{0}\) operator. Therefore, the linearization of the PDE about the equilibrium \(\mathbf{y}^{d}\) is:_

\[(\tilde{y}_{1})_{t}=-A\tilde{y}_{1}\quad\text{in}\ \ \Omega\times[0,T]\]
\[(\tilde{y}_{2})_{t}=\mathbf{0}\quad\text{in}\ \ \Omega\times[0,T]\]

_Clearly, this PDE is not exponentially stable, since the spectrum of its generator, \(\mathbf{y}\mapsto[-Ay_{1}\;\;\mathbf{0}]^{T}\), has an infinite number of eigenvalues at \(0\). Hence, the PDE (17) cannot be locally exponentially stable about the equilibrium \(\mathbf{y}^{d}\)._

**Remark IV.7**.: _(**Difficulty in using LaSalle's principle**) Another standard approach to establishing asymptotic stability of dynamical systems is LaSalle's invariance principle [35]. However, the application of LaSalle's invariance principle to stability analysis of infinite-dimensional dynamical systems, such as the PDE (17), requires that the trajectories of the system remain in a compact set for all time. The compactness of trajectories of solutions of parabolic PDEs is usually inferred from the regularizing effect of the diffusion component of the dynamics. This is not straightforward to establish for solutions \(\mathbf{y}\) of the PDE (17), due to the fact that the diffusion operator \(A\) acts only on the first state \(y_{1}\), and therefore it cannot be guaranteed that the state \(y_{2}\) lies in a Sobolev space._

Due to the technical issues pointed out in Remarks IV.6 and IV.7, we will use an alternative approach to establish asymptotic stability of the PDE (17), based on the monotonicity properties of the PDE. In order to perform the stability analysis, we will need a suitable notion of a solution. Toward this end, we use the following definition.

**Definition IV.8**.: _Let \((\mathcal{T}(t))_{t\geq 0}\) be the semigroup generated by the operator \(-A\)._
_We will say that the PDE (17) has a **local mild solution** if there exist \(T>0\) and \(\mathbf{y}\in C([0,T];\mathbf{L}^{2}(\Omega))\) such that_

\[y_{1}(\cdot,t)=\mathcal{T}(t)y_{1}^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{1}(y_{2}(\cdot,s))y_{1}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{2}(y_{2}(\cdot,s))y_{2}(\cdot,s)\Big)ds,\]
\[y_{2}(\cdot,t)=y_{2}^{0}+\int_{0}^{t}F_{1}(y_{2}(\cdot,s))y_{1}(\cdot,s)ds-\int_{0}^{t}F_{2}(y_{2}(\cdot,s))y_{2}(\cdot,s)ds \tag{18}\]

_for all \(t\in[0,T]\). We will say that the PDE (17) has a unique **global mild solution** if it has a unique local mild solution for every \(T>0\)._

To establish the existence of solutions of the PDE (17), we will need the operator \(\mathbf{A}:\mathcal{D}(\mathbf{A})\to\mathbf{L}^{2}(\Omega)\), defined as

\[\mathbf{A}\mathbf{y}=\begin{bmatrix}Ay_{1}\\ \mathbf{0}\end{bmatrix}\]

for all \(\mathbf{y}\in\mathcal{D}(\mathbf{A})=\mathcal{D}(A)\times L^{2}(\Omega)\). Our next goal will be to construct global solutions of the PDE (17). First, we will show that the solutions of the PDE (17) remain essentially bounded if the initial condition is essentially bounded. Toward this end, we first establish this property for a related autonomous linear PDE.

**Lemma IV.9**.: _Suppose \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\). Let \(\mathbf{a}\in\mathbf{L}^{\infty}(\Omega)\) be non-negative. Consider the linear bounded operator \(\mathbf{B}:\mathbf{L}^{2}(\Omega)\to\mathbf{L}^{2}(\Omega)\) defined by_

\[(\mathbf{B}\mathbf{y})(\mathbf{x})=\begin{bmatrix}-a_{1}(\mathbf{x})y_{1}(\mathbf{x})+a_{2}(\mathbf{x})y_{2}(\mathbf{x})\\ a_{1}(\mathbf{x})y_{1}(\mathbf{x})-a_{2}(\mathbf{x})y_{2}(\mathbf{x})\end{bmatrix}\]

_for almost every \(\mathbf{x}\in\Omega\) and all \(\mathbf{y}\in\mathbf{L}^{2}(\Omega)\). Let \((\mathcal{T}^{\mathbf{C}}(t))_{t\geq 0}\) be the semigroup generated by the operator \(\mathbf{C}=-\mathbf{A}+\mathbf{B}\). Then \(\|\mathcal{T}^{\mathbf{C}}(t)\mathbf{y}^{0}\|_{\infty}\leq e^{\|\mathbf{a}\|_{\infty}t}\|\mathbf{y}^{0}\|_{\infty}\) for all \(t\geq 0\)._

Proof.: We know that the operator \(-\mathbf{A}\) generates a semigroup \((\mathcal{T}^{\mathbf{A}}(t))_{t\geq 0}\) given by

\[\mathcal{T}^{\mathbf{A}}(t)=\begin{bmatrix}\mathcal{T}(t)&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{bmatrix} \tag{19}\]

for all \(t\geq 0\). Moreover, the semigroup \((\mathcal{T}^{\mathbf{A}}(t))_{t\geq 0}\) satisfies \(\|\mathcal{T}^{\mathbf{A}}(t)\mathbf{y}^{0}\|_{\infty}\leq\|\mathbf{y}^{0}\|_{\infty}\) for all \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) and \(t\geq 0\) (Corollary III.5). Additionally, we know that the semigroup \((\mathcal{T}^{\mathbf{B}}(t))_{t\geq 0}\) generated by the bounded operator \(\mathbf{B}\) satisfies the estimate \(\|\mathcal{T}^{\mathbf{B}}(t)\mathbf{y}^{0}\|_{\infty}\leq e^{\|\mathbf{a}\|_{\infty}t}\|\mathbf{y}^{0}\|_{\infty}\). Since \(\mathbf{B}\) is a bounded operator, and the resolvent of \(\mathbf{C}\) has an explicit well-defined representation [25][p.160], we can conclude that \(\lambda-\mathbf{C}\) has a dense range in \(\mathbf{L}^{2}(\Omega)\) for sufficiently large \(\lambda>0\). Then the result follows from the _Lie-Trotter product formula_ [25][Corollary III.5.8] by noting that \(\mathcal{T}^{\mathbf{C}}(t)=\lim_{N\to\infty}(\mathcal{T}^{\mathbf{A}}(\frac{t}{N})\mathcal{T}^{\mathbf{B}}(\frac{t}{N}))^{N}\), where the limit holds in the strong operator topology, for all \(t\geq 0\).
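The Lie-Trotter splitting invoked in the proof above also suggests a crude numerical scheme for the closed-loop model (17): alternate a short diffusion step on \(y_{1}\) with a short reaction step exchanging mass between \(y_{1}\) and \(y_{2}\). The following sketch, with an assumed one-dimensional domain, grid, gains, and target density (one that vanishes on a set of positive measure, as allowed by Contribution 2), is an illustration only.

```python
import numpy as np

n = 100
dx, dt = 1.0 / n, 2e-5                     # dt < dx**2 / 2 for stability
x = (np.arange(n) + 0.5) * dx
yd = np.maximum(0.0, np.cos(2.0 * np.pi * x))
yd /= yd.sum() * dx                        # target density; vanishes on (1/4, 3/4)
c, beta = 1.0, 50.0
r1 = lambda s: np.minimum(c, beta * np.maximum(-s, 0.0))   # support (-inf, 0]
r2 = lambda s: np.minimum(c, beta * np.maximum(s, 0.0))    # support [0, inf)

y1, y2 = np.ones(n), np.zeros(n)           # all agents initially diffusing
for _ in range(500000):
    # diffusion substep: explicit heat step on y1 with zero-flux (Neumann) padding
    y1p = np.pad(y1, 1, mode='edge')
    y1 = y1 + dt * (y1p[2:] - 2.0 * y1 + y1p[:-2]) / dx**2
    # reaction substep: exchange between moving (y1) and stopped (y2) agents
    q1, q2 = r1(y2 - yd), r2(y2 - yd)
    flux = (q1 * y1 - q2 * y2) * dt
    y1, y2 = y1 - flux, y2 + flux
print(np.abs(y2 - yd).sum() * dx)          # L1 distance to the target; decays in t
```

Note that the splitting conserves total mass exactly, mirroring Theorem IV.17 below, while the explicit heat step imposes the usual restriction \(dt<dx^{2}/2\).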
Now we can show that the \(\mathbf{L}^{\infty}\) estimate proved in Lemma IV.9 can be extended to a class of non-autonomous linear systems that can be treated as autonomous linear systems over certain intervals of time.

**Lemma IV.10**.: _Suppose \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\), \(c>0\), and \(T>0\). Let \(a_{1},a_{2}\in L^{2}(0,T;L^{2}(\Omega))\) be non-negative and piecewise constant with respect to time, with \(\|a_{1}(t)\|_{\infty}\leq c\) and \(\|a_{2}(t)\|_{\infty}\leq c\) for all \(t\in[0,T]\). Suppose \(\mathbf{y}\in C([0,T];\mathbf{L}^{2}(\Omega))\) is given by_

\[y_{1}(\cdot,t)=\mathcal{T}(t)y_{1}^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{1}(\cdot,s)y_{1}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{2}(\cdot,s)y_{2}(\cdot,s)\Big)ds\]
\[y_{2}(\cdot,t)=y_{2}^{0}+\int_{0}^{t}a_{1}(\cdot,s)y_{1}(\cdot,s)ds-\int_{0}^{t}a_{2}(\cdot,s)y_{2}(\cdot,s)ds\]

_for all \(t\in[0,T]\). Then_

\[\|\mathbf{y}(\cdot,t)\|_{\infty}\leq e^{ct}\|\mathbf{y}^{0}\|_{\infty} \tag{20}\]

_for all \(t\in[0,T]\)._

Proof.: Let \((t_{i})_{i=0}^{m}\) be a finite sequence of length \(m+1\in\mathbb{Z}_{+}\) of strictly increasing time instants, with \(t_{0}=0\), such that the functions \(a_{1}\) and \(a_{2}\) are constant over the intervals \([t_{i-1},t_{i})\), \(i\in\{1,...,m\}\). Then, for each \(i\in\{1,...,m\}\), consider the bounded operators \(\mathbf{B}_{i}:\mathbf{L}^{2}(\Omega)\to\mathbf{L}^{2}(\Omega)\) and \(\mathbf{C}_{i}:\mathcal{D}(\mathbf{A})\to\mathbf{L}^{2}(\Omega)\), given by

\[(\mathbf{B}_{i}\mathbf{y})(\mathbf{x})=\begin{bmatrix}-a_{1}(\mathbf{x},t_{i-1})y_{1}(\mathbf{x})+a_{2}(\mathbf{x},t_{i-1})y_{2}(\mathbf{x})\\ a_{1}(\mathbf{x},t_{i-1})y_{1}(\mathbf{x})-a_{2}(\mathbf{x},t_{i-1})y_{2}(\mathbf{x})\end{bmatrix} \tag{21}\]

for almost every \(\mathbf{x}\in\Omega\) and all \(\mathbf{y}\in\mathbf{L}^{2}(\Omega)\), and \(\mathbf{C}_{i}=-\mathbf{A}+\mathbf{B}_{i}\), respectively. Then, for each \(i\in\{1,...,m\}\), \(\mathbf{y}\) is given by

\[\mathbf{y}(\cdot,t)=\mathcal{T}^{\mathbf{C}_{i}}(t-t_{i-1})\mathcal{T}^{\mathbf{C}_{i-1}}(t_{i-1}-t_{i-2})\cdots\mathcal{T}^{\mathbf{C}_{1}}(t_{1})\mathbf{y}^{0} \tag{22}\]

for all \(t\in[t_{i-1},t_{i}]\). Then the result follows from Lemma IV.9.

**Lemma IV.11**.: _Suppose \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\), \(c>0\), and \(T>0\). Let the functions \(a_{1},a_{2}\in L^{2}(0,T;L^{2}(\Omega))\) be non-negative with \(\|a_{1}(t)\|_{\infty}\leq c\) and \(\|a_{2}(t)\|_{\infty}\leq c\) for almost every \(t\in[0,T]\). Then there exists \(\mathbf{y}\in C([0,T];\mathbf{L}^{2}(\Omega))\) given by_

\[y_{1}(\cdot,t)=\mathcal{T}(t)y_{1}^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{1}(\cdot,s)y_{1}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{2}(\cdot,s)y_{2}(\cdot,s)\Big)ds,\]
\[y_{2}(\cdot,t)=y_{2}^{0}+\int_{0}^{t}a_{1}(\cdot,s)y_{1}(\cdot,s)ds-\int_{0}^{t}a_{2}(\cdot,s)y_{2}(\cdot,s)ds \tag{23}\]

_for all \(t\in[0,T]\). Moreover,_

\[\|\mathbf{y}(\cdot,t)\|_{\infty}\leq e^{ct}\|\mathbf{y}^{0}\|_{\infty} \tag{24}\]

_for all \(t\in[0,T]\)._

Proof.: Given that \(a_{1},a_{2}\in L^{2}(0,T;L^{2}(\Omega))\), we know that there exists a sequence of piecewise constant (with respect to time) non-negative functions \((a_{1}^{i})_{i=1}^{\infty},(a_{2}^{i})_{i=1}^{\infty}\) in \(L^{2}(0,T;L^{2}(\Omega))\) such that \(\lim_{i\to\infty}\|a_{j}^{i}-a_{j}\|_{L^{2}(0,T;L^{2}(\Omega))}=0\) for \(j=1,2\) [55][Proposition 1.36].
Moreover, for each \(j\in\{1,2\}\), we can assume that \(\|a_{j}^{i}(t)\|_{\infty}\leq c\) for all \(t\in[0,T]\) and all \(i\in\mathbb{Z}_{+}\). Consider the corresponding sequence \((\mathbf{y}^{i})_{i=1}^{\infty}\) in \(C([0,T];\mathbf{L}^{2}(\Omega))\) defined by

\[y_{1}^{i}(\cdot,t)=\mathcal{T}(t)y_{1}^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{1}^{i}(\cdot,s)y_{1}^{i}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{2}^{i}(\cdot,s)y_{2}^{i}(\cdot,s)\Big)ds,\]
\[y_{2}^{i}(\cdot,t)=y_{2}^{0}+\int_{0}^{t}a_{1}^{i}(\cdot,s)y_{1}^{i}(\cdot,s)ds-\int_{0}^{t}a_{2}^{i}(\cdot,s)y_{2}^{i}(\cdot,s)ds \tag{25}\]

for each \(i\in\mathbb{Z}_{+}\). Let \(\mathbf{e}^{i,j}\in C([0,T];\mathbf{L}^{2}(\Omega))\) be given by \(\mathbf{e}^{i,j}=\mathbf{y}^{i}-\mathbf{y}^{j}\) for each \(i,j\in\mathbb{Z}_{+}\). Then, from equations (23) and (25), we know that \(\mathbf{e}^{i,j}\) satisfies

\[e_{1}^{i,j}(\cdot,t)=-\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{1}^{i}(\cdot,s)y_{1}^{i}(\cdot,s)-a_{1}^{j}(\cdot,s)y_{1}^{j}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(a_{2}^{i}(\cdot,s)y_{2}^{i}(\cdot,s)-a_{2}^{j}(\cdot,s)y_{2}^{j}(\cdot,s)\Big)ds\]

for all \(t\in[0,T]\). Adding and subtracting the terms \(a_{1}^{j}y_{1}^{i}\) and \(a_{2}^{j}y_{2}^{i}\) in the integrands, and using the fact that the sequences \((y_{k}^{i})_{i=1}^{\infty}\), \(k=1,2\), are uniformly bounded in \(L^{\infty}((0,T)\times\Omega)\), we can conclude that there exists a constant \(\alpha>0\) such that

\[\|e_{1}^{i,j}(\cdot,t)\|_{2}\ \leq\ \alpha\|a_{1}^{i}-a_{1}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}\|y_{1}^{0}\|_{\infty}+\alpha\|a_{1}^{j}\|_{\infty}\int_{0}^{t}\|e_{1}^{i,j}(s)\|_{2}ds+\alpha\|a_{2}^{i}-a_{2}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}\|y_{2}^{0}\|_{\infty}+\alpha\|a_{2}^{j}\|_{\infty}\int_{0}^{t}\|e_{2}^{i,j}(s)\|_{2}ds \tag{26}\]

for all \(t\in[0,T]\). Similarly, we can obtain the estimate

\[\|e_{2}^{i,j}(\cdot,t)\|_{2}\ \leq\ \alpha\|a_{1}^{i}-a_{1}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}\|y_{1}^{0}\|_{\infty}+\alpha\|a_{1}^{j}\|_{\infty}\int_{0}^{t}\|e_{1}^{i,j}(s)\|_{2}ds+\alpha\|a_{2}^{i}-a_{2}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}\|y_{2}^{0}\|_{\infty}+\alpha\|a_{2}^{j}\|_{\infty}\int_{0}^{t}\|e_{2}^{i,j}(s)\|_{2}ds \tag{27}\]

for all \(t\in[0,T]\). Then, by considering the sum \(\|e_{1}^{i,j}(\cdot,t)\|_{2}+\|e_{2}^{i,j}(\cdot,t)\|_{2}\), combining the two inequalities (26) and (27), and applying the integral form of Gronwall's inequality [27], we have that

\[\|e_{1}^{i,j}(\cdot,t)\|_{2}+\|e_{2}^{i,j}(\cdot,t)\|_{2}\ \leq\ C_{1}\big(\|a_{1}^{i}-a_{1}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|a_{2}^{i}-a_{2}^{j}\|_{L^{2}(0,T;L^{2}(\Omega))}\big)e^{C_{2}t} \tag{28}\]

for all \(t\in[0,T]\), where \(C_{1}\) and \(C_{2}\) are constants depending only on \(c\) (the uniform bound on the functions \(a_{j}^{i}\)) and \(\|\mathbf{y}^{0}\|_{\infty}\). From the inequality (28), we can infer that

\[\lim_{i,j\rightarrow\infty}\|\mathbf{e}^{i,j}\|_{C([0,T];\mathbf{L}^{2}(\Omega))}=0\]

This implies that the solution of (23) depends continuously on the coefficients \(a_{i}\), and, by continuity, one can construct a solution \(\mathbf{y}\) corresponding to the coefficients \(a_{i}\) as the limit of the sequence \((\mathbf{y}^{i})_{i=1}^{\infty}\). Considering the estimate (20), we can conclude that \(\mathbf{y}\) satisfies the estimate (24).

From the above lemma, we can conclude the following theorem on global existence of solutions of the PDE (17).

**Theorem IV.12**.: _Suppose \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\). Then the PDE (17) has a unique global mild solution._

Proof.: We use a contraction mapping approach to construct the solution.
Consider a map \(\Gamma\) on \(C([0,T];\mathbf{L}^{2}(\Omega))\) that is defined by \(\mathbf{v}\mapsto\Gamma(\mathbf{v})\equiv\tilde{\mathbf{v}}\), where \(\tilde{\mathbf{v}}\) is constructed by setting \(a_{i}=F_{i}(v_{2})\), \(i\in\{1,2\}\), in (23):

\[\tilde{v}_{1}(\cdot,t)=\mathcal{T}(t)y_{1}^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{1}(v_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{2}(v_{2}(\cdot,s))\tilde{v}_{2}(\cdot,s)\Big)ds,\]
\[\tilde{v}_{2}(\cdot,t)=y_{2}^{0}+\int_{0}^{t}F_{1}(v_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)ds-\int_{0}^{t}F_{2}(v_{2}(\cdot,s))\tilde{v}_{2}(\cdot,s)ds \tag{29}\]

for all \(t\in[0,T]\). Then \(\mathbf{y}\) is a solution of the PDE (17) if it is a fixed point of the map \(\Gamma\). From Lemma IV.11, any solution of the PDE (17) must lie in the set \(\mathcal{Y}=\{\mathbf{v}\in C([0,T];\mathbf{L}^{2}(\Omega)):\|\mathbf{v}(t)\|_{\infty}\leq\hat{C}\ \forall t\in[0,T]\}\) for some sufficiently large constant \(\hat{C}>0\) that depends on \(\|\mathbf{y}^{0}\|_{\infty}\). Therefore, it suffices to check that the map \(\Gamma\) is a contraction on \(\mathcal{Y}\), which guarantees that it has a unique fixed point.

Let \(\mathbf{v},\mathbf{u}\in\mathcal{Y}\). To show that \(\Gamma\) is a contraction on \(\mathcal{Y}\), we will derive an upper bound on \(\sup_{t\in[0,T]}\|\Gamma(\mathbf{v})(t)-\Gamma(\mathbf{u})(t)\|_{2}\). In order to estimate \(\|\tilde{v}_{1}(\cdot,t)-\tilde{u}_{1}(\cdot,t)\|_{2}\), we first bound the corresponding terms in the difference according to the computation in (29):

\[\Big\|-\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{1}(v_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big(F_{1}(u_{2}(\cdot,s))\tilde{u}_{1}(\cdot,s)\Big)ds\Big\|_{2}\ \leq\]
\[\Big\|\int_{0}^{t}\mathcal{T}(t-s)\big(-F_{1}(v_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)+F_{1}(u_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)\big)ds\Big\|_{2}\ +\]
\[\Big\|\int_{0}^{t}\mathcal{T}(t-s)\big(F_{1}(u_{2}(\cdot,s))\tilde{v}_{1}(\cdot,s)-F_{1}(u_{2}(\cdot,s))\tilde{u}_{1}(\cdot,s)\big)ds\Big\|_{2}\]

Since \(r_{1}\) in the definition (16) of \(F_{1}\) is a bounded Lipschitz function (with, say, Lipschitz constant \(K>0\)) and \(\mathcal{T}(t)\) is a contraction, we can bound the right-hand side of the above inequality as follows:

\[\int_{0}^{t}\|\big(F_{1}(u_{2}(\cdot,s))-F_{1}(v_{2}(\cdot,s))\big)\tilde{v}_{1}(\cdot,s)\|_{2}ds+c\int_{0}^{t}\|\tilde{v}_{1}(\cdot,s)-\tilde{u}_{1}(\cdot,s)\|_{2}ds\]
\[\leq\ K\int_{0}^{t}\|\big(u_{2}(\cdot,s)-v_{2}(\cdot,s)\big)\tilde{v}_{1}(\cdot,s)\|_{2}ds+c\int_{0}^{t}\|\tilde{v}_{1}(\cdot,s)-\tilde{u}_{1}(\cdot,s)\|_{2}ds\]
\[\leq\ \hat{C}K\int_{0}^{t}\|v_{2}(\cdot,s)-u_{2}(\cdot,s)\|_{2}ds+c\int_{0}^{t}\|\tilde{v}_{1}(\cdot,s)-\tilde{u}_{1}(\cdot,s)\|_{2}ds\]
\[\leq\ T\hat{C}K\sup_{t\in[0,T]}\|v_{2}(\cdot,t)-u_{2}(\cdot,t)\|_{2}+Tc\sup_{t\in[0,T]}\|\tilde{v}_{1}(\cdot,t)-\tilde{u}_{1}(\cdot,t)\|_{2}.\]

Using a similar computation, we can estimate that

\[\sup_{t\in[0,T]}(\|\tilde{v}_{1}(\cdot,t)-\tilde{u}_{1}(\cdot,t)\|_{2}+\|\tilde{v}_{2}(\cdot,t)-\tilde{u}_{2}(\cdot,t)\|_{2})\ \leq\ \tilde{C}T\sup_{t\in[0,T]}\|v_{1}(\cdot,t)-u_{1}(\cdot,t)\|_{2}+\tilde{C}T\sup_{t\in[0,T]}\|v_{2}(\cdot,t)-u_{2}(\cdot,t)\|_{2}+\tilde{C}T\sup_{t\in[0,T]}\|\tilde{v}_{1}(\cdot,t)-\tilde{u}_{1}(\cdot,t)\|_{2}+\tilde{C}T\sup_{t\in[0,T]}\|\tilde{v}_{2}(\cdot,t)-\tilde{u}_{2}(\cdot,t)\|_{2}\]
for a sufficiently large constant \(\tilde{C}>0\) that depends only on \(K\), \(c\), and \(\hat{C}\). This implies that

\[\sup_{t\in[0,T]}(\|\tilde{v}_{1}(\cdot,t)-\tilde{u}_{1}(\cdot,t)\|_{2}+\|\tilde{v}_{2}(\cdot,t)-\tilde{u}_{2}(\cdot,t)\|_{2})\ \leq\ \frac{\tilde{C}T}{1-\tilde{C}T}\sup_{t\in[0,T]}\|v_{1}(\cdot,t)-u_{1}(\cdot,t)\|_{2}+\frac{\tilde{C}T}{1-\tilde{C}T}\sup_{t\in[0,T]}\|v_{2}(\cdot,t)-u_{2}(\cdot,t)\|_{2}.\]

Thus, if \(T>0\) is small enough, \(\Gamma\) is a contraction map on \(\mathcal{Y}\). This implies that for small enough \(T>0\), \(\Gamma\) has a fixed point \(\mathbf{y}\) that is the unique local mild solution of the PDE (17). Then, due to the uniform bound on the solution established in Lemma IV.11, it follows that the solution can, in fact, be extended to arbitrary \(T>0\), and is therefore global.

Next, our goal will be to prove that \(\mathbf{y}^{d}\) is the globally asymptotically stable equilibrium of the system (17). Toward this end, we first prove the following preliminary results.

**Lemma IV.13**.: _Suppose \(y^{0}\in L^{\infty}(\Omega)\) and \(T>0\). Let \(a\in L^{\infty}(\Omega)\) be non-negative. Consider the multiplication operator \(B:L^{2}(\Omega)\to L^{2}(\Omega)\) defined by_

\[(By)(\mathbf{x})=-a(\mathbf{x})y(\mathbf{x})\]

_for almost every \(\mathbf{x}\in\Omega\) and all \(y\in L^{2}(\Omega)\). Let \((\mathcal{T}^{C}(t))_{t\geq 0}\) be the semigroup generated by the operator \(C=-A+B\). Then \(\|\mathcal{T}^{C}(t)y^{0}\|_{\infty}\leq\|y^{0}\|_{\infty}\) for all \(t\geq 0\)._

Proof.: We know that if \((\mathcal{T}(t))_{t\geq 0}\) is the semigroup generated by the operator \(-A\), then, from Corollary III.5, \(\|\mathcal{T}(t)y^{0}\|_{\infty}\leq\|y^{0}\|_{\infty}\) for all \(t\geq 0\). Moreover, \(B\) generates the multiplication semigroup \((e^{-a(\cdot)t})_{t\geq 0}\). Then the result follows from the Lie-Trotter formula [25].

**Lemma IV.14**.: _Let \(T>0\). Let \(f,a\in L^{2}(0,T;L^{2}(\Omega))\) be non-negative functions such that \(\|f(t)\|_{\infty}\) and \(\|a(t)\|_{\infty}\) are bounded by a constant \(C>0\) for almost every \(t\in[0,T]\). Suppose \(e\in C([0,T];L^{2}(\Omega))\) is given by_

\[e(\cdot,t)=-\int_{0}^{t}\mathcal{T}(t-s)\Big(a(\cdot,s)e(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds\]

_for all \(t\in[0,T]\). Then \(e(\cdot,t)\) is non-negative for all \(t\in[0,T]\)._

Proof.: Since the proof follows a similar line of argument as the proof of Lemma IV.11, we only sketch an outline of the proof here. As in the proof of Lemma IV.11, for a given \(a\in L^{2}(0,T;L^{2}(\Omega))\), we can construct a sequence \((a^{i})_{i=1}^{\infty}\) in \(L^{2}(0,T;L^{2}(\Omega))\) that is piecewise constant in time and converges in \(L^{2}(0,T;L^{2}(\Omega))\), with \(\|a^{i}(t)\|_{\infty}\) bounded almost everywhere on \([0,T]\) by \(C>0\). Let the sequence \((e_{i})_{i=1}^{\infty}\) in \(C([0,T];L^{2}(\Omega))\) be given by

\[e_{i}(\cdot,t)=-\int_{0}^{t}\mathcal{T}(t-s)\Big(a^{i}(\cdot,s)e_{i}(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds\]

for all \(t\in[0,T]\).
Since \(e_{i}(t)\) is also the solution of the PDE \(\dot{e}_{i}(t)=-Ae_{i}(t)-a^{i}(\cdot,t)e_{i}(t)+f(\cdot,t)\) with initial condition equal to \(0\), and this solution can be constructed using the positive semigroup \(\mathcal{T}^{C}(t)\) from Lemma IV.13, we can conclude that \(e_{i}\) is non-negative for each \(i\in\mathbb{Z}_{+}\). Then, using the fact that the sequences \((e_{i})_{i=1}^{\infty}\) and \((a^{i})_{i=1}^{\infty}\) are uniformly bounded in the spaces \(C([0,T];L^{2}(\Omega))\) and \(L^{2}(0,T;L^{2}(\Omega))\), respectively, and applying Gronwall's lemma, the result follows.

We can use Lemma IV.14 to prove the next result, which will enable us to show later on that the rate of convergence of the solution \(\mathbf{y}\) of the PDE (17) toward \(\mathbf{0}\) can be controlled by the rate of convergence of the solution of a related linear PDE.

**Theorem IV.15**.: _(Comparison Principle) Let \(T>0\). Let \(y^{0}\in L^{2}(\Omega)\) and \(f,g\in L^{2}(0,T;L^{2}(\Omega))\) be non-negative such that \(\|f(t)\|_{\infty}\) and \(\|g(t)\|_{\infty}\) are bounded by a constant \(C_{1}>0\) for almost every \(t\in[0,T]\). Define the operator \(C=-A-\|g\|_{\infty}\mathbf{I}\). Let \(y(\cdot,t)\) be given by_

\[y(\cdot,t)=\mathcal{T}(t)y^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(g(\cdot,s)y(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds \tag{30}\]

_for all \(t\in[0,T]\). Then \(y(\cdot,t)\geq\mathcal{T}^{C}(t)y^{0}\) for all \(t\in[0,T]\), where \((\mathcal{T}^{C}(t))_{t\geq 0}\) is the semigroup generated by \(C\)._

Proof.: Let \(\tilde{y}(\cdot,t)=\mathcal{T}^{C}(t)y^{0}\) for all \(t\geq 0\). Then, we know that \(\tilde{y}(\cdot,t)\) satisfies the equation

\[\tilde{y}(\cdot,t)=\mathcal{T}(t)y^{0}-\int_{0}^{t}\mathcal{T}(t-s)\|g\|_{\infty}\tilde{y}(\cdot,s)ds \tag{31}\]

for all \(t\in[0,T]\). Let \(e=y-\tilde{y}\). Substituting equations (30) and (31) for \(y\) and \(\tilde{y}\), respectively, we have that

\[e(\cdot,t)=-\int_{0}^{t}\mathcal{T}(t-s)\Big(g(\cdot,s)y(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds+\int_{0}^{t}\mathcal{T}(t-s)\|g\|_{\infty}\tilde{y}(\cdot,s)ds\]

for all \(t\in[0,T]\). By adding and subtracting the term \(\int_{0}^{t}\mathcal{T}(t-s)\Big(g(\cdot,s)\tilde{y}(\cdot,s)\Big)ds\) in this expression for \(e\), we obtain

\[e(\cdot,t)=-\int_{0}^{t}\mathcal{T}(t-s)\Big(g(\cdot,s)e(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds+\int_{0}^{t}\mathcal{T}(t-s)\Big((\|g\|_{\infty}-g(\cdot,s))\tilde{y}(\cdot,s)\Big)ds\]

for all \(t\in[0,T]\). Then the result follows from the non-negativity of \(e\), which is a consequence of Lemma IV.14.

**Theorem IV.16**.: _(Positive Lower Bound on Solutions) Let \(T>0\). Let \(y^{0}\in L^{2}(\Omega)\) and \(f,g\in L^{2}(0,T;L^{2}(\Omega))\) be non-negative such that \(\|f(t)\|_{\infty}\) and \(\|g(t)\|_{\infty}\) are bounded by a constant \(C_{1}>0\) for almost every \(t\in[0,T]\). Let \(y(\cdot,t)\) be given by_

\[y(\cdot,t)=\mathcal{T}(t)y^{0}-\int_{0}^{t}\mathcal{T}(t-s)\Big(g(\cdot,s)y(\cdot,s)\Big)ds+\int_{0}^{t}\mathcal{T}(t-s)f(\cdot,s)ds \tag{32}\]

_for all \(t\in[0,T]\)._
Then there exist constants \(\tau,\epsilon,\delta>0\), independent of \(y^{0}\) and \(T>0\), such that if \(\tau+\delta<T\), then \(y(\cdot,t)\geq\epsilon\|y^{0}\|_{1}\) for all \(t\in[\tau,\tau+\delta]\)._ Proof.: We know from Theorem IV.5 that there exist a constant \(k>0\) and a time \(\tau>0\), independent of \(y^{0}\), such that \(\mathcal{T}(t)y^{0}\geq k\|y^{0}\|_{1}\) for all \(t\geq\tau\). Let \(C=-A-\|g\|_{\infty}\mathbf{I}\). Then the semigroup \((\mathcal{T}^{C}(t))_{t\geq 0}\) generated by the operator \(C\) is given by \(\mathcal{T}^{C}(t)=e^{-\|g\|_{\infty}t}\mathcal{T}(t)\) for all \(t\geq 0\). The result then follows from Theorem IV.15. The following theorem states the fundamental result that the PDE (17) conserves mass and maintains positivity. **Theorem IV.17**.: _Let \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) be non-negative. Then the unique global mild solution of the PDE (17) is non-negative, and \(\|\mathbf{y}(\cdot,t)\|_{1}=\|\mathbf{y}^{0}\|_{1}\) for all \(t\geq 0\)._ Proof.: The conservation of mass is a simple consequence of taking the inner product of the solution of (17) with a constant function. The positivity property of solutions follows from [18, Theorem 1] by noting that, if \(\lambda>0\) is large enough, then \(G(\mathbf{y})+\lambda\mathbf{y}\geq 0\) for all \(\mathbf{y}\in\mathbf{L}^{2}(\Omega)\) that are non-negative. We now require some additional notation. For a function \(f\in L^{2}(\Omega)\), we define \(f_{+}:=\frac{|f|+f}{2}\), the projection of \(f\) onto the set of non-negative functions in \(L^{2}(\Omega)\), and \(f_{-}:=-\frac{|f|-f}{2}\), the projection of \(f\) onto the set of non-positive functions in \(L^{2}(\Omega)\). Given these definitions, we have the following result on partial monotonicity of solutions of the PDE (17). **Proposition IV.18**.: _(Partial Monotonicity of Solutions) Let \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) be positive. Then, for all \(t\geq s\geq 0\), the unique global mild solution of the PDE (17) satisfies_ \[(y^{d}-y_{2}(\cdot,t))_{+} \leq (y^{d}-y_{2}(\cdot,s))_{+} \tag{33}\] \[(y^{d}-y_{2}(\cdot,t))_{-} \geq (y^{d}-y_{2}(\cdot,s))_{-} \tag{34}\] Proof.: We will only prove the first inequality (33). Since \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\), we know that \(y_{2}\in C([0,T];L^{2}(\Omega))\) and \(\|y_{2}(t)\|_{\infty}\) is uniformly bounded over \([0,T]\). Assume that \(y^{d}-y_{2}^{0}\) is non-zero and non-negative on a set \(\Omega_{1}\subseteq\Omega\) of positive measure. For the sake of contradiction, suppose that there exists \(t_{2}\in(0,T]\) such that \(y_{2}(\cdot,t_{2})\) is greater than \(y^{d}\) on a subset of \(\Omega_{1}\) that has positive Lebesgue measure. Then, due to the fact that \(y_{2}\in C([0,T];L^{2}(\Omega))\), there must exist \(t_{1}\in(0,t_{2})\) and a measurable set \(\Omega_{2}\subset\Omega_{1}\) of positive Lebesgue measure, such that for each \(s\in[t_{1},t_{2}]\), \(y_{2}(\mathbf{x},s)\geq y^{d}(\mathbf{x})\) for almost every \(\mathbf{x}\in\Omega_{2}\), with \(y_{2}(\mathbf{x},t_{2})\neq y_{2}(\mathbf{x},s)\) for almost every \(\mathbf{x}\in\Omega_{2}\) and a subset of \([t_{1},t_{2}]\) with positive Lebesgue measure.
However, we know that \[y_{2}(\cdot,t) = y_{2}(\cdot,t_{1})+\int_{t_{1}}^{t}F_{1}(y_{2}(\cdot,\tau))y_{1}(\cdot,\tau)d\tau \tag{35}\] \[-\int_{t_{1}}^{t}F_{2}(y_{2}(\cdot,\tau))y_{2}(\cdot,\tau)d\tau\] for all \(t\in[t_{1},t_{2}]\). This implies that \[y_{2}(\mathbf{x},t) = y_{2}(\mathbf{x},t_{1})+\int_{t_{1}}^{t}F_{1}(y_{2}(\mathbf{x},\tau))y_{1}(\mathbf{x},\tau)d\tau\] \[-\int_{t_{1}}^{t}F_{2}(y_{2}(\mathbf{x},\tau))y_{2}(\mathbf{x},\tau)d\tau\] \[= y_{2}(\mathbf{x},t_{1})-\int_{t_{1}}^{t}r_{2}(y_{2}(\mathbf{x},\tau)-y^{d}(\mathbf{x}))y_{2}(\mathbf{x},\tau)d\tau\] for almost every \(\mathbf{x}\in\Omega_{2}\) and for all \(t\in[t_{1},t_{2}]\); the \(F_{1}\) term vanishes because \(r_{1}\) is zero on non-negative arguments and \(y_{2}\geq y^{d}\) on \(\Omega_{2}\) over this time interval. The function \(y_{1}\) is non-negative due to Theorem IV.17. Moreover, \(r_{2}\) is also non-negative by definition. Thus, we arrive at the contradiction that \(y_{2}(\mathbf{x},t)\leq y_{2}(\mathbf{x},t_{1})\) for almost every \(\mathbf{x}\in\Omega_{2}\) and for all \(t\in[t_{1},t_{2}]\). Hence, we must have that \[y_{2}(\mathbf{x},t)=y_{2}(\mathbf{x},0)+\int_{0}^{t}r_{1}(y_{2}(\mathbf{x},\tau)-y^{d}(\mathbf{x}))y_{1}(\mathbf{x},\tau)d\tau\] for almost every \(\mathbf{x}\in\Omega_{1}\) and for all \(t\in[0,T]\). This implies that \(y_{2}\) is non-decreasing with time, and is less than or equal to \(y^{d}\) almost everywhere on \(\Omega_{1}\). This proves the first inequality (33). Using a similar argument, based on the fact that \(r_{1}\) and \(r_{2}\) are non-negative bounded functions, we can arrive at the second inequality (34). Using the above proposition, we will establish global asymptotic stability of the system (17) in the \(L^{1}\) norm. Towards this end, we first establish marginal stability of the system about the equilibrium distribution \(\mathbf{y}^{d}\). **Theorem IV.19**.: _(\(L^{1}\)-Lyapunov Stability) Let \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) be positive and \(\int_{\Omega}\mathbf{y}^{0}(\mathbf{x})d\mathbf{x}=1\). For every \(\epsilon>0\), if_ \[\|\mathbf{y}^{0}-\mathbf{y}^{d}\|_{1}\leq\epsilon, \tag{36}\] _then the solution \(\mathbf{y}(\cdot,t)\) of the system (17) satisfies_ \[\|\mathbf{y}(\cdot,t)-\mathbf{y}^{d}\|_{1}\leq 2\epsilon \tag{37}\] _for all \(t\geq 0\)._ Proof.: We know that the solution \(\mathbf{y}\) satisfies \[\int_{\Omega}\mathbf{y}(\cdot,t)d\mathbf{x}=\int_{\Omega}y_{1}(\cdot,t)d\mathbf{x}+\int_{\Omega}y_{2}(\cdot,t)d\mathbf{x}=1\] for all \(t\geq 0\). From Proposition IV.18, we know that \(\|y_{2}(\cdot,t)-y^{d}\|_{1}\) is non-increasing with time \(t\). Hence, \(\|y_{2}(\cdot,t)-y^{d}\|_{1}\leq\epsilon\) for all \(t\geq 0\). Then, we have that \[\int_{\Omega}y_{1}(\mathbf{x},t)d\mathbf{x}+\int_{\Omega}(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x}))d\mathbf{x}=1-\int_{\Omega}y^{d}(\mathbf{x})d\mathbf{x}\] for all \(t\geq 0\). This implies that \[\int_{\Omega}y_{1}(\mathbf{x},t)d\mathbf{x} \leq -\int_{\Omega}(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x}))d\mathbf{x}\] \[\leq \|y_{2}(\cdot,t)-y^{d}\|_{1}\ \leq\ \epsilon\] for all \(t\geq 0\). This concludes the proof. **Proposition IV.20**.: _Let \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) be non-negative and \(\|\mathbf{y}^{0}\|_{1}=1\). Then the solution \(\mathbf{y}\) of the PDE (17) satisfies \(\lim_{t\to\infty}\|(y_{2}(\cdot,t)-y^{d})_{+}\|_{\infty}=0\)._ Proof.: Suppose that, for the sake of contradiction, this is not true.
Then, due to the partial monotonicity property of the solution \(\mathbf{y}\) stated in Proposition IV.18, there exists a subset \(\Omega_{1}\subseteq\Omega\) of positive measure, and a parameter \(\epsilon>0\), such that \(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})\geq\epsilon\) for almost every \(\mathbf{x}\in\Omega_{1}\) and all \(t\geq 0\). However, we know that \[y_{2}(\mathbf{x},t) = y_{2}(\mathbf{x},0)-\int_{0}^{t}F_{2}(y_{2}(\mathbf{x},\tau))y_{2}(\mathbf{x},\tau)d\tau\] \[= y_{2}(\mathbf{x},0)-\int_{0}^{t}r_{2}(y_{2}(\mathbf{x},\tau)-y^{d}(\mathbf{x}))y_{2}(\mathbf{x},\tau)d\tau\] for almost every \(\mathbf{x}\in\Omega_{1}\) and for all \(t\geq 0\). We know that the function \(r_{2}\) is non-zero and continuous on the open interval \((0,\infty)\). Hence, there must exist \(\delta>0\) such that \[y_{2}(\mathbf{x},t) \leq y_{2}(\mathbf{x},0)-\int_{0}^{t}\delta y_{2}(\mathbf{x},\tau)d\tau \tag{38}\] \[\leq y_{2}(\mathbf{x},0)-\delta\int_{0}^{t}(y^{d}(\mathbf{x})+\epsilon)d\tau\] for almost every \(\mathbf{x}\in\Omega_{1}\) and for all \(t\geq 0\). This leads to a contradiction. Finally, we can establish attractivity of the equilibrium point \(\mathbf{y}^{d}\in L^{\infty}(\Omega)\). Toward this end, we first prove in the lemma below that the density of the agents in the state \(Y(t)=0\) (i.e., the state of motion, given by \(y_{1}\)) must converge to \(0\) eventually. **Lemma IV.21**.: _Let \(\mathbf{y}^{0}\in\mathbf{L}^{\infty}(\Omega)\) be non-negative and \(\|\mathbf{y}^{0}\|_{1}=1\). Then \(\lim_{t\to\infty}\|y_{1}(\cdot,t)\|_{1}=0\). Hence, \(\lim_{t\to\infty}\|y_{2}(\cdot,t)\|_{1}=1\)._ Proof.: Suppose that, for the sake of contradiction, this is not true. Then there exist \(\epsilon_{1}>0\) and a sequence of increasing time instants \((t_{i})_{i=1}^{\infty}\) such that \(\lim_{i\to\infty}t_{i}=\infty\) and \(\|y_{1}(\cdot,t_{i})\|_{1}\geq\epsilon_{1}\) for all \(i\in\mathbb{Z}_{+}\). From Theorem IV.16, this implies that there exist constants \(\tau,\epsilon_{2},\delta>0\) such that \(y_{1}(\cdot,t)\geq\epsilon_{2}\|y_{1}(\cdot,t_{i})\|_{1}\geq\epsilon_{1}\epsilon_{2}\) for all \(t\in[t_{i}+\tau,t_{i}+\tau+\delta]\), for all \(i\in\mathbb{Z}_{+}\). Without loss of generality, we can assume that \(t_{i+1}-t_{i}>\tau+\delta\) for all \(i\in\mathbb{Z}_{+}\). Let \(\Omega_{1}\subseteq\Omega\) be the subset of largest measure such that \(y_{2}^{0}(\mathbf{x})\leq y^{d}(\mathbf{x})\) for all \(\mathbf{x}\in\Omega_{1}\). Then, from the partial monotonicity property of the solution \(\mathbf{y}\) (Proposition IV.18), we have that, for each \(i\in\mathbb{Z}_{+}\), \[y_{2}(\mathbf{x},t_{i}+\tau+\delta) = y_{2}(\mathbf{x},0)+\int_{0}^{t_{i}+\tau+\delta}F_{1}(y_{2}(\mathbf{x},\tau^{\prime}))y_{1}(\mathbf{x},\tau^{\prime})d\tau^{\prime}\] \[\geq y_{2}(\mathbf{x},0)+\sum_{j=1}^{i}\int_{t_{j}+\tau}^{t_{j}+\tau+\delta}r_{1}(y_{2}(\mathbf{x},\tau^{\prime})-y^{d}(\mathbf{x}))y_{1}(\mathbf{x},\tau^{\prime})d\tau^{\prime}\] for almost every \(\mathbf{x}\in\Omega_{1}\). This implies that \(\lim_{i\to\infty}\|(y_{2}(\cdot,t_{i})-y^{d})_{-}\|_{\infty}=0\). However, we know that \(\|\mathbf{y}(\cdot,t)\|_{1}=1\) for all \(t\geq 0\). This, along with the fact that \(\lim_{t\to\infty}\|(y_{2}(\cdot,t)-y^{d})_{+}\|_{\infty}=0\) (Proposition IV.20) and the assumption that \(\|y_{1}(\cdot,t_{i})\|_{1}\geq\epsilon_{1}\) for all \(i\in\mathbb{Z}_{+}\), leads to a contradiction. Using the partial monotonicity property of solutions established in Proposition IV.18 and the result in Lemma IV.21, we now obtain the following global asymptotic stability result.
**Theorem IV.22**.: _(\(L^{1}\)-Global Asymptotic Stability) Let \(\mathbf{y}^{0}\in L^{\infty}(\Omega)\) be non-negative and \(\|\mathbf{y}^{0}\|_{1}=1\). Then \(\lim_{t\to\infty}\|\mathbf{y}(\cdot,t)-\mathbf{y}^{d}\|_{1}=0\), and hence the system (17) is globally asymptotically stable about the target equilibrium distribution \(\mathbf{y}^{d}\)._ Proof.: Let \(\Omega_{1}=\{\mathbf{x}\in\Omega;\;y_{2}^{0}(\mathbf{x})\geq y^{d}(\mathbf{x})\}\). Let \(\Omega_{2}=\Omega-\Omega_{1}\). From Proposition IV.20, we know that \(\lim_{t\to\infty}\|y_{2}(\cdot,t)|_{\Omega_{1}}-y^{d}|_{\Omega_{1}}\|_{\infty}=0\), where \(\cdot|_{\Omega}\) denotes the restriction operation. This implies that \(\lim_{t\to\infty}\|y_{2}(\cdot,t)|_{\Omega_{1}}-y^{d}|_{\Omega_{1}}\|_{1}=0\). We also know, from Lemma IV.21 and the conservation of mass, that \[\lim_{t\to\infty}\Big(\int_{\Omega_{1}}\big(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})\big)d\mathbf{x}+\int_{\Omega_{2}}\big(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})\big)d\mathbf{x}\Big)=0 \tag{39}\] This implies that \[\lim_{t\to\infty}\int_{\Omega_{2}}\big(y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})\big)d\mathbf{x}=0 \tag{40}\] From Proposition IV.18, we know that \(y_{2}(\cdot,t)\leq y^{d}\) almost everywhere on \(\Omega_{2}\) and for all \(t\geq 0\). Hence, we conclude that \[\lim_{t\to\infty}\|y_{2}(\cdot,t)-y^{d}\|_{1}=\lim_{t\to\infty}\Big(\int_{\Omega_{1}}|y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})|d\mathbf{x}+\int_{\Omega_{2}}|y_{2}(\mathbf{x},t)-y^{d}(\mathbf{x})|d\mathbf{x}\Big)=0.\] From this, along with the fact that \(\lim_{t\to\infty}\|y_{1}(\cdot,t)\|_{1}=0\), we arrive at our result. ## V Simulations In this section, we validate the control laws presented in Sections III and IV with numerical simulations. The SDEs (4) and (13) were simulated using the method of Wong-Zakai approximations [58]. The diffusion and reaction parameter values used in each simulation were chosen with the goal of shortening the duration of the simulation on a case-by-case basis. Hence, different parameter values were chosen for each of the examples below. In practice, these parameters would need to be chosen according to the physical constraints on the system and the objectives of the user, such as optimizing the rate of convergence to the target density or controlling the variance of the agent density around the target density. ### _Density Control without Agent Interactions_ In this subsection, we simulate the control approach presented in Section III. In the following example, we simulate the SDE (4) with the control laws \(v_{i}=0\) and \(u_{i}(\mathbf{x})=D/y^{d}(\mathbf{x})\), where \(D\) is a diffusion coefficient. The generator of the process is given by the operator in (8). **Example V.1**.: _Brockett integrator_ In this example, we consider the case where each agent's motion evolves according to the Brockett integrator, which has been well-studied in the control theory literature [14, 5]. The control vector fields for this system are the following: \[X_{1}(\mathbf{x})=\frac{\partial}{\partial x_{1}}-x_{2}\frac{\partial}{\partial x_{3}},\hskip 14.226378ptX_{2}(\mathbf{x})=\frac{\partial}{\partial x_{2}}+x_{1}\frac{\partial}{\partial x_{3}} \tag{41}\] The Lie bracket of the two vector fields is given by \[[X_{1},X_{2}](\mathbf{x})=X_{1}X_{2}-X_{2}X_{1}=2\frac{\partial}{\partial x_{3}} \tag{42}\] for all \(\mathbf{x}\in\mathbb{R}^{3}\). Hence, we have that \(\mathrm{span}\,\{X_{1}(\mathbf{x}),X_{2}(\mathbf{x}),[X_{1},X_{2}](\mathbf{x})\}=T_{\mathbf{x}}\mathbb{R}^{3}\), and therefore the system is bracket generating, as the symbolic check below confirms.
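As a quick sanity check of Eq. (42), the bracket can be evaluated symbolically. The following is a minimal sketch (ours, not part of the original simulations), representing the vector fields by their coefficient vectors in the coordinate basis:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = sp.Matrix([x1, x2, x3])

# Control vector fields of the Brockett integrator, Eq. (41),
# as coefficient vectors in the basis (d/dx1, d/dx2, d/dx3).
X1 = sp.Matrix([1, 0, -x2])
X2 = sp.Matrix([0, 1, x1])

# Lie bracket [X1, X2] = (DX2) X1 - (DX1) X2, with DXi the Jacobian of Xi.
bracket = X2.jacobian(coords) * X1 - X1.jacobian(coords) * X2
print(bracket.T)  # Matrix([[0, 0, 2]]) -> 2 d/dx3, matching Eq. (42)
```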
We define the domain \(\Omega=[0,100]^{3}\). The target distribution is given by \(y^{d}=c\,[\sum_{i=1}^{8}\mathbf{1}_{B_{\mathbf{x}_{i}}}+0.001]\), where \(c>0\) is a normalization constant that makes \(y^{d}\) a probability density, \(\mathbf{1}_{S}\) denotes the indicator function of a set \(S\), and \(B_{\mathbf{x}_{i}}\) denotes a ball of radius \(12.5\) centered at \(\mathbf{x}_{i}\), \(i=1,...,8\), where \(\{\mathbf{x}_{1},...,\mathbf{x}_{8}\}=\{[25\ 25\ 25]^{T},[25\ 25\ 75]^{T},[25\ 75\ 75]^{T},...,[75\ 75\ 75]^{T}\}\). The positions of \(N_{p}=10,000\) agents are generated from a stochastic simulation of the SDE (4) and plotted at three times \(t\) in Fig. 1. The figure shows that at time \(t=100\ s\) the distribution of the swarm over the domain is close to the target density. In Fig. 2, it can be seen that the \(L_{1}\) norm of the difference between the current distribution and the target distribution decreases over time. ### _Density Control with Agent Interactions_ In this subsection, we simulate the control approach in Section IV. The functions \(r_{i}\) in Eq. (16) are chosen to be \[r_{1}(x)=\begin{cases}-kx&\text{if }x<0\\ 0&\text{if }x\geq 0\end{cases} \tag{43}\] \[r_{2}(x)=\begin{cases}kx&\text{if }x>0\\ 0&\text{if }x\leq 0\end{cases} \tag{44}\] for all \(x\in\mathbb{R}\), where \(k\) is a positive scaling constant. Since we simulate a finite number of agents, instead of the density \(y_{1}\), the agents use the empirical measure \(\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\delta_{\mathbf{x}_{i}(t)}\) to compute their transition rates. However, the empirical measure is not absolutely continuous with respect to the Riemannian volume and does not have a density. Therefore, the agents use the regularized approximation of the measure \(\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\delta_{\mathbf{x}_{i}(t)}\), given by \[\tilde{\rho}(\mathbf{x},t)=c(\epsilon)\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}K_{\epsilon}(\mathbf{x},\mathbf{x}_{i}(t)) \tag{45}\] for all \(\mathbf{x}\in\Omega\), where the kernel function \(K_{\epsilon}\) is chosen such that \(\lim_{\epsilon\to 0}c(\epsilon)K_{\epsilon}(\cdot,\mathbf{y})=\delta_{\mathbf{y}}\) for each \(\mathbf{y}\in\Omega\), and the function \(c(\epsilon)\) is a normalization parameter defined such that \(c(\epsilon)\int_{\Omega}K_{\epsilon}(\mathbf{x},\mathbf{y})d\mathbf{x}=1\). For each of the examples below, we will specify the kernel function used. The positions of each agent are generated according to the SDE (13). The transition rates \(q_{i}(\mathbf{x},t)\) of the agents are defined as \[q_{i}(\mathbf{x},t)=r_{i}\Big(c(\epsilon)\frac{1}{N_{p}}\sum_{j=1}^{N_{p}}K_{\epsilon}(\mathbf{x},\mathbf{x}_{j}(t))-y^{d}(\mathbf{x})\Big). \tag{46}\] **Example V.2**.: _Brockett integrator_ In this example, each agent moves according to Eq. (13) with the control vector fields as defined in Example V.1. The kernel function is given by: \[K_{\epsilon}(\mathbf{x},\mathbf{y})=\begin{cases}\exp\frac{-1}{1-(|\mathbf{x}-\mathbf{y}|/\epsilon)^{2}}&\text{if }|\mathbf{x}-\mathbf{y}|<\epsilon\\ 0&\text{otherwise}\end{cases} \tag{47}\] for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{3}\). The target density is set to \(y^{d}=c\sum_{i=1}^{8}\mathbf{1}_{B_{\mathbf{x}_{i}}}\), similar to the \(y^{d}\) defined in Example V.1, where \(c>0\) is a normalization constant. Note that in this example, the probability density is allowed to take a value equal to \(0\) in certain regions of the domain, unlike in Example V.1. A sketch of how the regularized density and the resulting transition rates can be computed is given below.
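For concreteness, the regularized density (45) and the rates (46), with the kernel (47), admit a direct implementation. The following is a minimal sketch (with hypothetical helper names, not the authors' code), assuming the normalization constant \(c(\epsilon)\) has been precomputed so that \(c(\epsilon)\int K_{\epsilon}(\mathbf{x},\mathbf{y})d\mathbf{x}=1\):

```python
import numpy as np

def bump_kernel(r, eps):
    """Compactly supported kernel of Eq. (47), evaluated at distances r = |x - y|."""
    out = np.zeros_like(r)
    inside = r < eps
    out[inside] = np.exp(-1.0 / (1.0 - (r[inside] / eps) ** 2))
    return out

def regularized_density(x, positions, eps, c_eps):
    """Kernel-regularized empirical density of Eq. (45) at the point x."""
    r = np.linalg.norm(positions - x, axis=1)  # positions: (N_p, 3) array of agents
    return c_eps * bump_kernel(r, eps).mean()

def transition_rates(x, positions, yd_at_x, eps, c_eps, k=500.0):
    """Rates q_1, q_2 of Eq. (46), with r_1, r_2 as in Eqs. (43)-(44)."""
    diff = regularized_density(x, positions, eps, c_eps) - yd_at_x
    q1 = k * max(-diff, 0.0)  # r_1 is active where the local density is below target
    q2 = k * max(diff, 0.0)   # r_2 is active where the local density exceeds target
    return q1, q2
```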
The reaction constant in Eqs. (43)-(44) is set to \(k=500\), and the parameter of the kernel in (47) is defined as \(\epsilon=5\). The positions of \(N_{p}=1,000\) agents are generated from a stochastic simulation of the SDE (13) and plotted in Fig. 3 at three times \(t\). As can be seen in this figure, at time \(t=100\ s\) the swarm is uniformly distributed over the sets \(B_{\mathbf{x}_{i}}\) according to the target density. Figure 4 shows that the \(L_{1}\) norm of the difference between the current distribution of agents in the motionless state and the target distribution decreases over time. At \(t=100\ s\), there are only \(40\) agents in the state of motion. In contrast, when using the control approach without agent interactions (Section III), all agents are constantly in motion. The \(L_{1}\) norm at time \(t=100\ s\) is smaller for the case with agent interactions (Fig. 4) than for the case without agent interactions (Fig. 2). While the interacting-agent control approach, unlike the control approach without interactions, enables agents to stop moving once the target density is reached (and therefore stop unnecessarily expending energy), the time until the interacting agents converge to the target density was found to be sensitive to the reaction constant \(k\). Lower values of \(k\), e.g. \(k=10\), resulted in a slower rate of agent transitions to the motionless state. On the other hand, if the value of \(k\) was chosen too large, some of the agents prematurely transitioned to the motionless state in regions close to their initial positions. The performance of the interacting-agent control law was also affected by the parameter \(\epsilon\). If \(\epsilon\) was taken to be too small, for example \(\epsilon=0.1\), the agents did not converge to any distribution, but instead remained in a state of motion. This can be attributed to the fact that, given a fixed value of \(N_{p}\), the sum \(c(\epsilon)\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}K_{\epsilon}(\mathbf{x},\mathbf{x}_{i}(t))\) becomes a less accurate approximation of the density \(y(\mathbf{x},t)\) as the value of \(\epsilon\) is decreased. On the other hand, if \(\epsilon\) is taken to be too large, then the agent density converges to a regularized approximation of the target density, rather than the target density itself. Due to space limitations, we do not include numerical results on the effects of these parameters here. **Example V.3**.: _Underactuated system on the sphere_ In this example, we consider a system on the \(2\)-dimensional sphere embedded in \(\mathbb{R}^{3}\) given by \(S^{2}=\{\mathbf{x}\in\mathbb{R}^{3};\ \mathbf{x}^{T}\mathbf{x}=1\}\). We define the following matrices \(\mathbf{B}_{i}\), \(i=1,2,3\): \[\mathbf{B}_{1}=\begin{bmatrix}0&-1&0\\ 1&0&0\\ 0&0&0\end{bmatrix}\!,\ \mathbf{B}_{2}=\begin{bmatrix}0&0&1\\ 0&0&0\\ -1&0&0\end{bmatrix}\!,\ \mathbf{B}_{3}=\begin{bmatrix}0&0&0\\ 0&0&-1\\ 0&1&0\end{bmatrix}\!.\] Each of the matrices defines a vector field \(\tilde{X}_{i}\) on \(S^{2}\) given by \[(\tilde{X}_{i}f)(\mathbf{x})=\lim_{t\to 0}\frac{f(e^{\mathbf{B}_{i}t}\mathbf{x})-f(\mathbf{x})}{t} \tag{48}\] for all \(\mathbf{x}\in S^{2}\) and all functions \(f\in C^{\infty}(S^{2})\). We assume that each agent can control its motion along the vector fields \(\tilde{X}_{1},\tilde{X}_{2}\). Note that in this case, the system is underactuated.
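The bracket relation invoked next can be verified at the level of the generating matrices. The following is a minimal numerical sketch (ours, not part of the paper), using the fact that a field \(\tilde{X}_{i}(\mathbf{x})=\mathbf{B}_{i}\mathbf{x}\) induced by (48) has Lie bracket equal to the negative of the matrix commutator:

```python
import numpy as np

B1 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
B2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
B3 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])

# For linear fields X_i(x) = B_i x, one has [X_1, X_2](x) = (B2 B1 - B1 B2) x,
# i.e., the vector-field bracket is the negative of the matrix commutator.
assert np.allclose(B2 @ B1 - B1 @ B2, B3)   # hence [X~_1, X~_2] = X~_3
print("bracket relation verified")
```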
It is nevertheless bracket generating: \(\mathrm{span}\{\tilde{X}_{1}(\mathbf{x}),\tilde{X}_{2}(\mathbf{x}),[\tilde{X}_{1},\tilde{X}_{2}](\mathbf{x})\}=T_{\mathbf{x}}S^{2}\), where it can be verified that \([\tilde{X}_{1},\tilde{X}_{2}](\mathbf{x})=\tilde{X}_{3}(\mathbf{x})\) for all \(\mathbf{x}\in S^{2}\). The kernel function is defined as \[K_{\epsilon}(\mathbf{x},\mathbf{y})=\begin{cases}\exp\frac{-1}{1-(\mathrm{acos}(\mathbf{x}^{T}\mathbf{y})/\epsilon)^{2}}&\mathrm{if}\ \mathrm{acos}(\mathbf{x}^{T}\mathbf{y})<\epsilon\\ 0&\mathrm{otherwise}\end{cases} \tag{49}\] for all \(\mathbf{x},\mathbf{y}\in S^{2}\). The target density \(y^{d}:S^{2}\rightarrow\mathbb{R}_{\geq 0}\) (with respect to the Haar measure) is given by \[y^{d}(\mathbf{x})=\begin{cases}c&\mathrm{if}\ x_{i}^{2}\geq 0.75\ \mathrm{for\ some}\ i\in\{1,2,3\}\\ 0&\mathrm{otherwise}\end{cases} \tag{50}\] for all \(\mathbf{x}\in S^{2}\), where \(c\) is a normalization parameter chosen such that this function integrates to \(1\). We set \(\epsilon=0.1\). The positions of \(N_{p}=1,000\) agents are generated from a stochastic simulation of the SDE (13) and plotted in Fig. 5 at three times \(t\). The target density \(y^{d}\) is depicted on the surface of the sphere using a color density plot. Blue regions are assigned a low target density of agents, while yellow regions are assigned a high target density. The agent positions are superimposed on the density plot to enable comparison between the actual and target densities. Figure 5 shows that at time \(t=100\ s\), the distribution of the swarm over the sphere is close to the target density. As for the case of the Brockett integrator in Example V.2, only a small fraction of the swarm (\(62\) agents) is in the state of motion once the swarm has converged closely to the target density (\(t=100\ s\)). ## VI Conclusion In this article, we have generalized our diffusion-based multi-agent coverage approach to the case where the agents have nonholonomic dynamics. We established exponential stability of the resulting Kolmogorov forward equation, whose generator is a hypoelliptic operator. In addition, we constructed a hybrid switching diffusion process of mean-field type such that the probability density of the random variable that represents the distribution of a swarm can be stabilized to a target density that is not necessarily positive everywhere on the domain. One possible direction for future work is to investigate the tradeoffs between control laws with and without agent interaction. Another is to incorporate pairwise interactions between agents that model collision avoidance maneuvers, which would require the inclusion of corresponding interaction terms in the PDE model. One could also investigate the convergence of the \(N\)-agent system of hybrid switching diffusion processes to the solution of the semilinear PDE.
2310.08228
Resonant inelastic x-ray scattering of the Jeff = 1/2 Mott insulator Sr2IrO4 from the density-functional theory
We have investigated the electronic structure of Sr2IrO4 within the density-functional theory using the generalized gradient approximation while taking into account strong Coulomb correlations in the framework of the fully relativistic spin-polarized Dirac linear MT orbital band structure method. We have investigated the x-ray absorption spectra, x-ray magnetic circular dichroism, and resonant inelastic x-ray scattering spectra at the Ir L3 and O K edges. The calculated results are in good agreement with the experimental data. The RIXS spectrum of Sr2IrO4 at the Ir L3 edge in addition to the elastic scattering peak at 0 eV possesses a sharp feature below 1.5 eV corresponding to transitions within the Ir t2g levels. The excitation located from 2 eV to 5 eV is due to t2g-eg transitions. The third wide structure situated at 5-12 eV appears due to transitions between the Ir 5d_O states derived from the tails of oxygen 2p states and eg and t2g states. The RIXS spectrum of Sr2IrO4 at the O K edge consists of three major inelastic excitations at 0.7 eV, 3.5 eV, and around 6.2 eV. We have found that the first low energy feature is due to interband transitions between occupied and empty O t2g transitions, which appear due to the strong hybridization between oxygen 2p and Ir t2g states in the close vicinity of the Fermi level. The next two peaks at around 3.5 and 6.2 eV reflect the interband transitions from the occupied O 2p states and the empty oxygen states which arise from the hybridization with Ir t2g and eg states, respectively. We have found that the theory reproduces well the shape and energy position of the low energy feature, but the second and the third peaks are shifted towards smaller energy in comparison with the experimental measurements.
V. N. Antonov, D. A. Kukusta, L. V. Bekenov
2023-10-12T11:22:34Z
http://arxiv.org/abs/2310.08228v1
Resonant inelastic x-ray scattering of the \(J_{eff}=1/2\) Mott insulator Sr\({}_{2}\)IrO\({}_{4}\) from the density-functional theory ###### Abstract We have investigated the electronic structure of Sr\({}_{2}\)IrO\({}_{4}\) within the density-functional theory (DFT) using the generalized gradient approximation while taking into account strong Coulomb correlations (GGA+\(U\)) in the framework of the fully relativistic spin-polarized Dirac linear muffin-tin orbital band-structure method. We have investigated the x-ray absorption spectra (XAS), x-ray magnetic circular dichroism (XMCD), and resonant inelastic x-ray scattering (RIXS) spectra at the Ir \(L_{3}\) and O \(K\) edges. The calculated results are in good agreement with the experimental data. The RIXS spectrum of Sr\({}_{2}\)IrO\({}_{4}\) at the Ir \(L_{3}\) edge, in addition to the elastic scattering peak at 0 eV, possesses a sharp feature below 1.5 eV corresponding to transitions within the Ir \(t_{2g}\) levels. The excitation located from 2 eV to 5 eV is due to \(t_{2g}\to e_{g}\) transitions. The third wide structure situated at 5\(-\)12 eV appears due to transitions between the Ir \(5d_{O}\) states derived from the "tails" of oxygen \(2p\) states and the \(e_{g}\) and \(t_{2g}\) states. The RIXS spectrum of Sr\({}_{2}\)IrO\({}_{4}\) at the O \(K\) edge consists of three major inelastic excitations at 0.7 eV, 3.5 eV, and around 6.2 eV. We have found that the first low energy feature is due to interband transitions between occupied and empty O\({}_{t_{2g}}\) states, which appear due to the strong hybridization between oxygen \(2p\) and Ir \(t_{2g}\) states in the close vicinity of the Fermi level. The next two peaks at around 3.5 and 6.2 eV reflect the interband transitions from the occupied O \(2p\) states to the empty oxygen states which arise from the hybridization with Ir \(t_{2g}\) and \(e_{g}\) states, respectively. We have found that the theory reproduces well the shape and energy position of the low energy feature, but the second and the third peaks are shifted towards smaller energy in comparison with the experimental measurements. To reproduce the correct energy position of the oxygen \(2p\) band we have used a self-interaction-like correction procedure. We have found that the dependence of the RIXS spectrum at the oxygen \(K\) edge on the incident photon energy and the momentum transfer vector \(\mathbf{Q}\) is much stronger in comparison with the corresponding dependence at the Ir \(L_{3}\) edge. pacs: 75.50.Cc, 71.20.Lp, 71.15.Rf ## I Introduction In \(5d\) transition metal compounds the energy scale of the spin-orbit coupling (SOC) is comparable to the on-site Coulomb interaction and the crystal-field energy. Due to the strong competition between these interactions fascinating electronic states can arise. The SOC in such systems splits the \(t_{2g}\) orbitals into a quartet (\(j_{eff}=3/2\)) and a doublet (\(j_{eff}=1/2\)) [1; 2; 3]. In \(5d^{5}\) (Ir\({}^{4+}\)) iridium oxides, the quartet \(j_{eff}=3/2\) is fully occupied, and the relatively narrow \(j_{eff}=1/2\) doublet occupied by one electron can be split by a moderate Hubbard \(U_{eff}\), opening a small band gap called the relativistic Mott gap [4; 5; 6]. Iridates have been at the center of an intensive search in recent years for novel phenomena, such as topological insulators [7; 8; 9; 10], Mott insulators [1; 4; 5; 11; 12], Weyl semimetals [13; 14; 15], and quantum spin liquids [1; 16].
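For orientation, the size of this quartet-doublet splitting follows from a standard single-ion estimate (quoted here as textbook background, not a result of the works cited): within the \(t_{2g}\) shell the orbital angular momentum acts as an effective \(l_{eff}=1\) with a reversed sign of the coupling, \(H_{SO}=-\lambda\,\mathbf{L}_{eff}\cdot\mathbf{S}\), so that \[E(j_{eff})=-\frac{\lambda}{2}\big[j_{eff}(j_{eff}+1)-l_{eff}(l_{eff}+1)-s(s+1)\big],\qquad E(3/2)=-\frac{\lambda}{2},\quad E(1/2)=+\lambda,\] i.e., the \(j_{eff}=1/2\) doublet lies \(3\lambda/2\) above the quartet, with \(\lambda\) of order a few tenths of an eV for \(5d\) ions.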
Among iridium compounds, Sr\({}_{2}\)IrO\({}_{4}\), a single-layer member of the Ruddlesden-Popper series iridates, is of special interest. It has a quasi-two-dimensional (2D) square-lattice perovskite structure and was the first discovered spin-orbit \(j_{eff}=1/2\) Mott insulator [4]. Besides, it bears structural and physical similarities to La\({}_{2}\)CuO\({}_{4}\), a parent compound to high-\(T_{c}\) cuprates, such as the presence of a pseudogap [17; 18; 19], similar Fermi surfaces and Fermi arcs (in electron- and hole-doped compounds) [20; 21], \(d\)-wave symmetry [22; 23], electron-boson coupling [24], and similarities in the magnetic ordering and magnetic excitations [25; 26]. However, superconductivity has not yet been observed in Sr\({}_{2}\)IrO\({}_{4}\) [27]. In this work we focus our attention on the RIXS properties of Sr\({}_{2}\)IrO\({}_{4}\). Since the first publication by Kao _et al._ on NiO [28], the resonant inelastic X-ray scattering method has shown remarkable progress in condensed matter physics research as a spectroscopic technique to record the momentum and energy dependence of inelastically scattered photons in complex materials [29]. RIXS has by now rapidly moved to the forefront of experimental photon science. It provides a direct probe of spin and orbital states and dynamics. RIXS has a number of unique features in comparison with other spectroscopic techniques. It covers a large scattering phase space and requires only small sample volumes. It is also bulk sensitive, polarization dependent, as well as element and orbital specific [29]. Depending on the x-ray resonant energy, RIXS can be divided into two classes: soft x-ray and hard x-ray [29]. For high atomic number transition metal elements, such as \(5d\) transition metal compounds, the \(L\)-edge resonant energies are in the hard x-ray region. For such spectra high quality single crystals are needed as the key optical elements. The RIXS resolution crucially depends on the availability of a high quality single crystal with a Bragg diffraction peak close to back-scattering at the energy of the \(L\)-edge of the targeted element. This requirement severely limits the application of RIXS, and thus by far the majority of hard x-ray RIXS studies have been focused on \(5d\) iridates [30; 31; 32; 33; 34; 35] and osmates [36; 37; 38]. In a soft RIXS setup, the x-ray energy range is usually below about 2 keV [39]. The \(L\)-edges of the \(3d\) transition metal elements all fall below this energy scale. The energy resolution in the soft x-ray region is relatively high. For example, the combined energy resolution was 150 meV at the Ni \(L_{3}\) edge (\(\sim\)850 eV) in Ta\({}_{2}\)NiSe\({}_{5}\)[40]. The best energy resolution currently achieved for RIXS at the oxygen \(K\)-edge (\(\sim\)530 eV) of the common ligand atoms is about 45-50 meV [41; 42], which is much better than what has been achieved to date for the majority of \(5d\) elements probed using \(L\)-edge RIXS in the hard x-ray region. We should mention, however, that in recent years experimentalists have achieved remarkable progress in increasing the resolution for hard RIXS spectra. For example, Kim _et al._[43] have been able to obtain a total resolution of 34.2 meV at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\). Such resolution permits direct measurements of single-magnon excitations as well as other many-body excitations in strongly correlated systems.
In the x-ray absorption (XA), x-ray magnetic circular dichroism (XMCD), and RIXS processes at the O \(K\) edge, the \(1s\) core level is involved. The exchange splitting of the \(1s\)-core state is extremely small and SOC is absent in the O \(1s\) orbitals; therefore, only the exchange and spin-orbit splitting of the \(2p\) states are responsible for the observed spectra at the oxygen \(K\) edge. On the other hand, the oxygen valence \(2p\)-states of the surrounding ligand atoms are sensitive to the electronic states at neighboring sites because of their delocalized nature. They strongly hybridize with the \(5d\) orbitals. Due to such hybridization combined with strong SOC at the \(5d\) ion, information on the elementary excitations can be extracted using an indirect RIXS process at the O \(K\) edge [42]. Although O \(K\) RIXS has a much smaller penetration depth (\(\sim\)100 nm) than \(5d\)\(L\) RIXS, a comparison between O \(K\) and Ir \(L_{3}\) spectra measured on Sr\({}_{2}\)IrO\({}_{4}\) suggests that they have a comparable counting efficiency [42]. The lower penetration depth of soft x-rays has its own advantages, providing high sensitivity to ultrathin samples such as films. Soft x-ray RIXS at the O \(K\) edge is a promising method for studying the electronic and magnetic excitations in 5\(d\) compounds. There are several experimental investigations of the RIXS spectra at the oxygen \(K\) edge [43; 44; 42; 45; 46; 47; 48] in Sr\({}_{2}\)IrO\({}_{4}\). The Ir \(L_{3}\) RIXS spectra in this oxide are investigated in Refs. [43; 44; 45; 46; 47; 48; 25]. We carry out here a detailed study of the electronic structure and the XA, XMCD, and RIXS spectra of Sr\({}_{2}\)IrO\({}_{4}\) in terms of the density functional theory. Our study sheds light on the important role of band structure effects and transition metal \(5d\) \(-\) oxygen \(2p\) hybridization in the spectral properties of \(5d\) oxides. The energy band structure and the XA, XMCD, and RIXS spectra of Sr\({}_{2}\)IrO\({}_{4}\) are investigated in an _ab initio_ approach using the fully relativistic spin-polarized Dirac linear muffin-tin orbital band-structure method. We use both the generalized gradient approximation (GGA) and the GGA+\(U\) approach to assess the sensitivity of the RIXS results to different treatments of the correlated electrons. The paper is organized as follows. The crystal structure of Sr\({}_{2}\)IrO\({}_{4}\) and computational details are presented in Sec. II. Sec. III presents the electronic and magnetic structures of Sr\({}_{2}\)IrO\({}_{4}\). In Sec. IV the theoretical investigations of the XA, XMCD, and RIXS spectra of Sr\({}_{2}\)IrO\({}_{4}\) at the Ir \(L_{2,3}\) edges are presented, and the theoretical results are compared with the experimental measurements. In Sec. V we present the theoretical investigations of the XA and RIXS spectra at the O \(K\) edge. Finally, the results are summarized in Sec. VI. ## II Computational details X-ray magnetic circular dichroism. Magneto-optical (MO) effects refer to various changes in the polarization state of light upon interaction with materials possessing a net magnetic moment, including rotation of the plane of linearly polarized light (Faraday, Kerr rotation), and the complementary differential absorption of left and right circularly polarized light (circular dichroism). In the near visible spectral range these effects result from excitation of electrons in the conduction band.
Near x-ray absorption edges, or resonances, magneto-optical effects can be enhanced by transitions from well-defined atomic core levels to symmetry-selected valence states. Within the one-particle approximation, the absorption coefficient \(\mu_{j}^{\lambda}(\omega)\) for incident x-ray polarization \(\lambda\) and photon energy \(\hbar\omega\) can be determined as the probability of electronic transitions from initial core states with the total angular momentum \(j\) to final unoccupied Bloch states \[\mu_{j}^{\lambda}(\omega) = \sum_{m_{j}}\sum_{n\mathbf{k}}|\langle\Psi_{n\mathbf{k}}|\Pi_{\lambda}|\Psi_{jm_{j}}\rangle|^{2}\delta(E_{n\mathbf{k}}-E_{jm_{j}}-\hbar\omega) \tag{1}\] \[\times\theta(E_{n\mathbf{k}}-E_{F})\,,\] where \(\Psi_{jm_{j}}\) and \(E_{jm_{j}}\) are the wave function and the energy of a core state with the projection of the total angular momentum \(m_{j}\); \(\Psi_{n\mathbf{k}}\) and \(E_{n\mathbf{k}}\) are the wave function and the energy of a valence state in the \(n\)-th band with the wave vector \(\mathbf{k}\); \(E_{F}\) is the Fermi energy. \(\Pi_{\lambda}\) is the electron-photon interaction operator in the dipole approximation \[\Pi_{\lambda}=-e\boldsymbol{\alpha}\mathbf{a}_{\lambda}, \tag{2}\] where \(\boldsymbol{\alpha}\) are the Dirac matrices and \(\mathbf{a}_{\lambda}\) is the \(\lambda\) polarization unit vector of the photon vector potential, with \(a_{\pm}=1/\sqrt{2}(1,\pm i,0)\), \(a_{\parallel}=(0,0,1)\). Here, \(+\) and \(-\) denote, respectively, left and right circular photon polarizations with respect to the magnetization direction in the solid. Then, x-ray magnetic circular and linear dichroisms are given by \(\mu_{+}-\mu_{-}\) and \(\mu_{\parallel}-(\mu_{+}+\mu_{-})/2\), respectively. More detailed expressions of the matrix elements in the electric dipole approximation may be found in Refs. [49; 50; 51]. The matrix elements due to magnetic dipole and electric quadrupole corrections are presented in Ref. [51]. Resonant inelastic x-ray scattering. In the direct RIXS process [29] the incoming photon with energy \(\hbar\omega_{\bf k}\), momentum \(\hbar{\bf k}\), and polarization \(\boldsymbol{\epsilon}\) excites the solid from the ground state \(|\mathrm{g}\rangle\) with energy \(E_{\mathrm{g}}\) to the intermediate state \(|\mathrm{I}\rangle\) with energy \(E_{\mathrm{I}}\). During relaxation an outgoing photon with energy \(\hbar\omega_{\bf k^{\prime}}\), momentum \(\hbar{\bf k^{\prime}}\), and polarization \(\boldsymbol{\epsilon}^{\prime}\) is emitted, and the solid is left in the final state \(|\mathrm{f}\rangle\) with energy \(E_{\mathrm{f}}\). As a result, an excitation with energy \(\hbar\omega=\hbar\omega_{\bf k}-\hbar\omega_{\bf k^{\prime}}\) and momentum \(\hbar{\bf q}=\hbar{\bf k}-\hbar{\bf k^{\prime}}\) is created. Our implementation of the code for the calculation of the RIXS intensity uses Dirac four-component basis functions [52] in the perturbative approach [53; 54].
RIXS is the second-order process, and its intensity is given by \[I(\omega,\mathbf{k},\mathbf{k}^{\prime},\boldsymbol{\epsilon},\boldsymbol{\epsilon}^{\prime}) \propto \sum_{\mathrm{f}}\left|\sum_{\mathrm{I}}\frac{\langle\mathrm{f}|\hat{H}^{\prime}_{\mathbf{k}^{\prime}\boldsymbol{\epsilon}^{\prime}}|\mathrm{I}\rangle\langle\mathrm{I}|\hat{H}^{\prime}_{\mathbf{k}\boldsymbol{\epsilon}}|\mathrm{g}\rangle}{E_{\mathrm{g}}-E_{\mathrm{I}}}\right|^{2} \tag{3}\] \[\times\delta(E_{\mathrm{f}}-E_{\mathrm{g}}-\hbar\omega),\] where the RIXS perturbation operator in the dipole approximation is given by the lattice sum \(\hat{H}^{\prime}_{\mathbf{k}\boldsymbol{\epsilon}}=\sum_{\mathbf{R}}\hat{\boldsymbol{\alpha}}\boldsymbol{\epsilon}\exp(-\mathrm{i}\mathbf{k}\mathbf{R})\), where \(\boldsymbol{\alpha}\) are the Dirac matrices. The sum over the intermediate states \(|\mathrm{I}\rangle\) includes the contributions from different spin-split core states at the given absorption edge. The matrix elements of the RIXS process in the framework of the fully relativistic Dirac LMTO method were presented in Ref. [55]. Crystal structure. The powder-neutron-diffraction measurements show that Sr\({}_{2}\)IrO\({}_{4}\) possesses the tetragonal \(I4_{1}/acd\) perovskite structure (group number 142) [Fig. 1(a)] [56]. The IrO\({}_{6}\) octahedra in Sr\({}_{2}\)IrO\({}_{4}\) are rigidly aligned, just as the CuO\({}_{6}\) octahedra in cuprates, rotated by \(\sim\)11\({}^{\circ}\) about the \(c\) axis in the \(a-b\) plane [Fig. 1(b)], and have a local distortion of 4.5% axial elongation. Atomic positions of Sr\({}_{2}\)IrO\({}_{4}\) at 10 K (the lattice constants \(a=5.48164\) Å, \(c=25.80019\) Å) for Sr, Ir, O\({}_{1}\), and O\({}_{2}\) are (0, 1/4, \(z_{\mathrm{Sr}}\)), (0, 1/4, 3/8), (0, 1/4, \(z_{\mathrm{O}}\)), and (\(x\), \(x\) + 1/4, \(z_{\mathrm{O}}\)), respectively, with \(x=0.1996\), \(z_{\mathrm{Sr}}=0.5506\), and \(z_{\mathrm{O}}=0.4548\)[56]. The oxygen atoms surrounding the Ir sites provide an octahedral environment. The Ir\(-\)O\({}_{1}\) and Ir\(-\)O\({}_{2}\) interatomic distances are equal to 2.05886 Å and 1.97704 Å, respectively. Around each Ir atom there are eight Sr atoms with the Ir-Sr distance \(d_{\mathrm{Ir-Sr}}=3.34615\) Å. The Ir-Ir distance is \(d_{\mathrm{Ir-Ir}}=3.87610\) Å. Note that in our electronic structure calculations, we rely on experimentally measured internal parameters \(x\), \(z_{\mathrm{Sr}}\), \(z_{\mathrm{O}}\) and lattice constants because they are well established for this material and are probably still more accurate than can be obtained from DFT.

Figure 1: (Color online) (a) The schematic representation of the body centered tetragonal \(I4_{1}/acd\) (group number 142) Sr\({}_{2}\)IrO\({}_{4}\) crystal structure [56]; (b) the positions of ions in the IrO\({}_{2}\) plane perpendicular to the \(c\) axis.

Calculation details. The details of the computational method are described in our previous papers [55; 57; 58; 59] and here we only mention several aspects. The band structure calculations were performed using the fully relativistic linear muffin-tin orbital (LMTO) method [50; 60]. This implementation of the LMTO method uses four-component basis functions constructed by solving the Dirac equation inside an atomic sphere [52]. The exchange-correlation functional of the generalized gradient approximation (GGA) type was used in the version of Perdew, Burke and Ernzerhof [61]. The Brillouin zone integration was performed using the improved tetrahedron method [62].
The basis consisted of Ir and Sr \(s\), \(p\), \(d\), and \(f\); and O \(s\), \(p\), and \(d\) LMTO's. To take into account the electron-electron correlation effects we used in this work the "relativistic" generalization of the rotationally invariant version of the LSDA+\(U\) method [63] which takes into account that in the presence of spin-orbit coupling the occupation matrix of localized electrons becomes non-diagonal in spin indexes. Hubbard \(U\) was considered as an external parameter and varied from 0.65 eV to 3.65 eV. We used in our calculations the value of exchange Hund coupling \(J_{H}\)=0.65 eV obtained from constrained LSDA calculations [64; 65]. Thus, the parameter \(U_{eff}=U-J_{H}\), which roughly determines the splitting between the lower and upper Hubbard bands, varied between 0 eV and 3.0 eV. We adjusted the value of \(U\) to achieve the best agreement with the experiment. In the RIXS process an electron is promoted from a core level to an intermediate state, leaving a core hole. As a result, the electronic structure of this state differs from that of the ground state. In order to reproduce the experimental spectrum the self-consistent calculations should be carried out including a core hole. Usually, the core-hole effect has no impact on the shape of XAS at the \(L_{2,3}\) edges of 5\(d\) systems and just a minor effect on the XMCD spectra at these edges [50]. However, the core hole has a strong effect on the RIXS spectra in transition metal compounds [55; 66], therefore we take it into account. ## III Electronic and magnetic structures We performed GGA, GGA+SO, and GGA+SO+\(U\) calculations of the electronic and magnetic structures of Sr\({}_{2}\)IrO\({}_{4}\) for the experimental crystal structure [56] and AFM ordering along the \(c\) direction (Fig. 2). Although Sr\({}_{2}\)IrO\({}_{4}\) has a canted AFM structure [11], test calculations for a non-collinear AFM structure gave almost identical optical, XA, XMCD, and RIXS spectra. Fig. 3 (the upper panel) presents the energy band structure of Sr\({}_{2}\)IrO\({}_{4}\) in the energy range of the Ir \(t_{2g}\) manifold from \(-\)2.1 to 1.0 eV, calculated in the fully relativistic Dirac GGA+SO approximation. The GGA+SO bands are presented by circles proportional in size to their orbital character projected onto the basis set of Ir \(d_{3/2}\) (blue) and \(d_{5/2}\) (red) states. The strong SOC splits the \(t_{2g}\) manifold into a lower \(J_{eff}=3/2\) quartet and an upper \(J_{eff}\) = 1/2 doublet. The functions of the \(J_{eff}=3/2\) quartet are dominated by \(d_{3/2}\) states with some weight of \(d_{5/2}\) ones, which is determined by the relative strength of SOC and crystal-field splitting. The \(J_{eff}=1/2\) functions are almost completely given by the linear combinations of \(d_{5/2}\) states. This allows one to identify the bands with pure \(d_{5/2}\) character as originating from the \(J_{eff}=1/2\) states. The GGA+SO approach produces a metallic state in Sr\({}_{2}\)IrO\({}_{4}\). The GGA+SO+\(U\) approximation shifts the occupied and empty \(t_{2g}\) bands downward and upward, respectively, by \(U_{eff}/2\) producing a dielectric ground state (the lower panel of Fig. 3). The energy gap is increased with increasing Hubbard \(U\). 
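In one common convention (up to overall phases and the choice of spin quantization axis; this standard form is quoted only for orientation), the \(j_{eff}=1/2\) doublet is the equal-weight superposition \[|j_{eff}=\tfrac{1}{2},\pm\tfrac{1}{2}\rangle=\frac{1}{\sqrt{3}}\big(|xy,\pm\sigma\rangle\pm|yz,\mp\sigma\rangle+i\,|xz,\mp\sigma\rangle\big),\] which lies entirely in the \(d_{5/2}\) sector in the atomic limit, consistent with the almost pure \(d_{5/2}\) character of the \(J_{eff}=1/2\) bands seen in Fig. 3.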
Figure 4 presents the experimentally measured real part of the optical conductivity (open magenta circles), \(\sigma_{1xx}\), [67] for the energy below 2 eV in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretical spectra calculated in the GGA+SO+\(U\) approximation for different \(U_{eff}\) values.

Figure 2: (Color online) The AFM ordering with Ir moments parallel to the \(c\) axis used in the GGA+SO+\(U\) calculations.

The experimental optical absorption consists of two peaks at around 0.5 and 1.0 eV. We found that the low energy peak is derived from transitions between initial and final bands formed by pure \(J_{eff}\) = 1/2 states near band boundaries, e.g., around the X point or the P-N high-symmetry line. The antiferromagnetic ordering of Ir moments within the \(ab\) plane stabilized by the on-site Coulomb repulsion \(U\) causes a gap opening near the zone boundary between two pairs of bands which show nearly parallel dispersion, which ensures a high joint density of states for the interband transitions responsible for the low energy peak. This is in line with previous theoretical calculations [68; 69] and experimental photoemission results [70]. The high energy peak located around 1 eV is dominated by a contribution from transitions with \(J_{eff}\) = 3/2 initial states. Our calculations give the lower absorption peak about twice as strong as the higher energy one, while in the experimental spectra the strength is approximately the same for both. A similar trend was observed also by Pröpper _et al._[69] and Kim _et al._[71]. The latter authors relate this to an interband mixing of \(J_{eff}\) = 3/2 and \(J_{eff}\) = 1/2 states, which reflects the itinerancy of the system, i.e., the hybridization of Ir 5\(d\) states via neighboring oxygen 2\(p\) states. The best agreement between the calculated and experimentally measured energy positions of the optical absorption peaks can be achieved for \(U_{eff}\) = 1.2 eV. Figures 5 and 6 present the energy band structure and partial density of states (DOS), respectively, in Sr\({}_{2}\)IrO\({}_{4}\) calculated in the GGA+SO+\(U\) approximation with \(U_{eff}\) = 1.2 eV. Five electrons occupy the \(t_{2g}\)-type low energy band (LEB) manifold in the energy interval from \(-\)1.5 eV to \(E_{F}\) in Sr\({}_{2}\)IrO\({}_{4}\). The empty \(t_{2g}\) states [the upper energy band (UEB)] consist of one peak and occupy the energy range from 0.41 eV to 0.82 eV (see Fig. 6). The \(e_{g}\)-type states of Ir are distributed far above the Fermi level from 2.2 eV to 5.1 eV. The 4\(d\) states of Sr ions are mostly situated above the Fermi level from 5.3 to 10.8 eV. The electronic structures of the apical O\({}_{1}\) and in-plane O\({}_{2}\) ions significantly differ from each other. The apical O\({}_{1}\) 2\(s\) states consist of two very narrow peaks situated at \(-\)15.1 and \(-\)14.4 eV. The in-plane O\({}_{2}\) 2\(s\) states possess a relatively wider two-peak structure from \(-\)17.9 to \(-\)16.5 eV. The O\({}_{1}\) 2\(p\) states are situated just below the Ir LEB between \(-\)3.5 and \(-\)2.1 eV. There is also a narrow peak at \(-\)5.2 eV. The in-plane O\({}_{2}\) 2\(p\) states occupy a relatively wide energy interval from \(-\)8.7 to \(-\)3.6 eV.

Figure 3: (Color online) The \(t_{2g}\) energy band structure of Sr\({}_{2}\)IrO\({}_{4}\) calculated in the fully relativistic Dirac GGA+SO approximation (the upper panel). The bands crossing the Fermi level which have almost pure \(d_{5/2}\) character (open red circles) are formed by \(t_{2g}\) states with \(j_{eff}\) = 1/2. The lower panel presents the \(t_{2g}\) energy bands calculated in the GGA+SO+\(U\) approximation with \(U_{eff}\) = 1.2 eV.

Figure 4: (Color online) The experimentally measured real part of the optical conductivity (open magenta circles), \(\sigma_{1xx}\), [67] (in 10\({}^{3}\)\(\Omega^{-1}\) cm\({}^{-1}\)) in Sr\({}_{2}\)IrO\({}_{4}\) of the in-plane response compared with the theoretical spectra calculated in the GGA+SO+\(U\) approximation for different \(U_{eff}\) values.

Figure 5: (Color online) The energy band structure and total DOS [in states/(cell eV)] of Sr\({}_{2}\)IrO\({}_{4}\) calculated with taking into account Coulomb correlations in the GGA+SO+\(U\) approximation (\(U_{eff}\) = 1.2 eV).

The small
The small Figure 4: (Color online) The experimentally measured real part of the optical conductivity (open magenta circles), \(\sigma_{1xx}\), [67] (in 10\({}^{3}\)\(\Omega^{-1}\) cm\({}^{-1}\)) in Sr\({}_{2}\)IrO\({}_{4}\) of the in-plane response compared with the theoretical spectra calculated in the GGA+SO+\(U\) approximation for different \(U_{eff}\) values. Figure 5: (Color online) The energy band structure and total DOS [in states/(cell eV)] of Sr\({}_{2}\)IrO\({}_{4}\) calculated with taking into account Coulomb correlations in the GGA+SO+\(U\) approximation (\(U_{eff}\) = 1.2 eV). Figure 3: (Color online) The \(t_{2g}\) energy band structure of Sr\({}_{2}\)IrO\({}_{4}\) calculated in the fully relativistic Dirac GGA+SO approximation (the upper panel). The bands crossing the Fermi level which have almost pure \(d_{5/2}\) character (open red circles) are formed by \(t_{2g}\) states with \(j_{eff}\) = 1/2. The lower panel presents the \(t_{2g}\) energy bands calculated in the GGA+SO+\(U\) approximation with \(U_{eff}\) = 1.2 eV. peaks in the close vicinity of the Fermi level from \(-1.5\) eV to \(E_{F}\) and from 0.41 eV to 0.82 eV are due to the strong hybridization between O \(2p\) and Ir \(t_{2g}\) LEB and UEB, respectively. The occupation number of \(5d\) electrons in the Ir atomic sphere in Sr\({}_{2}\)IrO\({}_{4}\) is equal to 6.3, which is much larger than the expected value of five \(t_{2g}\) electrons. The excessive charge is provided by the "tails" of oxygen \(2p\) states. These \(5d_{O}\) states are located at the bottom of oxygen \(2p\) states from \(-8.7\) eV to \(-3.1\) eV and play an essential role in the RIXS spectrum at the Ir \(L_{3}\) edge (see Section IV). The theoretically calculated spin \(M_{s}\), orbital \(M_{s}\), and total \(M_{total}\) magnetic moments using the GGA+SO+\(U\) approach (\(U_{eff}\) = 1.2 eV) for the AFM solution are equal to 0.2647 \(\mu_{\rm B}\), 0.4447 \(\mu_{\rm B}\), and 0.7094 \(\mu_{\rm B}\), respectively. The spin and orbital magnetic moments at the Sr site are relatively small (\(M_{s}\) = 0.0007 \(\mu_{\rm B}\) and \(M_{l}\) = 0.0015 \(\mu_{\rm B}\)). The magnetic moments for apical O\({}_{1}\) ions are equal to \(M_{s}\) = 0.0294 \(\mu_{\rm B}\), \(M_{l}\) = 0.0254 \(\mu_{\rm B}\). For in-plane O\({}_{2}\) ions the magnetic moments almost vanish. ## IV Ir XMCD and RIXS spectra Figure 7 presents the experimentally measured x-ray absorption spectra (the upper panel) and XMCD spectra (the lower panel) at the Ir \(L_{2,3}\) edges for Sr\({}_{2}\)IrO\({}_{4}\)[72] (open circles) compared with the theoretically calculated ones in the GGA+SO+\(U\) (\(U_{eff}\) = 1.2 eV) approximation (full blue curves). The theoretically calculated Ir \(L_{2,3}\) XA and XMCD spectra are in good agreement with the experiment. The isotropic XA spectra are dominated by the empty \(e_{g}\) states with a smaller contribution from the empty \(t_{2g}\) orbitals at lower energy. The XMCD spectra, however, mainly come from the \(t_{2g}\) orbitals (\(J_{eff}\) = 1/2). This results in a shift between the maxima of the XA and XMCD spectra. Due to the importance of SOC effects in iridates, it is natural to quantify the strength of the SO interactions in these compounds. One method of accomplishing this is provided by the x-ray absorption spectroscopy. 
Van der Laan and Thole showed that the so-called branching ratio \({\rm BR}=I_{L_{3}}/I_{L_{2}}\) (\(I_{L_{2,3}}\) is the integrated intensity of the isotropic XAS at the \(L_{2,3}\) edges) is an important quantity in the study of \(5d\) oxides related to the SO interaction [73]. The BR is directly related to the ground-state expectation value of the angular part of the spin-orbit coupling \(\langle{\bf L}\cdot{\bf S}\rangle\) through \({\rm BR}=(2+r)/(1-r)\), with \(r=\langle{\bf L}\cdot{\bf S}\rangle/n_{h}\), where \(n_{h}\) is the number of holes in the \(5d\) states [73].

Figure 6: (Color online) The partial DOSs for Sr\({}_{2}\)IrO\({}_{4}\) calculated in the GGA+SO+\(U\) (\(U_{eff}\)= 1.2 eV) approximation.

Figure 7: (Color online) The experimental x-ray absorption (upper panels) and XMCD spectra (lower panels) at the Ir \(L_{2,3}\) edges in the Sr\({}_{2}\)IrO\({}_{4}\) thin film (magenta circles) [72] measured at 6 K under a 0.8 T magnetic field compared with the theoretically calculated spectra in the GGA+SO+\(U\) approximation (full blue curves). The dotted black curves in the upper panels show the background scattering intensity.

As a result, XAS provides a direct probe of SO interactions, which is complementary to other techniques such as the magnetic susceptibility, electron paramagnetic resonance, and Mössbauer spectroscopy (which probe SOC through the value of the Landé g-factor). In the limit of negligible SOC effects the statistical branching ratio BR = 2, and the \(L_{3}\) white line is twice the size of the \(L_{2}\) feature [73]. The measured BR in Sr\({}_{2}\)IrO\({}_{4}\) is close to 4.1 [72], which differs significantly from the statistical BR = 2 in the absence of orbital magnetization in \(5d\) states. A strong deviation from 2 indicates a strong coupling between the local orbital and spin moments; inverting the relation above gives \(r=({\rm BR}-2)/({\rm BR}+1)\approx 0.41\), i.e., \(\langle{\bf L}\cdot{\bf S}\rangle\approx 2.1\) (in units of \(\hbar^{2}\)) for the nominal \(n_{h}=5\) holes of the \(5d^{5}\) configuration. Our DFT calculations produce BR = 3.56 for the GGA+SO+\(U\) (\(U_{eff}\) = 1.2 eV) approximation, which is rather close to the experimental data of Haskel _et al._[72]. It should be mentioned that the effect of Coulomb correlations changes the energy band structure of transition metal compounds in two ways. First, \(d\) occupied states are shifted downward by \(U_{eff}/2\) and empty \(d\) states are shifted upward by this value relative to the Fermi energy. Second, the Coulomb correlations enhance the effective spin-orbit coupling constant [74]. The relative influence of this effect increases along the row of \(5d\to 4d\to 3d\) transition metal compounds due to an increase of Hubbard \(U\) and a decrease of the atomic SO coupling constant \(\lambda_{SO}\). The RIXS spectra at the Ir \(L_{2,3}\) edges arise from local excitations between the filled and empty \(5d\) states. More precisely, the incoming photon excites a \(2p_{1/2}\) core electron (\(L_{2}\) spectrum) or a \(2p_{3/2}\) one (\(L_{3}\) spectrum) into an empty \(5d\) state, which is followed by the de-excitation from an occupied \(5d\) state into the core level. Because of the dipole selection rules, apart from \(6s_{1/2}\)-states (which have a small contribution to RIXS due to relatively small \(2p\to 6s\) matrix elements [50]) only \(5d_{3/2}\)-states occur for \(L_{2}\) RIXS, whereas for \(L_{3}\) RIXS \(5d_{5/2}\)-states also contribute. Although the \(2p_{3/2}\to 5d_{3/2}\) radial matrix elements are only slightly smaller than the \(2p_{3/2}\to 5d_{5/2}\) ones, the angular matrix elements strongly suppress the \(2p_{3/2}\to 5d_{3/2}\) contribution [50].
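To put a number on this angular suppression (a standard angular-momentum estimate, not taken from Ref. [50]): for statistically populated \(2p_{3/2}\) sublevels the angular factors give approximately \[I(2p_{3/2}\to d_{5/2}):I(2p_{3/2}\to d_{3/2})\approx 9:1,\] so the \(L_{3}\) channel is dominated by \(d_{5/2}\) final states, while at the \(L_{2}\) edge only \(2p_{1/2}\to d_{3/2}\) is dipole-allowed.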
Therefore, the RIXS spectrum at the Ir \(L_{3}\) edge can be viewed as interband transitions between \(5d_{5/2}\) states. Figure 8 shows the experimental RIXS spectrum (open green circles) measured by Ishii _et al._[45] at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated one in the GGA+SO+\(U\) approximation (\(U_{eff}\)=1.2 eV). The Ir \(L_{3}\) RIXS spectrum consists of two peaks below 5 eV. We found that the low energy peak corresponds to intra-\(t_{2g}\) excitations. This feature has a two-peak structure in our calculations, but the measurements of Ishii _et al._[45] show only one peak. However, Clancy _et al._[48], using higher resolution, were able to distinguish two peaks (see the inset in Fig. 8). The low energy peak at 0.5 eV is due to interband transitions between occupied and empty Ir \(J_{eff}=1/2\) states (the red curves in the lower panel of Fig. 3). These transitions also contribute to the second high energy peak at around 0.7 eV together with \(J_{3/2}\to J_{1/2}\) transitions (the green curve in the inset of Fig. 8). The intense peak at around 3.4 eV (the red curve in Fig. 8) is due to \(t_{2g}\to e_{g}\) transitions. The next fine structure from 4.5 eV to 12 eV (the magenta curve) is due to \(5d_{O}\to t_{2g}\), \(e_{g}\) transitions.

Figure 8: (Color online) The experimental RIXS spectrum (open green circles) measured by Ishii _et al._[45] at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated one in the GGA+SO+\(U\) approximation (\(U_{eff}\)=1.2 eV). Inset: The experimental RIXS spectrum (open magenta circles) measured by Clancy _et al._[48] at the Ir \(L_{3}\) edge for \(t_{2g}\to t_{2g}\) transitions in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated one in the GGA+SO+\(U\) approach (\(U_{eff}\)=1.2 eV).

Figure 9: (Color online) The RIXS spectra as a function of incident photon energy \(E_{i}\) calculated at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\) with the momentum transfer vector \(\mathbf{Q}\) = (0, 0, 33).

Figure 9 shows the Ir \(L_{3}\) RIXS spectrum as a function
It is widely believed that \(d\)-\(d\) excitations show only a small dependence on the momentum transfer vector \(\mathbf{Q}\) in \(5d\) transition metal compounds [30; 75]. In particular, Sr\({}_{2}\)IrO\({}_{4}\) has a layered-perovskite structure; therefore, the momentum dependence along the \(c\) axis is expected to be small, as in high-\(T_{c}\) cuprates [76]. Indeed, as we see in the lower panel of Fig. 10, the RIXS spectra are almost identical for the transfer vectors \(\mathbf{Q}\) = (0, 0, 33) and (0, 0, 25). A similar dependence was also observed in the measurements of Ishii _et al._ [45]. The upper panel of Fig. 10 shows the RIXS spectra at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\) calculated as a function of Q\({}_{y}\) with the momentum transfer vector \(\mathbf{Q}\) = (0, Q\({}_{y}\), 33) for incident photon energy \(\hbar\omega_{in}\) = 11210 eV. We found that with increasing Q\({}_{y}\) the first low energy peak is increased and the high energy fine structure is decreased. Analyzing Fig. 10 we can conclude that the momentum dependence of the excitations in Sr\({}_{2}\)IrO\({}_{4}\) is rather small, as was earlier observed in other iridates such as Sr\({}_{3}\)CuIrO\({}_{6}\) [30] or In\({}_{2}\)IrO\({}_{7}\) [75].

Figure 10: (Color online) The RIXS spectra at the Ir \(L_{3}\) edge in Sr\({}_{2}\)IrO\({}_{4}\) calculated as a function of \(Q_{y}\) (the upper panel) and \(Q_{z}\) (the lower panel) with the momentum transfer vector \(\mathbf{Q}\) = (0, Q\({}_{y}\), Q\({}_{z}\)) for incident photon energy \(\hbar\omega_{in}\) = 11210 eV.

## V O XAS and RIXS spectra

The RIXS spectra at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) were measured by Liu _et al._ [39], Lu _et al._ [42], Paris _et al._ [44], and Kim _et al._ [43]. The last three investigations concentrate on the analysis of low energy excitations below 1.2 eV. Liu _et al._ [39] present the RIXS spectrum up to 12 eV using circular and \(\pi\) polarizations of the incident beam. The O \(K\) RIXS spectrum consists of a peak centered at zero energy loss, which comprises the elastic line and other low-energy features such as phonons, magnons, etc., and three major inelastic excitations at 0.7 eV, 3.5 eV, and around 6.2 eV. We found that the first low energy feature is due to the interband transitions between occupied and empty O\({}_{t_{2g}}\) states, which appear as a result of the strong hybridization of the oxygen \(2p\) states with the Ir \(t_{2g}\) LEB and UEB in the close vicinity of the Fermi level (see Fig. 6). Oxygen \(K\) RIXS spectroscopy can therefore be used to estimate the energy band gap and the positions of the Ir \(5d\) Hubbard bands. The next two peaks at around 3.5 and 6.2 eV reflect the interband transitions from the occupied O \(2p\) states to the empty oxygen states, which originate from the hybridization with Ir \(t_{2g}\) and \(e_{g}\) states, respectively.

Figure 11: (Color online) (The upper panel) The experimental RIXS spectrum (open magenta circles) measured by Liu _et al._ [39] at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated ones in the GGA+SO+\(U\)+SIC approximation (\(U_{eff}\) = 1.2 eV, \(V_{O_{2p}}\) = \(-\)3 eV). (The lower panel) The experimental RIXS spectrum (open magenta circles) measured by Liu _et al._ [39] at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated ones in the GGA+SO+\(U\)+SIC approach (\(U_{eff}\) = 1.2 eV) for different parameters \(V_{l}\).
We found that the theory reproduces well the shape and energy position of the low energy feature, but the second and the third peaks are shifted towards smaller energy in comparison with the experimental measurements. This means that the DFT calculations cannot produce the correct energy position of the oxygen \(2p\) bands. These bands are almost fully occupied in Sr\({}_{2}\)IrO\({}_{4}\); therefore, they cannot be corrected by the GGA+\(U\) method. To reproduce the correct energy position of the oxygen \(2p\) band in Sr\({}_{2}\)IrO\({}_{4}\) we used a self-interaction-like correction procedure as proposed by Kaneko _et al._ [77], where the valence bands are shifted downwards by adding a SIC-like orbital-dependent potential \(V_{l}\) to the Hamiltonian. We treated \(V_{l}\) as a parameter and adjusted it to produce the correct energy position of the oxygen \(2p\) bands. We found that the best agreement with the experiment can be achieved for \(V_{O_{2p}}\) = \(-\)3.0 eV (see the lower panel of Fig. 11). Figure 12 presents the valence band photoemission spectrum of Sr\({}_{2}\)IrO\({}_{4}\) [78] compared with the total DOS calculated in the GGA+SO+\(U\)+SIC approach (\(U_{eff}\) = 1.2 eV). Two dominant peaks at around \(-\)1 and \(-\)3.2 eV are observed, which might be attributed to the photoemission from Ir \(t_{2g}\) and oxygen \(2p\) states, respectively [78]. As in the case of the O \(K\) RIXS spectrum, the GGA+SO+\(U\) approach cannot reproduce the correct energy position of the peak at \(-\)3.2 eV. However, the SIC-like approach with \(V_{O_{2p}}\) = \(-\)3 eV improves the situation. Figure 13 presents the RIXS spectra as a function of incident photon energy \(E_{i}\) calculated at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) with circular polarization. We found a much stronger dependence on the incident photon energy for the O \(K\) RIXS spectrum than for the Ir \(L_{3}\) edge (compare Figs. 9 and 13). With increasing incident photon energy both peaks at 0.7 eV and 3.5 eV are enhanced, the latter one dramatically, and this occurs within a small \(E_{i}\) interval of only 0.6 eV. Figure 14 shows the RIXS spectra at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) calculated as a function of Q\({}_{y}\) (the upper panel) and Q\({}_{z}\) (the lower panel) with the momentum transfer vector \(\mathbf{Q}\) = (0, Q\({}_{y}\), Q\({}_{z}\)). With decreasing parameters Q\({}_{y}\) and Q\({}_{z}\) the intensity of the major peak at 3.5 eV is decreased but the low energy peak at 0.7 eV is increased. There is also a strong change in the shape of the low energy peak at 0.7 eV and of the third peak at around 6.2 eV with the change of the parameter Q\({}_{z}\). Figure 15 presents the experimental O \(K\) polarization dependent x-ray absorption spectra (open magenta circles) [4] compared with the theoretically calculated ones in the GGA+SO+\(U\)+SIC approach (\(U_{eff}\) = 1.2 eV, \(V_{O_{2p}}\) = \(-\)3 eV). Due to the quasi-two-dimensional structure of Sr\({}_{2}\)IrO\({}_{4}\) there is a strong anisotropy in the x-ray absorption spectra. There are two small peaks \(a\) and \(b\) at 529.1 and 529.9 eV and a larger peak \(c\) at around 531.8 eV for the \(\mathbf{E}\perp c\) polarization, and only two peaks \(A\) and \(B\) for \(\mathbf{E}\parallel c\) at 529.9 and 531.3 eV.
We found that the low energy peak \(a\) for \({\bf E}\perp c\) and the large peak \(B\) for \({\bf E}\parallel c\) are derived from the apical oxygens O\({}_{1}\). The peaks \(b\) and \(A\) are due to the \(1s\to 2p\) x-ray absorption on the in-plane O\({}_{2}\) oxygens. The theory reproduces the experimentally measured XA spectra relatively well.

Figure 12: (Color online) The valence band photoemission spectrum of Sr\({}_{2}\)IrO\({}_{4}\) [78] compared with the total DOS calculated in the GGA+SO+\(U\)+SIC approach (\(U_{eff}\) = 1.2 eV) for different parameters \(V_{l}\).

Figure 13: (Color online) The RIXS spectra as a function of incident photon energy calculated at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) with circular polarization.

## VI Conclusions

The electronic and magnetic structures of Sr\({}_{2}\)IrO\({}_{4}\) were investigated theoretically in the framework of the fully relativistic spin-polarized Dirac LMTO band-structure method in order to understand the importance of the Coulomb interaction and spin-orbit coupling. We also present comprehensive theoretical calculations of the XA, XMCD, and RIXS spectra at the Ir \(L_{2,3}\) and oxygen \(K\) edges. The strong SOC splits the \(t_{2g}\) manifold into a lower \(J_{eff}=3/2\) quartet and an upper \(J_{eff}=1/2\) doublet. The functions of the \(J_{eff}=3/2\) quartet are dominated by \(d_{3/2}\) states with some weight of \(d_{5/2}\) ones, which is determined by the relative strength of the SOC and crystal-field splitting. The \(J_{eff}=1/2\) functions are almost completely given by linear combinations of \(d_{5/2}\) states. This allows one to identify the bands with pure \(d_{5/2}\) character as originating from the \(J_{eff}=1/2\) states. The GGA+SO approach produces a metallic state in Sr\({}_{2}\)IrO\({}_{4}\). The GGA+SO+\(U\) approximation shifts the occupied and empty \(t_{2g}\) bands downward and upward, respectively, by \(U_{eff}/2\), producing a dielectric ground state in Sr\({}_{2}\)IrO\({}_{4}\). We found that the best agreement between the calculated and experimentally measured optical conductivity spectrum can be achieved for \(U_{eff}=1.2\) eV. The experimental optical absorption consists of two peaks at around 0.5 and 1.0 eV. The low energy peak is derived from transitions between initial and final bands formed by pure \(J_{eff}=1/2\) states near band boundaries. The high energy peak located around 1 eV is dominated by a contribution from transitions with \(J_{eff}=3/2\) initial states. The theoretically calculated Ir \(L_{2,3}\) XAS and XMCD spectra are in good agreement with the experiment. The isotropic XA spectra are dominated by the empty \(e_{g}\) states with a smaller contribution from the empty \(t_{2g}\) orbitals at lower energy. The XMCD spectra, however, mainly come from the \(t_{2g}\) orbitals (\(J_{eff}=1/2\)). This results in a shift between the maxima of the XA and XMCD spectra. The ratio BR = \(I_{L_{3}}/I_{L_{2}}\) is an important quantity in the study of the SO interaction in \(5d\) oxides. It is directly related to the ground-state expectation value of the angular part of the spin-orbit coupling. Our DFT calculations produce BR = 3.56, which is rather close to the experimental value of 4.1 [72].
These values differ significantly from the statistical BR = 2 in the absence of orbital magnetization in \(5d\) states. A strong deviation from 2 indicates a strong coupling between the local orbital and spin moments.

Figure 14: (Color online) The RIXS spectra at the O \(K\) edge in Sr\({}_{2}\)IrO\({}_{4}\) calculated as a function of \(Q_{y}\) (the upper panel) and \(Q_{z}\) (the lower panel) with the momentum transfer vector \({\bf Q}=(0,\) Q\({}_{y},\) Q\({}_{z})\) for incident photon energy \(\hbar\omega_{in}=529\) eV.

Figure 15: (Color online) The experimental O \(K\) polarization dependent XA spectra (open magenta circles) [4] in Sr\({}_{2}\)IrO\({}_{4}\) compared with the theoretically calculated ones in the GGA+SO+\(U\)+SIC approach (\(U_{eff}=1.2\) eV, \(V_{O_{2p}}=-3\) eV).

The Ir \(L_{3}\) RIXS spectrum consists of two peaks below 5 eV. We found that the low energy peak corresponds to intra-\(t_{2g}\) excitations. This fine structure has a two-peak shape. The low energy peak at 0.5 eV is due to interband transitions between occupied and empty Ir \(J_{eff}=1/2\) states. These transitions also contribute to the second high energy peak at around 0.7 eV together with \(J_{3/2}\to J_{1/2}\) transitions. The intense peak at around 3.4 eV is due to \(t_{2g}\to e_{g}\) transitions. The next fine structure from 4.5 eV to 12 eV is due to \(5d_{\rm O}\to t_{2g}\), \(e_{g}\) transitions. We investigated theoretically the influence of the incident photon energy \(E_{i}\) and the momentum transfer vector \(\mathbf{Q}\) on the shape of the Ir \(L_{3}\) RIXS spectrum. We found that with increasing \(E_{i}\) in the interval of 2 eV above the Ir \(L_{3}\) edge the low energy fine structure corresponding to intra-\(t_{2g}\) excitations is decreased and the high energy peak corresponding to the \(t_{2g}\to e_{g}\) transitions is monotonically increased, in agreement with the measurements of Ishii _et al._ [45]. The momentum dependence of the Ir \(L_{3}\) RIXS was found to be relatively small. With increasing Q\({}_{y}\) the first low energy peak is increased and the high energy fine structure is slightly decreased. The variation of the parameter Q\({}_{z}\) has almost no influence on the RIXS spectrum at the Ir \(L_{3}\) edge. The RIXS spectrum of Sr\({}_{2}\)IrO\({}_{4}\) at the O \(K\) edge consists of three major inelastic excitations at 0.7 eV, 3.5 eV, and around 6.2 eV. We found that the first low energy feature is due to the interband transitions between occupied and empty O\({}_{t_{2g}}\) states, which appear as a result of the strong hybridization of the oxygen \(2p\) states with the Ir \(t_{2g}\) LEB and UEB in the close vicinity of the Fermi level. The next two peaks at around 3.5 and 6.2 eV reflect the interband transitions from the occupied O \(2p\) states to the empty oxygen states which originate from the hybridization with Ir \(t_{2g}\) and \(e_{g}\) states, respectively. We found that the theory reproduces well the shape and energy position of the low energy feature, but the second and the third peaks are shifted towards smaller energy in comparison with the experimental measurements. This means that the DFT calculations cannot produce the correct energy position of the oxygen \(2p\) bands. To reproduce the correct energy position of the oxygen \(2p\) band in Sr\({}_{2}\)IrO\({}_{4}\) we used a self-interaction-like correction procedure.
We added a SIC-like orbital-dependent potential \(V_{l}\) to the Hamiltonian and found that the best agreement with the experiment can be achieved for \(V_{O_{2p}}=-3.0\) eV. We found that the dependence of the RIXS spectrum at the oxygen \(K\) edge on the incident photon energy and the momentum transfer vector \(\mathbf{Q}\) is much stronger than the corresponding dependence at the Ir \(L_{3}\) edge. Due to the quasi-two-dimensional structure of Sr\({}_{2}\)IrO\({}_{4}\) there is a strong anisotropy in the x-ray absorption spectra at the oxygen \(K\) edge. There are two small peaks \(a\) and \(b\) at 529.1 and 529.9 eV and a larger peak \(c\) at around 531.8 eV for the \(\mathbf{E}\perp c\) polarization, and only two peaks \(A\) and \(B\) for \(\mathbf{E}\parallel c\) at 529.9 and 531.3 eV. We found that the low energy peak \(a\) for \(\mathbf{E}\perp c\) and the large peak \(B\) for \(\mathbf{E}\parallel c\) are derived from the apical oxygens O\({}_{1}\). The peaks \(b\) and \(A\) are due to the \(1s\to 2p\) x-ray absorption on the in-plane O\({}_{2}\) oxygens. The theory reproduces the experimentally measured XA spectra relatively well.

###### Acknowledgements.

We are thankful to Dr. Alexander Yaresko from the Max-Planck-Institute FKF in Stuttgart for helpful discussions.
2310.10467
Stance Detection with Collaborative Role-Infused LLM-Based Agents
Stance detection automatically detects the stance in a text towards a target, vital for content analysis in web and social media research. Despite their promising capabilities, LLMs encounter challenges when directly applied to stance detection. First, stance detection demands multi-aspect knowledge, from deciphering event-related terminologies to understanding the expression styles in social media platforms. Second, stance detection requires advanced reasoning to infer authors' implicit viewpoints, as stances are often subtly embedded rather than overtly stated in the text. To address these challenges, we design a three-stage framework COLA (short for Collaborative rOle-infused LLM-based Agents) in which LLMs are designated distinct roles, creating a collaborative system where each role contributes uniquely. Initially, in the multidimensional text analysis stage, we configure the LLMs to act as a linguistic expert, a domain specialist, and a social media veteran to get a multifaceted analysis of texts, thus overcoming the first challenge. Next, in the reasoning-enhanced debating stage, for each potential stance, we designate a specific LLM-based agent to advocate for it, guiding the LLM to detect logical connections between text features and stance, tackling the second challenge. Finally, in the stance conclusion stage, a final decision maker agent consolidates prior insights to determine the stance. Our approach avoids extra annotated data and model training and is highly usable. We achieve state-of-the-art performance across multiple datasets. Ablation studies validate the effectiveness of each designed role in handling stance detection. Further experiments have demonstrated the explainability and the versatility of our approach. Our approach excels in usability, accuracy, effectiveness, explainability and versatility, highlighting its value.
Xiaochong Lan, Chen Gao, Depeng Jin, Yong Li
2023-10-16T14:46:52Z
http://arxiv.org/abs/2310.10467v2
# Stance Detection with Collaborative Role-Infused LLM-Based Agents

###### Abstract

Stance detection automatically detects the stance in a text towards a target, vital for content analysis in web and social media research. Despite their promising capabilities, LLMs encounter challenges when directly applied to stance detection. First, stance detection demands multi-aspect knowledge, from deciphering event-related terminologies to understanding the expression styles in social media platforms. Second, stance detection requires advanced reasoning to infer authors' implicit viewpoints, as stances are often subtly embedded rather than overtly stated in the text. To address these challenges, we design a three-stage framework COLA (short for **C**ollaborative r**O**le-infused **L**LM-based **A**gents) in which LLMs are designated distinct roles, creating a collaborative system where each role contributes uniquely. Initially, in the multidimensional text analysis stage, we configure the LLMs to act as a linguistic expert, a domain specialist, and a social media veteran to get a multifaceted analysis of texts, thus overcoming the first challenge. Next, in the reasoning-enhanced debating stage, for each potential stance, we designate a specific LLM-based agent to advocate for it, guiding the LLM to detect logical connections between text features and stance, tackling the second challenge. Finally, in the stance conclusion stage, a final decision maker agent consolidates prior insights to determine the stance. Our approach avoids extra annotated data and model training and is highly usable. We achieve state-of-the-art performance across multiple datasets. Ablation studies validate the effectiveness of each designed role in handling stance detection. Further experiments have demonstrated the explainability and the versatility of our approach. Our approach excels in usability, accuracy, effectiveness, explainability and versatility, highlighting its value.

## 1 Introduction

Stance detection is commonly defined as automatically detecting the stance (as _Favor_, _Against_, or _Neutral_) of the text producer towards a target [30; 31; 6]. Stance detection plays a pivotal role in the analysis of large-scale text data on the web and social media platforms [23; 42]. Over the years, numerous methodologies have been proposed for stance detection [24; 3]. However, a persistent challenge lies in the need to train models specifically for the targets of interest. Even with advancements in cross-target stance detection [26] and zero-shot stance detection [4; 25], suitable training on annotated corpora is often required. Acquiring large-scale labeled datasets is not trivial, which curtails the models' usability. Recently, large language models (LLMs) have demonstrated remarkable capabilities across various applications [10; 34; 2]. The inherent semantic understanding of these large models presents an exciting opportunity for stance detection. Most LLMs can be easily interacted with by users through zero-shot prompting. This significantly enhances the usability of models. Thus, with their strength and usability, large language models could reshape how we approach stance detection. Researchers have discerned the transformative potential LLMs bring to stance detection. Some works have proposed simple methods using LLMs for stance detection [51; 52].
Yet, while these works report satisfactory results on specific subsets of certain datasets, our rigorous replications indicate that these methods often underperform compared to the state-of-the-art non-LLM baselines. This can be attributed to two inherent challenges of stance detection, which are listed as follows and further illustrated in Figure 1.

* **First, stance detection demands multi-aspect knowledge.** Sentences may contain elements like domain-specific terms, cultural references, social media linguistic styles, and more. These are not immediately comprehensible to large language models and require specialized parsing to be truly understood.
* **Second, stance detection necessitates advanced reasoning.** Often, authors don't state their stances directly but inadvertently reveal them in various ways, such as through their attitudes towards related topics or events. Stance detection requires reasoning from various textual features to arrive at the correct stance.

Figure 1: Illustration of the challenges of stance detection.

To address these challenges, we introduce our three-stage framework named COLA (short for **C**ollaborative r**O**le-infused **L**LM-based **A**gents). We design a stance detection system consisting of role-infused LLM-based agents, with each role bearing distinct responsibilities and significance. To counter the first challenge, we initiate a multidimensional text analysis stage. In this stage, LLMs are designated three roles, named the linguistic expert, the domain specialist, and the social media veteran, to analyze text from various perspectives. While the linguistic expert delves into syntax, diction, and tenses, the domain specialist elucidates characters, events, and other textual elements. What's more, the social media veteran decodes platform-specific terminologies and expression styles. Their combined insights help unearth stance indicators in the text. Addressing the second challenge, we propose a reasoning-enhanced debating stage. Here, we assign advocates for each potential stance category. Drawing evidence from the preceding phase, these advocates present arguments to bolster their respective stances, forcing the LLMs to discern the latent logic connecting textual features and actual stances. Lastly, a stance conclusion stage determines the text's stance, drawing insights both from the text itself and from the debates. Our approach does not necessitate annotated data nor additional model training, hence ensuring high **usability**. Extensive experiments validate our method's superior performance over existing baselines, affirming its **accuracy**. A representative result is that our zero-shot framework achieves a 21.7% absolute improvement compared to the best in-target labeled data dependent baseline on the \(F_{avg}\) metric on the CC target of the SEM16 dataset. Ablation studies elucidate the **effectiveness** of each module in handling stance detection. Case studies and quantitative experiments substantiate our approach's **explainability**. The powerful performance of our proposed framework on a series of text classification tasks underscores its **versatility**. Our approach stands out for its usability, accuracy, effectiveness, explainability, and versatility, all of which highlight its value. Our main contributions are summarized as follows:

* We are among the first to delve into harnessing LLMs to bolster stance detection.
* We introduce an approach based on collaborative role-infused LLM-empowered agents, which exhibits outstanding performance on stance detection and achieves high levels of usability and explainability.
* Our proposed three-stage framework--analyst, debater, and summarizer--offers significant potential for a range of text classification tasks, providing a powerful tool for text analysis on the web and social media.

The subsequent sections are organized as follows. We first review related works. We then describe our three-stage framework in detail. Next, we present our experiments, providing robust empirical evidence that demonstrates the superiority of our method from multiple perspectives. Lastly, we conclude our work and highlight potential areas for future improvement.

## 2 Related Work

This section is structured as follows: First, we provide a detailed overview of advancements in stance detection. Next, we introduce recent progress in large language models. Lastly, we focus on reviewing a subset of works closely related to ours, specifically multi LLM-based agents systems.

**Stance detection.** Stance detection aims to discern the stance of the author towards a particular target from textual content. Typically, stances are categorized into favor, against, and neutral. A plethora of algorithms for stance detection have been proposed by researchers, encompassing both feature-based methods [1; 9; 29] and deep learning techniques [21; 47; 28]. These methodologies have enabled in-depth analysis of content on the internet and social media platforms. For example, Jang et al. [23] develop a method to find controversies on social media by generating stance-aware summaries of tweets. Grcar et al. [20] examine the Twitter stance before the Brexit referendum, revealing the pro-Brexit camp's higher influence. Conventionally, stance detection necessitates training on datasets annotated for the specific target. Such datasets are not trivially obtainable, thereby constraining the usability of many methods. Recognizing this limitation, researchers have ventured into cross-target stance detection, aiming to train classifiers that can adapt to unfamiliar but related targets after being trained on a known target [49; 46; 26]. Recently, there has been an emergence of zero-shot stance detection approaches that automatically detect the stance on unseen targets [4; 25]. However, all these methods require training on annotated datasets. Unlike these methods, our approach uses a pre-trained LLM, removing the need for additional annotated data. Through prompt engineering, we refine these models without extra training, offering a solution with high usability.

**Large language models.** Large language models (LLMs) represent one of the most significant advancements of artificial intelligence in recent years. With the release of ChatGPT2 at the end of 2022, LLMs witnessed a meteoric rise in attention, predominantly driven by their outstanding performance. A myriad of LLMs, such as GPT-4 [33], Llama 2 [41], ChatGLM [50], and others, have been introduced at a rapid pace. In conventional NLP tasks, the zero-shot capabilities of these LLMs often rival or even surpass meticulously crafted, domain-specific models [45]. The emergence of robust capabilities, such as planning and reasoning within LLMs, has further enabled their adoption across diverse applications.
Some endeavors integrate LLMs with existing tools [37; 38], others explore the potential of LLMs to create new tools [11], and there is a growing trend towards leveraging LLMs for dynamic decision-making, planning, and embodied intelligence [2; 39; 48].

Footnote 2: chat.openai.com

Inherently, the vast knowledge and potent semantic understanding of LLMs offer immense potential in tackling stance detection tasks. Several research initiatives have indeed explored the application of LLMs in stance detection [51; 54; 52]. However, these existing methods often adopt a relatively straightforward approach, neglecting the intrinsic challenges specific to stance detection. As a result, our rigorous replication efforts have frequently found their performance to be subpar in comparison to training-dependent baselines. In contrast, our method is specifically tailored to cater to the expert knowledge and intricate reasoning often required for stance detection, consequently achieving commendable results.

**Multi LLM-based agents system.** Systems comprised of multiple LLM-based agents have demonstrated complex and powerful capabilities not inherent to an individual LLM. Leveraging the human-like capacities of LLMs, systems formed from multiple LLM-based agents have been applied in both online and offline societal simulations, showcasing credibility at the individual level and emergent social behaviors. For instance, Park et al. [34] construct an AI town with 25 agents, witnessing phenomena such as mayoral elections and party organization. Gao et al. [19] conduct simulations of online social networks with thousands of LLM-based agents, observing group emotional responses and opinion shifts that mirrored real-world trends. What's more, some studies have employed collaborative efforts between LLMs with distinct roles to accomplish tasks. In METAGPT [22], LLM-based agents with different roles collaboratively develop computer software, while DERA [32] uses discussions among various agents to refine medical summary dialogues and care plan generation. Additionally, several efforts have utilized debates between large language model agents to enhance model performance. For example, ChatEval [12] improves text evaluation capabilities through multi-agent debates. Du et al. [18] amplify the factuality and reasoning capacities of large language models by facilitating debates among them. To the best of our knowledge, our work is the pioneering effort in employing a multi LLM-based agents system for the task of stance detection.

## 3 Methods

In this section, we describe our proposed COLA in detail. The architecture of COLA is shown in Figure 2.

Figure 2: Architecture of our proposed COLA. In the multidimensional text analysis stage, the linguistic expert, the domain specialist and the social media veteran analyze the text from the web or social media from various perspectives, providing a holistic understanding. In the reasoning-enhanced debating stage, for each possible stance, a debater defends it, seeking possible logical chains between text features and stance. Finally, in the stance conclusion stage, a final judge determines the stance based on the statements made by all debaters.

### Task Description

In stance detection, the objective is to decide the stance of a given opinionated document with respect to a specified target. Let us define a dataset \(D=\{(x_{i}=(d_{i},t_{i}),y_{i})\}_{i=1}^{n}\) consisting of \(n\) instances.
For each instance, \(x_{i}\) represents a tuple comprising a document \(d_{i}\) and a corresponding target \(t_{i}\). The task is to detect the stance \(y_{i}\), which can be one of the following categories: favor, against, or neutral.

### Multidimensional Text Analysis Stage

#### 3.2.1 Challenge:

Stance detection necessitates a profound grasp of multi-aspect knowledge. Sentences on social media that convey the author's stance may be influenced by various linguistic phenomena, such as grammatical structures, tenses, and moods. There is also often an abundance of domain-specific terminology, including references to characters, political parties, and events, and their relationships with the target. Additionally, unique language features of social media, such as hashtags, come into play. Although large language models have assimilated vast knowledge from their training data, their direct application to stance detection often fails to adequately harness this knowledge, leading to suboptimal results, a fact corroborated by our subsequent experiments.

#### 3.2.2 Approach:

To address this challenge and leverage the rich knowledge encoded within large language models, we design a multidimensional text analysis stage. During this stage, we introduce three distinct LLM-based agents to parse the text from different perspectives, ensuring a comprehensive understanding of the potential elements influencing the author's stance. These agents are the Linguistic Expert, the Domain Specialist, and the Social Media Veteran. We instruct the LLM to take on these roles through prompting. Specifically, the inputs and outputs of the role-infused agents in this stage are as follows.

**Input:** A text with a stance.

**Output:** The individual analyses of the text by the linguistic expert, the domain specialist, and the social media veteran.

The detailed configurations of the agents are as follows.

**Linguistic Expert.** This agent is tasked with dissecting the text from a linguistic standpoint, exploring factors including but not limited to:

* _Grammatical structure._ The arrangement and relationship of words in a sentence, which determines how different elements combine to produce specific meanings.
* _Tense and inflection._ Tense identifies when an action occurs, influencing the stance's immediacy or distance. Inflection adjusts word forms, providing clues about the sentence's grammatical and relational context.
* _Rhetorical devices._ These are techniques used to enhance the expressiveness of language. By emphasizing, contrasting, or evoking emotions, they shape the tone and attitude of a statement.
* _Lexical choices._ The selection of particular words or phrases in writing, which can reveal deeper nuances, biases, or viewpoints about a topic.

**Domain Specialist.** This agent focuses on domain-relevant knowledge, exploring facets such as:

* _Characters._ Key individuals or entities in a text.
* _Events._ Significant occurrences within a text. How they're portrayed can hint at the author's stance on certain issues or topics.
* _Organizations._ Established groups mentioned. Their depiction can showcase the author's feelings towards certain societal structures or institutions.
* _Parties._ Political groups with distinct ideologies. A text's treatment of these can provide insights into the author's political leanings or criticisms.
* _Religions._ Specific faiths or spiritual beliefs. How they are referenced might shed light on the author's personal beliefs or societal observations.
**Social Media Veteran.** This agent delves into the nuances of social media expression, focusing on aspects like:

* _Hashtags._ Specific labels used on social media platforms, assisting in categorizing posts or emphasizing specific themes, making content easily discoverable.
* _Internet slang and colloquialisms._ These refer to informal terms and expressions often used in online communities. Their usage can introduce nuances, cultural contexts, or specific attitudes, making them significant indicators of the underlying stance in a statement.
* _Emotional tone._ This captures the sentiment inherent in a piece of writing, revealing the author's feelings, whether positive, negative, or neutral, about a particular subject.

### Reasoning-Enhanced Debating Stage

#### 3.3.1 Challenge:

The task of stance detection requires sophisticated reasoning. Authors often do not explicitly state their positions in a text. Instead, their stance may be implied through their sentiment towards certain entities or by mechanisms like comparison and contrast. Identifying these implicit stances requires detailed reasoning. Although large-scale language models possess some reasoning capabilities, their performance can be suboptimal in intricate reasoning tasks without proper guidance, which can affect the quality of stance detection results.

#### 3.3.2 Approach:

Drawing inspiration from recent works that leverage discussions or debates among large models to enhance their performance [18; 12; 27], especially in reasoning tasks, we introduce a reasoning-enhanced debating stage. In this stage, for every potential stance, an agent is designated. This agent seeks evidence from the expert analyses of the text and advocates for its designated stance. Specifically, the inputs and outputs of the agents in this stage are as follows.

**Input:** A text with a stance. The analyses of the text by the linguistic expert, the domain specialist, and the social media veteran.

**Output:** The debate from each agent for the stance it supports, including the evidence it selects and its logical chain.

In our framework, we only engage in a single round of debate, reserving multi-round debates for future exploration. Directing agents to search for evidence and defend their aligned stances compels the large language model to establish logical connections between the discerned textual features (as well as their multifaceted interpretations) and the actual underlying stance of the text. By having multiple agents debate in favor of different stances, the system encourages the large model's divergent thinking. This generates a plethora of potential text stance interpretations, ensuring that the probable correct interpretation has a higher likelihood of being produced by the system. These outputs subsequently feed into the stance conclusion stage, which renders a final, judicious judgment.

### Stance Conclusion Stage

To infer a conclusive stance from the diverse agent debates, we introduce the stance conclusion stage. In this stage, a judger agent determines the final stance of a text based on both the text itself and the arguments presented by the debater agents. The process is delineated as follows.

**Input:** A text with an embedded stance. Arguments from each agent, including evidence and their logical reasoning.

**Output:** The identified stance of the text.

The judger agent evaluates the text's inherent qualities, the evidence provided by the debaters, and their logical frameworks to reach an informed decision.
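The three stages map naturally onto a sequence of chat-completion calls in which each role is injected through the system message. The following is a minimal sketch of such a pipeline, assuming the OpenAI Python client (openai>=1.0); the role prompts and helper names here are illustrative placeholders rather than the exact prompts used in our experiments.

```python
# Minimal sketch of the three-stage COLA pipeline. The role prompts and helper
# names are illustrative placeholders, not the exact prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(role_prompt: str, user_prompt: str) -> str:
    """Run one role-infused agent: the role goes into the system message."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # temperature fixed to 0 for reproducibility
        messages=[{"role": "system", "content": role_prompt},
                  {"role": "user", "content": user_prompt}],
    )
    return resp.choices[0].message.content

def cola(text: str, target: str,
         stances=("favor", "against", "neutral")) -> str:
    # Stage 1: multidimensional text analysis by the three experts.
    experts = {
        "linguistic expert": "You are a linguistic expert. Analyze the grammar, "
                             "tense, rhetorical devices and lexical choices.",
        "domain specialist": "You are a domain specialist. Explain the "
                             "characters, events, organizations, parties and "
                             "religions mentioned in the text.",
        "social media veteran": "You are a social media veteran. Decode the "
                                "hashtags, slang and emotional tone.",
    }
    analyses = "\n".join(f"[{name}] {ask(prompt, text)}"
                         for name, prompt in experts.items())
    # Stage 2: one debater per candidate stance argues from the analyses.
    debates = "\n".join(
        ask(f"You are a debater arguing that the text's stance towards "
            f"'{target}' is {stance}. Cite evidence from the analyses.",
            f"Text: {text}\nAnalyses:\n{analyses}")
        for stance in stances)
    # Stage 3: a judger agent concludes the final stance from the debates.
    return ask("You are the final judge of a stance detection debate. "
               "Answer with a single label: favor, against, or neutral.",
               f"Text: {text}\nTarget: {target}\nDebates:\n{debates}")
```

Stage 2 consumes the concatenated stage-1 analyses and stage 3 consumes the debates, mirroring the information flow in Figure 2.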
After going through the three stages mentioned above, we have effectively extracted the underlying stance towards the given target from the text.

## 4 Experimental Setup

In this section, we describe the specific setup of our experiments.

### Datasets

We conduct experiments on three distinct datasets:

**SEM16** [30]. This dataset features six specific targets from diverse domains, namely _Donald Trump_ (DT), _Hillary Clinton_ (HC), _Feminist Movement_ (FM), _Legalization of Abortion_ (LA), _Atheism_ (A), and _Climate Change is a Real Concern_ (CC). Each instance is classified into one of three stance categories: _Favor_, _Against_, or _None_.

**WT-WT** [15]. Specializing in discourse about mergers and acquisitions between companies, this dataset comprises four targets: _CVS_AET_ (CA), _CI_ESRX_ (CE), _ANTM_CI_ (AC), and _AET_HUM_ (AH). Stance labels include _Support_, _Refute_, _Comment_ (Neutral), or _Unrelated_.

**VAST** [4]. This dataset is characterized by its large number of varying targets. An instance in VAST includes a sentence, a target, and a stance, which may be _Pro_, _Con_, or _Neutral_.

The statistics of our utilized datasets are shown in Table 1.

\begin{table} \begin{tabular}{c|c|c c c c} \hline Dataset & Target & Pro & Con & Neutral & Unrelated \\ \hline \multirow{6}{*}{SEM16} & DT & 148 & 299 & 260 & - \\ & HC & 163 & 565 & 256 & - \\ & FM & 268 & 511 & 170 & - \\ & LA & 167 & 544 & 222 & - \\ & A & 124 & 464 & 145 & - \\ & CC & 335 & 26 & 203 & - \\ \hline \multirow{4}{*}{WT-WT} & CA & 2469 & 518 & 5520 & 3115 \\ & CE & 773 & 253 & 947 & 554 \\ & AC & 970 & 1969 & 3098 & 5007 \\ & AH & 1038 & 1106 & 2804 & 2949 \\ \hline VAST & - & 6952 & 7297 & 4296 & - \\ \hline \end{tabular} \end{table} Table 1: Statistics of our utilized datasets.

Due to the zero-shot nature of our method, we do not split the datasets into training, development, and testing sets, but instead conduct experiments on each entire dataset. For zero-shot stance detection approaches, we evaluate their performance across all three datasets. However, for in-target stance detection methods, we assess their performance on SEM16 and WT-WT, because the targets within the VAST dataset are mainly few-shot or zero-shot. The datasets contain no personally identifiable information, but may contain offensive content because the text has a clear stance on topics such as religion, politics, climate, etc. We strictly adhere to the requirements of the respective licenses when using all datasets mentioned in the paper.

### Experimental Implementation

#### 4.2.1 Implementation of COLA

In our study, we employ the GPT-3.5 Turbo model, provided by OpenAI, as our backbone. We opt for GPT-3.5 Turbo primarily due to its superior performance, cost-effectiveness, and the ease of interaction offered via the OpenAI API. These attributes not only facilitate efficient research but also ensure the usability of our methodology for future applications. By utilizing the system instruction feature available through the OpenAI API, we instruct the model to act as the various agent roles, feeding text inputs via prompts and collecting textual outputs from the model. To maximize reproducibility, we set the temperature parameter to 0. The reported results are the average of 5 repeated runs to ensure statistical reliability.

#### 4.2.2 Evaluation Metric

For the SEM16 dataset, following Allaway et al. [5], we calculate \(F_{avg}\), which represents the average of the F1 scores for _Favor_ and _Against_.
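For reference, the evaluation metrics used across the three datasets can be computed with scikit-learn as in the short sketch below; the lowercase string labels are an assumed encoding of the gold and predicted stances, and the Macro-F1 metric it includes is introduced in the following paragraph.

```python
# Sketch of the evaluation metrics, assuming lowercase string stance labels.
from sklearn.metrics import f1_score

def f_avg(y_true, y_pred):
    """SEM16 metric: the mean of the F1 scores for Favor and Against only."""
    per_class = f1_score(y_true, y_pred,
                         labels=["favor", "against"], average=None)
    return per_class.mean()

def macro_f1(y_true, y_pred):
    """WT-WT / VAST metric: the unweighted mean of all per-class F1 scores."""
    return f1_score(y_true, y_pred, average="macro")
```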
For the WT-WT dataset, we follow the guidelines set by Conforti et al. [15] and calculate the Macro-F1 score for each target. For the VAST dataset, we adopt the method from Allaway et al. [4] and compute the F1 scores for _Pro_ and _Con_ and the Macro-F1 score to assess model performance.

### Comparison Methods

We compare COLA with state-of-the-art (SOTA) methods in stance detection. We conduct comparisons with methods for two tasks: zero-shot stance detection and in-target stance detection. We compare our method with various zero-shot stance detection methods. These include the adversarial learning method TOAD [5], the contrastive learning method PT-HCL [25], and the BERT-based techniques TGA-Net [25] and BERT-GCN [28]. We also include two baselines based on large language models: GPT-3.5 Turbo and GPT-3.5 Turbo+Chain-of-Thought (COT), both of which can be considered zero-shot, implemented in strict accordance with Zhang et al. [51] and Zhang et al. [52], respectively. To further verify the performance of our model, we compare our model to in-target stance detection methods. Such methods undergo extensive training on datasets for a given target and are then evaluated on the test set of the same target. In contrast, our method remains strictly zero-shot, with **no fine-tuning** applied to our backbone model. We compare our approach with various in-target stance detection baselines, including the RNN-based methods BiCond [8] and ATT-LSTM [44]; the attention-based method CrossNet [49]; the BERT-based method BERT [17]; and the graph-based methods ASGCN [53] and TPDG [26]. For non-LLM approaches, we retrieve results from the existing literature for a comprehensive comparison [15; 4; 5; 28; 26; 25].

## 5 Experimental Results

In this section, we aim to answer the following research questions (RQs) with the help of experimental results:

RQ1: How does the performance of COLA compare with state-of-the-art stance detection models? **(Accuracy)**

RQ2: Is every component in our model effective and contributory to performance enhancement? **(Effectiveness)**

RQ3: Can our model explain the rationale and logic behind its stance determinations? **(Explainability)**

RQ4: Is our framework adaptable to other text classification tasks related to web and social media content analysis? **(Versatility)**

### Overall Performance (RQ1)

In Table 2, we present the zero-shot stance detection performance of COLA across the three datasets in comparison to baseline methods. Furthermore, Table 3 showcases the results of both our zero-shot COLA and the in-target labeled data dependent baselines on the SEM16 and WT-WT datasets for the in-target stance detection task. The overall results demonstrate the strong performance of our approach. Specifically, the key findings are enumerated below.

* Our method outperforms the state-of-the-art zero-shot stance detection approaches across the majority of metrics. On most metrics across the three datasets, our model demonstrates statistically significant improvements over the best baseline. For the CC and LA targets in the SEM16 dataset, our approach achieves substantial gains over the best baselines, with absolute increases in \(F_{avg}\) of 25.1% and 15.7%, respectively. In the WT-WT dataset, our method realizes significant improvements over the best baseline for all targets except for AH. In the VAST dataset, which comprises tens of thousands of instances, our model secures a notable absolute boost of 1.8% in the overall Macro-F1 score.
This attests to the robust zero-shot stance detection capabilities of our approach.

* The zero-shot stance detection performance of our method is closely aligned with that of the state-of-the-art in-target stance detection techniques, even when they are fully trained on the corresponding targets. On the SEM16 dataset, our approach significantly outperforms the best baseline, TPDG, on the DT and CC targets, while maintaining comparable performance on the other targets. In the WT-WT dataset, our method consistently matches the performance of TPDG across all targets. Remarkably, even though these comparison methods have been extensively trained on their respective targets, our approach still sustains comparable or superior performance, underscoring our method's strength.

* Direct application of large language models may yield poor performance, especially on abstract concept targets. In the SEM16 dataset, for the targets A (_Atheism_) and CC (_Climate Change is a Real Concern_), GPT-3.5 achieves only 8.1% and 24.7% in \(F_{avg}\), respectively. Even with the enhanced GPT-3.5+COT, the scores are merely 10.3% and 25.2%. Across almost all datasets and metrics, the performance of simply deploying large language models significantly lags behind our proposed method. This underscores the limitations of directly using large language models for stance detection tasks, especially in handling stances towards abstract concept targets, highlighting the necessity and validity of our design.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{6}{c}{**SEM16(\%)**} \\ & DT & HC & FM & LA & A & CC \\ \hline COLA & 71.2 & 75.9 & 69.1 & 71.0 & 62.3 & 64.0 \\ w/o Linguistic Expert & 69.1 & 74.2 & 67.8 & 67.2 & 46.0 & 62.1 \\ w/o Domain Specialist & 70.4 & 75.0 & 66.5 & 60.1 & 42.4 & 58.2 \\ w/o Social Media Veteran & 67.8 & 75.5 & 68.2 & 64.4 & 54.6 & 60.0 \\ w/o Multidimensional Text Analysis Stage & 67.4 & 72.8 & 65.2 & 52.2 & 23.3 & 55.9 \\ w/o Reasoning-Enhanced Debating Stage & 64.7 & 73.3 & 64.0 & 53.8 & 26.6 & 49.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results of the ablation study.

\begin{table} \begin{tabular}{c|c c c c c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{6}{c|}{**SEM16(\%)**} & \multicolumn{4}{c|}{**WT-WT(\%)**} & \multicolumn{3}{c}{**VAST(\%)**} \\ & DT & HC & FM & LA & A & CC & CA & CE & AC & AH & Pro & Con & All \\ \hline TOAD & 49.5 & 51.2 & 54.1 & 46.2 & 46.1 & 30.9 & 55.3 & 57.7 & 58.6 & 61.7 & 42.6 & 36.7 & 41.0 \\ TGA-Net & 40.7 & 49.3 & 46.6 & 45.2 & 52.7 & 36.6 & 65.7 & 63.5 & 69.9 & 68.7 & 55.4 & 58.5 & 66.6 \\ BERT-GCN & 42.3 & 50.0 & 44.3 & 44.2 & 53.6 & 35.5 & 67.8 & 64.1 & 70.7 & 69.2 & 58.3 & 60.6 & 68.6 \\ PT-HCL & 50.1 & 54.5 & 54.6 & 50.9 & 56.5 & 38.9 & 73.1 & 69.2 & 76.7 & 76.3 & 61.7 & 63.5 & 71.6 \\ GPT-3.5 & 69.5 & 74.0 & 59.1 & 52.0 & 8.1 & 24.7 & 65.5 & 61.1 & 64.3 & 66.4 & 66.2 & 67.5 & 65.0 \\ GPT-3.5+COT & 69.0 & 75.5 & 60.8 & 55.3 & 10.3 & 25.2 & 66.2 & 63.3 & 65.5 & 66.7 & 68.5 & 66.4 & 66.4 \\ COLA(ours) & **71.2** & **75.9** & **69.1\({}^{*}\)** & **71.0\({}^{*}\)** & **62.3\({}^{*}\)** & **64.0\({}^{*}\)** & **80.8\({}^{*}\)** & **76.2\({}^{*}\)** & **83.0\({}^{*}\)** & **78.9** & **73.4\({}^{*}\)** & **77.2\({}^{*}\)** & **73.4\({}^{*}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of COLA and baselines in the zero-shot stance detection task. Best scores are in bold. * denotes that COLA improves over the best baseline at \(p<0.05\) with a paired t-test.
\begin{table} \begin{tabular}{c|c|c c c c c c|c c c c} \hline \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Model**} & \multicolumn{6}{c|}{**SEM16(\%)**} & \multicolumn{4}{c}{**WT-WT(\%)**} \\ & & DT & HC & FM & LA & A & CC & CA & CE & AC & AH \\ \hline \multirow{6}{*}{In-target Labeled Data} & BiCond & 59.0 & 56.1 & 52.9 & 61.2 & 55.3 & 35.6 & 71.1 & 72.3 & 72.6 & 72.0 \\ & BERT & 57.9 & 61.3 & 59.0 & 63.1 & 60.7 & 38.8 & 73.6 & 73.2 & 76.6 & 75.5 \\ & CrossNet & 60.2 & 60.2 & 55.7 & 61.3 & 56.4 & 40.1 & 71.7 & 71.2 & 73.8 & 72.5 \\ & ATT-LSTM & 55.3 & 59.8 & 55.3 & 62.6 & 55.9 & 39.2 & 72.0 & 71.4 & 74.3 & 73.5 \\ & ASGCN & 58.7 & 61.0 & 58.7 & 63.2 & 59.5 & 40.6 & 72.2 & 72.9 & 75.1 & 74.3 \\ & TPDG & 63.0 & 73.4 & 67.3 & **74.7** & **64.7** & 42.3 & 79.3 & **77.6** & **81.5** & **80.2** \\ \hline Zero-shot Method & COLA(ours) & **71.2\({}^{*}\)** & **75.9** & **69.1** & 71.0 & 62.3 & **64.0\({}^{*}\)** & **80.8** & 76.2 & **83.0** & 78.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of zero-shot COLA and baselines fully trained on labeled data for the in-target stance detection task. Best scores are in bold. * denotes that COLA improves over the best baseline at \(p<0.05\) with a paired t-test.

### Ablation Study (RQ2)

To investigate the impact of each module in our design, we conduct ablation studies to assess the performance of our framework when each module is removed. The results are shown in Table 4 and demonstrate that every module in our framework contributes to performance enhancement. In the following, we provide a detailed description of the results.

#### 5.2.1 Study on the multidimensional text analysis stage.

During the multidimensional text analysis stage, three expert agents from different domains concurrently analyze the text. We individually remove each of these experts to assess the performance of our approach. We also evaluate the performance when all expert analyses are excluded. The results show that the removal of any expert agent results in a certain degree of performance degradation. Moreover, eliminating the entire multidimensional text analysis stage leads to a significant performance drop. The most pronounced performance decline is observed for the A (_Atheism_) target. Removing the Linguistic Expert, Domain Specialist, and Social Media Veteran leads to an \(F_{avg}\) decrease to 46.0%, 42.4%, and 54.6%, respectively. What's more, without the multidimensional text analysis stage, the \(F_{avg}\) drops to a mere 23.3%. This could be attributed to the complexity of the _Atheism_ topic across various domains such as religion and society. These findings underscore the effectiveness of our multidimensional text analysis stage and the design of each agent therein.

#### 5.2.2 Study on the reasoning-enhanced debating stage.

In the reasoning-enhanced debating stage, we introduce debates among agents with differing perspectives to augment the reasoning capabilities of our LLM-based system. We remove this stage and let the judger agent directly deduce the text's stance from the expert agents' text analyses, aiming to verify the effectiveness of the debating design. Upon the removal of the debating stage, our method experiences a noticeable performance degradation. The most significant drops are observed for the abstract concept targets A (_Atheism_), CC (_Climate Change is a Real Concern_), and LA (_Legalization of Abortion_), with the absolute \(F_{avg}\) declining by 35.7%, 14.9%, and 17.2%, respectively.
This indicates that the reasoning-enhanced debating stage offers substantial benefits, especially when dealing with relatively abstract targets. The results validate the effectiveness of the reasoning-enhanced debating stage design.

Figure 3: Cases of explanations generated by our approach.

### Study on Explainability (RQ3)

An explainable artificial intelligence (XAI) is one that offers clear insights or justifications to make its decisions comprehensible [7]. By elucidating its decision-making processes, an XAI augments transparency and reinforces model trustworthiness [16]. Large language models inherently possess the capability to explain their outputs. By prompting them about the rationale behind their decisions, we can obtain explanations for their determinations directly. To delve deeper into the explainability of our approach, we conduct both case studies and quantitative experiments to verify its ability to generate clear and reasonable explanations. During the stance conclusion stage, we mandate the judger agent to provide outputs in a JSON format consisting of two components: the stance and a concise explanation not exceeding 100 tokens. We conduct our experiment on the SEM16 dataset. After closely examining the generated outputs, we find that our model can provide clear explanations for its decisions. In Figure 3, we show two cases to illustrate this. In the first case, the tweet _"The ruling by @Scotus is a major setback for @EPA & the environment. #dirtycoal"_ agrees that climate change is a real concern. Our model detects this stance. In its generated explanation, the model discerns the mention of the EPA and the usage of the #dirtycoal tag, indicating an environmental concern. Moreover, the model perceives an emotional tone of frustration, further reflecting a pro-environmental perspective. In the second case, the tweet _"@GovtsTheProblem This is what I see: Make way 4 ur queen peasants! Don't touch or talk 2 her U filt! #NoHillary2016 #Benghazi"_ portrays an opposing stance toward Hillary. Our model rationally explains its judgment from a linguistic perspective (utilization of derogatory language), a domain-specialist perspective (mentioning the Benghazi incident in a negative context), and a social media lens (the hashtag #NoHillary2016). These cases validate the model's proficiency in generating clear and reasonable explanations. To further validate our model's ability to produce clear and logical explanations, we conduct quantitative experiments. For the SEM16 dataset, we collect the explanations generated by COLA (from the second part of the JSON output) for each instance's stance. These explanations, along with the original text, are fed into the GPT-3.5 Turbo model. We inform the model that these explanations can be used as references for its decisions. As a result, we obtain a new set of judgments from the model. As presented in Table 5, the performance of GPT-3.5 Turbo improves significantly when the explanations generated by COLA are incorporated in addition to the original text. There is a noticeable increase for the A (_Atheism_) and CC (_Climate Change is a Real Concern_) targets, with \(F_{avg}\) improving by 51.6 and 29.3 points, respectively. For the HC (_Hillary Clinton_) and FM (_Feminist Movement_) targets, the results even exceed those of COLA. This further confirms our model's strong ability to generate clear and logical explanations.
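This feedback experiment is straightforward to reproduce on top of the pipeline sketched earlier. The snippet below is an illustrative sketch: the prompt wording is an assumption rather than the exact prompt used in our experiments, and it reuses the hypothetical `ask` helper defined in the pipeline sketch above.

```python
# Sketch of the explanation-feedback experiment: re-judge an instance with
# GPT-3.5 Turbo, supplying COLA's generated explanation as a reference.
# Reuses the illustrative ask() helper from the pipeline sketch above.

def judge_with_explanation(text: str, target: str, explanation: str) -> str:
    prompt = (f"Text: {text}\nTarget: {target}\n"
              f"Reference explanation (you may use it as a hint): {explanation}\n"
              "Answer with a single stance label: favor, against, or neutral.")
    return ask("You are a stance detection assistant.", prompt)
```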
\begin{table} \begin{tabular}{c|c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Restaurant14(\%)**} & \multicolumn{2}{c|}{**Laptop14(\%)**} & \multicolumn{2}{c}{**Restaurant15(\%)**} \\ & & Accuracy & Macro-F1 & Accuracy & Macro-F1 & Accuracy & Macro-F1 \\ \hline Labeled Data & DGEDT & **86.3** & 80.0 & 79.8 & 75.6 & 84.0 & 71.0 \\ Dependent Methods & dotGCN & 86.2 & **80.5** & 81.0 & **78.1** & 85.2 & 72.7 \\ \hline \multirow{2}{*}{Zero-shot Methods} & GPT-3.5 Turbo & 74.3 & 69.6 & 69.9 & 61.0 & 80.4 & 67.7 \\ & Ours & 84.1 & 77.7 & **81.6** & 77.0 & **85.4** & **74.9** \\ \hline \hline \end{tabular} \end{table} Table 6: Performance of our framework and baselines on aspect-based sentiment analysis. Best scores are in bold.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{6}{c}{**SEM16(\%)**} \\ & DT & HC & FM & LA & A & CC \\ \hline GPT-3.5 & 69.0 & 75.5 & 60.8 & 55.3 & 10.3 & 25.2 \\ COLA & **71.2** & 75.9 & 69.1 & **71.0** & **62.3** & **64.0** \\ GPT-3.5+COLA's Explanations & 69.4 & **77.7** & **70.7** & 66.7 & 61.9 & 54.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of GPT-3.5 Turbo, COLA, and GPT-3.5 Turbo with explanations generated by COLA. Best scores are in bold.

### Study on Versatility (RQ4)

Our proposed COLA can be summarized as an Analyst-Debater-Summarizer framework. In this section, we conduct experiments to validate that the Analyst-Debater-Summarizer framework can be applied to other text classification tasks for text analysis on the web and social media, not just as an ad-hoc approach for stance detection. We perform experiments on two additional text classification tasks: aspect-based sentiment analysis and persuasion prediction. We select aspect-based sentiment analysis because it demands a precise understanding of sentiments tied to specific elements in text, reflecting the detailed analysis capability of our framework. Meanwhile, persuasion prediction is chosen due to its emphasis on detecting underlying intent, highlighting COLA's ability to adeptly handle the intricate conversational dynamics commonly seen in web and social media exchanges. Aspect-based sentiment analysis aims to determine the sentiment polarity (_Positive_, _Negative_, or _Neutral_) expressed towards each aspect mentioned in the text [36]. In this task, we modify the debater component in our original framework to engage in sentiment debates instead of stance debates, while keeping the other design elements unchanged. We evaluate our approach's performance on the Restaurant14 and Laptop14 datasets from SemEval14 [36], as well as the Restaurant15 dataset from SemEval15 [35]. We follow Chen et al. [14] and use Accuracy and the Macro-F1 score as evaluation metrics. We compare our approach with state-of-the-art models that require training, namely DGEDT [40] and dotGCN [13]. The experimental results are presented in Table 6. It can be observed that our zero-shot method performs comparably to the best baseline models that rely on labeled data. On the Restaurant15 dataset, our approach even outperforms the top baseline. Another crucial finding is that our approach consistently outperforms directly applying GPT-3.5 Turbo while maintaining ease of use. Following Ziems et al. [54], we define persuasion prediction as determining whether one party in a conversation is persuaded after the conversation ends.
In this task, we replace the three experts in our original framework with two experts: a domain expert and a psychologist. They provide detailed analysis of various concepts and nouns in the conversation topic and analyze the psychological changes of the individuals involved. The debaters are modified to argue about whether a participant in the conversation has been persuaded. We use the dataset provided by Wang et al. [43] and follow their evaluation metrics, using Accuracy and Macro-F1. We compare our approach with Hybrid RCNN [43] and GPT-3.5 Turbo, and the results are presented in Table 7. \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **Accuracy(\%)** & **F1-Score(\%)** \\ \hline Hybrid RCNN & 74.8 & 59.6 \\ GPT-3.5 Turbo & 67.6 & 56.0 \\ Ours & **76.5** & **63.9** \\ \hline \hline \end{tabular} \end{table} Table 7: Performance of our framework and baselines on persuasion prediction. Best scores are in bold. The experimental results show that our approach achieves better performance compared to the baseline and a significant improvement over GPT-3.5 Turbo. The Analyst-Debater-Summarizer framework has proven to be highly successful in both the aspect-based sentiment analysis and persuasion prediction tasks. On a series of tasks, our zero-shot framework performs on par with state-of-the-art baselines that rely on training data and significantly outperforms direct application of GPT-3.5 Turbo. These experiments demonstrate the versatility of our approach. ### Discussions In the aforementioned experiments, we extensively evaluate the performance of our approach across various dimensions. From the perspective of our method's design rationale, the ablation study confirms that every component in our approach contributes to a performance boost, indicating that the design is free of redundancy and can be considered efficacious. In comparison with existing methods, experimental evidence shows that our approach outperforms all other zero-shot methods on stance detection. Furthermore, its performance is on par with in-target stance detection methods that rely on in-target labeled data, exhibiting impressive accuracy. In addition, for two other text classification tasks related to web and social media content analysis, our method achieves results comparable to state-of-the-art baselines, underscoring its versatility. From a practical application standpoint, our method does not require additional training for the model. Instead, it can be implemented by interacting with existing large language models through APIs or other means, showcasing its strong usability. The experiments also prove that our framework can provide clear and rational explanations for its decisions, ensuring a high degree of explainability. Such generated explanations can bolster users' trust in our approach and are conducive to further analysis. Given these advantages, our method promises a broad range of applications. ## 6 Conclusion and Future Work In this work, we harness the formidable capabilities of LLMs for advanced stance detection. We propose COLA, where multiple LLM-based agents collaborate to reach a conclusion. This method encompasses three stages: the multidimensional text analysis stage, the reasoning-enhanced debating stage, and the stance conclusion stage. Experimental results demonstrate that our approach achieves high accuracy, effectiveness, explainability, and versatility, showcasing its significant applicability. Our method is not without limitations.
Due to the absence of real-time training data for large language models, their performance in analyzing topics involving current events might be slightly compromised. For future work, we intend to incorporate a continuously updated knowledge base into the text analysis stage to enhance our framework's capability to analyze texts that involve current events. Furthermore, there remains vast potential for applying the framework to a wider range of text analysis tasks on web and social media.
2301.09677
Study of the generalized von Mangoldt function defined by L-additive function
The main object of this paper is to study the generalized von Mangoldt function using the L-additive function, which can help us derive many results about classical arithmetic functions.
Es-said En-naoui
2023-01-23T19:17:17Z
http://arxiv.org/abs/2301.09677v1
# Study of the generalized von Mangoldt function defined by L-additive function ###### Abstract The main object of this paper is to study the generalized von Mangoldt function using the L-additive function, which can help us derive many results about classical arithmetic functions. ## 1 Introduction In this work, we address two major results: the Dirichlet product of the generalized von Mangoldt function with an arithmetic function \(f\), and the Dirichlet series of several completely additive arithmetic functions. In this article, we study the von Mangoldt function generalized via an L-additive function, in order to find alternative proofs of many series expansions involving additive and completely additive arithmetic functions. The methods readily generalize, and can be applied to other L-additive functions. Our principal result is that: \[\sum_{n\geq 1}\frac{\Lambda_{f}(n)}{n^{s}}=\sum_{p}\frac{f(p)}{h_{f}(p)p^{s}-h_{f}(p)}\] where \(f\) is an L-additive function with \(h_{f}\) nonzero-valued. First of all, to cultivate analytic number theory one must acquire a considerable skill for operating with arithmetic functions. We begin with a few elementary considerations. **Definition 1.1** (arithmetic function).: An **arithmetic function** is a function \(f:\mathbb{N}\longrightarrow\mathbb{C}\) with domain of definition the set of natural numbers \(\mathbb{N}\) and range a subset of the set of complex numbers \(\mathbb{C}\). **Definition 1.2** (multiplicative function).: A function \(f\) is called a **multiplicative function** if and only if: \[f(nm)=f(n)f(m) \tag{1}\] for every pair of coprime integers \(n\), \(m\). In case (1) is satisfied for every pair of integers \(n\) and \(m\), which are not necessarily coprime, the function \(f\) is called **completely multiplicative**. Clearly, if \(f\) is a multiplicative function, then \(f(n)=f(p_{1}^{\alpha_{1}})\dots f(p_{s}^{\alpha_{s}})\) for any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\dots p_{s}^{\alpha_{s}}\), and if \(f\) is completely multiplicative, then \(f(n)=f(p_{1})^{\alpha_{1}}\dots f(p_{s})^{\alpha_{s}}\). The functions defined above are widely studied in the literature (see, e.g., [5, 8, 9, 10]). **Definition 1.3** (additive function).: A function \(f\) is called an **additive function** if and only if: \[f(nm)=f(n)+f(m) \tag{2}\] for every pair of coprime integers \(n\), \(m\). In case (2) is satisfied for every pair of integers \(n\) and \(m\), which are not necessarily coprime, the function \(f\) is called **completely additive**. Clearly, if \(f\) is an additive function, then \(f(n)=f(p_{1}^{\alpha_{1}})+\ldots+f(p_{s}^{\alpha_{s}})\) for any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\ldots p_{s}^{\alpha_{s}}\), and if \(f\) is completely additive, then \(f(n)=\alpha_{1}f(p_{1})+\ldots+\alpha_{s}f(p_{s})\). **Example 1.1**.: These are some classical arithmetic functions used in this paper: 1. **The arithmetic logarithmic derivative** : \(Ld(n)=\sum\limits_{p^{\alpha}\parallel n}\frac{\alpha}{p}\). 2. **The number of prime factors of n counting multiplicity** : \(\Omega(n)=\sum\limits_{p^{\alpha}\parallel n}\alpha\). 3. **The generalization of the number of prime factors of n counting multiplicity** : \(\Omega_{k}(n)=\sum\limits_{p^{\alpha}\parallel n}\alpha^{k}\). 4. The function defined by : \(A(n)=\sum\limits_{p^{\alpha}\parallel n}\alpha p\). 5.
**Unit function** : The function defined by \(e(n)=\left\{\begin{array}{ll}1&\mbox{if}\ \ \ n=1\\ 0&\mbox{for}\ \ \ n\geq 2\end{array}\right.\) 6. **The constant one function** : The function defined by \(1(n)=1\) for all \(n\geq 1\). 7. **Logarithm function** : the (natural) logarithm, restricted to \(\mathbb{N}\) and regarded as an arithmetic function. 8. **The sum of the k-th powers of the prime factors of n :** \(\beta_{k}(n)=\sum\limits_{p\mid n}p^{k}\). 9. **The number of distinct prime divisors of n :**\(\omega(n)=\beta_{0}(n)=\sum\limits_{p\mid n}1\). 10. **The sum of the prime factors of n :**\(\beta(n)=\beta_{1}(n)=\sum\limits_{p\mid n}p\). 11. **The Möbius function** : \(\mu(n)=\left\{\begin{array}{ll}1&\mbox{if}\ \ \ n=1\\ 0&\mbox{if}\ \ p^{2}|n\ \ for\ some\ prime\ p\\ (-1)^{\omega(n)}&\mbox{otherwise}\end{array}\right.\) 12. **The number of positive divisors of \(n\)**, defined by : \(\tau(n)=\sum\limits_{d\mid n}1\). **Definition 1.4** (L-additive function).: We say that an arithmetic function \(f\) is _Leibniz-additive_ (or, _L-additive_, in short) (see, e.g., [1]) if there is a completely multiplicative function \(h_{f}\) such that \[f(mn)=f(m)h_{f}(n)+f(n)h_{f}(m) \tag{3}\] for all positive integers \(m\) and \(n\). Then \(f(1)=0\) since \(h_{f}(1)=1\). The property (3) may be considered a generalized Leibniz rule. For example, the arithmetic derivative \(\delta\) is L-additive with \(h_{\delta}(n)=n\), since it satisfies the usual Leibniz rule \[\delta(mn)=n\delta(m)+m\delta(n)\] for all positive integers \(m\) and \(n\), and the function \(h_{\delta}(n)=n\) is completely multiplicative. Further, all completely additive functions \(f\) are L-additive with \(h_{f}(n)=1\). For example, the logarithmic derivative of \(n\) is completely additive since \[\mathrm{ld}(mn)=\mathrm{ld}(m)+\mathrm{ld}(n).\] **Theorem 1.1**.: _Let \(f\) be an arithmetic function. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then \(f/h_{f}\) is completely additive._ Proof.: (see, e.g., [1, Theorem 2.1]) **Theorem 1.2**.: _Let \(f\) be an arithmetic function. If \(n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}\) is the prime factorization of \(n\) and \(f\) is L-additive with \(h_{f}(p_{1}),\ldots,h_{f}(p_{s})\neq 0\), then_ \[f(n)=h_{f}(n)\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}.\] Proof.: (see, e.g., [1, Theorem 2.4]) **Corollary 1.1**.: _Let \(f\) be an arithmetic function. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then we have:_ \[(f*h_{f})(n)=\frac{1}{2}f(n)\tau(n)\] Proof.: (see, e.g., [1, Corollary 3.1]) Further, all completely additive functions \(f\) are L-additive with \(h_{f}(n)=1\); the extension of a completely additive function to the set of rational numbers \(\mathbb{Q}\) then gives us the formula: \[f\left(\frac{n}{m}\right)=f(n)-f(m)\] For example, the logarithmic derivative of \(n\) is completely additive, so we have: \[Ld\left(\frac{n}{m}\right)=Ld(n)-Ld(m)\] Let \(f\) and \(g\) be arithmetic functions. Their _Dirichlet convolution_ is: \[(f*g)(n)=\sum_{\begin{subarray}{c}a,b=1\\ ab=n\end{subarray}}^{n}f(a)g(b)=\sum_{d|n}f(d)g\left(\frac{n}{d}\right). \tag{4}\] where the sum extends over all positive divisors \(d\) of \(n\), or equivalently over all distinct pairs \((a,b)\) of positive integers whose product is \(n\).
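The objects introduced so far are easy to experiment with numerically. The following is a minimal Python sketch (trial-division factorization, exact rationals for \(Ld\)) implementing the Dirichlet convolution (4) together with a few of the completely additive functions of Example 1.1; it is an illustrative aid added to this text, not part of the original paper.

```python
from fractions import Fraction

def factorize(n):
    """Prime factorization of n as a list of (p, alpha) pairs (trial division)."""
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def dirichlet(f, g, n):
    """Dirichlet convolution (4): (f * g)(n) = sum over d|n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# A few completely additive functions from Example 1.1.
def big_omega(n):   # Omega(n): prime factors counted with multiplicity
    return sum(a for _, a in factorize(n))

def Ld(n):          # arithmetic logarithmic derivative: sum of alpha/p
    return sum(Fraction(a, p) for p, a in factorize(n))

def A(n):           # sum of the prime factors with repetition (OEIS A001414)
    return sum(a * p for p, a in factorize(n))

if __name__ == "__main__":
    # Complete additivity: f(mn) = f(m) + f(n) for every pair m, n.
    for f in (big_omega, Ld, A):
        assert all(f(m * n) == f(m) + f(n)
                   for m in range(2, 30) for n in range(2, 30))
    print(dirichlet(lambda d: 1, big_omega, 12))  # (1 * Omega)(12)
```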
In particular, we have \((f*g)(1)=f(1)g(1)\), \((f*g)(p)=f(1)g(p)+f(p)g(1)\) for any prime \(p\), and for any prime power \(p^{m}\) we have: \[(f*g)(p^{m})=\sum_{j=0}^{m}f(p^{j})g(p^{m-j}) \tag{5}\] In this paper, we consider L-additive functions, especially from the viewpoint that they provide a way to generalize the von Mangoldt function. In the next section, we present their general basic properties. In the last section, we study the application of this generalization in terms of the Dirichlet convolution and Dirichlet series. ## 2 The generalized von Mangoldt function defined by an L-additive function In this section, let \(f\) be an L-additive function with \(h_{f}\) nonzero-valued. We define the von Mangoldt function related to the function \(f\) by: \[\Lambda_{f}(n)=\begin{cases}\frac{f(p)}{h_{f}(p)}&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise.}\end{cases} \tag{6}\] Then we have this result: **Theorem 2.1**.: _If \(n\geq 1\) then we have:_ \[f(n)=h_{f}(n)\sum_{d|n}\Lambda_{f}(d)\] _That means, in terms of the Dirichlet convolution: \(f=h_{f}*h_{f}\Lambda_{f}\)_ Proof.: If \(n=p_{1}^{\alpha_{1}}\dots p_{s}^{\alpha_{s}}\) then we have: \[\sum_{d|n}\Lambda_{f}(d)=\sum_{i=1}^{s}\sum_{k=1}^{\alpha_{i}}\Lambda_{f}(p_{i}^{k})=\sum_{i=1}^{s}\sum_{k=1}^{\alpha_{i}}\frac{f(p_{i})}{h_{f}(p_{i})}=\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}=\frac{f(n)}{h_{f}(n)}\] where the last equality follows from Theorem 1.2, as claimed. **Theorem 2.2**.: _For every positive integer \(n\) we have:_ \[\Lambda_{f}(n)=\sum_{d|n}\frac{\mu\left(\frac{n}{d}\right)f\left(d\right)}{h_{f}\left(d\right)}=-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\] _That is, we have:_ \[\Lambda_{f}=\mu*\frac{f}{h_{f}}=-1*\frac{\mu f}{h_{f}}\] Proof.: By Theorem (2.1) and Möbius inversion, we have \(\Lambda_{f}=\mu*\frac{f}{h_{f}}\). Note that: \[\sum_{d|n}\frac{\mu\left(\frac{n}{d}\right)f\left(d\right)}{h_{f}\left(d\right)} =\sum_{d|n}\mu(d)\frac{f\left(\frac{n}{d}\right)}{h_{f}\left(\frac{n}{d}\right)}=\sum_{d|n}\mu(d)\frac{h_{f}(d)}{h_{f}(n)}\bigg{(}\frac{h_{f}(d)f(n)-h_{f}(n)f(d)}{h_{f}^{2}(d)}\bigg{)}\] \[=\sum_{d|n}\frac{\mu(d)h_{f}^{2}(d)f(n)}{h_{f}(n)h_{f}^{2}(d)}-\frac{\mu(d)h_{f}(n)h_{f}(d)f(d)}{h_{f}(n)h_{f}^{2}(d)}\] \[=\frac{f(n)}{h_{f}(n)}\sum_{d|n}\mu(d)-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\] \[=\frac{f(n)e(n)}{h_{f}(n)}-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\] \[=-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\] where the last step uses \(e(n)=0\) for \(n>1\) (for \(n=1\) both sides vanish since \(f(1)=0\)). Hence \(\Lambda_{f}=-1*\frac{\mu f}{h_{f}}\). This completes the proof.
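As a quick sanity check of Theorems 2.1 and 2.2, one may verify both identities numerically for the arithmetic derivative \(\delta\) (L-additive with \(h_{\delta}(n)=n\), so that \(\Lambda_{\delta}(p^{k})=1/p\)). The sketch below does so with exact rationals; it re-declares its helpers to stay self-contained and is an added illustration, not code from the paper.

```python
from fractions import Fraction

def factorize(n):
    """Prime factorization of n as a list of (p, alpha) pairs (trial division)."""
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def delta(n):
    """Arithmetic derivative: delta(n) = n * sum(alpha/p); L-additive with h(n) = n."""
    return Fraction(n) * sum(Fraction(a, p) for p, a in factorize(n))

def Lambda_delta(n):
    """Generalized von Mangoldt function (6) for f = delta: f(p)/h_f(p) = 1/p."""
    fac = factorize(n)
    return Fraction(1, fac[0][0]) if len(fac) == 1 else Fraction(0)

def mobius(n):
    fac = factorize(n)
    return 0 if any(a > 1 for _, a in fac) else (-1) ** len(fac)

for n in range(1, 300):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    # Theorem 2.1: f(n) = h_f(n) * sum_{d|n} Lambda_f(d), here h_delta(n) = n.
    assert delta(n) == n * sum(Lambda_delta(d) for d in divs)
    # Theorem 2.2: Lambda_f(n) = sum_{d|n} mu(n/d) * f(d)/h_f(d).
    assert Lambda_delta(n) == sum(mobius(n // d) * (delta(d) / d) for d in divs)
print("Theorems 2.1 and 2.2 verified for 1 <= n < 300")
```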
**Corollary 2.1**.: _Let \(f\) be an arithmetic function. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then:_ \[(\tau*\Lambda_{f})(n)=\frac{f(n)\tau(n)}{2h_{f}(n)} \tag{7}\] Proof.: By Corollary (1.1) we have: \[(f*h_{f})(n)=\frac{1}{2}f(n)\tau(n)\] From Theorem (2.1) we know: \[f(n)=\left(h_{f}*h_{f}\Lambda_{f}\right)(n)\] Then we find that: \[\left(h_{f}*f\right)(n)=\left(h_{f}*h_{f}*h_{f}\Lambda_{f}\right)(n)=h_{f}(n)\left(1*1*\Lambda_{f}\right)(n)\] where we used the complete multiplicativity of \(h_{f}\). We conclude that: \[h_{f}(n)\left(\tau*\Lambda_{f}\right)(n)=\frac{1}{2}\tau(n)f(n)\] This completes the proof. **Theorem 2.3**.: _Let \(g\) be an arithmetic function. If \(g\) is completely additive, then we have:_ \[\left(1*g\Lambda_{f}\right)(n)=\frac{1}{2}\sum_{p^{\alpha}||n}\frac{\alpha(\alpha+1)f(p)g(p)}{h_{f}(p)} \tag{8}\] Proof.: Let \(g\) be a completely additive arithmetic function. Then: \[\left(1*g\Lambda_{f}\right)(n) =\sum_{d|n}g(d)\Lambda_{f}(d)=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}g(p^{i})\Lambda_{f}(p^{i})\] \[=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}\frac{ig(p)f(p)}{h_{f}(p)}\] \[=\sum_{p^{\alpha}||n}\frac{g(p)f(p)}{h_{f}(p)}\sum_{i=1}^{i=\alpha}i\] \[=\sum_{p^{\alpha}||n}\frac{\alpha(\alpha+1)f(p)g(p)}{2h_{f}(p)}\] **Theorem 2.4**.: _Let \(g\) be an arithmetic function. If \(g\) is completely additive, then we have:_ \[\left(\Lambda_{f}*g\right)(n)=\frac{f(n)g(n)}{h_{f}(n)}-\left(1*g\Lambda_{f}\right)(n) \tag{9}\] Proof.: Let \(g\) be a completely additive arithmetic function. Since \(g\left(\frac{n}{d}\right)=g(n)-g(d)\), we have: \[\left(\Lambda_{f}*g\right)(n) =\sum_{d|n}\Lambda_{f}(d)g\left(\frac{n}{d}\right)=\sum_{d|n}\Lambda_{f}(d)\left(g(n)-g(d)\right)\] \[=\sum_{d|n}\Lambda_{f}(d)g(n)-\sum_{d|n}\Lambda_{f}(d)g(d)\] \[=g(n)\sum_{d|n}\Lambda_{f}(d)-\sum_{d|n}\Lambda_{f}(d)g(d)\] Since: \[\sum_{d|n}\Lambda_{f}(d)g(d)=\left(1*g\Lambda_{f}\right)(n)\] and, by Theorem (2.1): \[\sum_{d|n}\Lambda_{f}(d)=\frac{f(n)}{h_{f}(n)}\] we conclude that: \[\left(\Lambda_{f}*g\right)(n)=\frac{f(n)g(n)}{h_{f}(n)}-\left(1*g\Lambda_{f}\right)(n)\] **Theorem 2.5**.: _Let \(f\) and \(g\) be two L-additive functions. Then we have:_ \[\left(\Lambda_{f}*\Lambda_{g}\right)(n)=\begin{cases}\frac{(\alpha-1)f(p)g(p)}{h_{f}(p)h_{g}(p)}&\text{if }n=p^{\alpha}\text{ for some prime }p\text{ and integer }\alpha\geq 1,\\ \frac{f(p)g(q)}{h_{f}(p)h_{g}(q)}+\frac{f(q)g(p)}{h_{f}(q)h_{g}(p)}&\text{if }n=p^{\alpha}q^{\beta}\text{ for some primes }p,q\text{ and integers }\alpha,\beta\geq 1\\ 0&\text{otherwise.}\end{cases} \tag{10}\] Proof.: Let \(f\) and \(g\) be two L-additive functions, and let \(n>1\).
If \(n=p^{\alpha}\) then we have: \[\left(\Lambda_{f}*\Lambda_{g}\right)(n) =\sum_{d|n}\Lambda_{f}(d)\Lambda_{g}\left(\frac{n}{d}\right)=\sum_{i=1}^{\alpha-1}\Lambda_{f}(p^{i})\Lambda_{g}(p^{\alpha-i})\] \[=\sum_{i=1}^{\alpha-1}\frac{f(p)}{h_{f}(p)}\frac{g(p)}{h_{g}(p)}\] \[=\frac{(\alpha-1)f(p)g(p)}{h_{f}(p)h_{g}(p)}\] If \(n=p^{\alpha}q^{\beta}\) then: \[\left(\Lambda_{f}*\Lambda_{g}\right)(n) =\sum_{d\mid n}\Lambda_{f}(d)\Lambda_{g}\left(\frac{n}{d}\right)=\sum_{i=1}^{\alpha}\Lambda_{f}(p^{i})\Lambda_{g}\left(\frac{n}{p^{i}}\right)+\sum_{j=1}^{\beta}\Lambda_{f}(q^{j})\Lambda_{g}\left(\frac{n}{q^{j}}\right)\] \[=\sum_{i=1}^{\alpha}\Lambda_{f}(p^{i})\Lambda_{g}\left(p^{\alpha-i}q^{\beta}\right)+\sum_{j=1}^{\beta}\Lambda_{f}(q^{j})\Lambda_{g}\left(p^{\alpha}q^{\beta-j}\right)\] \[=\Lambda_{f}(p^{\alpha})\Lambda_{g}(q^{\beta})+\Lambda_{f}(q^{\beta})\Lambda_{g}(p^{\alpha})\] \[=\frac{f(p)g(q)}{h_{f}(p)h_{g}(q)}+\frac{f(q)g(p)}{h_{f}(q)h_{g}(p)}\] since only the terms with \(i=\alpha\) and \(j=\beta\) leave a prime-power argument for \(\Lambda_{g}\). Now, if \(\omega(n)>2\), then for every divisor \(d\) of \(n\) we have \(\Lambda_{f}(d)=0\) or \(\Lambda_{g}\left(\frac{n}{d}\right)=0\); hence \(\left(\Lambda_{f}*\Lambda_{g}\right)(n)=0\). **Theorem 2.6**.: _Let \(s\) be a complex number for which the series below converge absolutely. Then we have:_ \[\sum_{n\geq 1}\frac{\Lambda_{f}(n)}{n^{s}}=\sum_{p}\frac{f(p)}{h_{f}(p)p^{s}-h_{f}(p)}\] Proof.: For such \(s\), we have \[\sum_{n\geq 1}\frac{\Lambda_{f}(n)}{n^{s}} =\frac{\Lambda_{f}(1)}{1^{s}}+\frac{\Lambda_{f}(2)}{2^{s}}+\frac{\Lambda_{f}(3)}{3^{s}}+\frac{\Lambda_{f}(4)}{4^{s}}+\frac{\Lambda_{f}(5)}{5^{s}}+\ldots+\frac{\Lambda_{f}(16)}{16^{s}}+\ldots\] \[=\frac{f(2)}{h_{f}(2)2^{s}}+\frac{f(3)}{h_{f}(3)3^{s}}+\frac{f(2)}{h_{f}(2)2^{2s}}+\frac{f(5)}{h_{f}(5)5^{s}}+\frac{f(7)}{h_{f}(7)7^{s}}+\frac{f(2)}{h_{f}(2)2^{3s}}+\ldots+\frac{f(2)}{h_{f}(2)2^{4s}}+\ldots\] \[=\sum_{p}\sum_{k\geq 1}\frac{f(p)}{h_{f}(p)p^{ks}}=\sum_{p}\frac{f(p)}{h_{f}(p)}\sum_{k\geq 1}\frac{1}{p^{ks}}\] \[=\sum_{p}\frac{f(p)}{h_{f}(p)}\sum_{k\geq 1}\left(\frac{1}{p^{s}}\right)^{k}=\sum_{p}\frac{f(p)}{h_{f}(p)}\cdot\frac{1}{p^{s}}\cdot\frac{1}{1-\frac{1}{p^{s}}}\] \[=\sum_{p}\frac{f(p)}{h_{f}(p)p^{s}-h_{f}(p)}\] which completes the proof. ### The derivatives of arithmetical functions using an L-additive function Let \(f\) be an L-additive function with \(h_{f}\) nonzero-valued. We now define the derivative of an arithmetical function related to the function \(f\) as follows: **Definition 2.1**.: For any arithmetical function \(g\) we define its derivative \(g^{\prime}\) to be the arithmetical function given by the equation: \[g^{\prime}(n)=\frac{g(n)f(n)}{h_{f}(n)}\quad\text{for}\quad n\geqslant 1\] Since \(e(n)\frac{f(n)}{h_{f}(n)}=0\) for all \(n\), we have \(e^{\prime}(n)=0\), and \(1^{\prime}(n)=\frac{f(n)}{h_{f}(n)}\) for all \(n\). Hence, the formula \(\sum_{d\mid n}\Lambda_{f}(d)=\frac{f(n)}{h_{f}(n)}\) can be written as \[1^{\prime}(n)=\left(1*\Lambda_{f}\right)(n) \tag{11}\] This concept of derivative using an L-additive function shares many of the properties of the ordinary derivative discussed in elementary calculus. For example, the usual rules for differentiating sums and products also hold if the products are Dirichlet products. **Theorem 2.7**.: _If \(g\) and \(h\) are arithmetical functions we have:_ 1. \(\left(g+h\right)^{\prime}(n)=g^{\prime}(n)+h^{\prime}(n)\) 2. \(\left(g*h\right)^{\prime}\left(n\right)=\left(g^{\prime}*h\right)\left(n\right)+\left(g*h^{\prime}\right)\left(n\right)\) Proof.: The proof of \(\left(a\right)\) is immediate.
To prove \(\left(b\right)\) we use the identity \(\frac{f\left(n\right)}{h_{f}\left(n\right)}=\frac{f\left(d\right)}{h_{f}\left(d\right)}+\frac{f\left(\frac{n}{d}\right)}{h_{f}\left(\frac{n}{d}\right)}\), valid for \(d\mid n\) since \(f/h_{f}\) is completely additive (Theorem 1.1), to write: \[\left(g*h\right)^{\prime}\left(n\right) =\sum_{d\mid n}g(d)h\left(\frac{n}{d}\right)\frac{f\left(n\right)}{h_{f}\left(n\right)}\] \[=\sum_{d\mid n}g(d)\frac{f\left(d\right)}{h_{f}\left(d\right)}h\left(\frac{n}{d}\right)+\sum_{d\mid n}g(d)h\left(\frac{n}{d}\right)\frac{f\left(\frac{n}{d}\right)}{h_{f}\left(\frac{n}{d}\right)}\] \[=\sum_{d\mid n}\frac{g(d)f\left(d\right)}{h_{f}\left(d\right)}h\left(\frac{n}{d}\right)+\sum_{d\mid n}g(d)\frac{h\left(\frac{n}{d}\right)f\left(\frac{n}{d}\right)}{h_{f}\left(\frac{n}{d}\right)}\] \[=\left(g^{\prime}*h\right)\left(n\right)+\left(g*h^{\prime}\right)\left(n\right)\] **Theorem 2.8** (Ennaoui–Selberg identity).: _For \(n>1\) we have:_ \[\frac{\Lambda_{f}(n)f(n)}{h_{f}(n)}+\sum_{d\mid n}\Lambda_{f}(d)\Lambda_{f}\left(\frac{n}{d}\right)=\sum_{d\mid n}\mu\left(\frac{n}{d}\right)\frac{f^{2}\left(d\right)}{h_{f}^{2}\left(d\right)} \tag{12}\] _Using the Dirichlet product, that means:_ \[\frac{f\Lambda_{f}}{h_{f}}+\Lambda_{f}*\Lambda_{f}=\mu*\left(\frac{f}{h_{f}}\right)^{2} \tag{13}\] Proof.: Equation (11) states that \(1^{\prime}=1*\Lambda_{f}\). Differentiation of this equation gives us \[1^{\prime\prime}=1^{\prime}*\Lambda_{f}+1*\Lambda_{f}^{\prime}\] Since \(1^{\prime}=1*\Lambda_{f}\) we have: \[1^{\prime\prime}=\left(1*\Lambda_{f}\right)*\Lambda_{f}+1*\Lambda_{f}^{\prime}\] Now we multiply both sides by \(\mu=1^{-1}\) to obtain: \[\mu*1^{\prime\prime}=\Lambda_{f}^{\prime}+\Lambda_{f}*\Lambda_{f}\] Since \(1^{\prime\prime}(n)=\left(\frac{f(n)}{h_{f}(n)}\right)^{2}\), the left-hand side equals \(\mu*\left(\frac{f}{h_{f}}\right)^{2}\), and this is the required identity. ### Results: completely additive functions As we know, a completely additive arithmetic function \(f\) is L-additive with \(h_{f}(n)=1(n)=1\) for every positive integer; the von Mangoldt function related to the function \(f\) is then defined by: \[\Lambda_{f}(n)=\begin{cases}f(p)&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases} \tag{14}\] Substituting \(h_{f}(n)=1\) into the results of the previous section, we find: **Corollary 2.2**.: _Let \(f\) be a completely additive arithmetic function. Then:_ \[f(n)=\sum_{d\mid n}\Lambda_{f}(d)\] _That means, in terms of the Dirichlet convolution: \(f=1*\Lambda_{f}\)_ **Corollary 2.3**.: _Let \(f\) be a completely additive arithmetic function. Then we have:_ \[\Lambda_{f}(n)=\sum_{d\mid n}\mu\left(\frac{n}{d}\right)f\left(d\right)=-\sum_{d\mid n}\mu(d)f(d)\] _That is, we have:_ \[\Lambda_{f}=\mu*f=-1*\mu f\] **Corollary 2.4**.: _Let \(g\) be an arithmetic function. If \(g\) is completely additive, then we have:_ \[\left(1*g\Lambda_{f}\right)(n)=\frac{1}{2}\sum_{p^{\alpha}\mid\mid n}\alpha(\alpha+1)f(p)g(p) \tag{15}\] **Corollary 2.5**.: _For \(g\) completely additive and \(n>1\) we have:_ \[\left(\Lambda_{f}*g\right)(n)=f(n)g(n)-\left(1*g\Lambda_{f}\right)(n) \tag{16}\] **Corollary 2.6**.: _If \(f\) and \(g\) are two completely additive arithmetic functions, then we have:_ \[\left(\Lambda_{f}*\Lambda_{g}\right)(n)=\begin{cases}(\alpha-1)f(p)g(p)&\text{if }n=p^{\alpha}\text{ for some prime }p\text{ and integer }\alpha\geq 1,\\ f(p)g(q)+f(q)g(p)&\text{if }n=p^{\alpha}q^{\beta}\text{ for some primes }p,q\text{ and integers }\alpha,\beta\geq 1\\ 0&\text{otherwise.}\end{cases} \tag{17}\] **Corollary 2.7** (Ennaoui–Selberg identity).: _For \(n>1\) we have:_ \[\Lambda_{f}(n)f(n)+\sum_{d\mid n}\Lambda_{f}(d)\Lambda_{f}\left(\frac{n}{d}
\right)=\sum_{d\mid n}\mu\left(\frac{n}{d}\right)f^{2}\left(d\right) \tag{18}\] _In terms of the Dirichlet product:_ \[f\Lambda_{f}+\Lambda_{f}*\Lambda_{f}=\mu*f^{2} \tag{19}\] **Corollary 2.8**.: _Let \(f\) be an arithmetic function. If \(f\) is completely additive, then:_ \[(\tau*\Lambda_{f})(n)=\frac{1}{2}f(n)\tau(n) \tag{20}\] **Corollary 2.9**.: _Let \(s\) be a complex number for which the series converge. Then we have:_ \[\sum_{n\geq 1}\frac{\Lambda_{f}(n)}{n^{s}}=\sum_{p}\frac{f(p)}{p^{s}-1}\] ## 3 Applications: classical completely additive arithmetic functions ### The function \(\Omega\) of the number of prime factors of n counting multiplicity We know that the function \(\Omega\), counting the prime factors of \(n\) with multiplicity, is completely additive, hence L-additive with \(h_{\Omega}(n)=1\). The von Mangoldt function of \(\Omega\) is then defined by: \[\Lambda_{\Omega}(n)=\begin{cases}1&\text{if }n=p^{k}\text{ for some prime }p\text{ and integer }k\geq 1,\\ 0&\text{otherwise.}\end{cases}\] Then by Corollaries (2.2) and (2.3) we have: **Corollary 3.1**.: _For \(n>1\) we have:_ \[\Omega(n)=\left(1*\Lambda_{\Omega}\right)(n) \tag{21}\] \[\Lambda_{\Omega}(n)=\left(\mu*\Omega\right)(n)=-\left(1*\Omega\mu\right)(n) \tag{22}\] Substituting \(f=\Omega\) into Corollary (2.8), we conclude that: **Corollary 3.2**.: _For every integer \(n>1\) we have_ \[\left(\Lambda_{\Omega}*\tau\right)(n)=\frac{1}{2}\tau(n)\Omega(n) \tag{23}\] If \(f=g=\Omega\), then by Corollary (2.4) we have this result: \[\left(1*\Omega\Lambda_{\Omega}\right)(n)=\frac{1}{2}\left(\Omega_{2}(n)+\Omega(n)\right) \tag{24}\] Substituting \(f=\Omega\) into the Ennaoui–Selberg identity (Corollary 2.7), we find: \[\Omega(n)\Lambda_{\Omega}(n)+\left(\Lambda_{\Omega}*\Lambda_{\Omega}\right)(n)=\left(\mu*\Omega^{2}\right)(n) \tag{25}\] **Theorem 3.1**.: _For every integer \(n>0\) we have:_ \[\left(\Lambda_{\Omega}*\beta_{k}\right)(n)=\Omega(n)\beta_{k}(n)-\beta_{k}(n) \tag{26}\] Proof.: For any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\ldots p_{s}^{\alpha_{s}}\) we have: \[\left(\Lambda_{\Omega}*\beta_{k}\right)(n) =\sum_{d|n}\Lambda_{\Omega}(d)\beta_{k}\left(\frac{n}{d}\right)=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}\Lambda_{\Omega}(p^{i})\beta_{k}\left(\frac{n}{p^{i}}\right)\] \[=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}\beta_{k}\left(\frac{n}{p^{i}}\right)\] \[=\sum_{p^{\alpha}||n}\left(-p^{k}+\sum_{i=1}^{i=\alpha}\beta_{k}\left(n\right)\right)\] \[=\sum_{p^{\alpha}||n}-p^{k}+\alpha\beta_{k}\left(n\right)\] \[=\beta_{k}(n)\sum_{p^{\alpha}||n}\alpha-\sum_{p^{\alpha}||n}p^{k}\] \[=\beta_{k}(n)\Omega(n)-\beta_{k}(n)\] where we used \(\beta_{k}(n/p^{i})=\beta_{k}(n)\) for \(i<\alpha\) and \(\beta_{k}(n/p^{\alpha})=\beta_{k}(n)-p^{k}\). ### The arithmetic logarithmic derivative function We can now define the von Mangoldt function associated with the arithmetic logarithmic derivative \(Ld\) by: \[\Lambda_{Ld}(n)=\begin{cases}\frac{1}{p}&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases}\] Substituting \(f=Ld\) into Corollaries (2.2) and (2.3) gives: \[Ld(n)=\left(1*\Lambda_{Ld}\right)(n) \tag{27}\] and: \[\Lambda_{Ld}(n)=\left(\mu*Ld\right)(n)=-\left(1*Ld\mu\right)(n) \tag{28}\] Now substituting \(f=Ld\) into Corollaries (2.7) and (2.8), we get: **Corollary 3.3**.: _For \(n>1\) we have_ \[\left(\Lambda_{Ld}*\tau\right)(n)=\frac{1}{2}\tau(n)Ld(n) \tag{29}\] \[Ld(n)\Lambda_{Ld}(n)+\left(\Lambda_{Ld}*\Lambda_{Ld}\right)(n)=\left(\mu*Ld^{2}\right)(n) \tag{30}\] **Theorem 3.2**.: _Let \(g\) be an arithmetic function. If \(g\) is completely additive, then we have:_
\[\left(\Lambda_{Ld}*g\right)(n)=g(n)Ld(n)-\frac{1}{2}\sum_{p^{\alpha}||n}\frac{\alpha(\alpha+1)g(p)}{p} \tag{31}\] Proof.: First, take \(f=Ld\) in Corollary (2.5) to find that: \[\left(\Lambda_{Ld}*g\right)(n)=g(n)Ld(n)-\left(1*g\Lambda_{Ld}\right)(n)\] In the same way, substituting \(f=Ld\) into Corollary (2.4) gives: \[\left(1*g\Lambda_{Ld}\right)(n)=\frac{1}{2}\sum_{p^{\alpha}||n}\alpha(\alpha+1)Ld(p)g(p)\] Substituting \(Ld(p)=\frac{1}{p}\) completes the proof. **Theorem 3.3**.: _For \(n>1\) and for \(k\in\mathbb{Z}\) we have:_ \[\left(\Lambda_{Ld}\ast\beta_{k}\right)(n)=Ld(n)\beta_{k}(n)-\beta_{k-1}(n) \tag{32}\] Proof.: For any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\ldots p_{s}^{\alpha_{s}}\) we have: \[\left(\Lambda_{Ld}\ast\beta_{k}\right)(n) =\sum_{d|n}\Lambda_{Ld}(d)\beta_{k}\left(\frac{n}{d}\right)=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}\Lambda_{Ld}(p^{i})\beta_{k}\left(\frac{n}{p^{i}}\right)\] \[=\sum_{p^{\alpha}||n}\sum_{i=1}^{i=\alpha}\frac{\beta_{k}\left(\frac{n}{p^{i}}\right)}{p}\] \[=\sum_{p^{\alpha}||n}\left(-\frac{p^{k}}{p}+\frac{1}{p}\sum_{i=1}^{i=\alpha}\beta_{k}\left(n\right)\right)\] \[=\sum_{p^{\alpha}||n}-p^{k-1}+\frac{\alpha}{p}\beta_{k}\left(n\right)\] \[=\beta_{k}(n)\sum_{p^{\alpha}||n}\frac{\alpha}{p}-\sum_{p^{\alpha}||n}p^{k-1}\] \[=\beta_{k}(n)Ld(n)-\beta_{k-1}(n)\] **Corollary 3.4**.: \[\left(\Lambda_{Ld}\ast\Omega\right)(n)=\Omega(n)Ld(n)-\frac{1}{2}Ld(n)-\frac{1}{2}\sum_{p^{\alpha}||n}\frac{\alpha^{2}}{p}\] (33) ### The function \(A\) of the sum of all prime factors in the prime factorization The function \(A\) (OEIS A001414), which gives the sum of the prime factors (with repetition) of a number \(n\), is one of the arithmetic functions studied by K. Alladi and P. Erdős (see, e.g., [7]). In this section we study this function and give some results about its Dirichlet product with several classical arithmetic functions.
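Before specializing to \(A\), the convolution identities just established can be spot-checked numerically. The sketch below (plain Python, exact rationals) tests Theorems 3.1 and 3.3 over a small range of \(n\) and \(k\); it is an illustrative addition rather than part of the original argument.

```python
from fractions import Fraction

def factorize(n):
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def beta(n, k):
    """beta_k(n): sum of the k-th powers of the distinct prime factors of n."""
    return sum(Fraction(p) ** k for p, _ in factorize(n))

def Lambda(n, weight):
    """Generalized von Mangoldt function: weight(p) on prime powers, 0 elsewhere."""
    fac = factorize(n)
    return weight(fac[0][0]) if len(fac) == 1 else Fraction(0)

def conv(F, G, n):
    """Dirichlet convolution (F * G)(n)."""
    return sum(F(d) * G(n // d) for d in range(1, n + 1) if n % d == 0)

Omega = lambda n: sum(a for _, a in factorize(n))
Ld = lambda n: sum(Fraction(a, p) for p, a in factorize(n))

for n in range(2, 200):
    for k in (-1, 0, 1, 2):
        # Theorem 3.1: (Lambda_Omega * beta_k)(n) = Omega(n) beta_k(n) - beta_k(n).
        lhs = conv(lambda d: Lambda(d, lambda p: Fraction(1)),
                   lambda d: beta(d, k), n)
        assert lhs == Omega(n) * beta(n, k) - beta(n, k)
        # Theorem 3.3: (Lambda_Ld * beta_k)(n) = Ld(n) beta_k(n) - beta_{k-1}(n).
        lhs = conv(lambda d: Lambda(d, lambda p: Fraction(1, p)),
                   lambda d: beta(d, k), n)
        assert lhs == Ld(n) * beta(n, k) - beta(n, k - 1)
print("Theorems 3.1 and 3.3 verified for 2 <= n < 200 and k in {-1, 0, 1, 2}")
```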
Clearly, the function \(A\) is completely additive due to the uniqueness of the prime factorization of every integer \(n\). The von Mangoldt function of \(A\) is then defined by: \[\Lambda_{A}(n)=\begin{cases}p&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases}\] Then by Corollaries (2.2) and (2.3) we have: **Corollary 3.5**.: _For \(n>1\) we have:_ \[A(n)=\left(1\ast\Lambda_{A}\right)(n) \tag{34}\] \[\Lambda_{A}(n)=\left(\mu\ast A\right)(n)=-\left(1\ast A\mu\right)(n) \tag{35}\] Substituting \(f=A\) into Corollary (2.8) gives: **Corollary 3.6**.: _For every integer \(n>1\) we have_ \[\left(\Lambda_{A}\ast\tau\right)(n)=\frac{1}{2}\tau(n)A(n) \tag{36}\] Substituting \(f=A\) into the Ennaoui–Selberg identity (Corollary 2.7), we find: \[A(n)\Lambda_{A}(n)+\left(\Lambda_{A}\ast\Lambda_{A}\right)(n)=\left(\mu\ast A^{2}\right)(n) \tag{37}\] Now, using Theorem (3.2) with \(g=A\), we have: \[\left(\Lambda_{Ld}\ast A\right)(n)=A(n)Ld(n)-\frac{1}{2}\left(\Omega_{2}(n)+\Omega(n)\right) \tag{38}\] Substituting \(f=Ld\) and \(g=A\) into Corollary (2.4), we also have: \[\left(1\ast A\Lambda_{Ld}\right)(n)=\left(1\ast Ld\Lambda_{A}\right)(n)=\frac{1}{2}\left(\Omega_{2}(n)+\Omega(n)\right) \tag{39}\] **Theorem 3.4**.: _For every integer \(n>0\) we have:_ \[\left(\Lambda_{A}\ast\beta_{k}\right)(n)=A(n)\beta_{k}(n)-\beta_{k+1}(n) \tag{40}\] Proof.: For any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\ldots p_{s}^{\alpha_{s}}\) we have: \[\left(\Lambda_{A}\ast\beta_{k}\right)(n) =\sum_{d\mid n}\Lambda_{A}(d)\beta_{k}\left(\frac{n}{d}\right)=\sum_{p^{\alpha}\mid\mid n}\sum_{i=1}^{i=\alpha}\Lambda_{A}(p^{i})\beta_{k}\left(\frac{n}{p^{i}}\right)\] \[=\sum_{p^{\alpha}\mid\mid n}\sum_{i=1}^{i=\alpha}p\beta_{k}\left(\frac{n}{p^{i}}\right)\] \[=\sum_{p^{\alpha}\mid\mid n}\left(-p^{k+1}+p\sum_{i=1}^{i=\alpha}\beta_{k}\left(n\right)\right)\] \[=\sum_{p^{\alpha}\mid\mid n}-p^{k+1}+\alpha p\beta_{k}\left(n\right)\] \[=\beta_{k}(n)\sum_{p^{\alpha}\mid\mid n}\alpha p-\sum_{p^{\alpha}\mid\mid n}p^{k+1}\] \[=\beta_{k}(n)A(n)-\beta_{k+1}(n)\] The definition (6) may be considered a generalization of the von Mangoldt function. This terminology arises from the observation that the logarithm is L-additive with \(h_{\log}(n)=1\); in this case (6) recovers the usual von Mangoldt function, denoted by: \[\Lambda(n)=\Lambda_{\log}(n)=\begin{cases}\frac{\log(p)}{h_{\log}(p)}=\log(p)&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases}\] By Corollaries (2.2) and (2.3), the classical properties of the von Mangoldt function hold, and we have \[\log=1\ast\Lambda\ \ \ \text{and}\ \ \ \Lambda=\mu\ast\log=-1\ast\mu\log\] ## 4 The generalized von Mangoldt function in terms of Dirichlet series Above we have seen many fundamental properties of the generalized von Mangoldt function. We complete this article by changing our point of view slightly and demonstrating that the generalized von Mangoldt function can also be studied in terms of Dirichlet series. The Dirichlet product defined in (4) occurs naturally in the study of Dirichlet series such as the Riemann zeta function.
It describes the multiplication of two Dirichlet series in terms of their coefficients: \[\bigg{(}\sum_{n\geq 1}\frac{\big{(}f\ast g\big{)}(n)}{n^{s}}\bigg{)}=\bigg{(}\sum_{n\geq 1}\frac{f(n)}{n^{s}}\bigg{)}\bigg{(}\sum_{n\geq 1}\frac{g(n)}{n^{s}}\bigg{)} \tag{41}\] where the Riemann zeta function is defined by: \[\zeta(s)=\sum_{n\geq 1}\frac{1}{n^{s}}\] These functions are widely studied in the literature (see, e.g., [2, 3, 4]). For later convenience we introduce the prime zeta function, described by Fröberg (1968) (see, e.g., [6]) and denoted by \(P(s)\). We define it by: \[P(s)=\sum_{p}\frac{1}{p^{s}}\] In the rest of this section we will use the notation: \[P_{f}(s)=\sum_{p}\frac{f(p)}{p^{s}-1}\] **Theorem 4.1**.: _For \(s\in\mathbb{C}\) such that \(Re(s)>1\) we have:_ \[\sum_{n\geq 1}\frac{\tau(n)\Omega(n)}{n^{s}}=2\zeta^{2}(s)P_{\Omega}(s)\] Proof.: Let \(s\in\mathbb{C}\) be such that \(Re(s)>1\). By Corollary (3.2) we have: \[\left(\Lambda_{\Omega}*\tau\right)(n)=\frac{1}{2}\tau(n)\Omega(n)\] Then by formula (41) we have: \[\sum_{n\geq 1}\frac{\tau(n)\Omega(n)}{n^{s}}=2\sum_{n\geq 1}\frac{\left(\Lambda_{\Omega}*\tau\right)(n)}{n^{s}}=2\left(\sum_{n\geq 1}\frac{\tau(n)}{n^{s}}\right)\left(\sum_{n\geq 1}\frac{\Lambda_{\Omega}(n)}{n^{s}}\right)\] Since (see, e.g., [12]): \[\sum_{n\geq 1}\frac{\tau(n)}{n^{s}}=\zeta^{2}(s)\] and by Corollary (2.9) we have: \[P_{\Omega}(s)=\sum_{n\geq 1}\frac{\Lambda_{\Omega}(n)}{n^{s}}\] we conclude that: \[\sum_{n\geq 1}\frac{\tau(n)\Omega(n)}{n^{s}}=2\zeta^{2}(s)P_{\Omega}(s)\] **Theorem 4.2**.: _For \(s\in\mathbb{C}\) such that \(Re(s)>\max(1,k+1)\) we have:_ \[\sum_{n\geq 1}\frac{\Omega(n)\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)\bigg{(}P_{\Omega}(s)+1\bigg{)}\] Proof.: Let \(s\in\mathbb{C}\) be such that \(Re(s)>\max(1,k+1)\). By Theorem (3.1) we have: \[\left(\Lambda_{\Omega}*\beta_{k}\right)(n)=\Omega(n)\beta_{k}(n)-\beta_{k}(n)\] Then by formula (41) we have: \[\sum_{n\geq 1}\frac{\Omega(n)\beta_{k}(n)}{n^{s}}=\sum_{n\geq 1}\frac{\left(\Lambda_{\Omega}*\beta_{k}\right)(n)}{n^{s}}+\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}=\left(\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}\right)\left(\sum_{n\geq 1}\frac{\Lambda_{\Omega}(n)}{n^{s}}\right)+\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}\] Since (see, e.g., [12]): \[\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)\] and by Corollary (2.9) we have: \[P_{\Omega}(s)=\sum_{n\geq 1}\frac{\Lambda_{\Omega}(n)}{n^{s}}\] we conclude that: \[\sum_{n\geq 1}\frac{\Omega(n)\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)P_{\Omega}(s)+\zeta(s)P(s-k)\] which completes the proof. **Theorem 4.3**.: _For \(s\in\mathbb{C}\) such that \(Re(s)>\max(1,k+1)\) we have:_ \[\sum_{n\geq 1}\frac{Ld(n)\beta_{k}(n)}{n^{s}}=\zeta(s)\bigg{(}P(s-k)P_{Ld}(s)+P(s-k+1)\bigg{)}\] Proof.: Let \(s\in\mathbb{C}\) be such that \(Re(s)>\max(1,k+1)\). By Theorem (3.3) we have: \[\left(\Lambda_{Ld}*\beta_{k}\right)(n)=Ld(n)\beta_{k}(n)-\beta_{k-1}(n)\] Then by formula (41) we have: \[\sum_{n\geq 1}\frac{Ld(n)\beta_{k}(n)}{n^{s}}=\sum_{n\geq 1}\frac{\left(\Lambda_{Ld}*\beta_{k}\right)(n)}{n^{s}}+\sum_{n\geq 1}\frac{\beta_{k-1}(n)}{n^{s}}=\left(\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}\right)\left(\sum_{n\geq 1}\frac{\Lambda_{Ld}(n)}{n^{s}}\right)+\sum_{n\geq 1}\frac{\beta_{k-1}(n)}{n^{s}}\] Since (see, e.g., [12]): \[\sum_{n\geq
1}\frac{\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)\] and by Corollary (2.9) we have: \[P_{Ld}(s)=\sum_{n\geq 1}\frac{\Lambda_{Ld}(n)}{n^{s}}\] we find that: \[\sum_{n\geq 1}\frac{Ld(n)\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)P_{Ld}(s)+\zeta(s)P(s-k+1)\] which completes the proof. **Theorem 4.4**.: _For \(s\in\mathbb{C}\) such that \(Re(s)>\max(1,k+2)\) we have:_ \[\sum_{n\geq 1}\frac{A(n)\beta_{k}(n)}{n^{s}}=\zeta(s)\bigg{(}P(s-k)P_{A}(s)+P(s-k-1)\bigg{)}\] Proof.: Let \(s\in\mathbb{C}\) be such that \(Re(s)>\max(1,k+2)\). By Theorem (3.4) we have: \[\left(\Lambda_{A}*\beta_{k}\right)(n)=A(n)\beta_{k}(n)-\beta_{k+1}(n)\] Then by formula (41) we have: \[\sum_{n\geq 1}\frac{A(n)\beta_{k}(n)}{n^{s}}=\sum_{n\geq 1}\frac{\left(\Lambda_{A}*\beta_{k}\right)(n)}{n^{s}}+\sum_{n\geq 1}\frac{\beta_{k+1}(n)}{n^{s}}=\left(\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}\right)\left(\sum_{n\geq 1}\frac{\Lambda_{A}(n)}{n^{s}}\right)+\sum_{n\geq 1}\frac{\beta_{k+1}(n)}{n^{s}}\] Since (see, e.g., [12]): \[\sum_{n\geq 1}\frac{\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)\] and by Corollary (2.9) we have: \[P_{A}(s)=\sum_{n\geq 1}\frac{\Lambda_{A}(n)}{n^{s}}\] we find that: \[\sum_{n\geq 1}\frac{A(n)\beta_{k}(n)}{n^{s}}=\zeta(s)P(s-k)P_{A}(s)+\zeta(s)P(s-k-1)\] which completes the proof. ## 5 Conclusion The von Mangoldt function \(\Lambda_{f}\) related to an L-additive function \(f\) offers another way to solve many problems concerning the Dirichlet series of arithmetic functions.
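As a closing numerical illustration of Corollary 2.9, the following sketch compares truncations of both sides for \(f=\Omega\) at \(s=2\); the two partial sums agree up to a tail that shrinks as the cutoff grows, both approaching \(\sum_{p}1/(p^{2}-1)\approx 0.5517\). Plain Python, added here for illustration only.

```python
def factorize(n):
    fac, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            fac.append((p, a))
        p += 1
    if n > 1:
        fac.append((n, 1))
    return fac

def lambda_omega(n):
    """Lambda_f for f = Omega (so f(p) = 1): equal to 1 on prime powers, 0 otherwise."""
    return 1.0 if n > 1 and len(factorize(n)) == 1 else 0.0

s, N = 2.0, 20000
primes = [p for p in range(2, N) if factorize(p) == [(p, 1)]]
lhs = sum(lambda_omega(n) / n ** s for n in range(2, N))   # sum Lambda_Omega(n)/n^s
rhs = sum(1.0 / (p ** s - 1) for p in primes)              # sum f(p)/(p^s - 1)
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")                 # both ~ 0.5517
```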
2303.07023
Testing structural balance theories in heterogeneous signed networks
The abundance of data about social relationships allows human behavior to be analyzed as any other natural phenomenon. Here we focus on balance theory, stating that social actors tend to avoid establishing cycles with an odd number of negative links. This statement, however, can be supported only after a comparison with a benchmark. Since the existing ones disregard actors' heterogeneity, we extend Exponential Random Graphs to signed networks with both global and local constraints and employ them to assess the significance of empirical unbalanced patterns. We find that the nature of balance crucially depends on the null model: while homogeneous benchmarks favor the weak balance theory, according to which only triangles with one negative link should be under-represented, heterogeneous benchmarks favor the strong balance theory, according to which also triangles with all negative links should be under-represented. Biological networks, instead, display strong frustration under any benchmark, confirming that structural balance inherently characterizes social networks.
Anna Gallo, Diego Garlaschelli, Renaud Lambiotte, Fabio Saracco, Tiziano Squartini
2023-03-13T11:41:07Z
http://arxiv.org/abs/2303.07023v4
# Strong, weak or no balance? Testing structural hypotheses against real networks ###### Abstract The abundance of data about social, economic and political relationships has opened an era in which social theories can be tested against empirical evidence, allowing human behaviour to be analyzed just as any other natural phenomenon. The present contribution focuses on balance theory, stating that social agents tend to avoid the formation of 'unbalanced', or 'frustrated', cycles, i.e. cycles with an odd number of negative links. Such a statement can be made statistically rigorous only after a comparison with a null model. Since the existing ones cannot account for the heterogeneity of individual actors, we, first, extend the Exponential Random Graphs framework to binary, undirected, signed networks with local constraints and, then, employ both homogeneous and heterogeneous benchmarks to compare the empirical abundance of short cycles with its expected value, on several, real-world systems. What emerges is that the level of balance in real-world networks crucially depends on (at least) three factors, i.e. the measure adopted to quantify it, the nature of the data, the null model employed for the analysis. As an example, the study of triangles reveals that homogeneous null models with global constraints tend to favour the weak version of balance theory, according to which only the triangle with one negative link should be under-represented in real, social and political networks; on the other hand, heterogeneous null models with local constraints tend to favour the strong version of balance theory, according to which also the triangle with all negative links should be under-represented in real, social networks. Biological networks, instead, are found to be significantly frustrated under any benchmark considered here. pacs: 89.75.Fb; 89.65.-s; 02.50.Tt ## I. Introduction Network theory has emerged as a powerful framework to model many different kinds of real-world systems, by representing their units as _nodes_ and the interactions between them as _links_. Out of the many types of edges that have been considered so far, the _signed_ one has recently seen its popularity revived [1; 2; 3; 4]: the importance of the signed character of links lies in the possibility it offers to model positive as well as negative social interactions. From an historical perspective, the interest in the study of signed networks is rooted into the so-called _balance theory_ (BT), firstly proposed by Heider [5] and further developed by Cartwright and Harary. The choice, pursued by the latter, of adopting signed graphs to model it led to the birth of the so-called _structural balance theory_[6] which has not only found application in the study of human relationships but also in that of biological, ecological and economic systems [7; 8; 9; 10]. BT deals with the concept of _balance_: a complete, signed graph is said to be balanced if all its triads have an even number of negative edges, i.e. either zero (in this case, the three edges are all positive) or two (see Fig. 1). Informally speaking, the BT formalizes the principles 'the friend of my friend is my friend' and 'the enemy of my enemy is my friend'. Remarkably, the definition above leads to the so-called _structure theorem_, stating that a complete, signed graph is balanced if and only if its set of nodes can be partitioned into two, disjoint subsets whose intra-modular links are all positive and whose inter-modular links are all negative.
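The structure theorem also suggests a simple computational test, sketched below under the assumption of a connected graph: tentatively assign each node to one of two groups while traversing the edges (same group across a positive link, opposite groups across a negative one) and report imbalance at the first contradiction. The sketch is an illustrative aid written for this text, not code from the paper.

```python
from collections import deque

def is_balanced(n_nodes, signed_edges):
    """Structure-theorem test for a connected signed graph.

    signed_edges: iterable of (i, j, s) with s in {+1, -1}.
    Returns (True, labels) with labels in {+1, -1} splitting the nodes into
    two groups, or (False, None) if some cycle carries an odd number of
    negative edges."""
    adj = [[] for _ in range(n_nodes)]
    for i, j, s in signed_edges:
        adj[i].append((j, s))
        adj[j].append((i, s))
    labels = [0] * n_nodes          # 0 = not yet visited
    labels[0] = 1
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j, s in adj[i]:
            expected = labels[i] * s    # same side if s = +1, opposite if s = -1
            if labels[j] == 0:
                labels[j] = expected
                queue.append(j)
            elif labels[j] != expected:
                return False, None      # frustrated cycle found
    return True, labels

# A balanced triangle (+,-,-) versus a frustrated one (+,+,-):
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (0, 2, -1)]))  # (True, [1, 1, -1])
print(is_balanced(3, [(0, 1, +1), (1, 2, +1), (0, 2, -1)]))  # (False, None)
```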
In [6], Cartwright and Harary extended the definition of balance to incomplete graphs, by including cycles whose length is larger than three: a (connected) network is said to be balanced when _all_ cycles are positive, i.e. contain an even number of negative edges. Taken altogether, the results above constitute the so-called _strong balance theory_ (SBT). The framework of SBT has been extended by Davis in [11] where the concept of _\(k\)-balanced_ networks is introduced: according to it, signed graphs whose set of nodes can be partitioned into \(k\) disjoint subsets with positive, intra-modular links and negative, inter-modular links are balanced. This generalized definition of balance leads to the formulation of the _weak balance theory_ (WBT); according to it, the triad with all negative edges is balanced, since each node constitutes a group on its own and the links between these single-node clusters are negative (see Fig. 1). Several metrics to decide if signed networks are strongly or weakly balanced have been proposed. In [12; 13], the degree of balance of a network is quantified by the number of edges that need to be removed, or whose sign must be reversed, in order to obtain a network each cycle of which has an even number of negative links; in [14; 15; 16; 17], it is quantified by the number of balanced, closed walks (i.e. closed walks with an even number of negative links) that are present in the network; in [18] an incomplete, signed network is deemed balanced if it is possible to fill in all its missing links to obtain a complete, balanced graph according to the SBT; in [19], the authors define three, different levels of balance: at the micro-scale, involving triads, at the meso-scale, involving larger subgraphs, and at the macro-scale, involving the entire network. Other approaches have been adopted in [20; 21; 22], where the problem is studied from a spectral perspective, and in [23], where the problem is studied by employing concepts borrowed from statistical physics (the authors assign each signed triad an energy, claiming that the networks at the 'lowest temperature' are those whose triangles have no negative edges). Other authors, instead, have focused on the complementary notion of balance, i.e. that of _frustration_, trying to quantify the extent to which signed networks are far from balance [19; 24; 25; 26]: in [24], the authors define the so-called _balanced decomposition number_, i.e. the (minimum) number of balanced groups into which nodes can be partitioned, and evaluate it by counting the (minimum) number of edges whose removal increases a network's balance; in [27], instead, the same index is evaluated by adopting the so-called _switching signs method_ introduced in [28] and prescribing to count the (minimum) number of signs that must be reversed to balance a network; in [20], the degree of (im)balance of a network is proxied by the magnitude of the smallest eigenvalue of the Laplacian matrix. Empirical observations seem to point out that real-world, signed networks tend to be \(k\)-balanced, i.e. avoid establishing the patterns that are deemed frustrated by the WBT: as an example, in [22], the authors study a pair of online, social networks induced by the relationships between users, showing that their balance increases as the number of clusters into which nodes are partitioned is larger than two; in [17], the authors notice that the weak formulation of balance theory allows better sign-prediction performance to be achieved.
Any measure of balance (or frustration) is not meaningful by itself: in order to perform a statistically sound analysis, its empirical value must be compared with the outcome of a properly defined benchmark model, i.e. a reference model preserving some of the network properties while randomizing everything else. By far the most common null model for signed graphs is the one keeping the positions of edges fixed while shuffling their signs [2; 17]; in [29], the authors implement the canonical variant of the aforementioned exercise, assigning signs by means of a Bernoulli distribution; in [15], the authors define a null model for randomizing both the presence and the sign of links. More complex null models have been proposed as well: in [10], the authors implement the signed version of the Local Rewiring Algorithm (at each step, two edges with the same sign are selected and rewired, to preserve the total number of signed links that are incident to each node); its canonical variant is implemented in [30], where the Balanced Signed Chung-Lu model (BSCL) is proposed - although it also constrains the average number of signed triangles each edge is part of. Finally, in [31; 32; 33; 34], the definition of models constraining the structural properties of signed networks has been explored within the framework of the Exponential Random Graphs (ERG). Our contribution focuses on binary, undirected, signed networks, extending the ERG framework to include null models suitable for the analysis of signed graphs with 'plus one', 'minus one' and 'zero' edges as well as those with just 'plus one' and 'minus one' edges. Afterwards, we employ them to inspect the statistical properties of the most commonly studied patterns for this kind of analysis, i.e. triangles - the reasons behind this choice being, at least, two: 1) considering longer cycles is computationally expensive; 2) the contribution of short(er) cycles to frustration is argued to be more relevant than that of long(er) ones. ## II. Formalism and Basic Quantities A _signed_ graph is a graph where each edge can be _positive_, _negative_ or _missing_. In what follows, however, we will focus on binary, undirected, signed networks: hence, each edge will be 'plus one', 'minus one' or 'zero'. More formally, for any two nodes \(i\) and \(j\), the corresponding entry of the adjacency matrix \(\mathbf{A}\) will be assumed to read \(a_{ij}=-1,0,+1\) (with \(a_{ij}=a_{ji}\), \(\forall\,i<j\)). Since the total number of node-pairs is \(\frac{N(N-1)}{2}=\binom{N}{2}\) and any node-pair can be positively connected, negatively connected or disconnected, the total number of possible, binary, undirected, signed configurations (or, equivalently, the cardinality of the ensemble of binary, undirected, signed graphs) is \(|\mathbb{A}|=3^{\binom{N}{2}}\). To simplify the calculations, let us define the three functions of a generic entry, reading \[a_{ij}^{-}=[a_{ij}=-1],\quad a_{ij}^{0}=[a_{ij}=0],\quad a_{ij}^{+}=[a_{ij}=+1] \tag{1}\] where we have employed the Iverson's brackets notation (see Appendix A for more details): these new variables are mutually exclusive, i.e. \((a_{ij}^{-},a_{ij}^{0},a_{ij}^{+})\in\{(1,0,0),(0,1,0),(0,0,1)\}\), sum to 1, i.e. \(a_{ij}^{-}+a_{ij}^{0}+a_{ij}^{+}=1\), and induce the definition of the matrices \(\mathbf{A}^{+}\) and \(\mathbf{A}^{-}\) (see Appendix A for more details).
The number of positive and negative links can be, respectively, defined as \[L^{+}=\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ (j>i)\end{subarray}}^{N}a_{ij}^{+}\quad\text{and}\quad L^{-}=\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ (j>i)\end{subarray}}^{N}a_{ij}^{-}; \tag{2}\] analogously, the positive and negative degree of node \(i\) can be, respectively, defined as \[k_{i}^{+}=\sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}a_{ij}^{+}\quad\text{and}\quad k_{i}^{-}=\sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}a_{ij}^{-} \tag{3}\] (naturally, \(2L^{+}=\sum_{i=1}^{N}k_{i}^{+}\) and \(2L^{-}=\sum_{i=1}^{N}k_{i}^{-}\)). The advantage of adopting a definition based upon Iverson's brackets becomes evident when looking at the definitions above: each quantity is, now, computed on the corresponding, signed matrix whose entries are, by definition, positive; as a consequence, all quantities of interest are positive as well. Let us, now, focus on the signed triads (see Fig. 2), i.e. the quantities representing the starting point to verify the (strong and weak versions of the) balance theory. According to the BT, a social system tends to arrange itself into a configuration satisfying the principles 'the friend of my friend is my friend', 'the friend of my enemy is my enemy', 'the enemy of my friend is my enemy', 'the enemy of my enemy is my friend' [5]: the SBT formalizes them by stating that the overall network balance increases with the number of triangles having an even number of negative edges (said to be balanced or 'positive' since the product of the edge signs is a 'plus') and decreases with the number of triangles having an odd number of negative edges (said to be unbalanced or 'negative' since the product of the edge signs is a 'minus'); the WBT, on the other hand, considers the triangle with all negative edges balanced as well. Upon considering that the row-by-column product of an arbitrary number of \(\mathbf{A}^{+}\) and \(\mathbf{A}^{-}\) matrices allows us to count the abundance of closed walks whose signature matches the sequence of signs of the matrices - for example, the expression \([\mathbf{A}^{+}\mathbf{A}^{-}\mathbf{A}^{+}]_{ii}\) counts the number of closed walks, whose length is 3 and whose signature is \((+-+)\), starting from \(i\) and ending in \(i\); the expression \([\mathbf{A}^{+}\mathbf{A}^{+}\mathbf{A}^{-}\mathbf{A}^{+}]_{ii}=[(\mathbf{A}^{+})^{2}\mathbf{A}^{-}\mathbf{A}^{+}]_{ii}\) counts the number of closed walks, whose length is 4 and whose signature is \((++-+)\), starting from \(i\) and ending in \(i\) - the degree of balance of a network can be quantified upon calculating the abundance of (non-degenerate) triangles with an even number of negative links, i.e. \[T^{(+++)} =\frac{1}{3}\sum_{i=1}^{N}T_{i}^{(+++)}=\frac{\text{Tr}[(\mathbf{A}^{+})^{3}]}{6}, \tag{4}\] \[T^{(+--)} =\frac{1}{2}\sum_{i=1}^{N}T_{i}^{(+--)}=\frac{\text{Tr}[\mathbf{A}^{+}(\mathbf{A}^{-})^{2}]}{2}; \tag{5}\] similarly, the degree of frustration of a network can be quantified upon calculating the abundance of (non-degenerate) triangles with an odd number of negative links, i.e. \[T^{(---)} =\frac{1}{3}\sum_{i=1}^{N}T_{i}^{(---)}=\frac{\text{Tr}[(\mathbf{A}^{-})^{3}]}{6}, \tag{6}\] \[T^{(++-)} =\frac{1}{2}\sum_{i=1}^{N}T_{i}^{(++-)}=\frac{\text{Tr}[(\mathbf{A}^{+})^{2}\mathbf{A}^{-}]}{2}; \tag{7}\] Figure 2: Signed triangles headed at node \(i\).
Solid lines denote positive edges while dashed lines denote negative edges: according to the strong balance theory, triangles (a), (d), (f) and (g) are balanced while triangles (b), (c), (e) and (h) are unbalanced; according to the weak balance theory, triangles (a), (d), (f), (g) and (h) are balanced while triangles (b), (c) and (e) are unbalanced. (see Appendix B for more details). The expressions provided above allow us to define several indices for quantifying the degree of balance of a network. According to the SBT, the total number of balanced patterns reads \(\#_{\mathbf{\Delta}}^{sb}=T^{(+++)}+T^{(+--)}\); equivalently, the total number of unbalanced patterns, according to the same variant of the BT, reads \(\#_{\mathbf{\Delta}}^{su}=T^{(---)}+T^{(++-)}\). Hence, the measure of 'strong balance' reading \[\text{SBI}=\frac{\#_{\mathbf{\Delta}}^{sb}}{\#_{\mathbf{\Delta}}^{sb}+\#_{\mathbf{\Delta}}^{su}} \tag{8}\] remains naturally defined, as well as the measure of 'strong frustration' reading \(\text{SFI}=1-\text{SBI}\). On the other hand, the total number of balanced patterns, according to the WBT, reads \(\#_{\mathbf{\Delta}}^{wb}=T^{(+++)}+T^{(+--)}+T^{(---)}\) while the total number of unbalanced patterns, according to the same variant of the BT, reads \(\#_{\mathbf{\Delta}}^{wu}=T^{(++-)}\). Hence, the measure of 'weak balance' reading \[\text{WBI}=\frac{\#_{\mathbf{\Delta}}^{wb}}{\#_{\mathbf{\Delta}}^{wb}+\#_{\mathbf{\Delta}}^{wu}} \tag{9}\] remains naturally defined, while the corresponding measure of 'weak frustration' reads \(\text{WFI}=1-\text{WBI}\). The indices defined above quantify 'unbalance' by measuring the abundance of locally frustrated patterns; one can define other indices of frustration, accounting for larger-scale structures: an example is provided by the 'higher-order' frustration index reading \[\text{HOFI}=\frac{L_{\circ}^{+}+L_{\bullet}^{-}}{L} \tag{10}\] and measuring the percentage of 'misplaced' links, i.e. the total number of positive links between communities, \(L_{\circ}^{+}\), plus the total number of negative links within communities, \(L_{\bullet}^{-}\), divided by the total number of links, \(L\) (the formalism is adapted from the one proposed in [35]). ## III. Randomization of binary, undirected, signed graphs Let us, now, generalize the ERG framework to accommodate models for studying binary, undirected, signed graphs: we will follow the analytical approach introduced in [36], and further developed in [37], aimed at identifying the functional form of a probability distribution induced by a set of empirical properties to be preserved on average. The aforementioned approach prescribes to carry out a constrained maximization of Shannon entropy \[S=-\sum_{\mathbf{A}\in\mathbb{A}}P(\mathbf{A})\ln P(\mathbf{A}) \tag{11}\] where the sum runs over the ensemble of \(|\mathbb{A}|=3^{\binom{N}{2}}\) binary, undirected, signed graphs, a generic entry of which can assume the values \(-1\), \(0\), \(+1\). In what follows, we will consider two classes of models, i.e. those keeping a network topology fixed and those letting the topology vary along with the edge signs: models belonging to the first class are better suited for studying systems where agents cannot choose 'with whom' to interact but only 'how', whereas models belonging to the second class are better suited for studying systems where agents can choose their neighbours as well [17].
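As a computational aside, the triangle abundances (4)-(7) and the indices (8)-(9) introduced above are straightforward to evaluate numerically; the sketch below does so with numpy, starting from a signed adjacency matrix with entries in \(\{-1,0,+1\}\). It is an illustration of the definitions written for this text (no guard against triangle-free denominators is included), not the authors' code.

```python
import numpy as np

def balance_indices(A):
    """Triangle abundances (4)-(7) and the indices SBI (8) and WBI (9).

    A: symmetric numpy array with entries in {-1, 0, +1} and zero diagonal."""
    Ap = (A == 1).astype(float)     # the matrix A^+
    Am = (A == -1).astype(float)    # the matrix A^-
    T_ppp = np.trace(Ap @ Ap @ Ap) / 6.0
    T_pmm = np.trace(Ap @ Am @ Am) / 2.0
    T_mmm = np.trace(Am @ Am @ Am) / 6.0
    T_ppm = np.trace(Ap @ Ap @ Am) / 2.0
    sb, su = T_ppp + T_pmm, T_mmm + T_ppm        # strongly balanced / unbalanced
    sbi = sb / (sb + su)
    wbi = (sb + T_mmm) / (sb + T_mmm + T_ppm)    # the WBT also accepts (-,-,-)
    return {"T+++": T_ppp, "T++-": T_ppm, "T+--": T_pmm, "T---": T_mmm,
            "SBI": sbi, "WBI": wbi}

# A single all-negative triangle: strongly frustrated (SBI = 0) yet weakly
# balanced (WBI = 1), as prescribed by the two variants of the theory.
A = np.array([[ 0, -1, -1],
              [-1,  0, -1],
              [-1, -1,  0]])
print(balance_indices(A))
```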
Still, comparing the two types of recipes on the same configuration is instructive, as it allows the role played by signed constraints to be disentangled from the one played by non-signed constraints in shaping its structure. ### Signed Random Graph Model The Signed Random Graph Model (SRGM) is induced by the Hamiltonian \[H(\mathbf{A})=\alpha L^{+}(\mathbf{A})+\beta L^{-}(\mathbf{A}) \tag{12}\] i.e. by the two, global constraints \(L^{+}(\mathbf{A})\) and \(L^{-}(\mathbf{A})\). According to the SRGM, each entry of a signed network is a random variable whose behaviour is described by the following finite scheme \[a_{ij}\sim\begin{pmatrix}-1&0&+1\\ p^{-}&p^{0}&p^{+}\end{pmatrix}\quad\forall\,i<j \tag{13}\] with \[p^{-}\equiv\frac{e^{-\beta}}{1+e^{-\alpha}+e^{-\beta}}\equiv\frac{y}{1+x+y}, \tag{14}\] \[p^{+}\equiv\frac{e^{-\alpha}}{1+e^{-\alpha}+e^{-\beta}}\equiv\frac{x}{1+x+y} \tag{15}\] and \(p^{0}\equiv 1-p^{-}-p^{+}\). In other words, \(a_{ij}\) obeys a generalized Bernoulli distribution whose probability coefficients are determined by the (Lagrange multipliers of the) imposed constraints (see Appendix C for more details): each positive link appears with probability \(p^{+}\), each negative link appears with probability \(p^{-}\) and each missing link has a probability \(p^{0}\). In order to employ the SRGM for studying real-world networks, the parameters that define it need to be properly tuned: more specifically, one needs to ensure that \(\langle L^{+}\rangle_{\text{SRGM}}=L^{+}(\mathbf{A}^{*})\) and \(\langle L^{-}\rangle_{\text{SRGM}}=L^{-}(\mathbf{A}^{*})\), with the symbol \(\mathbf{A}^{*}\) indicating the specific, empirical network under analysis. To this aim, one can maximize the likelihood function \(\mathcal{L}_{\text{SRGM}}(x,y)\equiv\ln P_{\text{SRGM}}(\mathbf{A}^{*}|x,y)\) with respect to the unknown parameter(s) that define it [38]. Such a recipe leads us to find \[p^{+}=\frac{2L^{+}(\mathbf{A}^{*})}{N(N-1)},\quad p^{-}=\frac{2L^{-}(\mathbf{A}^{*})}{N(N-1)} \tag{16}\] and \(p^{0}\equiv 1-p^{-}-p^{+}\). ### Signed Random Graph Model with fixed topology The way the SRGM has been defined allows a network's topological structure to vary along with the edge signs. A variant of the SRGM that keeps the topology of the network under analysis fixed while (solely) randomizing the edge signs is, however, definable. The Hamiltonian inducing it reads, again, \(H(\mathbf{A})=\alpha L^{+}(\mathbf{A})+\beta L^{-}(\mathbf{A})\) but the role of random variables is played by the entries of the adjacency matrix corresponding to the connected pairs of nodes, i.e. the ones for which \(|a_{ij}|=1\). Each of them obeys the finite scheme \[a_{ij}\sim\begin{pmatrix}-1&+1\\ p^{-}&p^{+}\end{pmatrix}\quad\forall\,i<j\mid|a_{ij}|=1 \tag{17}\] with \[p^{-}\equiv\frac{e^{-\beta}}{e^{-\alpha}+e^{-\beta}}\equiv\frac{y}{x+y}, \tag{18}\] \[p^{+}\equiv\frac{e^{-\alpha}}{e^{-\alpha}+e^{-\beta}}\equiv\frac{x}{x+y}; \tag{19}\] in other words, each entry for which \(|a_{ij}|=1\) obeys a Bernoulli distribution whose probability coefficients are determined by the (Lagrange multipliers of the) imposed constraints (see Appendix C for more details): each existing link is assigned a 'plus one' with probability \(p^{+}\) and a 'minus one' with probability \(p^{-}\).
The maximization of the likelihood function \(\mathcal{L}_{\text{SRGM-FT}}(x,y)\equiv\ln P_{\text{SRGM-FT}}(\mathbf{A}^{*}|x,y)\) with respect to the unknown parameter(s) that define it leads us to find \[p^{+}=\frac{L^{+}(\mathbf{A}^{*})}{L(\mathbf{A}^{*})},\quad p^{-}=\frac{L^{-}( \mathbf{A}^{*})}{L(\mathbf{A}^{*})} \tag{20}\] with \(L(\mathbf{A}^{*})\) representing the (empirical) number of links characterizing the fixed topology under consideration. Remarkably, the SRGM and the SRGM-FT are related via the simple expression \[P_{\text{SRGM}}(\mathbf{A})=P_{\text{RGM}}(\mathbf{A})\cdot P_{\text{SRGM-FT} }(\mathbf{A}) \tag{21}\] involving the probability of the 'usual' Random Graph Model (RGM) and stating that the probability of connecting any two nodes with, say, a positive link can be rewritten as the probability of connecting them with a link times the probability of assigning the latter a 'plus one': in formulas, \(p_{\text{SRGM}}^{+}/p_{\text{RGM}}=p_{\text{SRGM-FT}}^{+}\) (see Appendix C for more details). Notice that if the network under analysis is completely connected, the SRGM and the SRGM-FT coincide. Although the recipes implemented in [15] and [2; 29] are similar in spirit to the SRGM and the SRGM-FT, the rigorous derivation of both models is provided here for the first time, together with the proof that the latter is nothing but the conditional version of the former. ### Signed Configuration Model The two, aforementioned versions of the SRGM are defined by constraints which are global in nature. Let us, now, consider a more refined null model, induced by constraints which, instead, are local. The Signed Configuration Model (SCM) is induced by the Hamiltonian \[H(\mathbf{A})=\sum_{i=1}^{N}[\alpha_{i}k_{i}^{+}(\mathbf{A})+\beta_{i}k_{i}^{- }(\mathbf{A})] \tag{22}\] i.e. by the two series of local constraints \(\{k_{i}^{+}(\mathbf{A})\}_{i=1}^{N}\) and \(\{k_{i}^{-}(\mathbf{A})\}_{i=1}^{N}\). According to the SCM, each entry of a signed network is a random variable whose behaviour is described by the following finite scheme \[a_{ij}\sim\begin{pmatrix}-1&0&+1\\ p_{ij}^{-}&p_{ij}^{0}&p_{ij}^{+}\end{pmatrix}\quad\forall\,i<j \tag{23}\] with \[p_{ij}^{-}\equiv\frac{e^{-(\beta_{i}+\beta_{j})}}{1+e^{-(\alpha_{i}+\alpha_{j })}+e^{-(\beta_{i}+\beta_{j})}}\equiv\frac{y_{i}y_{j}}{1+x_{i}x_{j}+y_{i}y_{j }}, \tag{24}\] \[p_{ij}^{+}\equiv\frac{e^{-(\alpha_{i}+\alpha_{j})}}{1+e^{-(\alpha_{i}+\alpha_{ j})}+e^{-(\beta_{i}+\beta_{j})}}\equiv\frac{x_{i}x_{j}}{1+x_{i}x_{j}+y_{i}y_{j }} \tag{25}\] and \(p_{ij}^{0}\equiv 1-p_{ij}^{-}-p_{ij}^{+}\). In other words, \(a_{ij}\) obeys a generalized Bernoulli distribution whose probability coefficients are determined by the (Lagrange multipliers of the) imposed constraints (see Appendix C for more details): given any two nodes \(i\) and \(j\), they are connected by a positive link with probability \(p_{ij}^{+}\), by a negative link with probability \(p_{ij}^{-}\) and are disconnected with probability \(p_{ij}^{0}\). In order to tune the parameters defining the SCM to ensure that \(\langle k_{i}^{+}\rangle_{\text{SCM}}=k_{i}^{+}(\mathbf{A}^{*})\) and \(\langle k_{i}^{-}\rangle_{\text{SCM}}=k_{i}^{-}(\mathbf{A}^{*})\), \(\forall\,i\), let us maximize the likelihood function \(\mathcal{L}_{\text{SCM}}(\{x_{i}\}_{i=1}^{N},\{y_{i}\}_{i=1}^{N})\equiv\ln P_{ \text{SCM}}(\mathbf{A}^{*}|\{x_{i}\}_{i=1}^{N},\{y_{i}\}_{i=1}^{N})\) with respect to the unknown parameter(s) that define it. 
Such a recipe leads us to find \[k_{i}^{+}(\mathbf{A}^{*}) = \sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}\frac{x_{i}x_{j}}{1+x_{i}x_{j}+y_{i}y_{j}}=\langle k_{i}^{+}\rangle\quad\forall\,i, \tag{26}\] \[k_{i}^{-}(\mathbf{A}^{*}) = \sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}\frac{y_{i}y_{j}}{1+x_{i}x_{j}+y_{i}y_{j}}=\langle k_{i}^{-}\rangle\quad\forall\,i \tag{27}\] a system that can be solved only numerically, e.g. following the guidelines provided in [39] (see Appendix D for more details). Whenever \(x_{i}\ll 1\) and \(y_{i}\ll 1\), \(\forall\,i\), the 'sparse-case' approximation of the SCM holds true and one can simplify the expressions of the probability coefficients, by rewriting them in a factorized fashion, i.e. \(p_{ij}^{+}\simeq x_{i}x_{j}\) and \(p_{ij}^{-}\simeq y_{i}y_{j}\), \(\forall\,i<j\). Such a manipulation leads us to find \[p_{ij}^{+}\simeq\frac{k_{i}^{+}(\mathbf{A}^{*})k_{j}^{+}(\mathbf{A}^{*})}{2L^{+}(\mathbf{A}^{*})},\quad p_{ij}^{-}\simeq\frac{k_{i}^{-}(\mathbf{A}^{*})k_{j}^{-}(\mathbf{A}^{*})}{2L^{-}(\mathbf{A}^{*})} \tag{28}\] a set of equations that is also known as the Signed Chung-Lu Model (SCLM). The SCM has no precedents in the literature: while the model employed in [10] is microcanonical in nature, the variant considered in [30] is just an approximation of the proper, canonical one, whose derivation is provided here for the first time. Interestingly, the bipartite version of the SCM can be recovered as a special case of the Bipartite Score Configuration Model, proposed in [33].

### Signed Configuration Model with fixed topology

As for the SRGM, a variant of the SCM that keeps the topology of the network under analysis fixed while (solely) randomizing the signs of the edges can be defined. Again, its Hamiltonian reads \(H(\mathbf{A})=\sum_{i=1}^{N}[\alpha_{i}k_{i}^{+}(\mathbf{A})+\beta_{i}k_{i}^{-}(\mathbf{A})]\) and the role of random variables is played by the entries of the adjacency matrix corresponding to connected pairs of nodes, i.e. the ones for which \(|a_{ij}|=1\). Each of them obeys the finite scheme \[a_{ij}\sim\begin{pmatrix}-1&+1\\ p_{ij}^{-}&p_{ij}^{+}\end{pmatrix}\quad\forall\,i<j\mid|a_{ij}|=1 \tag{29}\] with \[p_{ij}^{-} \equiv \frac{e^{-(\beta_{i}+\beta_{j})}}{e^{-(\alpha_{i}+\alpha_{j})}+e^{-(\beta_{i}+\beta_{j})}}\equiv\frac{y_{i}y_{j}}{x_{i}x_{j}+y_{i}y_{j}}, \tag{30}\] \[p_{ij}^{+} \equiv \frac{e^{-(\alpha_{i}+\alpha_{j})}}{e^{-(\alpha_{i}+\alpha_{j})}+e^{-(\beta_{i}+\beta_{j})}}\equiv\frac{x_{i}x_{j}}{x_{i}x_{j}+y_{i}y_{j}}; \tag{31}\] in other words, each entry for which \(|a_{ij}|=1\) obeys a Bernoulli distribution whose probability coefficients are determined by the (Lagrange multipliers of the) imposed constraints (see Appendix C for more details): given any two, connected nodes \(i\) and \(j\), their link is assigned a 'plus one' with probability \(p_{ij}^{+}\) and a 'minus one' with probability \(p_{ij}^{-}\).
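The system (26)-(27) lends itself to a simple fixed-point iteration on the multipliers, in the spirit of the techniques discussed in [39]. The following sketch is ours and merely illustrative (it assumes every node has at least one positive and one negative link; convergence is not guaranteed in general):

```python
import numpy as np

def solve_scm(k_plus, k_minus, tol=1e-8, max_iter=10_000):
    """Fit the SCM (Eqs. 26-27) by iterating x_i -> k_i^+ / sum_j x_j / D_ij
    and analogously for y_i, with D_ij = 1 + x_i x_j + y_i y_j."""
    k_plus = np.asarray(k_plus, dtype=float)
    k_minus = np.asarray(k_minus, dtype=float)
    # crude initialization, reminiscent of the sparse-case solution (Eq. 28)
    x = k_plus / np.sqrt(k_plus.sum())
    y = k_minus / np.sqrt(k_minus.sum())
    for _ in range(max_iter):
        D = 1.0 + np.outer(x, x) + np.outer(y, y)
        np.fill_diagonal(D, np.inf)            # exclude the j = i terms
        x_new = k_plus / ((1.0 / D) @ x)
        y_new = k_minus / ((1.0 / D) @ y)
        err = max(np.abs(x_new - x).max(), np.abs(y_new - y).max())
        x, y = x_new, y_new
        if err < tol:
            break
    D = 1.0 + np.outer(x, x) + np.outer(y, y)
    return x, y, np.outer(x, x) / D, np.outer(y, y) / D   # x, y, p^+, p^-
```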
The maximization of the likelihood function \(\mathcal{L}_{\text{SCM-FT}}(\{x_{i}\}_{i=1}^{N},\{y_{i}\}_{i=1}^{N})\equiv\ln P_{\text{SCM-FT}}(\mathbf{A}^{*}|\{x_{i}\}_{i=1}^{N},\{y_{i}\}_{i=1}^{N})\) with respect to the unknown parameter(s) that define it leads us to find \[k_{i}^{+}(\mathbf{A}^{*}) = \sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}\frac{x_{i}x_{j}}{x_{i}x_{j}+y_{i}y_{j}}=\langle k_{i}^{+}\rangle\quad\forall\,i, \tag{32}\] \[k_{i}^{-}(\mathbf{A}^{*}) = \sum_{\begin{subarray}{c}j=1\\ (j\neq i)\end{subarray}}^{N}\frac{y_{i}y_{j}}{x_{i}x_{j}+y_{i}y_{j}}=\langle k_{i}^{-}\rangle\quad\forall\,i \tag{33}\] (the notation \(j\neq i\) is meant to indicate that the sums run over the connected pairs of nodes), a system that can be solved only numerically - again, along the guidelines provided in [39] (see Appendix D for more details). Similarly to what has been observed for the SRGM and the SRGM-FT, the SCM and the SCM-FT are related via the simple expression \[P_{\text{SCM}}(\mathbf{A})=P_{\text{ICM}}(\mathbf{A})\cdot P_{\text{SCM-FT}}(\mathbf{A}) \tag{34}\] involving the probability of a Configuration Model (CM) whose coefficients are 'induced' by the ones of the SCM - whence the acronym: in formulas, \((p_{ij}^{+})_{\text{SCM}}/(p_{ij})_{\text{ICM}}=(p_{ij}^{+})_{\text{SCM}}/[(p_{ij}^{+})_{\text{SCM}}+(p_{ij}^{-})_{\text{SCM}}]=(p_{ij}^{+})_{\text{SCM-FT}}\), for any pair of nodes (see Appendix C for more details). Notice that if the network under analysis is completely connected, the SCM and the SCM-FT coincide. The SCM-FT has no precedents in the literature: its derivation is provided here for the first time together with the proof that it is nothing but the conditional version of the SCM. A summary of the null models introduced in the present section is provided in Table 1.

## IV. Results

Let us, now, employ our benchmarks to analyze a number of real-world networks (although many of them are induced by social relationships, we have also considered biological data). The first dataset is the so-called 'Correlates of Wars' (CoW) dataset [41]. It provides a picture of the international, political relationships across the years 1946-1997 and consists of 13 snapshots of 4 years each: a positive edge between any two countries indicates that they are allied, have a political agreement or are part of the same governmental organization; conversely, a negative edge indicates that they are enemies, have a political disagreement or are part of different governmental organizations (for more information about the CoW dataset, see [42; 43]). The second dataset collects information about the relationships among the \(\simeq 300{,}000\) players of a massive multiplayer online game (MMOG) allowing users to experience alternative lives [44]. A positive edge between any two players indicates the existence of a friendship, an alliance or an economic relation; conversely, a negative edge indicates the existence of an enmity, a conflict or a fight. Since the network representing the MMOG dataset is directed, we have determined the signs of its undirected version by applying the following set of rules (allowing the total number of nodes to be preserved): \(+1\cdot+1=+1\); \(+1\cdot-1=-1\cdot+1=-1\); \(-1\cdot-1=-1\); \(+1\cdot 0=0\cdot+1=+1\); \(-1\cdot 0=0\cdot-1=-1\).
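The sign-combination rules just listed are straightforward to implement; a minimal sketch of ours (illustrative only) for symmetrizing a directed signed matrix:

```python
import numpy as np

def to_undirected_signs(W):
    """Combine the (i, j) and (j, i) signs of a directed signed matrix W
    according to the rules above: a 'minus' in either direction wins,
    otherwise a single 'plus' (or two) yields a 'plus'; two zeros give 0."""
    S = np.sign(W).astype(int)
    N = S.shape[0]
    out = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):
            a, b = S[i, j], S[j, i]
            if a == -1 or b == -1:
                s = -1        # (-1,-1), (+1,-1), (-1,0) -> -1
            elif a == 1 or b == 1:
                s = 1         # (+1,+1), (+1,0) -> +1
            else:
                s = 0         # (0,0) -> 0
            out[i, j] = out[j, i] = s
    return out
```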
Moreover, we have considered the networks collected in [45] and analyzed in [25], i.e. three gene-regulatory ones (GRNs - _E. Coli_, _Macrophage_, _Epidermal Growth Factor Receptor_), three socio-political ones (SPNs - _N.G.H. Tribes_, _Senate US_, _Monastery_) and two financial ones (FNs - _Bitcoin Alpha_ and _Bitcoin OTC_). For what concerns the GRNs, each of their nodes represents a gene, with positive links indicating activating connections and negative links indicating inhibiting connections. Specifically, _E. Coli_ collects data about a transcriptional network of the bacterium _Escherichia Coli_; _Macrophage_ collects data about a blood cell that eliminates substances such as cancer cells, cellular debris and microbes; _Epidermal Growth Factor Receptor_ collects data about the protein that is responsible for cell division and survival in epidermal tissues. For what concerns the SPNs, _N.G.H. Tribes_ collects data about New Guinea Highland Tribes - here, a positive (negative) link denotes alliance (rivalry) - _Monastery_ corresponds to the last frame of Sampson's data about the relationships between novices in a monastery [46] - here, a positive (negative) link indicates a positive (negative) interaction - and _Senate US_ collects data about the members of the 108th US Senate Congress - here, a positive (negative) link indicates trust/similar political opinions (distrust/different political opinions). Lastly, the FNs are 'who-trust-whom' networks of Bitcoin traders on an online platform: a positive (negative) link indicates trust (distrust) between users [47]. The networks representing the FNs are weighted, directed ones: hence, after having binarized them by replacing each positive (negative) weight with a \(+1\) (\(-1\)), we have made them undirected by applying the same rules adopted for the MMOG dataset.

From a purely empirical perspective, the vast majority of the networks considered here is characterized by a small link density \(c=2L/N(N-1)\) but a large percentage \(L^{+}/L\) of positive links. The connectance of the configurations constituting the CoW decreases from \(\simeq 0.2\) to \(\simeq 0.1\) and the percentage of positive links is steadily around \(\simeq 88\%\); on the other hand, the link density of the configurations constituting the MMOG is steadily around \(0.003\) and the percentage of positive links decreases from \(\simeq 98\%\) to \(\simeq 60\%\). The GRNs have a link density ranging from \(\simeq 10^{-3}\) to \(\simeq 10^{-2}\) and a percentage of positive links ranging from \(\simeq 58\%\) to \(\simeq 66\%\); the SPNs have the largest values of link density among the configurations in our basket, ranging from \(\simeq 0.3\) to \(\simeq 0.5\), and percentages of positive links ranging from \(\simeq 50\%\) to \(\simeq 75\%\).

\begin{table}
\begin{tabular}{l|l|l}
Null model & Topology: free & Topology: fixed \\ \hline
Homogeneous & SRGM: each pair of nodes is assigned a 'plus', a 'minus' or a 'zero' edge with a probability that is pair-independent; all nodes are statistically equivalent. Differently from the recipe adopted in [15; 40], the parameters defining our SRGM can be unambiguously tuned to reproduce the empirical number of 'plus' and 'minus' edges of any (binary, undirected, signed) network. The SRGM is rigorously derived here for the first time. & SRGM-FT: the topology is the same as in the real network and the connected pairs of nodes are assigned either a 'plus one' or a 'minus one', with a probability that is pair-independent. Differently from the recipe adopted in [2; 29], the parameters defining our SRGM-FT can be unambiguously tuned to reproduce the empirical number of 'plus' and 'minus' edges of any (binary, undirected, signed) network. The SRGM-FT, rigorously derived here for the first time, is the conditional version of the SRGM. \\ \hline
Heterogeneous & SCM: each pair of nodes is assigned a 'plus', a 'minus' or a 'zero' edge, with a probability that is pair-dependent and determined by the different tendencies of nodes to establish positive and negative interactions. The SCM is rigorously derived here for the first time. & SCM-FT: the topology is the same as in the real network and the connected pairs of nodes are assigned either a 'plus one' or a 'minus one', with a probability that is pair-dependent and determined by the different tendencies of nodes to establish positive and negative interactions. The SCM-FT, derived here for the first time, is the conditional version of the SCM. \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of the null models considered in this article.
Lastly, _Bitcoin Alpha_ has a link density of \(\simeq 0.002\) and a percentage of positive links of \(\simeq 90\%\) while _Bitcoin OTC_ has a link density of \(\simeq 0.001\) and a percentage of positive links of \(\simeq 85\%\).

In order to test the validity of the balance theory in its two formulations, we need to compare the empirical abundance of the quantities defined in the previous sections with the one expected under (any of) our null models. To this aim, a very useful indicator is represented by the so-called \(z\)-score, reading \(z_{m}=[N_{m}(\mathbf{A}^{*})-\langle N_{m}\rangle]/\sigma[N_{m}]\), where \(N_{m}(\mathbf{A}^{*})\) is the empirical abundance of motif \(m\) as measured on \(\mathbf{A}^{*}\), \(\langle N_{m}\rangle\) is the expected abundance of motif \(m\) under the chosen null model and \(\sigma[N_{m}]=\sqrt{\langle N_{m}^{2}\rangle-\langle N_{m}\rangle^{2}}\) is the standard deviation of \(N_{m}\) under the same null model; \(z_{m}\) returns the number of standard deviations by which the empirical abundance of motif \(m\) differs from the expected one: a result \(|z_{m}|\leq 2\) (\(|z_{m}|\leq 3\)) indicates that the empirical abundance of \(m\) is compatible with the one expected under the chosen null model at the \(5\%\) (\(1\%\)) level of statistical significance; on the other hand, a result \(|z_{m}|>2\) (\(|z_{m}|>3\)) indicates that the empirical abundance of \(m\) is not compatible with the one expected under the chosen null model (at the aforementioned levels of statistical significance), which either under- or over-estimates it. The calculation of our \(z\)-scores is carried out by numerically sampling the ensembles induced by our null models: as the entries of the adjacency matrix are treated as independent random variables by each of them, the explicit generation of any \(\mathbf{A}\in\mathbb{A}\) can be carried out by drawing a real number \(u_{ij}\in U[0,1]\) and posing \(a_{ij}=-1\) if \(0\leq u_{ij}\leq p_{ij}^{-}\), \(a_{ij}=+1\) if \(p_{ij}^{-}<u_{ij}<p_{ij}^{-}+p_{ij}^{+}\) and \(a_{ij}=0\) if \(p_{ij}^{-}+p_{ij}^{+}\leq u_{ij}\leq 1\), \(\forall\;i<j\); in case we are considering models with fixed topology, the sampling algorithm becomes \(a_{ij}=-1\) if \(0\leq u_{ij}\leq p_{ij}^{-}\) and \(a_{ij}=+1\) if \(p_{ij}^{-}<u_{ij}\leq 1\), \(\forall\;i<j\).
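Combining the sampling recipe just described with any motif count yields the \(z\)-scores numerically. A minimal sketch of ours (illustrative; for the fixed-topology models it suffices to pass pair probabilities with \(p_{ij}^{-}+p_{ij}^{+}=1\) on the empirical edges and \(p_{ij}^{-}=p_{ij}^{+}=0\) elsewhere):

```python
import numpy as np

def z_score(A_star, P_minus, P_plus, motif_count, n_samples=1000, seed=0):
    """Monte Carlo z-score of the statistic motif_count (a function mapping
    a signed matrix to a number) under independent pair probabilities."""
    rng = np.random.default_rng(seed)
    N = A_star.shape[0]
    iu = np.triu_indices(N, k=1)
    pm, pp = P_minus[iu], P_plus[iu]
    stats = np.empty(n_samples)
    for s in range(n_samples):
        u = rng.random(len(pm))
        vals = np.where(u <= pm, -1, np.where(u < pm + pp, 1, 0))
        A = np.zeros((N, N), dtype=int)
        A[iu] = vals
        stats[s] = motif_count(A + A.T)
    return (motif_count(A_star) - stats.mean()) / stats.std()
```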
As Fig. 3 shows, the trends highlighted by the SRGM-FT support the WBT: in fact, the only significantly over-estimated pattern is precisely the one deemed as frustrated by such a version of the BT, whereas the empirical abundance of the triangle whose edges are all negative is always compatible with the one expected under such a null model; interestingly, the empirical abundance of the triangle with two negative edges is also always compatible with the one expected under the SRGM-FT, although its \(z\)-score is (basically always) smaller than the \(z\)-score of the purple motif. Conversely, the abundance of the triangle with three positive edges is significantly underestimated on both datasets. The aforementioned results constitute the backbone of the narrative according to which the weak version of the BT is the one better supported by data [2; 17]. A comparison with the trends highlighted by the SRGM, however, reveals such a conclusion to be only partially true: in fact, even if the pattern deemed as unbalanced by the WBT is often the one achieving the minimum \(z\)-score, some frustration persists once the topology is left to vary along with the edge signs. Again, the abundance of the triangle with three positive edges is significantly underestimated on both datasets - a tendency becoming more evident with time: when considering the CoW dataset, this result seems to mirror the increasingly peaceful environment established by world relationships; an analogous behaviour characterizes the triad with two negative edges on the MMOG dataset. The results of our analysis on the SPNs lead to the same conclusion: as Fig. 5 shows, the homogeneous null models (i.e. both the SRGM and the SRGM-FT) favour the WBT, by significantly over-estimating the pattern labeled as frustrated by it (i.e. the one corresponding to the triangle with a unique negative link). Quite remarkably, biological networks show the opposite behaviour: in these cases, in fact, frustrated patterns are underestimated by our homogeneous models, a result implying that the nodes belonging to the GRNs tend to arrange themselves into such configurations; even more remarkably, according to the SRGM-FT, they tend to avoid creating the triangular, balanced ones. Lastly, financial networks show a peculiar behaviour since their nodes avoid creating triangles with a unique negative link but engage in significantly many triangles with all negative links. The results of the analysis carried out in [17] suggest the SRGM-FT to be preferred to the SRGM as it provides a better explanation of empirical network structures. Fixing the topology, however, does not represent the only solution to the problem: a viable alternative is that of considering a model such as the SCM, still letting the topology vary along with the edge signs but constraining a larger number of network properties. As Fig. 4 shows, the predictions achieved under the SCM are much more accurate than the ones achieved under the SRGM: this is true for both datasets, as the \(z\)-scores decrease by up to one order of magnitude; moreover, the empirical abundance of the triangular pattern with three negative edges (one negative edge) becomes fully compatible with the one predicted by the SCM on the CoW dataset (on the MMOG dataset). Still, the motifs classified as balanced according to the SBT are always significantly under-estimated and the purple one is now significantly over-estimated on several snapshots of the MMOG dataset.
Interestingly, while the formation of triads with an odd number of negative links is strongly disfavoured according to the SCM-FT, the same benchmark underestimates the number of triads with an even number of negative links, evidence fully supporting the SBT; while this is particularly evident for the MMOG dataset, it holds true only for the first half of the CoW dataset, the second one confirming that such a system tends to be balanced in a very strict sense (i.e. only the triangular pattern with three positive edges is systematically under-estimated). The results of our analysis on the SPNs and FNs (see Fig. 5) let us conclude that both the SCM and the SCM-FT (i.e. the heterogeneous null models) allow for an overall better explanation of the empirical patterns to be obtained; still, the results concerning the identity of the significant ones are quite dataset-dependent. In fact, while homogeneous models favour the WBT on all datasets, heterogeneous models explain the abundance of the two unbalanced motifs on the _Monastery_ dataset while favouring the WBT on the _N.G.H. Tribes_ dataset and the SBT on the _Senate US_, the _Bitcoin Alpha_ and the _Bitcoin OTC_ datasets. Financial networks keep showing a peculiar behaviour since the motif with all negative links is 'more disfavoured' than the one with a unique negative link. Again, biological networks behave differently: in these cases, in fact, frustrated patterns are largely under-estimated by any benchmark; even more so, according to both the SRGM-FT and the SCM-FT, the nodes belonging to the GRNs tend to avoid creating the balanced ones.

Figure 3: Evolution of the \(z\)-scores of triadic motifs under homogeneous benchmarks. Top panels refer to the 13 snapshots of 4 years each of the CoW dataset (covering the period 1946-1997). Bottom panels refer to the snapshots of the MMOG dataset. The SRGM-FT (right panels) points out that the only pattern to be significantly over-estimated is the one labeled as frustrated by the WBT while the empirical abundance of the triangle whose edges are all negative is always compatible with such a null model; on the other hand, the tendency of nodes to establish triadic relationships whose links are all positive is confirmed on both datasets. The aforementioned results constitute the backbone of the narrative according to which the weak version of the BT is the one better supported by data, although some frustration persists once the topology is left to vary along with the edge signs; still, the pattern achieving the minimum \(z\)-score is precisely the (only) one deemed as unbalanced by the WBT, leading us to conclude that homogeneous benchmarks (overall) favour this variant of the BT.

The conclusions supported by our null models can be (at least partially) reconciled once the evolution of the \(z\)-scores of the strong and weak balance indices is inspected: as Fig. 6 shows, once the percentage of unbalanced motifs is considered, frustration is over-estimated by all benchmarks. For what concerns the CoW dataset, the SCM is the model performing best in reproducing (the evolution of) the SFI, although it competes with the SCM-FT in reproducing (the evolution of) the WFI on some snapshots. For what concerns the MMOG dataset, the SCM competes with the SRGM in reproducing (the evolution of) the WFI but is outperformed by both the SRGM and the SCM-FT in reproducing (the evolution of) the SFI.
As already pointed out, biological networks are characterized by a non-trivial level of frustration, evidence leading to the conclusion that their self-organization principles are markedly different from the ones shaping social networks.

Figure 4: Evolution of the \(z\)-scores of triadic motifs under heterogeneous benchmarks. Top panels refer to the 13 snapshots of 4 years each of the CoW dataset (covering the period 1946-1997). Bottom panels refer to the snapshots of the MMOG dataset. The predictions achieved under the SCM (left panels) and the SCM-FT (right panels) are much more accurate than the ones achieved under the SRGM and the SRGM-FT. Still, the tendency of nodes to establish triadic relationships whose links are all positive is confirmed on both datasets; besides, the SCM-FT points out that the significantly over-estimated patterns are, now, the ones labeled as frustrated by the SBT. The aforementioned results contrast with the narrative according to which the weak version of the BT is the one better supported by data, confirming that conclusions of the kind are model-dependent: heterogeneous benchmarks (overall) favour the strong variant of the balance theory, 'assigning' the minimum \(z\)-scores to the patterns deemed as unbalanced by it.

## IV. Discussion

Several conclusions can be drawn from our analysis. First, the validity of social theories such as the WBT and the SBT is both dataset-dependent and model-dependent: although each benchmark considered here over-estimates the percentage of frustrated patterns characterizing social networks (according to either variant of the balance theory), a more refined analysis reveals them to (strongly, in some cases) disagree on the specific kinds of motifs to be deemed as statistically significant. Second, homogeneous benchmarks such as the SRGM and the SRGM-FT tend to favour the WBT, as the motif deemed as unbalanced by this variant of the theory is also the only one being significantly over-estimated by null models of the kind.

Figure 5: Analysis of triadic \(z\)-scores for three biological networks (_E. Coli_, _Macrophage_, _EGFR_), three socio-political networks (_N.G.H. Tribes_, _Senate US_, _Monastery_) and two financial ones (_Bitcoin Alpha_ and _Bitcoin OTC_). Interestingly, homogeneous null models (i.e. both the SRGM and the SRGM-FT) tend to favour the WBT, by largely over-estimating the pattern labeled as frustrated by it (on the _Senate US_ dataset such a result is even stronger as this pattern is the only over-estimated one); heterogeneous null models, instead, explain the abundance of the two unbalanced motifs on the _Monastery_ dataset while favouring the WBT on the _N.G.H. Tribes_ dataset and the SBT on the _Senate US_, the _Bitcoin Alpha_ and the _Bitcoin OTC_ datasets. Quite remarkably, biological networks behave differently: in these cases, in fact, frustrated patterns are under-estimated by any benchmark; even more so, according to both the SRGM-FT and the SCM-FT, the nodes belonging to the GRNs tend to avoid creating the triangular, balanced ones.

Figure 6: Evolution of the \(z\)-scores of the SFI (left panel) and the WFI (right panel). Top panels refer to the 13 snapshots of 4 years each of the CoW dataset (covering the period 1946-1997). Middle panels refer to the snapshots of the MMOG dataset. Bottom panels refer to our biological, socio-political and financial networks. \(z\)-scores are computed under the SRGM (green), the SRGM-FT (orange), the SCM (red) and the SCM-FT (purple). Once the percentage of unbalanced motifs is considered, frustration is over-estimated by all benchmarks. On the CoW dataset, the SCM is the model performing best in reproducing both the SFI and the WFI; on the MMOG dataset, the SCM competes with the SRGM in reproducing the WFI but is outperformed by both the SRGM and the SCM-FT in reproducing the SFI. Biological networks are markedly different from socio-political and financial networks, being characterized by a non-trivial level of frustration.
Third, heterogeneous benchmarks such as the SCM and the SCM-FT tend to favour the SBT; in fact, although the empirical patterns are, now, explained much better (the most accurate predictions are those provided by the SCM and concerning the SFI and the WFI), the observations which happen not to be explained by null models of the kind are related to the motifs deemed as unbalanced by the SBT, i.e. the ones with an odd number of negative links - interestingly enough, the triadic motif with just one negative link attains a \(z\)-score which is systematically smaller than the one attained by the triadic motif with three negative links. Fourth, according to either variant of the BT, frustration evaluated by means of the percentage of unbalanced motifs is over-estimated by any benchmark. The study of the HOFI, however, returns a quite different picture: in fact, as Fig. 7 shows, algorithms that disregard the information provided by the edge signs depict networks whose level of mesoscopic frustration is quite large. To better illustrate this point, let us carry out a comparative analysis of the mesoscale, structural organization of the snapshot of the CoW dataset covering the period 1986-1989. On the one hand, maximizing Newman's modularity reveals a structure with six modules (interestingly, USSR and USA, together with their allies, belong to different communities); still, the value of the HOFI attained by such a partition is \(\text{HOFI}\simeq 0.13\), i.e. one order of magnitude larger than the value of the HOFI output by the algorithm designed to minimize it and described in Appendix E (in words, the algorithm is fed with the number of modules output by modularity and partitions nodes in order to minimize the number of positive links between communities and the number of negative links within communities). Interestingly, a more efficient way of recovering partitions with a lower level of frustration is that of running the Girvan-Newman (GN) algorithm [48], which ranks edges according to their betweenness and removes them in a top-down fashion. As it needs to be stopped when the desired number of modules is reached, it can be fed with the one output by modularity, i.e. six: upon doing so, the GN algorithm identifies a partition for which \(\text{HOFI}\simeq 0.07\). Such a result confirms the presence of correlations between purely structural and signed quantities (in fact, negative edges are among the ones with the largest values of betweenness centrality) that can be exploited to gain more insight about the organizing principles of signed networks.

## V. Conclusions

Our work confirms that frustration is dependent on, at least, three factors: the measure adopted to quantify it, the nature of the data and the benchmark employed for the analysis.

Figure 7: Partitions of the snapshot of the CoW dataset covering the period 1986-1989. The one on the left is recovered by maximizing Newman's modularity; the one on the right is recovered by minimizing the HOFI, i.e. the percentage of 'misplaced' links (computed as the total number of positive links between communities plus the total number of negative links within communities, divided by the total number of links). The value of the HOFI attained on the first partition is \(\text{HOFI}\simeq 0.13\), i.e. one order of magnitude larger than the value attained on the second one. Such a result confirms the presence of correlations between purely structural and signed quantities (negative edges are among the most central ones, according to the betweenness variant) that can be exploited to gain more insight about the organizing principles of signed networks (see Appendix E for more details on the implementation of the algorithms maximizing Newman's modularity and minimizing the HOFI).
Concerning the first and the second point, it is interesting to notice that whereas natural networks tend to establish a significantly large number of triadic relationships with an odd number of negative links, social networks do not. Still, the extent to which this happens crucially depends on the model employed as a benchmark: the presence of two pairs of alternative properties (homogeneous vs heterogeneous; free topology vs fixed topology) allows four different models to be defined. Generally speaking, adopting fixed-topology benchmarks seems to enhance the detection of frustration, with the corresponding homogeneous (heterogeneous) variant favouring the WBT (SBT). The first part of such a conclusion is in line with other findings (e.g. the results in [2], where the authors evaluate the statistical significance of all signed triads under the SRGM-FT, concluding that the WBT frames empirical patterns better than the SBT); its second part, instead, is a novel result of the present contribution, which has pointed out the importance of model selection for testing social theories on real-world systems. A behavioural explanation of the aforementioned results can be provided: agents that cannot choose with whom to interact but only 'how' adopt a seemingly 'intolerant' behaviour, strongly avoiding engaging in frustrated relationships; on the other hand, agents that are free to choose their neighbours seem to be more 'tolerant', as they establish a significantly large number of patterns composed of entirely positive edges while accepting a potential, 'residual' level of frustration. At a structural level, the evidence that all \(z\)-scores provided by fixed-topology benchmarks are (in some cases, much) smaller than the ones provided by models letting the topology vary along with the edge signs suggests that shuffling the relatively few negative links characterizing our datasets is enough to create (strongly disfavoured) structures that are unbalanced according to one of the variants of the BT. When frustration is measured as the percentage of unbalanced triangles, instead, all models considered in the present paper over-estimate it; still, datasets exist whose level of frustration is perfectly compatible with that predicted by the SCM, i.e. once the tendency of single nodes to establish connections is accounted for: an example is provided by the CoW dataset, for which we find \(z_{\text{SFI}}\simeq z_{\text{WFI}}\lesssim 0\). This is no longer true when non-local definitions of frustration are inspected: an example is provided by the HOFI, which is not optimized alongside modularity.
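Operationally, the HOFI of a given partition, i.e. the quantity minimized by the algorithm of Appendix E, is straightforward to evaluate. A minimal sketch of ours (illustrative; it implements Eq. (10)):

```python
import numpy as np

def hofi(A, labels):
    """Higher-order frustration index (Eq. 10): the fraction of positive
    links between communities plus negative links within communities."""
    labels = np.asarray(labels)
    iu = np.triu_indices(A.shape[0], k=1)
    signs = A[iu]
    same = labels[iu[0]] == labels[iu[1]]
    misplaced = np.count_nonzero((signs == 1) & ~same) \
              + np.count_nonzero((signs == -1) & same)
    return misplaced / np.count_nonzero(signs)
```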
A similar conclusion is reached in [14], where the authors analyze cycles of any length via a spectral approach, finding that real-world networks are far from balance at higher-order scales. In [22], the authors find that the frustration of the _Slashdot_ dataset is minimum when the number of communities is two, in agreement with the SBT, while the frustration of the _Epinions_ dataset is minimum when the number of communities is larger than two, in agreement with the WBT; here, we find the HOFI of the snapshot of the CoW dataset covering the period 1986-1989 to be minimum when the number of modules equals two, in agreement with the SBT (as confirmed by several runs of the algorithm described in Appendix E, carried out by imposing an increasing number of modules).

## Acknowledgements

This work is supported by the European Union Horizon 2020 Program under the scheme 'INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities', grant agreement n. 871042, 'SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics'. DG acknowledges support from the Dutch Econophysics Foundation (Stichting Econophysics, Leiden, the Netherlands) and the Netherlands Organization for Scientific Research (NWO/OCW). RL acknowledges support from the EPSRC grants n. EP/V013068/1 and EP/V03474X/1. We also thank Michael Szell for sharing the Pardus dataset employed for the present analysis.
2302.08039
Lattice piecewise affine approximation of explicit nonlinear model predictive control with application to trajectory tracking of mobile robot
To promote the widespread use of mobile robots in diverse fields, the performance of trajectory tracking must be ensured. To address the constraints and nonlinear features associated with mobile robot systems, we apply nonlinear model predictive control (MPC) to realize the trajectory tracking of mobile robots. Specifically, to alleviate the online computational complexity of nonlinear MPC, this paper devises a lattice piecewise affine (PWA) approximation method that can approximate both the nonlinear system and control law of explicit nonlinear MPC. The kinematic model of the mobile robot is successively linearized along the trajectory to obtain a linear time-varying description of the system, which is then expressed using a lattice PWA model. Subsequently, the nonlinear MPC problem can be transformed into a series of linear MPC problems. Furthermore, to reduce the complexity of online calculation of multiple linear MPC problems, we approximate the optimal solution of the linear MPC by using the lattice PWA model. That is, for different sampling states, the optimal control inputs are obtained, and lattice PWA approximations are constructed for the state control pairs. Simulations are performed to evaluate the performance of our method in comparison with the linear MPC and explicit linear MPC frameworks. The results show that compared with the explicit linear MPC, our method has a higher online computing speed and can decrease the offline computing time without significantly increasing the tracking error.
Kangbo Wang, Kaijie Zhang, Yating Huang, Jun Xu
2023-02-16T02:30:34Z
http://arxiv.org/abs/2302.08039v1
Lattice piecewise affine approximation of explicit nonlinear model predictive control with application to trajectory tracking of mobile robot

###### Abstract

To promote the widespread use of mobile robots in diverse fields, the performance of trajectory tracking must be ensured. To address the constraints and nonlinear features associated with mobile robot systems, we apply nonlinear model predictive control (MPC) to realize the trajectory tracking of mobile robots. Specifically, to alleviate the online computational complexity of nonlinear MPC, this paper devises a lattice piecewise affine (PWA) approximation method that can approximate both the nonlinear system and control law of explicit nonlinear MPC. The kinematic model of the mobile robot is successively linearized along the trajectory to obtain a linear time-varying description of the system, which is then expressed using a lattice PWA model. Subsequently, the nonlinear MPC problem can be transformed into a series of linear MPC problems. Furthermore, to reduce the complexity of online calculation of multiple linear MPC problems, we approximate the optimal solution of the linear MPC by using the lattice PWA model. That is, for different sampling states, the optimal control inputs are obtained, and lattice PWA approximations are constructed for the state control pairs. Simulations are performed to evaluate the performance of our method in comparison with the linear MPC and explicit linear MPC frameworks. The results show that compared with the explicit linear MPC, our method has a higher online computing speed and can decrease the offline computing time without significantly increasing the tracking error.

explicit nonlinear MPC, successive linearization, lattice piecewise affine approximation

## I Introduction

Due to scientific advancements, mobile robots, especially wheeled mobile robots (WMRs), have been widely used in military, exploration and other fields, as well as in dangerous operations. The task execution ability and intelligence of mobile robots depend on their trajectory tracking performance. With the development of control technology, mobile robots need to perform more difficult tasks in more demanding environments, which imposes stricter requirements on the accuracy and speed of trajectory tracking. The pioneering research on the trajectory tracking of WMRs was performed by [1], who designed a Proportional-Integral-Derivative tracking controller to track the reference speed of the mobile robot. Subsequently, [2] combined an integral sliding mode surface with an adaptive observer to design an output feedback sliding mode controller. In addition, [3] designed an adaptive neural sliding-mode controller based on backstepping control and sliding mode control. However, these algorithms do not consider the various constraints present in actual systems. Thus, the model predictive control (MPC) algorithm has been widely applied for trajectory tracking owing to its ability to solve optimization problems with constraints. The MPC algorithm predicts the future dynamic behavior of the system based on the model. By adding constraints on the future input, output or state variables, the constraints can be explicitly expressed in a programming problem solved online, as shown in [4, 5]. For example, [6] used the MPC method to design the trajectory tracking controller of a mobile robot with a nonlinear kinematic model.
In addition, the continuous linearization method was proposed in [7] to alleviate the large computational burden associated with nonlinear MPC. Moreover, explicit MPC has emerged as a promising strategy to reduce online computational complexity. According to [8, 9], the explicit state feedback solutions of quadratic optimal control problems for discrete linear time-invariant systems subject to constraints can be obtained in advance. Therefore, online computation can be transformed into a simple table-lookup process; this approach is known as explicit MPC. It can accelerate the online computation and expand the application scenarios of MPC. Most technical processes are nonlinear in nature, and explicit solutions, which are characterized by high computational efficiency and verifiability, are appropriate for addressing nonlinear MPC problems. For example, [10] applied explicit nonlinear MPC to construct a control design framework for the optimal trajectory tracking of small helicopters based on a multi-time-scale structure. Explicit multi-parametric nonlinear MPC relies on multi-parametric nonlinear programming (mpNLP) algorithms to derive control laws. However, the solution of mpNLP problems is extremely complex, and it is challenging to identify an exact solution. To address these limitations, [11] proposes a numerical algorithm to approximate mpNLP for nonlinear systems, and locally approximates mpNLP problems through multi-parametric quadratic program (mpQP) solutions in each partition. In [12], the nonlinear model of the mobile robot was approximated by a continuous linear time-varying model, and the explicit linear MPC problem was solved at each discrete time to obtain the trajectory tracking control law. Although explicit MPC algorithms can facilitate online computation, the numbers of state partitions and control laws increase dramatically with increasing problem complexity, which increases the required storage space and online lookup time. To alleviate the online computational complexity, the lattice piecewise affine (PWA) model can be used to represent the optimal control laws calculated offline [13]. [14] specified the necessary and sufficient conditions and related algorithms for irredundant lattice PWA models, which were applied to express the solutions of explicit linear MPC problems. Subsequently, [15] implemented the lattice PWA model using very-large-scale integrated circuits, which enhanced the computing speed and decreased the resource consumption. In this study, the lattice PWA model is used both to approximate the nonlinear dynamics of the mobile robot and to approximate the optimal control laws of the explicit linear MPC problems. First, we perform successive linearization along the trajectory of the mobile robot and construct a lattice PWA approximation model, based on which the global approximation error is defined. Second, for each explicit linear MPC problem, we obtain a lattice PWA approximation of the computed control laws, thereby avoiding the enumeration of all the critical regions required by traditional explicit MPC. This step accelerates offline and online calculations and alleviates the computational complexity while ensuring the tracking accuracy. The remainder of this paper is structured as follows. Section II describes the modeling of the WMR and the successive linearization of the model along the trajectory. Section III describes the process of solving explicit MPC problems offline and the lattice approximation of the optimal control laws. Additionally, the online evaluation method is introduced.
Section IV presents the comparative simulation results of linear MPC, explicit linear MPC and our method to demonstrate the advantages of our method in trajectory tracking applications. Section V ends the paper with the concluding remarks.

## II Formulation of trajectory tracking problem of the WMR based on successive linearization

The trajectory tracking problem of the WMR is formulated as a nonlinear MPC problem, and the nonlinear kinematic model of the WMR is successively linearized to obtain successive linear MPC problems.

### _Kinematic model of the WMR_

Fig. 1 shows the kinematic model of the WMR. In the inertial coordinate system OXY, the relevant variables of the kinematic model are the axle center coordinates of the rear and front axle, denoted as \((X_{r},Y_{r})\) and \((X_{f},Y_{f})\), respectively. The center speeds of the rear and front axle are denoted as \(v\) and \(v_{f}\). The deflection angle of the front wheel, the heading angle of the body and the wheelbase of the vehicle are denoted as \(\delta_{f}\), \(\varphi\) and \(l\), respectively. For convenience, the front and rear axles are not distinguished by subscripts in the remainder of this paper. According to the geometric relationship shown in Fig. 1, the kinematic model of the WMR can be defined as follows:

\[\boldsymbol{\dot{\xi}}=\left[\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{\varphi}\end{array}\right]=\left[\begin{array}{c}\cos\varphi\\ \sin\varphi\\ \frac{\tan\delta}{l}\end{array}\right]v=f(\boldsymbol{\xi},\boldsymbol{u}) \tag{1}\]

where \(\boldsymbol{\xi}=\left[x,y,\varphi\right]^{T}\) and \(\boldsymbol{u}=\left[v,\delta\right]^{T}\) denote the state and control vectors of the robot, respectively.

Fig. 1: The kinematic model of the WMR.

The reference trajectories considered herein are reachable trajectories, i.e., each point on the trajectories satisfies the kinematic equation. Therefore, the function of the reference trajectory can be expressed as follows:

\[\boldsymbol{\dot{\xi}}_{r}=f\left(\boldsymbol{\xi}_{r},\boldsymbol{u}_{r}\right) \tag{2}\]

For convenience, we uniformly use \(\boldsymbol{x}\) to represent the state vector \(\boldsymbol{\xi}\).

### _Model linearization_

#### II-B1 Successive linearization of the kinematic model

As the kinematic model of the WMR system is nonlinear, the resulting nonlinear MPC problem is difficult to solve. Thus, it is necessary to perform linearization. In a previous study [12], the continuous linearization method was applied to linearize the system model at the reference trajectory points corresponding to each sampling time. Similarly, in our study, the nonlinear kinematic model is linearized around all reference trajectory points, and the corresponding linear functions are obtained through Taylor expansion. These linear functions are connected to construct a lattice PWA model of the original nonlinear system, and then the approximation error is analyzed. Fig. 2 shows the lattice PWA approximation process for the function \(f(\boldsymbol{x},\boldsymbol{u})\). We sample the reference trajectory to be tracked at a fixed time interval \(T\) to obtain \(K\) reference points \((\boldsymbol{x}_{k},\boldsymbol{u}_{k})\).
The first-order Taylor expansion is applied to Equation (1) at each reference trajectory point, and the higher-order terms are ignored to obtain

\[\dot{\mathbf{x}}=\left.\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{x}}\right|_{\mathbf{x}=\mathbf{x}_{k},\,\mathbf{u}=\mathbf{u}_{k}}\mathbf{x}+\left.\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{u}}\right|_{\mathbf{x}=\mathbf{x}_{k},\,\mathbf{u}=\mathbf{u}_{k}}\mathbf{u}+f(\mathbf{x}_{k},\mathbf{u}_{k})-\left.\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{x}}\right|_{\mathbf{x}=\mathbf{x}_{k},\,\mathbf{u}=\mathbf{u}_{k}}\mathbf{x}_{k}-\left.\frac{\partial f(\mathbf{x},\mathbf{u})}{\partial\mathbf{u}}\right|_{\mathbf{x}=\mathbf{x}_{k},\,\mathbf{u}=\mathbf{u}_{k}}\mathbf{u}_{k} \tag{3}\]

Because the last three terms are related only to the state and control input of the reference trajectory, they can be regarded as constant terms. Therefore, after discretization through the first-order forward difference, Equation (3) can be rewritten in the following concise form:

\[\mathbf{x}(k+1)=A(k)\mathbf{x}(k)+B(k)\mathbf{u}(k)+b(k) \tag{4}\]

Assume that the number of reference points is \(K\). Thus, after obtaining all of the discrete linear functions, we can construct a lattice PWA model (5) to approximate the original nonlinear system function in a piecewise linear fashion according to the algorithm proposed in [16]:

\[\begin{array}{l}\hat{f}=f_{\text{LPWA}}=\max_{i=0,\ldots,K-1}\left\{\min_{j\in I_{\geq,i}}\{l_{j}\}\right\}\\ I_{\geq,i}=\left\{k\mid l_{k}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)\geq l_{\text{act}(i)}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)\right\}\end{array} \tag{5}\]

where \(l_{j}\) is the affine function defined in (4). Specifically, we collect all the affine functions \([A(k)\;B(k)\;b(k)]\cdot[\mathbf{x}^{T},\mathbf{u}^{T},1]^{T}\), identify the distinct ones and label them as \(l_{1},\ldots\). In this case, \(l_{\text{act}(i)}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)\) is the activation function at the point \((\mathbf{x}_{i},\mathbf{u}_{i})\) and \(I_{\geq,i}\) is a set of subscripts of affine functions. \(\min_{j\in I_{\geq,i}}\{l_{j}\}\) is called a term of the lattice PWA model and \(s\) is the number of terms in the lattice model. The affine plane \(l_{j}\) in each term is called a literal.

#### II-B2 Analysis of the global approximation error

The following assumptions are made to analyze the error between the original and approximated system. For a nonlinear function \(f(\mathbf{x},\mathbf{u})\), an approximate continuous PWA function \(\hat{f}(\mathbf{x},\mathbf{u})\) can be constructed by linearizing the function at each reference point. The global approximation error can then be bounded for the lattice PWA approximation of the kinematic model of the WMR.

**Assumption 1**.: _Given that \(l_{\text{act}(i)}(\mathbf{x},\mathbf{u})\) is the activation function of \(f(\mathbf{x},\mathbf{u})\) at the linearization point \((\mathbf{x}_{i},\mathbf{u}_{i})\), i.e., \(f\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)=l_{\text{act}(i)}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)\), assume that \(\min_{j\in I_{\geq,k}}\{l_{j}\}\leq l_{\text{act}(i)}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)\) at \((\mathbf{x}_{i},\mathbf{u}_{i})\) for every term \(k\)._

It has been proved in [17] that if Assumption 1 holds, we have
\[\hat{f}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)=f\left(\mathbf{x}_{i},\mathbf{u}_{i}\right). \tag{6}\]

**Assumption 2**.: _The original system \(f(\mathbf{x},\mathbf{u})\) is Lipschitz continuous, i.e., \(\forall(\mathbf{x}_{1},\mathbf{u}_{1}),(\mathbf{x}_{2},\mathbf{u}_{2})\) in the domain of \(f\), we have_

\[\|f(\mathbf{x}_{1},\mathbf{u}_{1})-f(\mathbf{x}_{2},\mathbf{u}_{2})\|\leq L_{1}\|(\mathbf{x}_{1},\mathbf{u}_{1})-(\mathbf{x}_{2},\mathbf{u}_{2})\|,\]

_in which \(L_{1}\) is the Lipschitz constant._

Under these two assumptions, the error between the original and approximated system follows Theorem 1.

**Theorem 1**.: _Suppose Assumptions 1 and 2 hold. Assume that for any \((\mathbf{x},\mathbf{u})\in\Omega\), in which \(\Omega\) is the domain of \(f(\mathbf{x},\mathbf{u})\), there exists a sample point \((\mathbf{x}_{i},\mathbf{u}_{i})\) such that \(\|(\mathbf{x},\mathbf{u})-(\mathbf{x}_{i},\mathbf{u}_{i})\|\leq\sigma\). Then \(\|f(\mathbf{x},\mathbf{u})-\hat{f}(\mathbf{x},\mathbf{u})\|\leq L\sigma,\forall(\mathbf{x},\mathbf{u})\in\Omega\), where \(L\) is a constant related to \(f\) and \(\hat{f}\)._

Proof.: We first show that continuous PWA functions are Lipschitz continuous. For any two points \((\mathbf{x},\mathbf{u})\) and \((\mathbf{x}_{i},\mathbf{u}_{i})\), assume that the line segment \([(\mathbf{x},\mathbf{u}),(\mathbf{x}_{i},\mathbf{u}_{i})]\) intersects the boundaries of the linear subregions of \(\hat{f}\) at the points \((a_{1},b_{1}),\ldots,(a_{z},b_{z})\), as shown in Fig. 3. Here, linear subregions refer to regions where the continuous PWA function admits an affine expression. In this case, the following expression holds:

\[\begin{split}&\|\hat{f}(\mathbf{x}_{i},\mathbf{u}_{i})-\hat{f}(\mathbf{x},\mathbf{u})\|\\ &\leq\|\hat{f}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)-\hat{f}(a_{1},b_{1})+\hat{f}(a_{1},b_{1})-\hat{f}(a_{2},b_{2})+\cdots+\hat{f}(a_{z},b_{z})-\hat{f}(\mathbf{x},\mathbf{u})\|\\ &=\|l_{\text{act}(1)}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)-l_{\text{act}(1)}(a_{1},b_{1})+l_{\text{act}(2)}(a_{1},b_{1})-l_{\text{act}(2)}(a_{2},b_{2})+\cdots+l_{\text{act}(z)}(a_{z},b_{z})-l_{\text{act}(z)}(\mathbf{x},\mathbf{u})\|\\ &\leq M_{1}\left\|(\mathbf{x}_{i},\mathbf{u}_{i})-(a_{1},b_{1})\right\|+M_{2}\|(a_{1},b_{1})-(a_{2},b_{2})\|+\cdots+M_{z}\|(a_{z},b_{z})-(\mathbf{x},\mathbf{u})\|\\ &\leq N\cdot M_{\text{max}}\,\left\|(\mathbf{x}_{i},\mathbf{u}_{i})-(\mathbf{x},\mathbf{u})\right\|\end{split} \tag{7}\]

where \(l_{\text{act}(i)}(a_{i},b_{i})=l_{\text{act}(i+1)}(a_{i},b_{i}),i=1,\ldots,z-1\), \(N\) is the number of segments and \(M_{\text{max}}=\max_{i}M_{i}\). Let \(L_{2}=N\cdot M_{\text{max}}\), so \(\left\|\hat{f}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)-\hat{f}(\mathbf{x},\mathbf{u})\right\|\leq L_{2}\left\|(\mathbf{x}_{i},\mathbf{u}_{i})-(\mathbf{x},\mathbf{u})\right\|\).
When Assumption 1 holds, according to Lipschitz continuity, we have

\[\begin{split}\left\|f(\mathbf{x},\mathbf{u})-\hat{f}(\mathbf{x},\mathbf{u})\right\|&=\left\|f(\mathbf{x},\mathbf{u})-f\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)+\hat{f}\left(\mathbf{x}_{i},\mathbf{u}_{i}\right)-\hat{f}(\mathbf{x},\mathbf{u})\right\|\\ &\leq L_{1}\left\|(\mathbf{x},\mathbf{u})-(\mathbf{x}_{i},\mathbf{u}_{i})\right\|+L_{2}\left\|(\mathbf{x},\mathbf{u})-(\mathbf{x}_{i},\mathbf{u}_{i})\right\|\\ &=(L_{1}+L_{2})\left\|(\mathbf{x},\mathbf{u})-(\mathbf{x}_{i},\mathbf{u}_{i})\right\|\end{split} \tag{8}\]

where the first equality uses \(f(\mathbf{x}_{i},\mathbf{u}_{i})=\hat{f}(\mathbf{x}_{i},\mathbf{u}_{i})\) from (6). If \(\left\|(\mathbf{x},\mathbf{u})-(\mathbf{x}_{i},\mathbf{u}_{i})\right\|\leq\sigma\) and \(L=L_{1}+L_{2}\), then \(\left\|f(\mathbf{x},\mathbf{u})-\hat{f}(\mathbf{x},\mathbf{u})\right\|\leq L\sigma\). Therefore, provided that the distance between \((\mathbf{x},\mathbf{u})\) and \((\mathbf{x}_{i},\mathbf{u}_{i})\) is bounded by \(\sigma\), the conclusion holds.

Fig. 2: The sample trajectory points with different linearized models.

Fig. 3: The intersection points and linear subregions.

We can change the error between the original and approximated system by adjusting the density of the linearization points, which can be achieved by changing the sampling time \(T\).

### _Trajectory tracking problem based on MPC_

The control objective of trajectory tracking is to ensure that the WMR rapidly attains the reference trajectory while satisfying the state, input and system model constraints, and to keep the increment of the control input as small as possible. Assuming the control and prediction horizons are both \(N\), the MPC problem associated with trajectory tracking can be described as follows:

\[\min_{U}\;J(k)=\sum_{k=0}^{N-1}\Big[\left(\mathbf{x}(k)-\mathbf{x}_{r}(k)\right)^{\mathrm{T}}Q\left(\mathbf{x}(k)-\mathbf{x}_{r}(k)\right)+\left(\mathbf{u}(k)-\mathbf{u}_{r}(k)\right)^{\mathrm{T}}R\left(\mathbf{u}(k)-\mathbf{u}_{r}(k)\right)\Big]\]

subject to the linearized system model (4) together with the state and input constraints.
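For illustration, the Jacobian computations behind (3)-(4) can be written down explicitly for the kinematic model (1). The following sketch is ours (it assumes a forward-Euler discretization with sampling time \(T\) and wheelbase \(l\); the function name is illustrative):

```python
import numpy as np

def linearize_wmr(x_r, u_r, T, l=1.0):
    """Return A(k), B(k), b(k) of x(k+1) = A x + B u + b, obtained by
    linearizing f(x, u) = [v cos(phi), v sin(phi), v tan(delta) / l]
    around the reference pair (x_r, u_r) and discretizing (Eqs. 3-4)."""
    x_r = np.asarray(x_r, dtype=float)   # [x, y, phi]
    u_r = np.asarray(u_r, dtype=float)   # [v, delta]
    phi, (v, delta) = x_r[2], u_r
    # continuous-time Jacobians evaluated at the reference point
    Ac = np.array([[0.0, 0.0, -v * np.sin(phi)],
                   [0.0, 0.0,  v * np.cos(phi)],
                   [0.0, 0.0,  0.0]])
    Bc = np.array([[np.cos(phi),       0.0],
                   [np.sin(phi),       0.0],
                   [np.tan(delta) / l, v / (l * np.cos(delta) ** 2)]])
    f_r = np.array([v * np.cos(phi), v * np.sin(phi), v * np.tan(delta) / l])
    # first-order forward difference
    A = np.eye(3) + T * Ac
    B = T * Bc
    b = T * (f_r - Ac @ x_r - Bc @ u_r)
    return A, B, b
```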
By defining \(\mathbf{z}=U+H^{-1}F^{T}\mathbf{x}\), we can transform problem (11) into the following standard quadratic programming form (12):
\[\begin{array}{l}V_{\mathbf{z}}(\mathbf{x})=\min_{\mathbf{z}}\frac{1}{2}\mathbf{z}^{T}H\mathbf{z}\\ \text{s.t.}\quad G\mathbf{z}\leq\omega+S\mathbf{x}\end{array} \tag{12}\]
The Karush-Kuhn-Tucker (KKT) conditions for the standard quadratic programming problem are
\[H\mathbf{z}^{*}+G_{\mathcal{A}^{*}}^{T}\lambda^{*}+G_{\mathcal{N}^{*}}^{T}\mu^{*}=0 \tag{13a}\]
\[G_{\mathcal{A}^{*}}\mathbf{z}^{*}=\omega_{\mathcal{A}^{*}}+S_{\mathcal{A}^{*}}\mathbf{x} \tag{13b}\]
\[G_{\mathcal{N}^{*}}\mathbf{z}^{*}<\omega_{\mathcal{N}^{*}}+S_{\mathcal{N}^{*}}\mathbf{x} \tag{13c}\]
\[\lambda^{*}\geq 0 \tag{13d}\]
\[\mu^{*}\geq 0 \tag{13e}\]
\[\lambda^{*T}\left(G_{\mathcal{A}^{*}}\mathbf{z}^{*}-\omega_{\mathcal{A}^{*}}-S_{\mathcal{A}^{*}}\mathbf{x}\right)=0 \tag{13f}\]
\[\mu^{*T}\left(G_{\mathcal{N}^{*}}\mathbf{z}^{*}-\omega_{\mathcal{N}^{*}}-S_{\mathcal{N}^{*}}\mathbf{x}\right)=0 \tag{13g}\]
where (13b) and (13c) are the active and inactive constraints of the optimal solution \(\mathbf{z}^{*}\), respectively. Assuming that \(G_{j},\omega_{j},S_{j}\) denote row \(j\) of \(G,\omega,S\), the sets of indices for the active and inactive constraints can be represented as follows:
\[\mathcal{A}^{*}=\{j\in\{1,\ldots,p\}\mid G_{j}\mathbf{z}^{*}=\omega_{j}+S_{j}\mathbf{x}\}\]
\[\mathcal{N}^{*}=\{j\in\{1,\ldots,p\}\mid G_{j}\mathbf{z}^{*}<\omega_{j}+S_{j}\mathbf{x}\}\]
For a fixed set of active constraints, if \(G_{\mathcal{A}^{*}}\) is full row rank, we can obtain the optimal solution and the critical region \(CR_{i}\) corresponding to the active constraint set. Given that \(\mathbf{z}=U+H^{-1}F^{T}\mathbf{x}\), we can obtain the explicit expression of the control sequence \(U\) in this region with respect to the state quantity \(\mathbf{x}\):
\[U_{i}^{*}\left(\mathbf{x}\right)=H^{-1}G_{\mathcal{A}^{*}}^{T}\left(G_{\mathcal{A}^{*}}H^{-1}G_{\mathcal{A}^{*}}^{T}\right)^{-1}\left(\omega_{\mathcal{A}^{*}}+S_{\mathcal{A}^{*}}\mathbf{x}\right)-H^{-1}\left(\mathbf{x}^{T}F+C_{f}\right)^{T} \tag{14}\]
The obtained \(U_{i}^{*}\left(\mathbf{x}\right)\) is the optimal control sequence. Only its first term \(\mathbf{u}_{i}^{*}\left(\mathbf{x}\right)\), which is also an affine function of \(\mathbf{x}\), is applied. It is defined as follows:
\[\mathbf{u}_{i}^{*}(\mathbf{x})=[I_{n_{u}},\mathbf{0}_{n_{u}\times(N-1)n_{u}}]\cdot U_{i}^{*}(\mathbf{x}), \tag{15}\]
where \(I_{n_{u}}\) is the \(n_{u}\times n_{u}\) identity matrix and \(\mathbf{0}_{n_{u}\times(N-1)n_{u}}\) is the all-zero matrix.

**Remark 1**.: _If \(G_{\mathcal{A}^{*}}\) is not full row rank, the linear independence constraint qualification is violated. Assuming that the rank of \(G_{\mathcal{A}^{*}}\) is \(r\), we can arbitrarily select \(r\) linearly independent constraints as the new active constraint set to simplify the constraints._
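For a given active set, (14) and (15) can be assembled directly from the mpQP matrices. The sketch below is our minimal NumPy illustration (the function name and argument layout are ours), assuming \(G_{\mathcal{A}^{*}}\) has full row rank as in Remark 1:

```python
import numpy as np

def explicit_control_law(H, F, Cf, G, w, S, active, n_u):
    """Affine optimal law of one critical region, following (14)-(15):
    U*(x) = K x + k0; only the first n_u rows (the first control move)
    are returned."""
    GA, wA, SA = G[active], w[active], S[active]
    Hinv = np.linalg.inv(H)
    T = Hinv @ GA.T @ np.linalg.inv(GA @ Hinv @ GA.T)  # needs full row rank
    K = T @ SA - Hinv @ F.T          # state-dependent part of (14)
    k0 = T @ wA - Hinv @ Cf.ravel()  # constant part of (14)
    return K[:n_u, :], k0[:n_u]      # first control move, cf. (15)
```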
### _Lattice PWA approximation of the optimal solution of the linear MPC problem_

For the linear MPC problem corresponding to each linearized model, the dataset consisting of the state and control law information is generated by a sampling and resampling procedure.

#### III-C1 Data sampling

In [17], sample states are generated in the domain of interest. In the current study, because the actual state remains near the reference points, the sample states are generated near the reference points \(\mathbf{x}_{1},\ldots\). Consider \(\mathbf{x}_{1}\) as an example. We define a spherical region with radius \(r\) centered on \(\mathbf{x}_{1}\) and obtain the sample dataset \(\mathcal{X}_{1}\times\mathcal{U}_{1}\) by sampling and resampling within the region. The value of \(r\) can be determined by considering the distance between the actual and reference states. Provided that the sphere \(\mathcal{B}(\mathbf{x}_{1},r)\) covers all of the possible states, the lattice PWA approximations constructed offline can yield a solution close to the optimal solution of the linear MPC problem corresponding to \(\mathbf{x}_{1}\). We first calculate the distances between all adjacent reference points; denoting the minimum such distance by \(d\), we set \(r=d/2\). Fig. 5 illustrates the sampling of the state points around linearization points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{7}\).

Fig. 5: Sampling of state points around the linearization points.

After sampling and resampling in \(\mathcal{B}(\mathbf{x}_{i},r)\), we obtain sample points \(\mathbf{x}_{i,1},\ldots,\mathbf{x}_{i,N_{i}}\) for each linearization point \(\mathbf{x}_{i}\) on the reference trajectory. Next, we calculate the corresponding affine function according to (14). The obtained sample point \((\mathbf{x}_{i,j},\mathbf{u}_{i}^{*}\left(\mathbf{x}_{i,j}\right))\) is stored in the \(i\)th dataset \(\mathcal{X}_{i}\times\mathcal{U}_{i}\).

#### III-C2 Lattice approximation

After obtaining the sample dataset, the expression of the lattice PWA approximation can be derived based on \(\mathcal{X}_{i}\times\mathcal{U}_{i}\):
\[\hat{\mathbf{u}}_{i,\text{LPWA}}(\mathbf{x})=\max_{k=1,\ldots,N_{i}}\left\{\min_{j\in J_{\geq,i,k}}\mathbf{u}_{i,j}(\mathbf{x})\right\} \tag{16}\]
where the index set \(J_{\geq,i,k}\) is defined as
\[J_{\geq,i,k}=\{j\mid\mathbf{u}_{i,j}(\mathbf{x}_{i,k})\geq\mathbf{u}_{i,k}(\mathbf{x}_{i,k})\},\]
and the affine plane \(\mathbf{u}_{i,j}(\mathbf{x})\) and \(\min_{j\in J_{\geq,i,k}}\mathbf{u}_{i,j}(\mathbf{x})\) represent a literal and a term of the lattice PWA approximation, respectively. The affine plane \(\mathbf{u}_{i,j}\) is obtained through (14) and (15), and the superscript \(*\) is omitted for simplicity. [17] demonstrates that if all of the distinct affine functions are sampled, the approximation coincides with the optimal solution of the linear MPC problem.

**Lemma 1**.: _Assume that all of the distinct affine functions are sampled. The following expression holds if the lattice PWA approximation is constructed using Equation (16):_
\[\hat{\mathbf{u}}_{i,\text{LPWA}}(\mathbf{x})=\mathbf{u}^{*}(\mathbf{x}),\forall\mathbf{x}\in\Gamma\left(\mathbf{x}_{i,k}\right),\forall\mathbf{x}_{i,k}\in\mathcal{X}_{i} \tag{17}\]
_where \(\Gamma(\mathbf{x}_{i,k})\) is the unique order region containing \(\mathbf{x}_{i,k}\), i.e., the order of the affine functions \(\mathbf{u}_{i,j}\) remains unchanged in \(\Gamma(\mathbf{x}_{i,k})\)._

In general, it is challenging to ensure the validity of the assumption in Lemma 1. Therefore, [17] proposed a resampling method to sample as many distinct affine functions as possible. It is noted that, as the sampling region around each linearization point is relatively small, it is not hard to guarantee that all of the distinct affine functions in the sample region have been sampled.
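The max-min structure of (16) makes the online evaluation elementary: every stored literal is affine, and the approximation is a maximum of terms, each a minimum over an index set. Below is a minimal sketch for one control component (vector-valued controls are handled componentwise; the array names are ours):

```python
import numpy as np

def lattice_pwa_eval(x, A, b, J_ge):
    """Evaluate (16) at state x. Row j of (A, b) stores the sampled affine
    literal u_j(x) = A[j] @ x + b[j]; J_ge[k] is the index set J_{>=,k} of
    literals that dominate literal k at the k-th sample point."""
    u = A @ x + b                                   # all literals at x
    return max(u[list(idx)].min() for idx in J_ge)  # max over terms of min over literals
```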
#### III-C3 Simplification

The lattice PWA model obtained using this method may contain redundant parameters. To reduce the memory occupied by terms and literals and the complexity of online evaluation, we use the method proposed in [14] to remove both the redundant terms and literals of the lattice PWA model and save the remaining values. The whole process for obtaining the lattice PWA approximations is shown in Algorithm 1.

```
0: Nonlinear kinematic model of the WMR and \(K\) reference trajectory points
0: \(f_{\text{LPWA}}(\mathbf{x},\mathbf{u})\) and \(\hat{\mathbf{u}}_{i,\text{LPWA}}(\mathbf{x}),i=1,\ldots,K\), as defined in (5) and (16), respectively.
1: Initialization: Prediction and control horizons, \(N\); number of state sample points, \(N_{i},i=1,\ldots,K\).
2: for \(m=1:K\) do
3:   Obtain the linearized model \(l_{m}(\mathbf{x},\mathbf{u})\) by Taylor expansion at the \(m\)th reference trajectory point
4: end for
5: Construct the lattice approximation model \(f_{\text{LPWA}}(\mathbf{x},\mathbf{u})=\max_{m=1,\ldots,K}\left\{\min_{j\in I_{\geq,m}}\left\{l_{j}\right\}\right\}\)
6: for \(i=1:K\) do
7:   for \(k=1:N_{i}\) do
8:     Sample a point \(\mathbf{x}_{i,k}\) near the \(i\)th reference trajectory point \(\mathbf{x}_{i}\) and solve the mpQP problem to obtain \(\mathbf{u}_{i,k}(\mathbf{x}_{i,k})\)
9:     Add the sample point \((\mathbf{x}_{i,k},\mathbf{u}_{i,k}\left(\mathbf{x}_{i,k}\right))\) to the dataset \(\mathcal{X}_{i}\times\mathcal{U}_{i}\)
10:  end for
11:  Construct the lattice approximation model \(\hat{\mathbf{u}}_{i,\text{LPWA}}(\mathbf{x})=\max_{k=1,\ldots,N_{i}}\left\{\min_{j\in J_{\geq,i,k}}\mathbf{u}_{i,j}(\mathbf{x})\right\}\) based on \(\mathcal{X}_{i}\times\mathcal{U}_{i}\).
12:  Simplification.
13: end for
```
**Algorithm 1** Offline calculation

#### III-C4 Online evaluation

After obtaining the lattice PWA expressions corresponding to all reference points on the reference trajectory, we calculate the control laws online. The overall process of online evaluation is shown in Algorithm 2. The control law evaluated online for the actual state value \(\mathbf{x}(k)\) at the \(k\)th time instant is \(\hat{\mathbf{u}}_{k}(\mathbf{x}(k))=\hat{\mathbf{u}}_{k,\text{LPWA}}(\mathbf{x}(k))\), obtained by substituting \(\mathbf{x}(k)\) into the corresponding \(k\)th lattice PWA expression. Subsequently, the actual state value \(\mathbf{x}(k+1)\) that the robot reaches when driven by this control input can be determined using the system kinematic equation (1). In this manner, the WMR trajectory can be tracked online.
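A minimal sketch of this online stage follows; the controller table is an assumption about how the simplified expressions are stored, and a kinematic bicycle model with wheelbase \(l\) stands in for the WMR kinematics (1):

```python
import numpy as np

def step(x, u, T=0.1, l=0.1):
    """One step of a discretised kinematic bicycle model; u = (v, delta)."""
    px, py, th = x
    v, delta = u
    return np.array([px + T * v * np.cos(th),
                     py + T * v * np.sin(th),
                     th + T * v * np.tan(delta) / l])

def track(x0, controllers):
    """controllers[k] is the simplified lattice PWA law stored offline for
    the k-th reference point; the online work per step is a table look-up
    plus the max-min evaluation of (16) -- no optimisation is solved."""
    x, traj = np.asarray(x0, float), [np.asarray(x0, float)]
    for ctrl in controllers:
        x = step(x, ctrl(x))
        traj.append(x)
    return np.array(traj)
```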
## IV Experimental Results

Equation (1) is used as the kinematic model of the WMR with the following parameters: vehicle wheelbase \(l=0.1m\), prediction and control horizons \(N=10\), and discrete sampling period \(T=0.1s\). In general, trajectory tracking requires high accuracy for the horizontal and vertical coordinates of the mobile robot and lower accuracy for its heading angle. Therefore, the weight matrices of the objective function (10e) are selected as follows:
\[Q=\left[\begin{array}{ccc}10&0&0\\ 0&10&0\\ 0&0&0.5\end{array}\right],\quad R=\left[\begin{array}{cc}0.1&0\\ 0&0.1\end{array}\right] \tag{18}\]
For a fair comparison, identical parameter settings are used for the nonlinear MPC, linear MPC and traditional explicit linear MPC.

### _Circular reference trajectory_

We consider a circular reference trajectory and perform a trajectory tracking simulation experiment using the WMR. The initial state of the robot is \(\left[\begin{array}{ccc}1.9&0&1.57\end{array}\right]^{T}\). Considering the actual situation of the WMR, the state and control quantity constraints are set as follows:
\[\left[\begin{array}{c}-3.0\\ -3.0\\ -3\pi\end{array}\right]\leq\mathbf{x}\leq\left[\begin{array}{c}3.0\\ 3.0\\ 3\pi\end{array}\right],\quad\left[\begin{array}{c}-2\\ -\frac{\pi}{2}\end{array}\right]\leq\mathbf{u}\leq\left[\begin{array}{c}2\\ \frac{\pi}{2}\end{array}\right] \tag{19}\]
Next, we linearize the kinematic model at each reference point and obtain 360 linearized models. Finally, for each explicit linear MPC problem, we sample 300 points near the reference point. The tracking performance of the three methods is evaluated in terms of the online computing time, offline computing time and average tracking error. Fig. 6 shows that all of the methods can track the reference trajectory. The tracking data are summarized in Table I. Compared with the offline and online calculation times of the explicit linear MPC algorithm, those of the lattice PWA approximation method are reduced to 2.009% and 0.238%, respectively. Moreover, the average tracking error of the lattice PWA approximation method is not significantly higher than those of the other methods.

Fig. 6: Comparison of tracking performances for a circular reference trajectory

### _8-shaped trajectory_

The reference trajectory is set to be in the form of the digit 8 to demonstrate the superiority of our framework. The initial state of the robot is \(\left[\begin{array}{ccc}0.25&0&1.3\end{array}\right]^{T}\). Considering the actual situation of WMRs, the state and control quantity constraints are set as
\[\left[\begin{array}{c}-2.5\\ -1.5\\ -2\pi\end{array}\right]\leq\mathbf{x}\leq\left[\begin{array}{c}2.5\\ 1.5\\ 2\pi\end{array}\right],\quad\left[\begin{array}{c}-2\\ -\frac{\pi}{2}\end{array}\right]\leq\mathbf{u}\leq\left[\begin{array}{c}2\\ \frac{\pi}{2}\end{array}\right] \tag{20}\]
We obtain 252 linearized models for this trajectory and compare the tracking performances of the three methods. Table II shows that, compared with the offline and online calculation times of the explicit linear MPC problem, those of the lattice PWA approximation method are reduced to 1.944% and 0.247%, respectively, and the average tracking error is not significantly higher. The results are consistent with those for the circular trajectory and demonstrate that the tracking performance of our method is superior to that of existing methods.

Fig. 7: Comparison of tracking performances on an 8-shaped trajectory

## V Conclusion

This paper proposes a lattice PWA approximation method based on explicit linear MPC to effectively track the trajectories of fast nonlinear robots. First, we successively linearize the kinematic model of the WMR along its trajectory to simplify the trajectory tracking calculations, and we bound the global approximation error between the original system and its lattice PWA model. Second, for the explicit linear MPC problem corresponding to each linearization point, we construct lattice PWA approximations of the explicit control laws offline, decreasing the offline calculation time. In addition, the lattice PWA approximation is further simplified to reduce the online complexity and memory requirement. Finally, in the online tracking stage, the control input is calculated by substituting the actual state into the corresponding lattice PWA expression to reduce the online search time.
The simulation and experimental results show that, compared with explicit linear MPC, our method achieves superior trajectory-tracking performance, with a higher online computing speed and reduced offline computing time and memory consumption.
2303.08476
Bayesian Learning for the Robust Verification of Autonomous Robots
Autonomous robots used in infrastructure inspection, space exploration and other critical missions operate in highly dynamic environments. As such, they must continually verify their ability to complete the tasks associated with these missions safely and effectively. Here we present a Bayesian learning framework that enables this runtime verification of autonomous robots. The framework uses prior knowledge and observations of the verified robot to learn expected ranges for the occurrence rates of regular and singular (e.g., catastrophic failure) events. Interval continuous-time Markov models defined using these ranges are then analysed to obtain expected intervals of variation for system properties such as mission duration and success probability. We apply the framework to an autonomous robotic mission for underwater infrastructure inspection and repair. The formal proofs and experiments presented in the paper show that our framework produces results that reflect the uncertainty intrinsic to many real-world systems, enabling the robust verification of their quantitative properties under parametric uncertainty.
Xingyu Zhao, Simos Gerasimou, Radu Calinescu, Calum Imrie, Valentin Robu, David Flynn
2023-03-15T09:29:27Z
http://arxiv.org/abs/2303.08476v2
# Bayesian Learning for the Robust Verification of Autonomous Robots ###### Abstract We develop a novel Bayesian learning framework that enables the runtime verification of autonomous robots performing critical missions in uncertain environments. Our framework exploits prior knowledge and observations of the verified robotic system to learn expected ranges of values for the occurrence rates of its events. We support both events observed regularly during system operation, and singular events such as catastrophic failures or the completion of difficult one-of tasks. Furthermore, we use the learnt event-rate ranges to assemble interval continuous-time Markov models, and we apply quantitative verification to these models to compute expected intervals of variation for key system properties. These intervals reflect the uncertainty intrinsic to many real-world systems, enabling the robust verification of their quantitative properties under parametric uncertainty. We apply the proposed framework to the case study of verification of an autonomous robotic mission for underwater infrastructure inspection and repair. Mobile robots are increasingly used to perform critical missions in extreme environments, which are inaccessible or hazardous to humans.[40, 41, 44, 45] These missions range from the inspection and maintenance of offshore wind-turbine mooring chains and high-voltage cables to nuclear reactor repair and deep-space exploration.[42, 43] Using robots for such missions poses major challenges.[38, 33] First and foremost, the robots need to operate with high levels of autonomy, as in these harsh environments their interaction and communication with human operators is severely restricted. Additionally, they frequently need to make complex mission-critical decisions, with errors endangering not just the robot--itself an expensive asset, but also the important system or environment being inspected, repaired or explored. Last but not least, they need to cope with the considerable uncertainty associated with these missions, which often comprise one-off tasks or are carried out in settings not encountered before. Addressing these major challenges is the focus of intense research worldwide. In the UK alone, a recent $24.5M research programme has tackled technical and certification challenges associated with the use of robotics and AI in the extreme environments encountered in offshore energy ([https://orcahub.org](https://orcahub.org)), space exploration ([https://www.fairspacehub.org](https://www.fairspacehub.org)), nuclear infrastructure ([https://rainhub.org.uk](https://rainhub.org.uk)), and management of nuclear waste ([https://www.ncnr.org.uk](https://www.ncnr.org.uk)). This research has initiated a step change in the assurance and certification of autonomous robots--not least through the emergence of new concepts such as _dynamic assurance_[13] and _self-certification_[37] for robotic systems. Dynamic assurance requires a robot to respond to failures, environmental changes and other disruptions not only by reconfiguring accordingly,[10] but also by producing new assurance evidence which guarantees that the reconfigured robot will continue to achieve its mission goals.[13] Self-certifying robots must continually verify their health and ability to complete missions in dynamic, risk-prone environments.[37] In line with the "defence in depth" safety engineering paradigm,[23] this runtime verification has to be performed independently of the front-end planning and control engine of the robot. 
Despite these advances, current dynamic assurance and self-certification methods rely on quantitative verification techniques (e.g., probabilistic[26, 29] and statistical[85] model checking) that do not handle well the parametric uncertainty that autonomous robots encounter in extreme environments. Indeed, quantitative verification operates with stochastic models that demand single-point estimates of uncertain parameters such as task execution and failure rates. These estimates capture neither epistemic nor aleatory parametric uncertainty. As such, they are affected by arbitrary estimation errors which--because stochastic models are often nonlinear--can be amplified in the verification process,[9] and may lead to invalid robot reconfiguration decisions, dynamic assurance and self-certification. In this paper, we present a robust quantitative verification framework that employs novel Bayesian learning techniques to overcome this limitation. Our framework requires only partial and limited prior knowledge about the verified robotic system, and exploits its runtime observations (or lack thereof) to learn ranges of values for the system parameters. These parameter ranges are then used to compute the quantitative properties that underpin the robot's decision making (e.g., probability of mission success, and expected energy usage) as intervals that--unique to our framework--capture the parametric uncertainty of the mission.

We start by introducing our robust verification framework, which comprises Bayesian techniques for learning the occurrence rates of both singular events (e.g., catastrophic failures and completion of one-off tasks) and events observed regularly during system operation. Next, we describe the use of the framework for an offshore wind-turbine inspection and maintenance robotic mission. Finally, we discuss the framework in the context of related work, and we suggest directions for further research.

## 1 Robust Bayesian verification framework

### Quantitative verification.

Quantitative verification is a mathematically based technique for analysing the correctness, reliability, performance and other key properties of systems with stochastic behaviour.[3, 31] The technique captures this behaviour into _Markov models_, formalises the properties of interest as _probabilistic temporal logic_ formulae over these models, and employs efficient algorithms for their analysis. Examples of such properties include the probability of mission failure for an autonomous robot, and the expected battery energy required to complete a robotic mission. In this paper, we focus on the quantitative verification of _continuous-time Markov chains_ (CTMCs). CTMCs are Markov models comprising (i) a finite set of _states_ corresponding to real-world states of the system that are relevant for the analysed properties; and (ii) the _rates of transition_ between these states. Efficient quantitative verification algorithms for CTMCs are available, and are implemented by widely used probabilistic model checkers such as PRISM[30] and Storm.[15] While the transition rates of the CTMCs verified in this way must be known and constant, recent advances in quantitative verification[7] support the analysis of CTMCs whose transition rates are _intervals_. Here we introduce a Bayesian framework for computing these intervals in ways that reflect the parametric uncertainty of real-world systems such as autonomous robots.

### Bayesian learning of CTMC transition rates.
Given two states \(s_{i}\) and \(s_{j}\) of a CTMC such that transitions from \(s_{i}\) to \(s_{j}\) are possible and occur with rate \(\lambda\), each transition from \(s_{i}\) to \(s_{j}\) is independent of how state \(s_{i}\) was reached (the Markov property). Furthermore, the time spent in state \(s_{i}\) before a transition to \(s_{j}\) is modelled by a homogeneous Poisson process of rate \(\lambda\). Accordingly, the likelihood that 'data' collected by observing the CTMC shows \(n\) such transitions occurring within a combined time \(t\) spent in state \(s_{i}\) is given by the conditional probability:
\[l(\lambda)=Pr(\text{data}\mid\lambda)=\frac{(\lambda t)^{n}}{n!}e^{-\lambda t} \tag{1}\]
In practice, the rate \(\lambda\) is typically unknown, but prior beliefs about its value are available (e.g., from domain experts or from past missions performed by the system modelled by the CTMC) in the form of a probability (density or mass) function \(f(\lambda)\). In this common scenario, the Bayes Theorem can be used to derive a _posterior probability function_ that combines the likelihood \(l(\lambda)\) and the prior \(f(\lambda)\) into a better estimate for \(\lambda\) at time \(t\):
\[f(\lambda\mid\text{data})=\frac{l(\lambda)f(\lambda)}{\int_{0}^{\infty}l(\lambda)f(\lambda)\,\mathrm{d}\lambda} \tag{2}\]
where the Lebesgue-Stieltjes integral from the denominator is introduced to ensure that \(f(\lambda\mid\text{data})\) is a probability function. We calculate the posterior estimate for the rate \(\lambda\) at time \(t\) as the expectation of (2):
\[\lambda^{(t)}=\mathbb{E}[\Lambda\mid\text{data}]=\frac{\int_{0}^{\infty}\lambda\,l(\lambda)f(\lambda)\,\mathrm{d}\lambda}{\int_{0}^{\infty}l(\lambda)f(\lambda)\,\mathrm{d}\lambda}\,, \tag{3}\]
where we use capital letters for random variables and lower case for their realizations.
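As an aside, the expectation (3) has a closed form when \(f(\lambda)\) is a Gamma density, because the Gamma family is conjugate to the Poisson likelihood (1): a Gamma(\(\alpha,\beta\)) prior yields the posterior mean \((\alpha+n)/(\beta+t)\). The sketch below is our illustration of (3), not part of the framework; it checks the closed form against direct numerical integration:

```python
from math import factorial
import numpy as np
from scipy import integrate, stats

def posterior_rate(n, t, prior_pdf, lam_max=50.0):
    """Posterior mean (3) for the Poisson likelihood (1) and a prior density."""
    lik = lambda lam: (lam * t) ** n / factorial(n) * np.exp(-lam * t)
    num, _ = integrate.quad(lambda lam: lam * lik(lam) * prior_pdf(lam), 0, lam_max)
    den, _ = integrate.quad(lambda lam: lik(lam) * prior_pdf(lam), 0, lam_max)
    return num / den

# Gamma(alpha, beta) prior: the posterior mean must equal (alpha + n) / (beta + t)
alpha, beta, n, t = 2.0, 1.0, 5, 2.0
gamma_pdf = lambda lam: stats.gamma.pdf(lam, a=alpha, scale=1.0 / beta)
assert abs(posterior_rate(n, t, gamma_pdf) - (alpha + n) / (beta + t)) < 1e-4
```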
### Interval Bayesian inference for singular events.

In the autonomous-robot missions considered in our paper, certain events are extremely rare, and treated as _unique_ from a modelling viewpoint. These events include major failures (after each of which the system is modified to remove or mitigate the cause of the failure), and the successful completion of difficult one-off tasks. Using Bayesian inference to estimate the CTMC transition rates associated with such events is challenging because, with no observations of these events, the posterior estimate is highly sensitive to the choice of a suitable prior distribution. Furthermore, only limited domain knowledge is often available to select and justify a prior distribution for these singular events. To address this challenge, we develop a _Bayesian inference using partial priors_ (BIPP) estimator that requires only _limited, partial prior knowledge_ instead of the complete prior distribution typically needed for Bayesian inference. For one-off events, such knowledge is both more likely to be available and easier to justify. BIPP provides bounded posterior estimates that are robust in the sense that the ground truth rate values are within the estimated intervals.

To derive the BIPP estimator, we note that for one-off events the likelihood (1) becomes
\[l(\lambda)=Pr(\text{data}\mid\lambda)=e^{-\lambda t} \tag{4}\]
because \(n=0\). Instead of a prior distribution \(f(\lambda)\) (required to compute the posterior expectation (3)), we assume that we only have limited partial knowledge consisting of \(m\geq 2\) confidence bounds on \(f(\lambda)\):
\[Pr(\epsilon_{i-1}<\lambda\leq\epsilon_{i})=\theta_{i} \tag{5}\]
where \(1\leq i\leq m\), \(\theta_{i}>0\), and \(\sum_{i=1}^{m}\theta_{i}=1\). The use of such bounds is a common practice for safety-critical systems: as an example, the IEC 61508 safety standard[62] defines safety _integrity_ levels (SILs) for the critical functions of a system based on the bounds for their probability of failure on demand. We note that \(Pr(\lambda\geq\epsilon_{0})=Pr(\lambda\leq\epsilon_{m})=1\) and that, when no specific information is available, we can use \(\epsilon_{0}=0\) and \(\epsilon_{m}=+\infty\).

Given the partial prior knowledge (5), the BIPP estimator bounds the posterior estimate (3) over _all_ prior distributions consistent with (5).

**Theorem 1**.: _The set \(\mathcal{S}_{\lambda}\) of posterior estimates (3) induced by the likelihood (4) and the prior distributions \(f(\lambda)\) that satisfy the constraints (5) has the infimum_
\[\lambda_{l}=\min\left\{\frac{\sum_{i=1}^{m}\left[x_{i}\epsilon_{i-1}l(\epsilon_{i-1})+(1-x_{i})\epsilon_{i}l(\epsilon_{i})\right]\theta_{i}}{\sum_{i=1}^{m}\left[x_{i}l(\epsilon_{i-1})+(1-x_{i})l(\epsilon_{i})\right]\theta_{i}}\;\middle|\;x_{1},\ldots,x_{m}\in[0,1]\right\} \tag{6}\]
_and the supremum_
\[\lambda_{u}=\max\left\{\frac{\sum_{i=1}^{m}\lambda_{i}l(\lambda_{i})\theta_{i}}{\sum_{i=1}^{m}l(\lambda_{i})\theta_{i}}\;\middle|\;(\lambda_{1},\ldots,\lambda_{m})\in(\epsilon_{0},\epsilon_{1}]\times\cdots\times(\epsilon_{m-1},\epsilon_{m}]\right\} \tag{7}\]

Theorem 1 is proved in the Methods section. Corollaries 1 and 2 (proved in the supplementary material) instantiate the bounds (6) and (7) as closed-form, piecewise expressions (8) and (9), whose active formulae change as the failure-free operation time \(t\) increases.
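Both bounds can also be approximated numerically. The sketch below is a minimal grid search over the optimisation domains of (6) and (7); it assumes finite bounds \(\epsilon_{0}<\cdots<\epsilon_{m}\) and scales exponentially with \(m\), which is acceptable for the small \(m\) arising in practice. The function name and grid resolution are our own choices.

```python
import numpy as np
from itertools import product

def bipp_bounds(eps, theta, t, grid=60):
    """Grid-search approximation of the BIPP bounds (6)-(7), with the
    singular-event likelihood l(lam) = exp(-lam * t) of (4).
    eps = [eps_0, ..., eps_m] (finite), theta = [theta_1, ..., theta_m]."""
    eps, theta = np.asarray(eps, float), np.asarray(theta, float)
    m = len(theta)
    l = lambda lam: np.exp(-lam * t)

    # Upper bound (7): maximise the ratio over lam_i in (eps[i-1], eps[i]].
    grids = [np.linspace(eps[i], eps[i + 1], grid + 1)[1:] for i in range(m)]
    def ratio(lams):
        w = l(lams) * theta
        return float((lams * w).sum() / w.sum())
    lam_u = max(ratio(np.array(p)) for p in product(*grids))

    # Lower bound (6): prior mass on the endpoints, with weights x_i in [0, 1].
    lo, hi = eps[:-1], eps[1:]
    def ratio_lo(x):
        num = ((x * lo * l(lo) + (1 - x) * hi * l(hi)) * theta).sum()
        den = ((x * l(lo) + (1 - x) * l(hi)) * theta).sum()
        return float(num / den)
    xs = np.linspace(0.0, 1.0, grid)
    lam_l = min(ratio_lo(np.array(p)) for p in product(xs, repeat=m))
    return lam_l, lam_u
```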
### Interval Bayesian inference for regular events.

For events observed regularly during system operation, our IPSP estimator starts from partial prior knowledge given as an interval \([\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\) for the prior rate estimate \(\lambda^{(0)}\) and an interval \([\underline{t}^{(0)},\overline{t}^{(0)}]\) for a pseudo-observation time \(t^{(0)}\) that encodes the level of trust placed in this prior knowledge. Given \(n\) occurrences of the event over an operation time \(t\), the posterior rate estimate associated with a prior \((\lambda^{(0)},t^{(0)})\) is
\[\lambda^{(t)}=\frac{t^{(0)}\lambda^{(0)}+n}{t^{(0)}+t} \tag{12}\]
and its extrema over the sets of priors are given below.

**Theorem 2**.: _For all \(\lambda^{(0)}\in[\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\) and \(t^{(0)}\in[\underline{t}^{(0)},\overline{t}^{(0)}]\), the posterior rate estimate (12) satisfies \(\underline{\lambda}^{(t)}\leq\lambda^{(t)}\leq\overline{\lambda}^{(t)}\), where_
\[\underline{\lambda}^{(t)}=\begin{cases}\dfrac{\overline{t}^{(0)}\underline{\lambda}^{(0)}+n}{\overline{t}^{(0)}+t},&\text{if }\underline{\lambda}^{(0)}\leq\frac{n}{t}\\[2mm] \dfrac{\underline{t}^{(0)}\underline{\lambda}^{(0)}+n}{\underline{t}^{(0)}+t},&\text{otherwise}\end{cases} \tag{13}\]
\[\overline{\lambda}^{(t)}=\begin{cases}\dfrac{\underline{t}^{(0)}\overline{\lambda}^{(0)}+n}{\underline{t}^{(0)}+t},&\text{if }\overline{\lambda}^{(0)}\leq\frac{n}{t}\\[2mm] \dfrac{\overline{t}^{(0)}\overline{\lambda}^{(0)}+n}{\overline{t}^{(0)}+t},&\text{otherwise}\end{cases} \tag{14}\]

The BIPP and IPSP rate intervals define interval CTMCs, which our framework analyses using an interval CTMC model checker, e.g., the PRISM-PSY Model Checker[14], to compute value intervals for key system properties. As shown in Fig. 1 and illustrated in the next section, these properties range from dependability (e.g., safety, reliability and availability)[1] and performance (e.g., response time and throughput) properties to resource use and system utility. Finally, changes in the value ranges of these properties may prompt the dynamic reconfiguration of the system by a Controller module responsible for ensuring that the system requirements are satisfied at all times.

## 2 Robust verification of robotic mission

### Offshore infrastructure maintenance.

We demonstrate how our online robust verification and reconfiguration framework can support an autonomous underwater vehicle (AUV) to execute a structural health inspection and cleaning mission of the substructure of an offshore wind farm. Similar scenarios for AUV use in remote, subsea environments have been described in other large-scale robotic demonstration projects, such as the PANDORA EU FP7 project[34]. Compared to remotely operated vehicles that must be tethered to expensive oceanographic surface vessels run by specialised personnel, AUVs bring significant advantages, including reduced environmental footprint (since no surface vessel consuming fuel is needed), reduced cognitive fatigue for the involved personnel, increased frequency of mission execution, and reduced operational and maintenance cost.

The offshore wind farm comprises multiple floating wind turbines, with each turbine being a buoyant foundation structure secured to the sea bed with floating chains tethered to anchors weighing several tons. Wind farms with floating wind turbines offer increased wind exploitation (since they can be installed in deeper waters where winds are stronger and more consistent), reduced installation costs (since there is no need to build solid foundations), and reduced visual impact and disturbance to marine life (since they are further from the shore)[36].

The AUV is deployed to collect data about the condition of \(k\geq 1\) floating chains to enable the post-mission identification of problems that could affect the structural integrity of the asset (floating chain). When the visual inspection of a chain is hindered due to accumulated biofouling or marine growth, the AUV can use its on-board high-pressure water jet to clean the chain and continue with the inspection task[34]. The high degrees of _aleatoric uncertainty_ in navigation and the perception of the marine environment entail that the AUV might fail to clean a chain. This uncertainty originates from the dynamic conditions of the underwater medium, which include unexpected water perturbations coupled with difficulties in scene understanding due to reduced visibility and the need to operate close to the floating chains. When this occurs, the AUV can retry the cleaning task or skip the chain and move to the next.

### Stochastic mission modelling.

Fig. 2 shows the parametric CTMC model of the floating chain inspection and cleaning mission. The AUV inspects the \(i\)-th chain with rate \(r^{\mathrm{inspect}}\) and consumes energy \(e_{i}\).
The chain is clean with probability \(p\), in which case the AUV travels to the next chain with rate \(r^{\mathrm{travel}}\) consuming energy \(e_{t}\); otherwise, the chain needs cleaning with probability \(1-p\). When the AUV attempts the cleaning (\(x_{i}=1\)), the task succeeds with chain-dependent rate \(r^{\mathrm{clean}}_{i}\), causes catastrophic damage to the floating chain or to the AUV itself with rate \(r^{\mathrm{damage}}\), or fails with rate \(r^{\mathrm{fail}}\). If the cleaning fails, the AUV prepares to retry with known and fixed rate \(r^{\mathrm{prepare}}\), requiring energy \(e_{p}\), and it either retries the cleaning (\(x_{i}=1\)) or skips the current chain and moves to chain \(i+1\) (\(x_{i}=0\)). After executing the tasks on the \(k\)-th chain, the AUV returns to its base and completes the mission. When cleaning is needed for the \(i\)-th chain, the AUV controller synthesises a plan by determining the control parameter \(x_{i}\in\{0,1\}\) for all remaining chains \(i,i+1,\ldots,k\) so that the system requirements in Table 1 are satisfied.

### Results

Figure 1: The integration of BIPP and IPSP Bayesian inference with interval CTMC model checking supports the online robust quantitative verification and reconfiguration of autonomous systems under parametric uncertainty.

We demonstrate our solution for robust verification and adaptation using a mission in which the AUV was deployed to inspect and, if needed, clean six chains placed in a hexagonal arrangement (Fig. 3). During this mission, the AUV performs the model checking at every cleaning attempt so that runtime observations are incorporated into the decision making process, entailing also that the currently synthesised plan is not necessarily used at subsequent chains. Hence, the AUV only needs to check system configurations where at least the current chain is to be cleaned, thus halving the number of configurations to be checked (since configurations with \(x_{i}=0\) need not be checked). If all of these checks that consider \(x_{i}=1\) fail to satisfy the requirements from Table 1, then the AUV decides to skip the current chain and proceed to inspect and clean the next chain.

If a cleaning attempt at chain \(i\) failed, the AUV integrates this observation in (13) and (14), and performs model checking to determine whether to retry the cleaning or skip the chain. Since the AUV has consumed energy for the failed cleaning attempt, the energy available is reduced accordingly, which in turn can reduce the number of possible system configurations that can be employed and need checking. The observation of a failed attempt reduces the lower bound for the reliability of cleaning chain \(i\), and may result in a violation of the reliability requirement R1 (Table 1), which may further reduce the number of feasible configurations. If the AUV fails to clean chain \(i\) repeatedly, this lower bound will continue to decrease, potentially resulting in the AUV having no feasible configuration, and having to skip the current chain. Although skipping a chain overall decreases the risk of a catastrophic failure (as the number of cleaning attempts is reduced), leaving uncleaned chains will incur additional cost, as a new inspection mission will need to be launched, e.g., using another AUV or human personnel. Fig. 3 displays the AUV performing one instance of the inspection and cleaning mission, with details of the probabilistic model checking carried out during the inspection and cleaning of chain 3.
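The reconfiguration step performed at each chain is a small exhaustive search over the remaining control parameters. Below is a minimal sketch (all names are our own; the feasible predicate stands in for the interval model-checking of requirements R1 and R2):

```python
from itertools import product

def select_configuration(i, k, feasible):
    """Choose x_i, ..., x_k for chains i..k: fix x_i = 1, keep only the
    configurations whose verified property intervals satisfy R1 and R2
    (the feasible predicate), and maximise the cleaned chains (R3).
    Returns None when no configuration is feasible, i.e. skip chain i.
    Ties could additionally be broken at random, as in Fig. 3."""
    best = None
    for tail in product((0, 1), repeat=k - i):
        x = (1,) + tail          # at least the current chain is cleaned
        if feasible(x) and (best is None or sum(x) > sum(best)):
            best = x
    return best
```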
The simulator used for the AUV mission, developed on top of the open-source MOOS-IvP middleware,[4] and a video showing the execution of this AUV mission instance are available at [http://github.com/gerasimou/RBV](http://github.com/gerasimou/RBV).

## 3 Discussion

Unlike single-point estimators of Markov model parameters,[12, 16, 18, 19] our Bayesian framework provides interval estimates that capture the inherent uncertainty of these parameters, enabling the robust quantitative verification of systems such as autonomous robots. Through its ability to exploit prior knowledge, the framework differs fundamentally from, and is superior to, a recently introduced approach to synthesising intervals for unknown transition parameters based on the frequentist theory of simultaneous confidence intervals.[9, 11] Furthermore, instead of applying the same estimator to all Markov-model transition parameters like existing approaches, our framework is the first to handle parameters corresponding to singular and regular events differently. This is an essential distinction, especially for the former type of parameter, for which the absence of observations violates a key premise of existing estimators. Our BIPP estimator avoids this invalid premise, and computes two-sided bounded estimates for singular CTMC transition rates--a significant extension of our preliminary work to devise one-sided bounded estimates for the singular transition probabilities of discrete-time Markov chains.[49] We also provide a comparison of the proposed estimator with the existing estimators.

\begin{table} \begin{tabular}{l l l} \hline \hline **ID** & **Informal description** & **Formal specification** \\ \hline **R1** & The probability of mission failure must not exceed 5\% & \(P_{\leq 0.05}[F\ \text{damage}]\) \\ **R2** & The expected energy consumption must not exceed the remaining energy \(E_{\mathrm{left}}\) & \(R^{\mathrm{energy}}_{\leq E_{\mathrm{left}}}[F\ \text{finish}]\) \\ **R3** & Subject to R1 and R2 being met, maximise the number of cleaned chains & find _argmax_ \(\sum_{i=1}^{n}x_{i}\) such that \(R1\wedge R2\) \\ \hline \hline \end{tabular} \end{table} Table 1: System requirements for the AUV floating chain inspection and cleaning mission

Figure 2: CTMC of the floating chain cleaning and inspection mission, where \(e_{i}=x_{i}e_{c}+(1-x_{i})e_{t}\) for \(i=1,2,\ldots,n\)

Figure 3: We simulated an AUV mission involving the inspection of six wind farm chains and, if required, their cleaning (top). At each chain that requires cleaning, the AUV decides whether to attempt to clean or skip the current chain. The middle screenshot from the simulation timeline shows the AUV at the third chain; with four chains remaining, there will be \(2^{3}=8\) system configurations to consider, corresponding to \(x_{3}=1\) and \(x_{4},x_{5},x_{6}\in\{0,1\}\). The plots below the mission simulation timeline show the outcome of the model checking carried out by the AUV at this chain. The AUV only succeeded to clean this chain at the third attempt, and the results of the model checking analyses for these attempts are shown in successive columns, with each row depicting the analysis of one of the requirements from Table 1. A system configuration is feasible if it satisfies requirements **R1**--the AUV will not encounter a catastrophic failure, with probability of at least 0.95 (row 1)--and **R2**--the energy consumption does not exceed what the AUV has remaining (row 2). If multiple feasible configurations exist, then the winner is the configuration that maximises the number of chains cleaned (requirement **R3**, row 3).
If there is still a tie, the configuration is chosen randomly from those that clean the most chains. In the AUV's first attempt at chain 3, all the configurations are feasible, so configuration 1 (highlighted, and corresponding to the highest number of chains cleaned) is selected. This attempt fails and a second assessment is made. This time, only system configurations 2-8 are feasible, and as configurations 2, 3, and 5 maximise **R3**, a configuration is chosen randomly from this subset (in this case configuration 3). This attempt also fails, and on the third attempt only configurations 4-8 are feasible, with 5 maximising **R3**, and the AUV adopts this configuration and succeeds in cleaning the chain.

The proposed Bayesian framework is underpinned by the theoretical foundations of imprecise probabilities[46, 47] and Conservative Bayesian Inference (CBI),[50, 39, 51] integrated with recent advances in the verification of interval CTMCs.[14] In particular, our BIPP theorems for singular events extend CBI significantly in several ways. First, BIPP operates in the continuous domain for a Poisson process, while previous CBI theorems are applicable to Bernoulli processes in the discrete domain. As such, BIPP enables the runtime quantitative verification of interval CTMCs, and thus the analysis of important properties that are not captured by discrete-time Markov models. Second, CBI is one-sided (upper) bounded, and therefore only supports the analysis of undesirable singular events (e.g., catastrophic failures). In contrast, BIPP provides two-sided bounded estimates, therefore also enabling the analysis of "positive" singular events (e.g., the completion of difficult one-off tasks). Finally, BIPP can operate with any _arbitrary_ number of confidence bounds as priors, which greatly increases the flexibility of exploiting different types of prior knowledge.

As illustrated by its application to an AUV infrastructure maintenance mission, our robust quantitative verification framework removes the need for precise prior beliefs, which are typically unavailable in many real-world verification tasks that require Bayesian inference. Instead, the framework enables the exploitation of Bayesian combinations of partial or imperfect prior knowledge, which it uses to derive informed estimation errors (i.e., intervals) for the predicted model parameters. Combined with existing techniques for obtaining this prior knowledge, e.g., the Delphi method and its variants[24] or reference class forecasting,[20] the framework increases the trustworthiness of Bayesian inference in highly uncertain scenarios such as those encountered in the verification of autonomous robots.

## 4 Methods

### Quantitative verification of CTMCs.

CTMCs are formal models for continuous-time stochastic processes over countable state spaces.
We use the following definition adapted from the probabilistic model checking literature.[3, 31]

**Definition 1**.: _A continuous-time Markov chain is a tuple_
\[\mathcal{M}=(S,s_{0},\mathbf{R}), \tag{15}\]
_where \(S\) is a finite set of states, \(s_{0}\in S\) is the initial state, and \(\mathbf{R}:S\times S\to\mathbb{R}_{\geq 0}\) is a transition rate matrix such that the probability that the CTMC will leave state \(s_{i}\in S\) within \(t>0\) time units is \(1-e^{-t\sum_{s_{k}\in S\setminus\{s_{i}\}}\mathbf{R}(s_{i},s_{k})}\), and the probability that the new state is \(s_{j}\in S\setminus\{s_{i}\}\) is \(p_{ij}=\mathbf{R}(s_{i},s_{j})/\sum_{s_{k}\in S\setminus\{s_{i}\}}\mathbf{R}(s_{i},s_{k})\)._

The range of properties that can be verified using CTMCs can be extended by annotating the states and transitions with non-negative quantities called _rewards_.

**Definition 2**.: _A reward structure over a CTMC \(\mathcal{M}=(S,s_{0},\mathbf{R})\) is a pair of functions \((\rho,\epsilon)\) such that \(\rho:S\to\mathbb{R}_{\geq 0}\) is a state reward function (a vector), and \(\epsilon:S\times S\to\mathbb{R}_{\geq 0}\) is a transition reward function (a matrix)._

CTMCs support the verification of quantitative properties expressed in continuous stochastic logic (CSL)[2] extended with rewards.[31]

**Definition 3**.: _Given a set of atomic propositions \(AP\), \(a\in AP\), \(p\in[0,1]\), \(I\subseteq\mathbb{R}_{\geq 0}\), \(r,t\in\mathbb{R}_{\geq 0}\) and \(\bowtie\in\{<,\leq,\geq,>\}\), CSL state formulae \(\Phi\) and path formulae \(\phi\) are defined by the grammar:_
\[\begin{split}\Phi&::=true\mid a\mid\neg\Phi\mid\Phi\wedge\Phi\mid P_{\bowtie p}[\phi]\mid S_{\bowtie p}[\Phi]\mid R_{\bowtie r}[I^{=t}]\mid R_{\bowtie r}[C^{\leq t}]\mid R_{\bowtie r}[F\,\Phi]\mid R_{\bowtie r}[S]\\ \phi&::=X\,\Phi\mid\Phi\,U^{I}\,\Phi\end{split}\]

### BIPP estimator proofs.

**Lemma 1**.: _The function \(g:(0,\infty)\to\mathbb{R}\) defined by \(g(w)=w\cdot l^{-1}(w)\), where \(l\) is the likelihood function (4), is concave._

Proof.: Since \(l(\lambda)=e^{-\lambda t}\), we have \(l^{-1}(w)=-\frac{\ln w}{t}\), so the first derivative of \(g\) is \(-\frac{\ln w}{t}-\frac{1}{t}\) and the second derivative of \(g\) satisfies
\[\frac{d^{2}g}{dw^{2}}=\frac{d}{dw}\left[-\frac{\ln w}{t}-\frac{1}{t}\right]=-\frac{1}{wt}<0. \tag{16}\]
Thus, \(g(w)\) is concave.
**Proposition 1**.: _With the notation from Theorem 1, there exist \(m\) values \(\lambda_{1}\in(\epsilon_{0},\epsilon_{1}]\), \(\lambda_{2}\in(\epsilon_{1},\epsilon_{2}]\), ..., \(\lambda_{m}\in(\epsilon_{m-1},\epsilon_{m}]\) such that \(\sup\mathcal{S}_{\lambda}\) is the posterior estimate (3) obtained by using as prior the \(m\)-point discrete distribution with probability mass \(f(\lambda_{i})=Pr(\lambda=\lambda_{i})=\theta_{i}\) for \(i=1,2,\ldots,m\)._

Proof.: Since \(f(\lambda)=0\) for \(\lambda\notin[\epsilon_{0},\epsilon_{m}]\), the Lebesgue-Stieltjes integration from the objective function (3) can be rewritten as:
\[\mathbb{E}(\Lambda\mid\text{data})=\frac{\sum_{i=1}^{m}\int_{\epsilon_{i-1}}^{\epsilon_{i}}\lambda\,l(\lambda)f(\lambda)\,\mathrm{d}\lambda}{\sum_{i=1}^{m}\int_{\epsilon_{i-1}}^{\epsilon_{i}}l(\lambda)f(\lambda)\,\mathrm{d}\lambda} \tag{17}\]
The first mean value theorem for integrals (e.g., [21, p. 249]) ensures that, for every \(i=1,2,\ldots,m\), there are points \(\lambda_{i},\lambda_{i}^{\prime}\in[\epsilon_{i-1},\epsilon_{i}]\) such that:
\[\int_{\epsilon_{i-1}}^{\epsilon_{i}}l(\lambda)f(\lambda)\,\mathrm{d}\lambda=l(\lambda_{i})\int_{\epsilon_{i-1}}^{\epsilon_{i}}f(\lambda)\,\mathrm{d}\lambda=l(\lambda_{i})\theta_{i} \tag{18}\]
\[\int_{\epsilon_{i-1}}^{\epsilon_{i}}\lambda\,l(\lambda)f(\lambda)\,\mathrm{d}\lambda=\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})\int_{\epsilon_{i-1}}^{\epsilon_{i}}f(\lambda)\,\mathrm{d}\lambda=\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})\theta_{i} \tag{19}\]
or, after simple algebraic manipulations of the previous results,
\[l(\lambda_{i})=\mathbb{E}[l(\Lambda)\mid\epsilon_{i-1}\leq\Lambda\leq\epsilon_{i}] \tag{20}\]
\[\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})=\mathbb{E}[\Lambda\cdot l(\Lambda)\mid\epsilon_{i-1}\leq\Lambda\leq\epsilon_{i}] \tag{21}\]
Using the shorthand notation \(w=l(\lambda)\) for the likelihood function (4) (hence \(w>0\)), we define \(g:(0,\infty)\to\mathbb{R}\), \(g(w)=w\cdot l^{-1}(w)\). According to Lemma 1, \(g(\cdot)\) is a concave function, and thus we have:
\[\begin{split}\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})&=\mathbb{E}[\Lambda\cdot l(\Lambda)\mid\epsilon_{i-1}\leq\Lambda\leq\epsilon_{i}]\\ &=\mathbb{E}[W\cdot l^{-1}(W)\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\\ &=\mathbb{E}[g(W)\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\end{split} \tag{22}\]
\[\begin{split}&\leq g\left(\mathbb{E}[W\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\right)\\ &=\mathbb{E}[W\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\cdot l^{-1}\left(\mathbb{E}[W\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\right)\\ &=\mathbb{E}[l(\Lambda)\mid\epsilon_{i-1}\leq\Lambda\leq\epsilon_{i}]\cdot l^{-1}(\mathbb{E}[l(\Lambda)\mid\epsilon_{i-1}\leq\Lambda\leq\epsilon_{i}])\\ &=l(\lambda_{i})\cdot l^{-1}(l(\lambda_{i}))\\ &=\lambda_{i}\cdot l(\lambda_{i}),\end{split} \tag{23}\]
where the inequality step (22) is obtained by applying Jensen's inequality.[6, 25] We can now use (18), (19) and (23) to establish an upper bound for the objective function (17):
\[\mathbb{E}(\Lambda\mid\text{data})=\frac{\sum_{i=1}^{m}\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})\theta_{i}}{\sum_{i=1}^{m}l(\lambda_{i})\theta_{i}}\leq\frac{\sum_{i=1}^{m}\lambda_{i}l(\lambda_{i})\theta_{i}}{\sum_{i=1}^{m}l(\lambda_{i})\theta_{i}} \tag{24}\]
This upper bound is attained by selecting an \(m\)-point discrete distribution \(f_{u}(\lambda)\) with probability mass \(\theta_{i}\) at \(\lambda=\lambda_{i}\), for \(i=1,2,\ldots,m\) (since substituting \(f(\cdot)\) from (17) with this \(f_{u}(\cdot)\) yields the rhs result of (24)). As such, maximising this bound reduces to an optimisation problem in the \(m\)-dimensional space of \((\lambda_{1},\lambda_{2},\ldots,\lambda_{m})\in(\epsilon_{0},\epsilon_{1}]\times(\epsilon_{1},\epsilon_{2}]\times\cdots\times(\epsilon_{m-1},\epsilon_{m}]\). This optimisation problem can be solved numerically, yielding a supremum (rather than a maximum) for \(\mathcal{S}_{\lambda}\) in the case when the optimised prior distribution has points located at \(\lambda_{i}=\epsilon_{i-1}\) for \(i=1,2,\ldots,m\).

**Proposition 2**.: _With the notation from Theorem 1, there exist \(m\) values \(x_{1},x_{2},\ldots,x_{m}\in[0,1]\) such that \(\inf\mathcal{S}_{\lambda}\) is the posterior estimate (3) obtained by using as prior the \((m+1)\)-point discrete distribution with probability mass \(f(\epsilon_{0})=Pr(\lambda=\epsilon_{0})=x_{1}\theta_{1}\), \(f(\epsilon_{i})=Pr(\lambda=\epsilon_{i})=(1-x_{i})\theta_{i}+x_{i+1}\theta_{i+1}\) for \(1\leq i<m\), and \(f(\epsilon_{m})=Pr(\lambda=\epsilon_{m})=(1-x_{m})\theta_{m}\)._

Proof.: We reuse the reasoning steps from Proposition 1 up to inequality (22), which we replace with the following alternative inequality derived from the Converse Jensen's Inequality [27, 32] and the fact that \(g(w)\) is a concave function (cf. Lemma 1):
\[\begin{split}\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})&=\mathbb{E}[g(W)\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]\\ &\geq\frac{l(\epsilon_{i-1})-\mathbb{E}[W\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]}{l(\epsilon_{i-1})-l(\epsilon_{i})}\,g(l(\epsilon_{i}))+\frac{\mathbb{E}[W\mid\epsilon_{i-1}\leq l^{-1}(W)\leq\epsilon_{i}]-l(\epsilon_{i})}{l(\epsilon_{i-1})-l(\epsilon_{i})}\,g(l(\epsilon_{i-1}))\\ &=\frac{l(\epsilon_{i-1})-l(\lambda_{i})}{l(\epsilon_{i-1})-l(\epsilon_{i})}\,\epsilon_{i}l(\epsilon_{i})+\frac{l(\lambda_{i})-l(\epsilon_{i})}{l(\epsilon_{i-1})-l(\epsilon_{i})}\,\epsilon_{i-1}l(\epsilon_{i-1})\end{split} \tag{25}\]
We can now establish a lower bound for (17). Writing
\[x_{i}=\frac{l(\lambda_{i})-l(\epsilon_{i})}{l(\epsilon_{i-1})-l(\epsilon_{i})}, \tag{26}\]
so that \(l(\lambda_{i})=x_{i}l(\epsilon_{i-1})+(1-x_{i})l(\epsilon_{i})\), the bounds (18), (19) and (25) yield
\[\mathbb{E}(\Lambda\mid\text{data})=\frac{\sum_{i=1}^{m}\lambda_{i}^{\prime}l(\lambda_{i}^{\prime})\theta_{i}}{\sum_{i=1}^{m}l(\lambda_{i})\theta_{i}}\geq\frac{\sum_{i=1}^{m}\left[x_{i}\epsilon_{i-1}l(\epsilon_{i-1})+(1-x_{i})\epsilon_{i}l(\epsilon_{i})\right]\theta_{i}}{\sum_{i=1}^{m}\left[x_{i}l(\epsilon_{i-1})+(1-x_{i})l(\epsilon_{i})\right]\theta_{i}} \tag{27}\]
Furthermore, the points on the boundaries of two successive intervals are overlapping, which effectively reduces the number of points from \(2m\) to \(m+1\). Expanding (27) yields an \((m+1)\)-point discrete distribution \(f_{l}(\lambda)\) with probability mass \(f_{l}(\epsilon_{0})=x_{1}\theta_{1}\), \(f_{l}(\epsilon_{i})=(1-x_{i})\theta_{i}+x_{i+1}\theta_{i+1}\) for \(1\leq i<m\) and \(f_{l}(\epsilon_{m})=(1-x_{m})\theta_{m}\). As such, minimising (27) reduces to an \(m\)-dimensional optimisation problem in \(x_{1},x_{2},\ldots,x_{m}\), which can be solved numerically given the other model parameters. Finally, since (5) requires that \(\epsilon_{i-1}<\lambda_{i}\leq\epsilon_{i}\), we have \(0\leq x_{i}<1\), and thus the posterior estimate is an infimum (rather than a minimum) of \(\mathcal{S}_{\lambda}\) when the solution of the optimisation problem corresponds to a combination of \(x_{1},x_{2},\ldots,x_{m}\) values that includes one or more values of \(1\).

We can now prove the main theoretical result from Section 1.2.

**Proof of Theorem 1.** Propositions 1 and 2 imply that the set of posterior estimates \(\lambda\) over all priors that satisfy the constraints (5) has:
1. the infimum \(\lambda_{l}\) from (6), obtained by using the prior \(f(\lambda)\) from Proposition 2 in (3);
2. the supremum \(\lambda_{u}\) from (7), obtained by using the prior \(f(\lambda)\) from Proposition 1 in (3).

In the supplementary material, we use this result to prove Corollaries 1 and 2.

### IPSP estimator proofs.

A formal proof for the results from (13) and (14) is provided below.

**Proof of Theorem 2.** To find the extrema for the posterior rate \(\lambda^{(t)}\), we first differentiate (12) with respect to \(\lambda^{(0)}\):
\[\frac{d}{d\lambda^{(0)}}\Big(\lambda^{(t)}\Big)=\frac{t^{(0)}}{t+t^{(0)}}.\]
As \(t^{(0)}>0\) and \(t>0\), this derivative is always positive, so
\[\underline{\lambda}^{(t)}=\min_{t^{(0)}\in[\underline{t}^{(0)},\,\overline{t}^{(0)}]}\frac{t^{(0)}\underline{\lambda}^{(0)}+n}{t^{(0)}+t} \tag{29}\]
and
\[\overline{\lambda}^{(t)}=\max_{t^{(0)}\in[\underline{t}^{(0)},\,\overline{t}^{(0)}]}\frac{t^{(0)}\overline{\lambda}^{(0)}+n}{t^{(0)}+t}. \tag{30}\]
We now differentiate the quantity that needs to be minimised in (29) with respect to \(t^{(0)}\):
\[\frac{d}{dt^{(0)}}\left(\frac{t^{(0)}\underline{\lambda}^{(0)}+n}{t^{(0)}+t}\right)=\frac{\underline{\lambda}^{(0)}(t^{(0)}+t)-(t^{(0)}\underline{\lambda}^{(0)}+n)\cdot 1}{(t^{(0)}+t)^{2}}=\frac{\underline{\lambda}^{(0)}t-n}{(t^{(0)}+t)^{2}}.\]
As this derivative is non-positive for \(\underline{\lambda}^{(0)}\in\big(0,\frac{n}{t}\big]\) and positive for \(\underline{\lambda}^{(0)}>\frac{n}{t}\), the minimum from (29) is attained for \(t^{(0)}=\overline{t}^{(0)}\) in the former case, and for \(t^{(0)}=\underline{t}^{(0)}\) in the latter case, which yields the result from (13). Similarly, the derivative of the quantity to maximise in (30), i.e.,
\[\frac{d}{dt^{(0)}}\left(\frac{t^{(0)}\overline{\lambda}^{(0)}+n}{t^{(0)}+t}\right)=\frac{\overline{\lambda}^{(0)}t-n}{(t^{(0)}+t)^{2}},\]
is non-positive for \(\overline{\lambda}^{(0)}\in\big(0,\frac{n}{t}\big]\) and positive for \(\overline{\lambda}^{(0)}>\frac{n}{t}\), so the maximum from (30) is attained for \(t^{(0)}=\underline{t}^{(0)}\) in the former case, and for \(t^{(0)}=\overline{t}^{(0)}\) in the latter case, which yields the result from (14) and completes the proof.
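The case analysis in this proof translates directly into a constant-time implementation of the IPSP bounds; a minimal sketch of (13)-(14):

```python
def ipsp_interval(n, t, lam0_lo, lam0_hi, t0_lo, t0_hi):
    """Posterior rate interval per (13)-(14): n events observed over time t,
    prior rate interval [lam0_lo, lam0_hi] and trust interval [t0_lo, t0_hi]."""
    post = lambda lam0, t0: (t0 * lam0 + n) / (t0 + t)  # posterior mean (12)
    lam_lo = post(lam0_lo, t0_hi if lam0_lo <= n / t else t0_lo)
    lam_hi = post(lam0_hi, t0_lo if lam0_hi <= n / t else t0_hi)
    return lam_lo, lam_hi
```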
### BIPP estimator evaluation.

Fig. 4 shows the results of experiments we carried out to evaluate the BIPP estimator in scenarios with \(m=3\) (Figs. 4a-4c) and \(m=2\) (Fig. 4d) confidence bounds by varying the characteristics of the partial prior knowledge. For \(m=3\), the upper bound computed by the estimator exhibits a three-stage behaviour as the time over which no singular event occurs increases. These stages correspond to the three \(\lambda_{u}\) regions from (9). They start with a steep \(\lambda_{u}\) decrease for \(t<\frac{1}{\epsilon_{2}}\) in stage 1, followed by a slower \(\lambda_{u}\) decreasing trend for \(\frac{1}{\epsilon_{2}}<t<\frac{1}{\epsilon_{1}}\) in stage 2, and approaching the asymptotic value \(\frac{\epsilon_{1}(\theta_{1}+\theta_{2})}{\theta_{1}}\) as the mission progresses through stage 3. Similarly, the lower bound \(\lambda_{l}\) demonstrates a two-stage behaviour, as expected given its two-part definition (8), with the overall value approaching 0 as the mission continues and no singular event modelled by this estimator (e.g., a catastrophic failure) occurs.

Fig. 4a shows the behaviour of the estimator for different \(\theta_{1}\) values and fixed \(\theta_{2}\), \(\epsilon_{1}\) and \(\epsilon_{2}\) values. For higher \(\theta_{1}\) values, more probability mass is allocated to the confidence bound \((\epsilon_{0},\epsilon_{1}]\), yielding a steeper decrease in the upper bound \(\lambda_{u}\) and a lower \(\lambda_{u}\) value at the end of the mission. The lower bound \(\lambda_{l}\) presents limited variability across the different \(\theta_{1}\) values, becoming almost constant and close to \(0\) as \(\theta_{1}\) increases. A similar decreasing pattern is observed in Fig. 4b, which depicts the results of experiments with \(\theta_{1}\), \(\epsilon_{1}\) and \(\epsilon_{2}\) fixed, and \(\theta_{2}\) variable. The upper bound \(\lambda_{u}\) in the long term is larger for higher \(\theta_{2}\) values, resulting in a wider posterior estimate bound as \(\lambda_{u}\) converges towards its theoretical asymptotic value.

Allocating the same probability mass to the confidence bounds, i.e., \(\theta_{1}=\theta_{2}=0.3\), and changing the prior knowledge bounds \(\epsilon_{1}\) and \(\epsilon_{2}\) greatly affects the behaviour of the BIPP estimator (Fig. 4c). When \(\epsilon_{1}\) and \(\epsilon_{2}\) have relatively high values compared to the duration of the mission (e.g., see the first three plots in Fig. 4c), the upper bound \(\lambda_{u}\) of the BIPP estimator rapidly converges to its asymptotic value, leaving no room for subsequent improvement as the mission progresses. Similarly, the earlier the triggering point for switching between the two parts of the lower bound \(\lambda_{l}\) calculation (8), the earlier \(\lambda_{l}\) reaches a plateau close to 0.

Finally, Fig. 4d shows experimental results for the special scenario comprising only \(m=2\) confidence bounds. In this scenario, setting \(\theta_{2}=0\) in (8) as required by Corollary 2 gives a constant lower bound \(\lambda_{l}=0\) irrespective of the other BIPP estimator parameters. As expected, the upper bound \(\lambda_{u}\) demonstrates a twofold behaviour, featuring a rapid decrease until \(t=\frac{1}{\epsilon_{1}}\), followed by a steady-state behaviour where \(\lambda_{u}=\frac{\epsilon_{1}}{\theta_{1}}\).

### IPSP estimator evaluation.
### IPSP estimator evaluation. Fig. 5 shows the results of experiments we performed to analyse the behaviour of the IPSP estimator in scenarios with varying ranges for the prior knowledge \([\underline{t}^{(0)},\overline{t}^{(0)}]\) and \([\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\). A general observation is that the posterior rate intervals \([\underline{\lambda}^{(t)},\overline{\lambda}^{(t)}]\) become narrower as the mission progresses, irrespective of the level of trust assigned to the prior knowledge, i.e., across all columns of plots (which correspond to different \([\underline{t}^{(0)},\overline{t}^{(0)}]\) intervals) from Fig. 5a. Nevertheless, this trust level affects how the estimator incorporates observations into the calculation of the posterior interval. When the trust in the prior knowledge is weak (in the plots from the leftmost columns of Fig. 5a), the impact of the prior knowledge on the posterior estimation is low, and the IPSP calculation is heavily influenced by the observations, resulting in a narrow interval. In contrast, when the trust in the prior knowledge is stronger (in the plots from the rightmost columns), the contribution of the prior knowledge to the posterior estimation becomes higher, and the IPSP estimator produces a wider interval. Figure 4: Systematic experimental analysis of the BIPP estimator showing the bounds \(\lambda_{l}\) and \(\lambda_{u}\) of the posterior estimates for the occurrence probability of singular events for the duration of a mission. Each plot shows the effect of different partial prior knowledge encoded in (5) on the calculation of the lower (6) and upper (7) posterior estimate bounds. The red circles indicate the time points when the different formulae for the lower and upper bounds in (8) and (9), respectively, become active. Figure 5: Systematic experimental analysis of the IPSP estimator showing the bounded posterior estimates for regular events. In the experiments from the first row of plots in Fig. 5a, the (unknown) actual rate \(\overline{\lambda}=3\) belongs to the prior knowledge interval \([\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\). As a result, the posterior rate interval \([\underline{\lambda}^{(t)},\overline{\lambda}^{(t)}]\) progressively becomes narrower, approximating \(\overline{\lambda}\) with high accuracy. As expected, the narrower prior knowledge (blue dotted line) produces a narrower posterior rate interval than the wider and more conservative prior knowledge (green dashed line). When the prior knowledge interval \([\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\) overestimates or underestimates the actual rate \(\overline{\lambda}\) (second and third rows of plots from Fig. 5a, respectively), the ability of IPSP to adapt its estimations to reflect the observations heavily depends on the characteristics of the sets of priors. For example, if the prior knowledge interval \([\underline{\lambda}^{(0)},\overline{\lambda}^{(0)}]\) is close to \(\overline{\lambda}\) and \(t^{(0)}\ll t\), then IPSP more easily approaches \(\overline{\lambda}\), as shown by the narrow prior knowledge (blue dotted line) in Fig. 5a for \([\underline{t}^{(0)},\overline{t}^{(0)}]\in\{[10,10],[100,100],[1000,1000]\}\). In contrast, wider prior knowledge (green dashed line) combined with higher levels of trust in the prior, e.g., \([\underline{t}^{(0)},\overline{t}^{(0)}]=[2000,2000]\), entails that more observations are needed for the posterior rate to approach the actual rate \(\overline{\lambda}\).
When the actual rate is, in addition, nonstationary, change-point detection methods can be employed to identify these changes [17, 48] and recalibrate the IPSP estimator. Finally, Fig. 5b shows the behaviour of IPSP for different actual rate \(\overline{\lambda}\) values, i.e., \(\overline{\lambda}\in\{0.03,0.3,3.0\}\). As \(\overline{\lambda}\) increases, more observations are produced in the same time period, resulting in a smoother and narrower posterior bound estimate. **Acknowledgments:** This project has received funding from the ORCA-Hub PRF project 'COVE', the Assuring Autonomy International Programme, the UKRI project EP/V026747/1 'Trustworthy Autonomous Systems Node in Resilience', and the European Union's Horizon 2020 project SESAME (grant agreement No 101017258).

## References

* [1] A. Avizienis, J. Laprie, B. Randell, and C. Landwehr. Basic concepts and taxonomy of dependable and secure computing. _IEEE Transactions on Dependable and Secure Computing_, 1(1):11-33, 2004.
* [2] Adnan Aziz, Kumud Sanwal, Vigyan Singhal, and Robert Brayton. Verifying continuous time Markov chains. In _Computer Aided Verification_, pages 269-276. Springer, 1996.
* [3] C. Baier, B. Haverkort, H. Hermanns, and J. P. Katoen. Model-checking algorithms for continuous-time Markov chains. _IEEE Transactions on Software Engineering_, 29(6):524-541, June 2003.
* [4] Michael R. Benjamin, Henrik Schmidt, Paul M. Newman, and John J. Leonard. Autonomy for unmanned marine vehicles with MOOS-IvP. In Mae L. Seto, editor, _Marine Robot Autonomy_, pages 47-90. Springer, 2013.
* [5] Jose M. Bernardo and Adrian F. M. Smith. _Bayesian theory_. Wiley, 1994.
* [6] Peter Bishop, Robin Bloomfield, Bev Littlewood, Andrey Povyakalo, and David Wright. Toward a formalism for conservative claims about the dependability of software-based systems. _IEEE Transactions on Software Engineering_, 37(5):708-717, 2011.
* [7] Lubos Brim, Milan Ceska, Sven Drazan, and David Safranek. Exploring parameter space of stochastic biochemical systems using quantitative model checking. In _Computer Aided Verification (CAV)_, pages 107-123, 2013.
* 158, 2018.
* [9] Radu Calinescu, Carlo Ghezzi, Kenneth Johnson, Mauro Pezze, Yasmin Rafiq, and Giordano Tamburrelli. Formal verification with confidence intervals to establish quality of service properties of software systems. _IEEE Transactions on Reliability_, 65(1):107-125, 2016.
* [10] Radu Calinescu, Carlo Ghezzi, Marta Kwiatkowska, and Raffaela Mirandola. Self-adaptive software needs quantitative verification at runtime. _Communications of the ACM_, 55(9):69-77, 2012.
* [11] Radu Calinescu, Kenneth Johnson, and Colin Paterson. FACT: A probabilistic model checker for formal verification with confidence intervals. In Marsha Chechik and Jean-Francois Raskin, editors, _Tools and Algorithms for the Construction and Analysis of Systems_, pages 540-546, Berlin, Heidelberg, 2016. Springer Berlin Heidelberg.
* [12] Radu Calinescu, Yasmin Rafiq, Kenneth Johnson, and Mehmet Emin Bakir. Adaptive model learning for continual verification of non-functional properties. In _Proc. of the 5th Int. Conf. on Performance Engineering_, pages 87-98, NY, USA, 2014. ACM.
* [13] Radu Calinescu, Danny Weyns, Simos Gerasimou, Muhammad Usman Iftikhar, Ibrahim Habli, and Tim Kelly. Engineering trustworthy self-adaptive software with dynamic assurance cases. _IEEE Transactions on Software Engineering_, 44(11):1039-1069, 2017.
* [14] Milan Ceska, Petr Pilar, Nicola Paoletti, Lubos Brim, and Marta Kwiatkowska.
PRISM-PSY: Precise GPU-accelerated parameter synthesis for stochastic systems. In Marsha Chechik and Jean-Francois Raskin, editors, _Tools and Algorithms for the Construction and Analysis of Systems_, volume 9636 of _LNCS_, pages 367-384, Berlin, Heidelberg, 2016. Springer Berlin Heidelberg.
* [15] C. Dehnert, S. Junges, J.-P. Katoen, and M. Volk. A Storm is coming: A modern probabilistic model checker. In _29th International Conference on Computer Aided Verification (CAV)_, pages 592-600, 2017.
* [16] Ilenia Epifani, Carlo Ghezzi, Raffaela Mirandola, and Giordano Tamburrelli. Model evolution by run-time parameter adaptation. In _Proc. of the 31st Int. Conf. on Software Engineering_, pages 111-121, Washington, DC, USA, 2009. IEEE.
* [17] Ilenia Epifani, Carlo Ghezzi, and Giordano Tamburrelli. Change-point detection for black-box services. In _Proc. of the 18th ACM SIGSOFT Int. Symp. on Foundations of Software Engineering_, FSE '10, pages 227-236, New York, NY, USA, 2010. ACM.
* [18] Antonio Filieri, Carlo Ghezzi, and Giordano Tamburrelli. A formal approach to adaptive software: continuous assurance of non-functional requirements. _Formal Aspects of Computing_, 24(2):163-186, 2012.
* [19] Antonio Filieri, Lars Grunske, and Alberto Leva. Lightweight adaptive filtering for efficient learning and updating of probabilistic models. In _Proc. of the 37th Int. Conf. on Software Engineering_, pages 200-211, Piscataway, NJ, USA, 2015. IEEE Press.
* [20] Bent Flyvbjerg. Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. _European Planning Studies_, 16(1):3-21, 2008.
* [21] I. S. Gradshteyn and I. M. Ryzhik. Definite integrals of elementary functions. In Daniel Zwillinger and Victor Moll, editors, _Table of Integrals, Series, and Products_. Elsevier Science, 8th edition, 2015.
* [22] Functional safety of electrical/electronic/programmable electronic safety-related systems, 2010.
* [23] International Nuclear Safety Advisory Group. Defence in Depth in Nuclear Safety (INSAG 10), 1996.
* [24] Akira Ishikawa, Michio Amagasa, Tetsuo Shiga, Giichi Tomizawa, Rumi Tatsuta, and Hiroshi Meno. The max-min Delphi method and fuzzy Delphi method via fuzzy integration. _Fuzzy sets and systems_, 55(3):241-253, 1993.
* [25] J. L. W. V. Jensen. Sur les fonctions convexes et les inegalites entre les valeurs moyennes. _Acta Mathematica_, 30:175-193, 1906.
* [26] Joost-Pieter Katoen. The Probabilistic Model Checking Landscape. In _Proc. of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science_, LICS '16, pages 31-45, New York, NY, USA, 2016. ACM.
* 6575, 2012.
* [28] Daniel Krpelik, Frank PA Coolen, and Louis JM Aslett. Imprecise probability inference on masked multicomponent system. In _International Conference Series on Soft Methods in Probability and Statistics_, pages 133-140. Springer, 2018.
* [29] M. Kwiatkowska. Quantitative verification: Models, techniques and tools. In _Proc. 6th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE)_, pages 449-458. ACM Press, September 2007.
* [30] M. Kwiatkowska, G. Norman, and D. Parker. PRISM 4.0: Verification of probabilistic real-time systems. In _Proc. of the 23rd Int. Conf. on Computer Aided Verification_, volume 6806 of _LNCS_, pages 585-591. Springer, 2011.
* [31] Marta Kwiatkowska, Gethin Norman, and David Parker. Stochastic model checking. In _Intl. Conf.
on Formal Methods for Performance Eval._, pages 220-270, 2007.
* [32] P. Lah and M. Ribaric. Converse of Jensen's inequality for convex functions. _Publikacije Elektrotehnickog fakulteta. Serija Matematika i fizika_, 412/460:201-205, 1973.
* [33] David Lane, David Bisset, Rob Buckingham, Geoff Pegman, and Tony Prescott. New foresight review on robotics and autonomous systems. Technical Report No. 2016.1, Lloyd's Register Foundation, London, U.K., 2016.
* [34] David M Lane, Francesco Maurelli, Petar Kormushev, Marc Carreras, Maria Fox, and Konstantinos Kyriakopoulos. PANDORA-persistent autonomy through learning, adaptation, observation and replanning. _IFAC-PapersOnLine_, 48(2):238-243, 2015.
* [35] Axel Legay, Benoit Delahaye, and Saddek Bensalem. Statistical Model Checking: An Overview. In Howard Barringer, Ylies Falcone, Bernd Finkbeiner, Klaus Havelund, Insup Lee, Gordon Pace, Grigore Rosu, Oleg Sokolsky, and Nikolai Tillmann, editors, _Runtime Verification_, volume 6418 of _LNCS_, pages 122-135, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
* [36] Anders Myhr, Catho Bjerkseter, Anders Agotnes, and Tor A Nygaard. Levelised cost of energy for offshore floating wind turbines in a life cycle perspective. _Renewable energy_, 66:714-728, 2014.
* [37] Valentin Robu, David Flynn, and David Lane. Train robots to self-certify as safe. _Nature_, 553(7688):281-281, 2018.
* [38] The Partnership for Robotics in Europe. Robotics 2020 multi-annual roadmap for robotics in Europe, 2015.
* [39] Lorenzo Strigini and Andrey Povyakalo. Software fault-freeness and reliability predictions. In Friedemann Bitsch, Jeremie Guiochet, and Mohamed Kaaniche, editors, _Computer Safety, Reliability, and Security_, volume 8153 of _LNCS_, pages 106-117, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
* [40] The Headquarters for Japan's Economic Revitalization. New Robot Strategy, February 2015.
* [41] The Partnership for Robotics in Europe. Robotics 2020 Multi-Annual Roadmap, December 2016.
* [42] UK Robotics & Autonomous Systems Network. Robotic and Autonomous Systems for Resilient Infrastructure, 2018.
* [43] UK Robotics & Autonomous Systems Network. Space Robotics & Autonomous Systems: Widening the horizon of space exploration, 2018.
* [44] UK Technology Strategy Board RAS SIG. RAS 2020 Robotics and Autonomous Systems, July 2014.
* [45] US Computing Community Consortium. A Roadmap for US Robotics: From Internet to robotics, October 2016.
* 88, 2017.
* [47] Gero Walter and Thomas Augustin. Imprecision and prior-data conflict in generalized Bayesian inference. _Journal of Statistical Theory and Practice_, 3(1):255-271, 2009.
* [48] Xingyu Zhao, Radu Calinescu, Simos Gerasimou, Valentin Robu, and David Flynn. Interval change-point detection for runtime probabilistic model checking. In _2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)_, pages 163-174. IEEE, 2020.
* [49] Xingyu Zhao, Valentin Robu, David Flynn, Fateme Dinmohammadi, Michael Fisher, and Matt Webster. Probabilistic model checking of robots deployed in extreme environments. In _Proc. of the 33rd AAAI Conference on Artificial Intelligence_, volume 33, pages 8076-8084, Honolulu, Hawaii, USA, 2019.
* [50] Xingyu Zhao, Kizito Salako, Lorenzo Strigini, Valentin Robu, and David Flynn. Assessing safety-critical systems from operational testing: A study on autonomous vehicles. _Information and Software Technology_, 128:106393, 2020.
Supplementary Material: Bayesian Learning for the Robust Verification of Autonomous Robots Xingyu Zhao\({}^{1}\), Simos Gerasimou\({}^{2}\), Radu Calinescu\({}^{2,3}\), Calum Imrie\({}^{2,3}\), Valentin Robu\({}^{4}\), and David Flynn\({}^{5}\) \({}^{1}\)Department of Computer Science, University of Liverpool, Liverpool, UK. \({}^{2}\)Department of Computer Science, University of York, York, UK. \({}^{3}\)Assuring Autonomy International Programme, University of York, York, UK. \({}^{4}\)Intelligent and Autonomous Systems Group, Centrum Wiskunde & Informatica, NL. \({}^{5}\)School of Engineering, University of Glasgow, Glasgow, UK. ## 1 Introduction This supplementary material document includes: * The proofs of **Corollary 1** and **Corollary 2** from Section 1.3 of the main paper. * Details of the experimental settings for the offshore infrastructure maintenance case study from Section 2 of the main paper. ## 2 Corollary Proofs **Corollary 1**.: _When \(m=3\), the bounds (6) and (7) in **Theorem 1** of the main paper satisfy:_ \[\lambda_{l}\geq\begin{cases}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{2}}{\theta_{1}+l(\epsilon_{1})\theta_{2}},&\text{if }\frac{\theta_{2}(\epsilon_{1}-\epsilon_{2})}{\theta_{1}}<\frac{\epsilon_{2}l(\epsilon_{2})-\epsilon_{1}l(\epsilon_{1})}{l(\epsilon_{1})l(\epsilon_{2})}\\ \frac{\epsilon_{2}l(\epsilon_{2})\theta_{2}}{\theta_{1}+l(\epsilon_{2})\theta_{2}},&\text{otherwise}\end{cases}\] (S1) _and_ \[\lambda_{u}<\begin{cases}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\epsilon_{2}l(\epsilon_{2})\theta_{2}+\frac{1}{t}l(\frac{1}{t})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}},&\text{if }t<\frac{1}{\epsilon_{2}}\\ \frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\frac{1}{t}l(\frac{1}{t})\theta_{2}+\epsilon_{2}l(\epsilon_{2})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}},&\text{if }\frac{1}{\epsilon_{2}}\leq t\leq\frac{1}{\epsilon_{1}}\\ \frac{\epsilon_{1}l(\epsilon_{1})(\theta_{1}+\theta_{2})+\epsilon_{2}l(\epsilon_{2})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}},&\text{otherwise}\end{cases}\] (S2) Proof.: When \(m=3\), Eq. (7) of **Theorem 1** states that there is a supremum \(\lambda_{u,m=3}\): \[\lambda_{u,m=3}=\max_{\{0\leq\lambda_{1}\leq\epsilon_{1}<\lambda_{2}\leq\epsilon_{2}<\lambda_{3}<+\infty\}}\frac{\lambda_{1}l(\lambda_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})\theta_{2}+\lambda_{3}l(\lambda_{3})(1-\theta_{1}-\theta_{2})}{l(\lambda_{1})\theta_{1}+l(\lambda_{2})\theta_{2}+l(\lambda_{3})(1-\theta_{1}-\theta_{2})}\] (S3) Similarly, Eq. (6) of **Theorem 1** shows that, when \(m=3\), there is an infimum \(\lambda_{l,m=3}\): \[\lambda_{l,m=3}=\min_{\{0\leq x_{i}\leq 1,\forall i\in[1..3]\}}\frac{\sum\limits_{i=1..3}[\epsilon_{i}l(\epsilon_{i})(1-x_{i})\theta_{i}+\epsilon_{i-1}l(\epsilon_{i-1})x_{i}\theta_{i}]}{\sum\limits_{i=1..3}[l(\epsilon_{i})(1-x_{i})\theta_{i}+l(\epsilon_{i-1})x_{i}\theta_{i}]}\] (S4) where \(\epsilon_{0}=0\) and \(\epsilon_{3}=+\infty\) (and thus \(l(\epsilon_{0})=1\), \(\lim\limits_{\epsilon_{3}\rightarrow+\infty}l(\epsilon_{3})=0\) and \(\lim\limits_{\epsilon_{3}\rightarrow+\infty}\epsilon_{3}l(\epsilon_{3})=0\)). First, we prove the result of (S2). By taking the partial derivative of the objective function in (S3) w.r.t.
\(\lambda_{1}\), we know the derivative is always positive, irrespective of the values \(\lambda_{2}\) and \(\lambda_{3}\) take in their respective ranges, as shown below (note \(0\leq\lambda_{1}\leq\epsilon_{1}<\lambda_{2}\leq\epsilon_{2}<\lambda_{3}<+\infty\)): \[\frac{\partial}{\partial\lambda_{1}}\left(\frac{\lambda_{1}l(\lambda_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})\theta_{2}+\lambda_{3}l(\lambda_{3})(1-\theta_{1}-\theta_{2})}{l(\lambda_{1})\theta_{1}+l(\lambda_{2})\theta_{2}+l(\lambda_{3})(1-\theta_{1}-\theta_{2})}\right)=\frac{e^{-\lambda_{1}t}\theta_{1}\left[e^{-\lambda_{1}t}\theta_{1}+e^{-\lambda_{2}t}\theta_{2}\left(1-(\lambda_{1}-\lambda_{2})t\right)+e^{-\lambda_{3}t}(1-\theta_{1}-\theta_{2})(1-(\lambda_{1}-\lambda_{3})t)\right]}{(e^{-\lambda_{1}t}\theta_{1}+e^{-\lambda_{2}t}\theta_{2}+e^{-\lambda_{3}t}(1-\theta_{1}-\theta_{2}))^{2}}>0\] (S5) This implies that the maximum point lies in the hyperplane of \(\lambda_{1}=\epsilon_{1}\). Thus, we substitute \(\lambda_{1}=\epsilon_{1}\) into (S3) and reduce the problem to: \[\lambda_{u,m=3} =\max_{\{\epsilon_{1}<\lambda_{2}\leq\epsilon_{2}<\lambda_{3}<+\infty\}}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})\theta_{2}+\lambda_{3}l(\lambda_{3})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}+l(\lambda_{2})\theta_{2}+l(\lambda_{3})(1-\theta_{1}-\theta_{2})}\] (S6) \[<\max_{\{\epsilon_{1}<\lambda_{2}\leq\epsilon_{2}<\lambda_{3}<+\infty\}}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})\theta_{2}+\lambda_{3}l(\lambda_{3})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}}\] (S7) \[\leq\begin{cases}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\epsilon_{2}l(\epsilon_{2})\theta_{2}+\frac{1}{t}l(\frac{1}{t})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}}&t<\frac{1}{\epsilon_{2}}\\ \frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\frac{1}{t}l(\frac{1}{t})\theta_{2}+\epsilon_{2}l(\epsilon_{2})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}}&\frac{1}{\epsilon_{2}}\leq t\leq\frac{1}{\epsilon_{1}}\\ \frac{\epsilon_{1}l(\epsilon_{1})(\theta_{1}+\theta_{2})+\epsilon_{2}l(\epsilon_{2})(1-\theta_{1}-\theta_{2})}{l(\epsilon_{1})\theta_{1}}&t>\frac{1}{\epsilon_{1}}\end{cases}\] (S8) where the last step is due to the fact that the function \(xl(x)\) is unimodal over \([0,+\infty)\) with a maximum point at \(x=\frac{1}{t}\). Thus, the last step says: * When \(t<\frac{1}{\epsilon_{2}}\) (i.e. \(\epsilon_{2}<\frac{1}{t}\)): the function \(\lambda_{3}l(\lambda_{3})\) can reach its maximum at \(\lambda_{3}=\frac{1}{t}\) in the range \((\epsilon_{2},+\infty)\), while, since \(\lambda_{2}\in(\epsilon_{1},\epsilon_{2}]\), the function \(\lambda_{2}l(\lambda_{2})\) cannot reach \(\lambda_{2}=\frac{1}{t}\), so we set \(\lambda_{2}=\epsilon_{2}\) to maximise the objective function. * When \(\frac{1}{\epsilon_{2}}\leq t\leq\frac{1}{\epsilon_{1}}\) (i.e. \(\epsilon_{1}\leq\frac{1}{t}\leq\epsilon_{2}\)): the function \(\lambda_{2}l(\lambda_{2})\) can attain its maximum at \(\lambda_{2}=\frac{1}{t}\) in the range \((\epsilon_{1},\epsilon_{2}]\), while, since \(\lambda_{3}\in(\epsilon_{2},+\infty)\), the function \(\lambda_{3}l(\lambda_{3})\) cannot reach \(\lambda_{3}=\frac{1}{t}\), so we set \(\lambda_{3}=\epsilon_{2}\) to maximise the objective function. * When \(t>\frac{1}{\epsilon_{1}}\) (i.e. \(\frac{1}{t}<\epsilon_{1}\)): both the functions \(\lambda_{3}l(\lambda_{3})\) and \(\lambda_{2}l(\lambda_{2})\) take the left endpoints of their ranges to maximise the objective function, so we set \(\lambda_{3}=\epsilon_{2}\) and \(\lambda_{2}=\epsilon_{1}\).
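For completeness, the unimodality of \(xl(x)\) used in all three cases follows from a one-line computation (recalling that \(l(x)=e^{-xt}\), as used in (S5)): \[\frac{d}{dx}\big(x\,l(x)\big)=\frac{d}{dx}\big(x\,e^{-xt}\big)=e^{-xt}(1-xt),\] which is positive for \(x<\frac{1}{t}\), zero at \(x=\frac{1}{t}\) and negative for \(x>\frac{1}{t}\), so \(xl(x)\) increases up to its maximum at \(x=\frac{1}{t}\) and decreases afterwards.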
Substituting the values of \(\lambda_{2}\) and \(\lambda_{3}\) into the objective function in these three cases, we obtain the results of (S2). Now we prove the result of (S1). If we denote the objective function in (S4) as a fraction \(\frac{Nu(x_{1},x_{2},x_{3})}{De(x_{1},x_{2},x_{3})}\) and take its partial derivative w.r.t. \(x_{3}\): \[\frac{\partial}{\partial x_{3}}\left(\frac{Nu(x_{1},x_{2},x_{3})}{De(x_{1},x_{2},x_{3})}\right)=\frac{l(\epsilon_{2})(1-\theta_{1}-\theta_{2})[((1-x_{1})\theta_{1}+x_{2}\theta_{2})(\epsilon_{2}-\epsilon_{1})l(\epsilon_{1})+\epsilon_{2}x_{1}\theta_{1}]}{De(x_{1},x_{2},x_{3})^{2}}>0\] (S9) Thus, to minimise the objective function, we set \(x_{3}=0\). Then we take its partial derivative w.r.t. \(x_{1}\): \[\frac{\partial}{\partial x_{1}}\left(\frac{Nu(x_{1},x_{2},0)}{De(x_{1},x_{2},0)}\right)=\frac{-\theta_{1}[\epsilon_{1}l(\epsilon_{1})De(x_{1},x_{2},0)+(1-l(\epsilon_{1}))Nu(x_{1},x_{2},0)]}{De(x_{1},x_{2},0)^{2}}<0\] (S10) Thus, to minimise the objective function, we set \(x_{1}=1\). Now we take its partial derivative w.r.t. \(x_{2}\): \[\frac{\partial}{\partial x_{2}}\left(\frac{Nu(1,x_{2},0)}{De(1,x_{2},0)}\right)=\frac{\theta_{2}[\theta_{2}(\epsilon_{1}-\epsilon_{2})l(\epsilon_{1})l(\epsilon_{2})+\theta_{1}\epsilon_{1}l(\epsilon_{1})-\theta_{1}\epsilon_{2}l(\epsilon_{2})]}{De(1,x_{2},0)^{2}}\] (S11) whose sign is determined by the other model parameters. Thus, we set \(x_{2}=\mathbf{1}_{\theta_{2}(\epsilon_{1}-\epsilon_{2})l(\epsilon_{1})l(\epsilon_{2})+\theta_{1}\epsilon_{1}l(\epsilon_{1})-\theta_{1}\epsilon_{2}l(\epsilon_{2})<0}\), where \(\mathbf{1}_{S}\) is an indicator function: it equals 1 when the predicate \(S\) is true, and 0 otherwise. Substituting \(x_{1}=1\), \(x_{3}=0\) and \(x_{2}=\mathbf{1}_{\theta_{2}(\epsilon_{1}-\epsilon_{2})l(\epsilon_{1})l(\epsilon_{2})+\theta_{1}\epsilon_{1}l(\epsilon_{1})-\theta_{1}\epsilon_{2}l(\epsilon_{2})<0}\) into \(\frac{Nu(x_{1},x_{2},x_{3})}{De(x_{1},x_{2},x_{3})}\), we obtain the two cases in (S1). **Corollary 2**.: _The closed-form BIPP bounds for \(m=2\) can be obtained respectively by setting \(\epsilon_{2}=\epsilon_{1}\) and \(\theta_{2}=0\) in the results (S1) and (S2)._ Proof.: When \(m=2\), Eq. (7) of **Theorem 1** becomes the supremum \(\lambda_{u,m=2}\) such that (note that \(\theta_{2}=1-\theta_{1}\)): \[\lambda_{u,m=2}=\max_{\{0\leq\lambda_{1}\leq\epsilon_{1}<\lambda_{2}<+\infty\}}\frac{\lambda_{1}l(\lambda_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})(1-\theta_{1})}{l(\lambda_{1})\theta_{1}+l(\lambda_{2})(1-\theta_{1})}\] (S12) Similarly, Eq. (6) of **Theorem 1** becomes the infimum \(\lambda_{l,m=2}\): \[\lambda_{l,m=2}=\min_{\{0\leq x_{1}\leq 1,0\leq x_{2}\leq 1\}}\frac{\epsilon_{0}l(\epsilon_{0})x_{1}\theta_{1}+\epsilon_{1}l(\epsilon_{1})(1-x_{1})\theta_{1}+\epsilon_{1}l(\epsilon_{1})x_{2}(1-\theta_{1})+\epsilon_{2}l(\epsilon_{2})(1-x_{2})(1-\theta_{1})}{l(\epsilon_{0})x_{1}\theta_{1}+l(\epsilon_{1})(1-x_{1})\theta_{1}+l(\epsilon_{1})x_{2}(1-\theta_{1})+l(\epsilon_{2})(1-x_{2})(1-\theta_{1})}\] (S13) where \(\epsilon_{0}=0\) and \(\epsilon_{2}=+\infty\). **First, we prove that the bound \(\lambda_{u,m=2}\) satisfies:** \[\lambda_{u,m=2}<\begin{cases}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\frac{1}{t}l(\frac{1}{t})(1-\theta_{1})}{l(\epsilon_{1})\theta_{1}}&t<\frac{1}{\epsilon_{1}}\\ \frac{\epsilon_{1}}{\theta_{1}}&t\geq\frac{1}{\epsilon_{1}}\end{cases}\] (S14) for which we proceed in two steps: 1.
We show that the optimised point in the two-dimensional space of \(\lambda_{1}\) and \(\lambda_{2}\) must lie in the plane of \(\lambda_{1}=\epsilon_{1}\). 2. In the plane of \(\lambda_{1}=\epsilon_{1}\), a closed-form expression can be derived from the monotonicity analysis of \(\lambda_{2}\). By taking the partial derivative of the objective function in (S12) w.r.t. \(\lambda_{1}\), we know the derivative is always positive, irrespective of the value \(\lambda_{2}\) takes in its respective range, as shown in (S15) below (note \(0\leq\lambda_{1}\leq\epsilon_{1}<\lambda_{2}<+\infty\)): \[\frac{\partial}{\partial\lambda_{1}}\left(\frac{\lambda_{1}e^{-\lambda_{1}t}\theta_{1}+\lambda_{2}e^{-\lambda_{2}t}(1-\theta_{1})}{e^{-\lambda_{1}t}\theta_{1}+e^{-\lambda_{2}t}(1-\theta_{1})}\right)=\frac{e^{-\lambda_{1}t}\theta_{1}\left[e^{-\lambda_{1}t}\theta_{1}+e^{-\lambda_{2}t}(1-\theta_{1})(1-(\lambda_{1}-\lambda_{2})t)\right]}{(e^{-\lambda_{1}t}\theta_{1}+e^{-\lambda_{2}t}(1-\theta_{1}))^{2}}>0\] (S15) This implies that the maximum point lies in the plane of \(\lambda_{1}=\epsilon_{1}\). Now we reduce the optimisation problem from a two-dimensional space to the one-dimensional space of \(\lambda_{2}\). Thus, by substituting \(\lambda_{1}=\epsilon_{1}\) into the r.h.s. of (S12), we have: \[\lambda_{u,m=2} \leq\max_{\{\lambda_{2}>\epsilon_{1}\}}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})(1-\theta_{1})}{l(\epsilon_{1})\theta_{1}+l(\lambda_{2})(1-\theta_{1})}\] \[<\max_{\{\lambda_{2}>\epsilon_{1}\}}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\lambda_{2}l(\lambda_{2})(1-\theta_{1})}{l(\epsilon_{1})\theta_{1}}\] \[<\begin{cases}\frac{\epsilon_{1}l(\epsilon_{1})\theta_{1}+\frac{1}{t}l(\frac{1}{t})(1-\theta_{1})}{l(\epsilon_{1})\theta_{1}}&t<\frac{1}{\epsilon_{1}}\\ \frac{\epsilon_{1}}{\theta_{1}}&t\geq\frac{1}{\epsilon_{1}}\end{cases}\] (S16) where the last step of (S16) follows from the monotonicity analysis of the term \(\lambda_{2}l(\lambda_{2})\), depending on the observation time \(t\): * When \(\epsilon_{1}<\frac{1}{t}\), \(\lambda_{2}l(\lambda_{2})\) attains its maximum at the critical point \(\lambda_{2}=\frac{1}{t}\) in the range \(\lambda_{2}>\epsilon_{1}\). Thus, we substitute \(\lambda_{2}=\frac{1}{t}\) and obtain the first case in result (S16). * When \(\epsilon_{1}\geq\frac{1}{t}\), in the range \(\lambda_{2}>\epsilon_{1}\), we know the supremum of \(\lambda_{2}l(\lambda_{2})\) is attained at the boundary point \(\lambda_{2}=\epsilon_{1}\). Thus, we substitute \(\lambda_{2}=\epsilon_{1}\) and obtain the second case in result (S16). **Second, we prove the infimum \(\lambda_{l,m=2}=0\) with the optimal point at \(x_{1}=1,x_{2}=0\).** Since \(l(0)=1\), \(\lim_{\epsilon_{2}\rightarrow+\infty}l(\epsilon_{2})=0\) and \(\lim_{\epsilon_{2}\rightarrow+\infty}\epsilon_{2}l(\epsilon_{2})=0\), (S13) can be rewritten as: \[\lambda_{l,m=2}=\min_{\{0\leq x_{1}\leq 1,0\leq x_{2}\leq 1\}}\frac{\epsilon_{1}l(\epsilon_{1})(1-x_{1})\theta_{1}+\epsilon_{1}l(\epsilon_{1})x_{2}(1-\theta_{1})}{x_{1}\theta_{1}+l(\epsilon_{1})(1-x_{1})\theta_{1}+l(\epsilon_{1})x_{2}(1-\theta_{1})}\] (S17) The partial derivative of the objective function in (S17) w.r.t.
\(x_{2}\) is: \[\frac{\partial}{\partial x_{2}}\left(\frac{\epsilon_{1}l(\epsilon_{1})(1-x_{1})\theta_{1}+\epsilon_{1}l(\epsilon_{1})x_{2}(1-\theta_{1})}{x_{1}\theta_{1}+l(\epsilon_{1})(1-x_{1})\theta_{1}+l(\epsilon_{1})x_{2}(1-\theta_{1})}\right)=\frac{\epsilon_{1}l(\epsilon_{1})(1-\theta_{1})\theta_{1}x_{1}}{[((x_{1}+x_{2}-1)\theta_{1}-x_{2})l(\epsilon_{1})-\theta_{1}x_{1}]^{2}}>0\] (S18) Thus, we set \(x_{2}=0\) in (S17) to reduce the problem to: \[\lambda_{l,m=2}=\min_{\{0\leq x_{1}\leq 1\}}\frac{\epsilon_{1}l(\epsilon_{1})(1-x_{1})\theta_{1}}{x_{1}\theta_{1}+l(\epsilon_{1})(1-x_{1})\theta_{1}}\] (S19) The partial derivative of the objective function in (S19) w.r.t. \(x_{1}\) is: \[\frac{\partial}{\partial x_{1}}\left(\frac{\epsilon_{1}l(\epsilon_{1})(1-x_{1})\theta_{1}}{x_{1}\theta_{1}+l(\epsilon_{1})(1-x_{1})\theta_{1}}\right)=\frac{-\epsilon_{1}l(\epsilon_{1})}{[x_{1}+(1-x_{1})l(\epsilon_{1})]^{2}}<0\] (S20) Thus, we set \(x_{1}=1\) in (S19), and obtain \(\lambda_{l,m=2}=0\). Note that the value \(0\) is attainable, meaning that we cannot find a lower bound greater than \(0\) for the given optimisation problem. Finally, substituting \(\epsilon_{2}=\epsilon_{1}\) and \(\theta_{2}=0\) into the results (S2) and (S1), we obtain the results (S14) and \(0\), which are the closed-form BIPP bounds for \(m=2\). ## 3 Offshore Infrastructure Maintenance Experiments ### Simulation Platform In Section 2 of the main paper, we demonstrate the application of our robust Bayesian verification framework using a case study that involves an autonomous underwater vehicle (AUV) executing a structural health inspection and cleaning mission of the substructure of an offshore wind farm. The offshore wind farm consists of multiple floating wind turbines. Each turbine is a buoyant foundation structure secured to the sea bed with floating chains tethered to anchors. The AUV is deployed to collect data about the condition of the floating chains to enable the post-mission identification of problems that could affect the structural integrity of the chains. Figure 1 shows the AUV during the inspection of the last floating chain. Figure 1: Illustration of our robust Bayesian verification framework for the structural health inspection and cleaning mission using an autonomous underwater vehicle (AUV) at the point when the AUV inspects the final floating chain. The AUV-based mission is built on top of the open-source framework MOOS-IvP, a widely used platform for the implementation of autonomous applications with AUVs. When used for the execution of oceanic missions, MOOS-IvP is deployed on the payload computer of an AUV, facilitating the decoupling of the vehicle's autonomy from the navigation and control system running on the main AUV computer [1]. An AUV-based system leveraging MOOS-IvP is structured as a community of independent applications running in parallel that communicate via a MOOS database (MOOSDB) using a publish-subscribe architecture. Figure 2 shows the high-level architecture of MOOS-IvP. Applications publish messages in the form of key-value pairs with specified frequencies, sharing information about AUV components that an application monitors. Interested listening applications can use the keys to subscribe to messages and receive a notification when an update of that message becomes available.
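The publish-subscribe pattern described above can be illustrated with a small schematic sketch; this illustrates the pattern only and is not the actual MOOS-IvP API (all names are ours):

```python
from collections import defaultdict

class MOOSDBSketch:
    """Schematic key-value store with per-key subscriber notification,
    mirroring the MOOSDB publish-subscribe behaviour described above."""

    def __init__(self):
        self._values = {}
        self._subscribers = defaultdict(list)

    def subscribe(self, key, callback):
        # A listening application registers interest in a key.
        self._subscribers[key].append(callback)

    def publish(self, key, value):
        # A publishing application posts a key-value pair; every
        # subscriber of that key is notified of the update.
        self._values[key] = value
        for callback in self._subscribers[key]:
            callback(key, value)

db = MOOSDBSketch()
db.subscribe("NAV_DEPTH", lambda k, v: print(f"{k} -> {v}"))
db.publish("NAV_DEPTH", 12.5)  # prints: NAV_DEPTH -> 12.5
```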
The autonomous operation in MOOS-IvP is instrumented through a collection of behaviours, i.e., combinations of boolean logic constraints and piecewise-linear utility functions parametrised, for example, with parameters of the navigation and control system such as heading, speed or depth. During mission execution, the IvP Helm, the decision-making component of MOOS-IvP, periodically collects and reconciles the instantiated behaviours. If multiple behaviours are active simultaneously, the IvP Helm executes Interval Programming (IvP) multi-objective optimisation to determine the optimal action, i.e., an optimal point in the decision space defined by the constraints and utility functions. This optimal action is expressed as a set of key-value pairs and is published to the MOOSDB so that interested (subscribing) applications can receive this update and act upon it. To realise the AUV-based floating chain inspection and maintenance mission, we extended the MOOS-IvP framework and developed a new MOOS application (called RBV in Figure 2) that implements the overall mission scenario and controls the mission execution. In particular, the RBV application employs the built-in behaviours of MOOS-IvP (e.g., waypoint and station keep) to model the AUV mission and leverages the starting and ending conditions of these behaviours to instrument the decision-making via the IvP Helm. Furthermore, the RBV application provides several configuration parameters that enable the execution of custom experiments. For instance, users can define the probabilities and rates characterising the behaviour of each chain (i.e., specialising the continuous-time Markov chain (CTMC) model from the main paper), thus affecting the AUV behaviour. Using a seed as a configuration parameter makes it possible to reduce the non-determinism of the simulator, enhancing the reproducibility of the experiments and the robustness of the results obtained. The open-source RBV source code, the full experimental results, and additional information about RBV, including a video of the floating chain inspection and maintenance mission, are available at [https://github.com/gerasimou/RBV](https://github.com/gerasimou/RBV). ### Experimental Methodology We evaluated the capabilities of our RBV framework by performing a wide range of experiments that assess both the decision support offered by the framework and its overheads. Accordingly, we instrumented the simulation platform described in Section 3.1 with the implemented RBV framework (main paper, Figure 1) and realised the AUV-driven structural health inspection and cleaning mission presented in Section 2 of the main paper. Given the parametric CTMC model of the mission (main paper, Figure 2), we consider as unknown parameters the chain-dependent transition rate for cleaning the \(i\)-th chain (\(r_{i}^{\text{clean}}\)), and the mission-dependent transition rates for causing catastrophic damage to a floating chain or itself (\(r^{\text{damage}}\)) and for failing to clean (\(r^{\text{fail}}\)). Footnote 2: Since the floating chains are spatially located in the same area, we model the failure rate \(r^{\text{fail}}\) as a homogeneous parameter affecting all chains of the mission similarly. Nevertheless, our RBV framework can be easily adapted to support modelling an individual transition rate for failing to clean (\(r_{i}^{\text{fail}}\)) each \(i\)-th chain.
Figure 2: High-level MOOS-IvP architecture with the RBV framework implementation. We assemble the interval CTMC model using the BIPP and IPSP estimators to learn these unknown model parameters. In particular, we use the BIPP estimator to quantify the rate values associated with the singular events of cleaning the \(i\)-th chain (\(r_{i}^{\text{clean}}\)) and encountering a catastrophic failure (\(r^{\text{damage}}\)). The former corresponds to successfully completing a difficult one-off task, and the latter models a major failure. Since the AUV may try multiple times to clean a particular chain, we model the corresponding transition rate (\(r^{\text{fail}}\)) using the IPSP estimator, which is suitable for events observed regularly during system operation. ### Results We have already presented how our RBV framework supports the runtime verification of mission-critical autonomous robots for a typical scenario of the AUV-based offshore wind-turbine inspection and maintenance mission (main paper, Figure 3). Furthermore, we systematically analysed the operation of both BIPP and IPSP estimators in several scenarios with varying levels of partial prior knowledge (main paper, Figures 4 and 5). In this section, we present additional results for the end-to-end application of the RBV framework, focusing on the AUV behaviour over multiple failed attempts to clean a specific chain and the overheads associated with executing the online verification process. Figure 3 shows the verification results for requirements R1 - quantifying the probability of the mission completing successfully (top) and R2 - quantifying the expected energy consumption of the AUV (bottom) across successive attempts for the same AUV configuration. In each of these plots and irrespective of the system property measured, the computed value intervals become wider as the number of failed AUV attempts to clean the chain increases. For instance, consider requirement R1 and configuration 1 (shown on the top left in Figure 3), which shows a small increase in the reliability interval for the three initial attempts to clean the chain. Despite the interval becoming wider, the reliability threshold of 0.95 is satisfied; thus, this configuration is feasible and is included in the candidate set for further analysis using requirement R3 - selecting the configuration that maximises the number of chains cleaned. In contrast, the computed reliability interval for the fourth attempt violates the reliability threshold; thus, this configuration is infeasible. No valid configuration exists in the fourth attempt, and the AUV decides to skip the chain and move to the next. A similar pattern of wider value intervals is also observed for the energy consumption property (R2). In this case, the energy threshold decreases for each new attempt as the AUV has consumed energy trying to clean the chain in the previous attempts. Consequently, this requirement is more restrictive and leads to excluding further configurations; see, for instance, the violated energy threshold in attempt 3 for configurations 4 and 9. Figure 3: Computed value intervals for the reliability requirement R1, the probability that the AUV will not encounter a catastrophic failure during its mission (top) and energy requirement R2, the expected energy consumption (bottom), over successive attempts for the same AUV configuration. After a failed attempt, each new attempt for the same chain and AUV configuration results in a wider interval for the key system requirements R1 and R2.
The wider intervals over each successive failed attempt correspond to the increased uncertainty concerning the AUV's operation and its capacity to fulfil the mission successfully. The rationale underpinning this behaviour is that since both transition rates \(r_{i}^{\text{clean}}\) and \(r^{\text{damage}}\) employ the BIPP estimator, the posterior estimate bounds for both transition rates are wider and converge towards their theoretical asymptotic values (main paper, Section 4.4). However, since the prior knowledge for the \(r_{i}^{\text{clean}}\) rate is higher than the \(r^{\text{damage}}\) rate, the posterior bounds for the \(r_{i}^{\text{clean}}\) rate decline much faster than those of the \(r^{\text{damage}}\) rate, leading to a more conservative estimate and a wider interval for requirements R1 and R2. Figure 4 shows the computation overheads incurred by the RBV framework for executing the AUV-based mission. The values comprising each boxplot have been collected over 10 independent runs. Each value denotes the time consumed for a single online robust quantitative verification and reconfiguration step when the AUV attempts to clean the indicated chain. For instance, the boxplot associated with the 'Chain 1' ('Chain 2') label on the x-axis signifies that the AUV attempts to clean chain 1 (chain 2) and corresponds to the time consumed by the RBV framework to analyse 64 (32) configurations. Overall, the time overheads are reasonable for the purpose of this mission. Since the AUV has more configurations to analyse at the earlier stages of the mission (e.g., when inspecting chain 1), the results follow the anticipated exponential pattern. The number of configurations decreases by half each time the AUV progresses further into the mission and moves to the next chain. Another interesting observation is that the length of each boxplot is small, i.e., the lower and upper quartiles are very close, indicating that the RBV framework exhibits consistent behaviour in the time taken for its execution. The consumed time comprises (1) the time required to compute the posterior estimate bounds of the modelled transition rates, \(r_{i}^{\text{clean}}\), \(1\leq i\leq k\), \(r^{\text{damage}}\), and \(r^{\text{fail}}\), using the BIPP and IPSP estimators; (2) the time required to compute the value intervals for requirements R1 and R2 using the probabilistic model checker PrismPsy [2]; and (3) the time needed to find the best configuration satisfying requirements R1 and R2, and maximising requirement R3. Our empirical analysis provided evidence that the execution of the BIPP and IPSP estimators and the selection of the best configuration have negligible overheads, with almost all of the time incurred by PrismPsy. This outcome is not surprising and is aligned with the results reported in [2] concerning the execution overheads of the model checker.
2304.03584
Back to almost Ricci solitons
In the paper, we study complete almost Ricci solitons using the concepts and methods of geometric dynamics and geometric analysis. In particular, we characterize Einstein manifolds in the class of complete almost Ricci solitons. Then, we examine compact almost Ricci solitons using the orthogonal expansion of the Ricci tensor, this allows us to substantiate the concept of almost Ricci solitons.
Vladimir Rovenski, Sergey Stepanov, Irina Tsyganok
2023-04-07T10:50:06Z
http://arxiv.org/abs/2304.03584v1
# Back to almost Ricci solitons ###### Abstract In the paper, we study complete almost Ricci solitons using the concepts and methods of geometric dynamics and geometric analysis. In particular, we characterize Einstein manifolds in the class of complete almost Ricci solitons. Then, we examine compact almost Ricci solitons using the orthogonal expansion of the Ricci tensor; this allows us to substantiate the concept of almost Ricci solitons. **Keywords**: Almost Ricci soliton; energy density; infinitesimal harmonic transformation; conformal Killing vector. **Mathematics Subject Classifications (2010)** Primary: 53C21; Secondary: 58J05. ## 1 Introduction One of the important components of the theory of Ricci flow is self-similar solutions called Ricci solitons, see [7, pp. 153-176]. Ricci solitons, which are a generalization of Einstein manifolds, have been studied more and more intensively in the last twenty years. This theory, besides becoming widely known after G. Perelman's proof of the Poincaré conjecture (for details see [12]), has a wide range of applications in differential geometry and theoretical physics. In turn, the study of almost Ricci solitons, which are a generalization of quasi-Einstein manifolds and Ricci solitons, was started by Pigola, Rigoli, Rimoldi, and Setti, see [15]. An \(n\)-dimensional (\(n\geq 2\)) Riemannian manifold \((M,g)\) is called an _almost Ricci soliton_ if there exist a smooth complete vector field \(\xi\) and a function \(\lambda\in C^{\infty}(M)\) such that \[\operatorname{Ric}=\frac{1}{2}\,\mathcal{L}_{\xi}\,g+\lambda\,g. \tag{1.1}\] Here, \(\operatorname{Ric}\) is the Ricci tensor and \(\mathcal{L}_{\xi}\) is the Lie derivative operator in the direction of \(\xi\). Namely, \((\mathcal{L}_{\xi}\,g)(X,Y)=g(\nabla_{X}\xi,Y)+g(\nabla_{Y}\xi,X)\) for all smooth vector fields \(X,Y\) on \(M\), where \(\nabla\) is the covariant derivative (Levi-Civita connection). Denote by \((M,g,\xi,\lambda)\) an almost Ricci soliton. For \(\lambda=const\), it is a Ricci soliton. Note that when \(\xi\) is a Killing vector field, i.e., \(\mathcal{L}_{\xi}\,g=0\), an almost Ricci soliton \((M,g,\xi,\lambda)\) is an _Einstein manifold_, i.e., \(\operatorname{Ric}=\frac{s}{n}\,g\), to which we can apply Schur's lemma, e.g., [11], to obtain \(\lambda=const\). In the special case where \(\xi=\nabla f\) for some function \(f\in C^{\infty}(M)\), we say that \((M,g,\xi,\lambda)\) is a gradient almost Ricci soliton with potential function \(f\). In [15], complete gradient almost Ricci solitons are considered. Other more recent papers have studied compact almost Ricci solitons (e.g., [3, 2, 8]) or almost Ricci solitons on manifolds with additional geometric structures, e.g., [10, 14]. There are also attempts to find applications of almost Ricci solitons in theoretical physics, see, e.g., [9]. In Sections 2-3, we study complete almost Ricci solitons using concepts and methods of geometric dynamics and geometric analysis. In Section 4, we study compact almost Ricci solitons by applying the orthogonal expansion of symmetric two-tensors (see [5, p. 130]) to the Ricci tensor. In particular, this will make it possible to substantiate the concept of almost Ricci solitons. ## 2 Complete almost Ricci solitons Here, we study complete almost Ricci solitons from the point of view of geometric dynamics, see [19, 1]. Denote by \(\theta\) the \(g\)-dual one-form of \(\xi\) and \(\bar{\Delta}=\nabla^{*}\nabla\) the Laplace operator for the formal adjoint operator \(\nabla^{*}\) of \(\nabla\).
First, we formulate a lemma needed to prove our main results. **Lemma 2.1**.: _The vector field \(\xi\) of an almost Ricci soliton \((M,g,\xi,\lambda)\) satisfies the equation_ \[\bar{\Delta}\,\theta=\operatorname{Ric}(\xi,\cdot)-(n-2)\,d\lambda\,. \tag{2.1}\] Recall that a vector field \(\xi\) generates a flow on a manifold, which is a one-parameter group of infinitesimal self-diffeomorphisms [11, pp. 12-14]. A vector field \(\xi\) is an _infinitesimal harmonic transformation_ on \((M,g)\) if the local one-parameter group of infinitesimal self-diffeomorphisms generated by \(\xi\) is a group of harmonic self-diffeomorphisms. A vector field \(\xi\) is an infinitesimal harmonic transformation on \((M,g)\) if and only if \(\bar{\Delta}\,\theta=\operatorname{Ric}(\xi,\,\cdot)\), where \(\theta\) is the \(g\)-dual one-form of \(\xi\), see [17]. In particular, a Killing vector field is an example of an infinitesimal harmonic transformation on \((M,g)\), see [16]. Moreover, a vector field \(\xi\) associated with a Ricci soliton \((M,g,\xi,\lambda)\) is also an infinitesimal harmonic transformation on \((M,g)\) [17, 18]. Note that a local one-parameter group of infinitesimal harmonic transformations, or a harmonic flow generated by \(\xi\), is directly related to DeTurck harmonic flows [7, pp. 113-117]. The next corollary follows from Lemma 2.1. **Corollary 2.1**.: _An \(n\)-dimensional \((n\geq 3)\) almost Ricci soliton \((M,g,\xi,\lambda)\) is a Ricci soliton if and only if its vector field \(\xi\) is an infinitesimal harmonic transformation. At the same time, the vector field \(\xi\) associated with a two-dimensional almost Ricci soliton \((M,g,\xi,\lambda)\) is an infinitesimal harmonic transformation._ The function \[e(\xi):=\frac{1}{2}\,\|\xi\|^{2}=\frac{1}{2}\,g(\xi,\xi)\] is said to be the _energy density_ of the flow generated by the vector field \(\xi\), see [19, pp. 273-274]. The _kinetic energy_ of the flow of \(\xi\) is defined by the integral formula, see [1, pp. 2, 19, 37], \[E(\xi)=\int_{M}\,e(\xi)\,d\operatorname{vol}_{g}.\] The kinetic energy can be infinite or finite (e.g., it is finite on a compact manifold). Note that the kinetic energy plays an important role in Hamiltonian dynamics, see, e.g., [1]. Based on the above definition and Lemma 2.1, we formulate our main theorem. **Theorem 2.1**.: _Let \((M,g,\xi,\lambda)\) be an \(n\)-dimensional \((n\geq 3)\) complete almost Ricci soliton such that the rate of change of \(\lambda\) along the trajectories of the \(\xi\)-flow is pointwise bounded from below by \(\operatorname{Ric}(\xi,\xi):\)_ \[\mathcal{L}_{\xi}\,\lambda\geq\operatorname{Ric}(\xi,\xi). \tag{2.2}\] _If \(E(\xi)<\infty\), then \(\xi\) is a parallel vector field and \((M,g)\) is an Einstein manifold. Furthermore, if the soliton has infinite volume, then \(\xi=0\)._ According to the definition of the Lie derivative, e.g., [11, pp. 29-30], the Lie derivative of a function \(f\in C^{1}(M)\) with respect to the vector field \(\xi\) in (2.2) is given by \[\mathcal{L}_{\xi}f=\xi(f)=df(\xi)=\nabla_{\xi}f=g(\nabla f,\xi).\] In particular, it follows from Theorem 2.1 that not every complete Riemannian manifold supports an almost Ricci soliton structure, see also [15, Corollary 1.5 and Example 2.4]. _Remark 2.1_.: If a Riemannian manifold \((M,g)\) admits a complete parallel vector field, then \((M,g)\) is reducible, i.e., it is locally the metric product of a real line and some other Riemannian manifold.
In Theorem 2.1, instead of the condition "infinite volume", one can assume that \((M,g)\) is not reducible. Theorem 2.1 can be supplemented as follows: if \((M,g,\xi,\lambda)\) is a two-dimensional complete almost Ricci soliton satisfying \(\operatorname{Ric}(\xi,\xi)\leq 0\) and \(E(\xi)<\infty\), then \((M,g,\xi,\lambda)\) is isometric to the Euclidean plane or one of the flat complete surfaces: the cylinder, the torus, the Möbius band and the Klein bottle. The following assertion follows from (1.1) and Theorem 2.1. **Corollary 2.2**.: _Let \((M,g,\xi,\lambda)\) be a complete almost Ricci soliton such that \(E(\xi)<\infty\). If \(\lambda\) is a non-decreasing function along the trajectories of the \(\xi\)-flow and \(\mathcal{L}_{\xi}\sqrt{e(\xi)}\leq-\lambda\), then \((M,g)\) is an Einstein manifold. Furthermore, if the soliton has infinite volume, then \(\xi=0\)._ Recall that the _volume form_ of \((M,g)\) is defined by the equality \(\omega_{g}(\partial_{1},\dots,\partial_{n})=\sqrt{\det g}\) for \(\partial_{k}=\partial/\partial x^{k}\) with respect to local coordinates \(x^{1},\dots,x^{n}\). Note also that a Riemannian manifold \((M,g)\) has a (global) volume element if and only if \((M,g)\) is orientable, see [13, p. 195]. A volume form on a connected manifold \((M,g)\) has a single global invariant, namely the (overall) volume, \(\operatorname{Vol}(M,g)=\int_{M}\omega_{g}\), which is invariant under volume-form preserving transformations. The volume \(\operatorname{Vol}(M,g)\) can be infinite or finite (e.g., \(\operatorname{Vol}(M,g)<\infty\) for a compact manifold \(M\)). On the other hand, a complete non-compact Riemannian manifold with non-negative Ricci curvature has infinite volume, see [4]. For the volume form \(\omega_{g}\) of \((M,g)\), one can consider its _Lie derivative_ along trajectories of the flow of \(\xi\). Namely, we have the following, see [12, p. 281]: \(\mathcal{L}_{\xi}\,\omega_{g}=(\operatorname{div}\xi)\,\omega_{g}\). According to the definition of the Lie derivative, \(\mathcal{L}_{\xi}\,\omega_{g}\) measures the rate of change of the volume form \(\omega_{g}\) under deformations determined by a one-parameter group of differentiable transformations (or a flow) generated by the vector field \(\xi\). In the well-known monograph [13, p. 195], the function \(\operatorname{div}\xi\) was called the _logarithmic rate of change of volume_ (or, in other words, the _rate of volume expansion_) under the flow generated by the vector field \(\xi\). On the other hand, the condition \(\operatorname{div}\xi=0\) is equivalent to \(\mathcal{L}_{\xi}\,\omega_{g}=0\). This means that the one-parameter group of differentiable transformations leaves \(\omega_{g}\) invariant or, in other words, that the vector field \(\xi\) is an infinitesimal automorphism of the volume form, see [19, p. 6]. In dynamics, such a vector field \(\xi\) is said to be _divergence-free_ and the flow generated by it is said to be _incompressible_, see [13, p. 125]. The geometric dynamics of divergence-free vector fields was studied in detail in the monograph [1]. The following proposition is true. **Lemma 2.2**.: _Let \((M,g,\xi,\lambda)\) be a complete oriented almost Ricci soliton such that the length of \(\xi\) is integrable._
_If the logarithmic rate of volumetric expansion does not change sign on \(M\) under deformations determined by the flow of \(\xi\), then (1.1) has the form_ \[\operatorname{Ric}=\frac{1}{2}\,\mathcal{L}_{\xi}\,g+\frac{s}{n}\,g\,,\] _where the components of the right-hand side are orthogonal to each other with respect to the standard pointwise scalar product. On the other hand, if (1.1) is of the form indicated above, then the flow of \(\xi\) is incompressible._ **Corollary 2.3**.: _Let \((M,g,\xi,\lambda)\) be a complete oriented almost Ricci soliton such that the length of \(\xi\) is integrable. If the logarithmic rate of volumetric expansion does not change sign on \(M\) under deformations determined by the flow of \(\xi\), then \(\xi\) satisfies the equation_ \[\bar{\Delta}\,\theta=\operatorname{Ric}(\xi,\cdot)-\frac{n-2}{n}\,ds.\] ## 3 Proof of results in Section 2 **Proof of Lemma 2.1 and Corollary 2.1**. The equation (1.1) has the following form with respect to local coordinates \(x^{1}\),..., \(x^{n}\): \[R_{ij}=\frac{1}{2}\,\mathcal{L}_{\xi}\,g_{ij}+\lambda\,g_{ij}, \tag{3.1}\] where \(R_{ij}\) and \(g_{ij}\) stand for the components of the Ricci tensor \(\operatorname{Ric}\) and the metric tensor \(g\), respectively, and \(\xi_{i}=g_{ij}\,\xi^{j}\) are the components of the one-form \(\theta=\xi_{j}\,dx^{j}\) corresponding to \(\xi\) under the duality defined by the metric \(g\). According to the formula \(\mathcal{L}_{\xi}\,g_{ij}=\nabla_{i}\,\xi_{j}+\nabla_{j}\,\xi_{i}\), where \(\nabla_{i}=\nabla_{\partial/\partial x^{i}}\), we obtain from (3.1) the equality \[\operatorname{div}\xi:=\nabla_{i}\,\xi^{i}=s-n\,\lambda \tag{3.2}\] for the scalar curvature \(s\) of the metric \(g\). Applying the operator \(\nabla^{i}=g^{ij}\nabla_{j}\) to (3.1), we find \[\nabla^{i}\,\nabla_{i}\,\xi_{j}+\nabla^{i}\nabla_{j}\,\xi_{i}=\nabla_{j}\,s-2\,\nabla_{j}\,\lambda\,. \tag{3.3}\] Using the contracted second Bianchi identity, see [11], and \(\nabla_{i}\,\xi^{i}\) of (3.2), we have \[\nabla_{i}\nabla_{j}\,\xi^{i}=\nabla_{j}\nabla_{i}\,\xi^{i}+R_{ij}\,\xi^{i}=\nabla_{j}\,s-n\nabla_{j}\,\lambda+R_{ij}\,\xi^{i}.\] Using this and noting that \(\bar{\Delta}=-\nabla^{i}\,\nabla_{i}\) (the Laplace operator \(\bar{\Delta}=\nabla^{*}\nabla\) and its expression in coordinates coincide, see [5, Paragraph 1.55]), we rewrite (3.3) in the following form: \[\bar{\Delta}\,\xi_{j}=R_{ij}\,\xi^{i}-(n-2)\,\nabla_{j}\,\lambda\,. \tag{3.4}\] In coordinate-free form, (3.4) coincides with (2.1), which proves Lemma 2.1. By this and (2.1), we complete the proof of Corollary 2.1. \(\square\) **Proof of Theorem 2.1**. In our case, the second _Kato inequality_ (see [4, p. 380]) \[\|\xi\|\,\Delta\|\xi\|\geq-g(\bar{\Delta}\,\xi,\xi),\] using Lemma 2.1, can be rewritten in the following form: \[\|\xi\|\,\Delta\|\xi\|\geq(n-2)\,\mathcal{L}_{\xi}\lambda-\operatorname{Ric}(\xi,\xi),\] where \(\Delta\) is the _Laplace-Beltrami operator_ defined by the equality \(\Delta\,f=\operatorname{trace}_{g}\nabla\,df\) for an arbitrary function \(f\in C^{2}(M)\). The assumption (2.2) for \(n\geq 3\) implies that \(\|\xi\|\,\Delta\|\xi\|\geq 0\). Then by the classical theorem of geometric analysis (see [21]), either \(\int_{M}\|\xi\|^{p}\,d\operatorname{vol}_{g}=\infty\) for a positive number \(p>1\), or \(\|\xi\|=const\). Thus, if \(\|\xi\|\in L^{p}(M,g)\) at least for one \(p>1\), then \(\|\xi\|=const\). Note that the inequality \(E(\xi)<\infty\) is equivalent to \(\|\xi\|\in L^{2}(M,g)\).
By the above (for \(p=2\)) and the condition \(E(\xi)<\infty\), we get \(\|\xi\|=const\). Using this and (3.4), we derive \[0=\frac{1}{2}\,\Delta\,g(\xi,\xi)=-g(\bar{\Delta}\,\xi,\xi)+\|\nabla\,\xi\|^{2}=-\operatorname{Ric}(\xi,\,\xi)+(n-2)\,\mathcal{L}_{\xi}\,\lambda+\|\nabla\,\xi\|^{2}. \tag{3.5}\] By (3.5), \(\xi\) is a parallel vector field; in particular, \(\mathcal{L}_{\xi}\,g=0\). From (1.1) we get \(\operatorname{Ric}=\lambda\,g\). Since \(n\geq 3\), by Schur's lemma, e.g., [11], we get \(\lambda=const\). Thus, \((M,g)\) is an Einstein manifold. Next, if \(\operatorname{Vol}(M,g)=\infty\), then using \(E(\xi)<\infty\) and \(\|\xi\|=const\), we get \(\xi=0\). \(\square\) **Proof of Corollary 2.2**. From (3.1) we derive the following equation: \[R_{ij}\,\xi^{i}\xi^{j}=\xi^{i}(\nabla_{i}\,\xi_{j})\,\xi^{j}+\lambda\,\|\xi\|^{2}, \tag{3.6}\] where \(\xi^{i}(\nabla_{i}\,\xi_{j})\xi^{j}=\frac{1}{2}\,\xi^{i}\nabla_{i}(\xi_{j}\,\xi^{j})=\mathcal{L}_{\xi}\,e(\xi)\). Thus, we can rewrite (3.6) in the form \[\operatorname{Ric}(\xi,\xi)=\lambda\,\|\xi\|^{2}+\mathcal{L}_{\xi}\,e(\xi).\] Therefore, the condition \(\operatorname{Ric}(\xi,\xi)\leq 0\) is equivalent to the inequality \(\mathcal{L}_{\xi}\sqrt{e(\xi)}\leq-\lambda\). From the above, the validity of Corollary 2.2 follows. **Proof of Lemma 2.2 and Corollary 2.3**. Recall the following theorem, see [6]: let \(\xi\) be a smooth vector field on a complete oriented Riemannian manifold \((M,g)\) such that \(\|\xi\|\in L^{1}(M,g)\) and \(\operatorname{div}\xi\) does not change sign on \((M,g)\); then \(\operatorname{div}\xi=0\) on \((M,g)\). In particular, if \(\xi\) is the vector field of a complete, noncompact and oriented almost Ricci soliton \((M,g,\xi,\lambda)\), then from (3.2) we obtain \(s=n\,\lambda\). In this case, (1.1) can be rewritten in the form \[\operatorname{Ric}=\frac{1}{2}\,\mathcal{L}_{\xi}\,g+\frac{s}{n}\,g. \tag{3.7}\] Hence, \(g(\frac{1}{2}\,\mathcal{L}_{\xi}\,g,\frac{s}{n}\,g)=\frac{s}{n}\operatorname{div}\xi=0\). Therefore, the terms of the right-hand side of (3.7) are orthogonal to each other with respect to the pointwise scalar product. In turn, (2.1) takes the form \(\bar{\Delta}\,\theta=\operatorname{Ric}(\xi,\,\cdot)-\frac{n-2}{n}\,ds\). ## 4 Compact almost Ricci solitons Here, we study compact almost Ricci solitons using the orthogonal expansion of the Ricci tensor, obtained using the expansion of the space of symmetric two-tensors given in [5, p. 130]. Denote by \(S^{p}M\) the space of symmetric covariant \(p\)-tensors on a compact Riemannian manifold \((M,g)\), and define the _global scalar product_ for any \(\varphi,\varphi^{\prime}\in S^{p}M\) by the formula \[\langle\varphi,\ \varphi^{\prime}\rangle=\int_{M}g(\varphi,\,\varphi^{\prime})\,d\operatorname{vol}_{g}. \tag{4.1}\] Let \(\delta^{*}:C^{\infty}(S^{1}M)\to C^{\infty}(S^{2}M)\) be the first-order differential operator defined by \(\delta^{*}\theta=\frac{1}{2}\,\mathcal{L}_{\xi}\,g\) for any smooth one-form \(\theta\) and its \(g\)-dual vector field \(\xi\), see [5, pp. 117, 514]. Let also \(\delta:C^{\infty}(S^{2}M)\to C^{\infty}(S^{1}M)\) be the formal adjoint operator for \(\delta^{*}\), which is called the divergence of symmetric two-tensors. In this case, \(\langle\varphi,\delta^{*}\theta\rangle=\langle\delta\varphi,\theta\rangle\) is true for any \(\varphi\in C^{\infty}(S^{2}M)\) and \(\theta\in C^{\infty}(S^{1}M)\).
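In coordinates, and consistent with the convention \(-\delta\theta=\operatorname{div}\xi\) used in the proof of Lemma 4.1 below, these operators read \[(\delta^{*}\theta)_{ij}=\frac{1}{2}\,(\nabla_{i}\,\theta_{j}+\nabla_{j}\,\theta_{i}),\qquad(\delta\varphi)_{j}=-\nabla^{i}\varphi_{ij},\] and the adjointness relation \(\langle\varphi,\delta^{*}\theta\rangle=\langle\delta\varphi,\theta\rangle\) follows by integration by parts on the compact manifold \(M\), using the symmetry of \(\varphi\).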
For a compact Riemannian manifold \((M,g)\), the algebraic sum \(\operatorname{Im}\delta^{*}+C^{\infty}(M)\,\cdot\,g\) is closed in \(S^{2}M\), and the following decomposition is true: \[S^{2}M=(\operatorname{Im}\delta^{*}+C^{\infty}(M)\,\cdot\,g)\oplus(\delta^{- 1}(0)\cap\operatorname{trace}_{g}^{-1}(0)); \tag{4.2}\] furthermore, both summands in (4.2) are infinite-dimensional and orthogonal to each other with respect to the global scalar product (4.1), see [5, p. 130]. **Lemma 4.1**.: _Let \((M,g)\) be a compact \(n\)-dimensional Riemannian manifold and_ \[\operatorname{Ric}=\frac{1}{2}\,\mathcal{L}_{\xi}\,g+\lambda\,g+\varphi\] _the orthogonal expansion (with respect to the global scalar product) of the Ricci tensor for some vector field \(\xi\) and divergence-free and trace-free symmetric two-form \(\varphi\). Then_ 1. _for_ \(n\geq 3\)_, the vector field_ \(\xi\) _is an infinitesimal harmonic transformation if and only if_ \(\lambda=const\)_, and for_ \(n=2\)_, the vector field_ \(\xi\) _is always an infinitesimal harmonic transformation;_ 2. _the assumptions_ \(n\geq 3\) _and_ \(\int_{M}(\mathcal{L}_{\xi}\,s)\,d\operatorname{vol}_{g}\geq 0\) _imply that_ \(\varphi=Ric-\frac{s}{n}\,g\) _for the scalar curvature_ \(s=const\)_, and that_ \(\xi\) _is a conformal Killing vector field._ Proof.: For the Ricci tensor, decomposition (4.2) has the form \[\operatorname{Ric}=\big{(}\frac{1}{2}\,\mathcal{L}_{\xi}\,g+\lambda\,g\big{)}+\varphi \tag{4.3}\] for some divergence-free and trace-free tensor \(\varphi\in C^{\infty}(S^{2}M)\) and function \(\lambda\in C^{\infty}(M)\). From (4.3) we get \[\operatorname{div}\xi=-\delta\,\theta=s-n\,\lambda.\] Applying the operator \(\delta\) to (4.3), we find, see also (2.1), \[\bar{\Delta}\,\theta=\operatorname{Ric}(\xi,\,\cdot)-(n-2)\,d\lambda.\] Therefore, for \(n\geq 3\), \(\xi\) is an infinitesimal harmonic transformation if and only if \(\lambda=const\), and for \(n=2\), the vector field \(\xi\) is always an infinitesimal harmonic transformation. On the other hand, using \(n\,\lambda=\delta\,\theta+s\), we derive the following equalities: \[0 =\left\langle\varphi,\,\delta^{*}\theta+\lambda g\right\rangle= \left\langle\operatorname{Ric}-\delta^{*}\theta-\lambda\,g,\delta^{*}\theta+ \lambda\,g\right\rangle\] \[=\left\langle\operatorname{Ric},\delta^{*}\theta\right\rangle- \left\langle\delta^{*}\theta,\delta^{*}\theta\right\rangle-\left\langle \lambda\,g,\delta^{*}\theta\right\rangle+\left\langle\operatorname{Ric},\, \lambda\,g\right\rangle-\left\langle\delta^{*}\theta,\lambda\,g\right\rangle- \left\langle\lambda\,g,\lambda\,g\right\rangle\] \[=\left\langle\delta\,\operatorname{Ric},\theta\right\rangle-\left\langle \delta^{*}\theta,\delta^{*}\theta\right\rangle-2\left\langle\lambda\,g, \delta^{*}\theta\right\rangle+\int_{M}(\lambda\,s)\,d\operatorname{vol}_{g}-n \int_{M}\lambda^{2}\,d\operatorname{vol}_{g}\] \[=-\frac{1}{2}\left\langle ds,\theta\right\rangle-\left\langle \delta^{*}\theta,\delta^{*}\theta\right\rangle+2\left\langle d\lambda,\theta \right\rangle+\int_{M}\lambda(s-n\lambda)\,d\operatorname{vol}_{g}\] \[=-\frac{1}{2}\left\langle ds,\theta\right\rangle-\left\langle \delta^{*}\theta,\delta^{*}\theta\right\rangle+2\left\langle d\lambda,\theta \right\rangle-\int_{M}(\lambda\,\delta\,\theta)\,d\operatorname{vol}_{g}\] \[=-\frac{1}{2}\left\langle ds,\theta\right\rangle-\left\langle \delta^{*}\theta,\delta^{*}\theta\right\rangle+\left\langle d\lambda,\theta \right\rangle.
\tag{4.4}\] Hence, \[n\left\langle d\lambda,\theta\right\rangle=\left\langle d(\delta\,\theta+s), \theta\right\rangle=\left\langle d\delta\theta,\theta\right\rangle+\left\langle ds,\theta\right\rangle=\left\langle\delta\theta,\delta\theta\right\rangle+ \left\langle ds,\theta\right\rangle. \tag{4.5}\] Therefore, from (4.4) and (4.5) we derive \[\frac{n-2}{2n}\int_{M}(\mathcal{L}_{\xi}\,s)\,d\operatorname{vol}_{g}=-\left \langle\delta^{*}\theta,\delta^{*}\theta\right\rangle+\frac{1}{n}\left\langle \delta\theta,\delta\theta\right\rangle\leq 0,\] because \(\|\varphi\|^{2}\geq\frac{1}{n}\left(\operatorname{trace}_{g}\varphi\right)^{2}\) for any covariant two-tensor \(\varphi\), applied here to \(\varphi=\delta^{*}\theta\) with \(\operatorname{trace}_{g}\delta^{*}\theta=-\delta\,\theta\). The assumptions \(n\geq 3\) and \(\int_{M}(\mathcal{L}_{\xi}\,s)\,d\operatorname{vol}_{g}\geq 0\), or, in particular, \(\mathcal{L}_{\xi}\,s\geq 0\), imply \[\left\langle\delta^{*}\theta,\delta^{*}\theta\right\rangle-\frac{1}{n}\left\langle \delta\theta,\delta\theta\right\rangle=0. \tag{4.6}\] On the other hand, the following equality is valid: \[\left\|\frac{1}{2}\,\mathcal{L}_{\xi}\,g-\frac{1}{n}\left(\operatorname{div} \xi\right)g\right\|^{2}=g(\delta^{*}\theta,\delta^{*}\theta)-\frac{1}{n}\,( \delta\,\theta)^{2}. \tag{4.7}\] From (4.6) and (4.7) we find that \[\frac{1}{2}\,\mathcal{L}_{\xi}\,g=\frac{1}{n}\left(\operatorname{div}\xi \right)g,\] i.e., \(\xi\) is a conformal Killing vector field. In this case, from (4.3) we deduce that \[\varphi=\operatorname{Ric}-\frac{s}{n}\,g,\] to which we can apply Schur's lemma and then conclude that \(s=const\). A statement similar to the following corollary was proved in [17] for Ricci solitons. **Corollary 4.1**.: _Let \((M,g,\xi,\lambda)\) be an \(n\)-dimensional \((n\geq 3)\) compact almost Ricci soliton such that_ \[\int_{M}(\mathcal{L}_{\xi}\,s)\,d\operatorname{vol}_{g}\geq 0\] _for its scalar curvature \(s\). Then \((M,g)\) is isometric to a Euclidean \(n\)-sphere._ Proof.: For \(\varphi=0\), equation (4.3) takes the form of the almost Ricci soliton equation (1.1). Thus, if \((M,g,\xi,\lambda)\) is a compact almost Ricci soliton such that \[\int_{M}\left(\mathcal{L}_{\xi}\,s\right)d\,\mathrm{vol}_{g}\geq 0,\] or, in particular, \(\mathcal{L}_{\xi}\,s\geq 0\), then by Lemma 4.1 it is an Einstein manifold, and \(\xi\) is a conformal Killing vector field. By [20, Corollary 4.5], the soliton is a Euclidean sphere. _Remark 4.1_.: The concept of almost Ricci soliton was introduced in [15] as a Riemannian manifold \((M,g)\) satisfying the equation \[\mathrm{Ric}+\frac{1}{2}\,\mathcal{L}_{V}\,g=\lambda\,g,\] where \(\lambda\in C^{\infty}(M)\) and \(V\) is a smooth vector field on \(M\). The above equation and (1.1) are equivalent. Namely, if we set \(\xi=-V\), then we derive (1.1) from the above equation. Thus, the inequalities \(\mathcal{L}_{\xi}\,s\geq 0\) and \(\int_{M}(\mathcal{L}_{\xi}\,s)\,d\,\mathrm{vol}_{g}\geq 0\) (with the scalar curvature \(s\) of the metric \(g\)) can be rewritten in the form \(\mathcal{L}_{V}\,s\leq 0\) and \(\int_{M}(\mathcal{L}_{V}\,s)\,d\,\mathrm{vol}_{g}\leq 0\), respectively, see also [3], where the inequalities are replaced by equalities. For example, if \(s=const\) along the trajectories of the flow of \(\xi\) on a compact almost Ricci soliton \((M,g,\xi,\lambda)\) with a nonconstant function \(\lambda\), then \((M,g)\) is isometric to a Euclidean \(n\)-sphere, see [3].
2303.10239
Topic Modeling in Density Functional Theory on Citations of Condensed Matter Electronic Structure Packages
With an increasing number of new scientific papers being released, it becomes harder for researchers to be aware of recent articles in their field of study. Accurately classifying papers is a first step in the direction of personalized recommendations and easy access to research of interest. The field of Density Functional Theory (DFT) in particular is a good example of a methodology used in very different studies and interconnected disciplines, which has a very strong community publishing many research articles. We devise a new unsupervised method for classifying publications, based on topic modeling, and use a DFT-related selection of documents as a use case. We first create topics from word analysis and clustering of the abstracts from the publications, then attribute each publication/paper to a topic based on word similarity. We then make interesting observations by analyzing connections between the topics and publishers, journals, country or year of publication. The proposed approach is general, and can be applied to analyze publication and citation trends in other areas of study, beyond the field of Density Functional Theory.
Marie Dumaz, Camila Romero-Bohorquez, Donald Adjeroh, Aldo H. Romero
2023-02-16T17:04:20Z
http://arxiv.org/abs/2303.10239v1
# Topic Modeling in Density Functional Theory on Citations of Condensed Matter Electronic Structure Packages

###### Abstract

With an increasing number of new scientific papers being released, it becomes harder for researchers to be aware of recent articles in their field of study. Accurately classifying papers is a first step in the direction of personalized recommendations and easy access to research of interest. The field of Density Functional Theory (DFT) in particular is a good example of a methodology used in very different studies and interconnected disciplines, which has a very strong community publishing many research articles. We devise a new unsupervised method for classifying publications, based on topic modeling, and use a DFT-related selection of documents as a use case. We first create topics from word analysis and clustering of the abstracts from the publications, then attribute each publication/paper to a topic based on word similarity. We then make interesting observations by analyzing connections between the topics and publishers, journals, country or year of publication. The proposed approach is general, and can be applied to analyze publication and citation trends in other areas of study, beyond the field of Density Functional Theory.

## 1 Introduction

For many years, scientific documents in basic sciences were classified using the Physics and Astronomy Classification Scheme (PACS) method, which maps a series of codes with keywords to cover the main topics of the publication. This is usually done by the author, and provides relative freedom for the actual classification. Though this is still one of the methods used by several journals, this step is transitioning into a more focused type of topic classification [13]. Basically, this change is driven by the fact that the institute that introduced this approach (the American Institute of Physics, AIP) has decided to discontinue its support, primarily due to the huge cost of administration and paper handling during manuscript submission. Therefore, a more dynamic and flexible method is necessary: a method where different topics capture the continuous change in scientific development and scientific interest, and which is able to map reviewers and even journals in this arena. In this respect, the use of topic modeling from machine learning is gaining a lot of attention [14, 2]. In this paper, we propose an unsupervised learning technique that focuses on finding text correlations and thus classifies documents according to a small set of topics. These topics are found through an unsupervised machine learning technique that creates clusters of meaningful words in the document selection, by finding similarity between them in a latent space. Though the idea of topic modeling has been used in the past to classify research topics [10, 15], in this work, we start from a well-defined database and apply this methodology to publications related to the use of electronic structure packages. While a general method to categorize topics in basic sciences would be ideal, in this paper, we have decided to focus on a particular research field, where we can have access to a very well controlled database [6]. Density Functional Theory (DFT) is the most used methodology to characterize materials at the atomic scale [9, 8]. It falls into the so-called electronic structure methods, and it has the benefit that a large number of computational codes and software packages employ this method, which have created communities around them, supporting the research efforts.
We base our document selection on research papers citing those computational code libraries, in order to reduce our scope to only DFT-related papers. Some efforts to analyze DFT-related papers from different subfields have been made, but they classify the articles based on the PACS numbers [1, 5]. However, our dataset downloaded from the Web of Science database does not keep records of the PACS numbers and only retains keywords. Moreover, as DFT is a methodology with many different applications, in fields other than physics, not all papers follow the PACS structure. This shows again the necessity for a new classification system, based purely on the content of the documents. Our approach is not only able to classify documents on content but can also be used to find novel topics and relationships between topics, which can be used to identify new and emerging research fields.

## 2 Methodology

The research papers and their abstracts were obtained using the same methodology as in previous work [6], through the Web of Science database (WoS), but expanded to include 2020 articles. Once we extracted the abstracts from the WoS database, we performed several pre-processing steps to capture only the most meaningful words out of the text. First, we deleted certain groups of words, such as "left arrow" or "vertical bar," that are only used to create mathematical representations. For the same reason, we removed all characters between two dollar signs, as this is a standard command to make equations, which would only confuse the algorithm. There is one exception to this rule if an underscore is found between the two dollar signs, as that pattern is used for chemical formulas, which are essential and meaningful notions in sub-fields of DFT. We also remove publishing and copyright tags, DOI numbers, and all non-alphanumeric characters. Then, the leftover sentences get tokenized, which means the text is broken down into units, creating a list of tokens from the text. From those lists, we take out all stopwords, or words that are so common they do not help differentiate sub-fields. We used the standard NLTK [3] list of stopwords and added others based on our own experience with the data after rounds of trial and feedback. For example, "theory" and "method" are not specific to some areas of physics like magnetic properties or gas absorption. We also remove all single letters and digits, which are often used in mathematical expressions or to quantify a value and do not hold meaning intrinsically. We then create bigrams and trigrams, groups of two or three tokens that often appear one after the other and will now be linked as one word by an underscore. Examples would be "ab initio" and "density functional theory." Finally, the tokens are lemmatized to reduce words to their root, and the resulting list of tokens is filtered to only keep the ones that appear in at least two documents and in less than 20% of the papers. From this final list, a dictionary and a Bag of Words (BoW) are created, which are the input to our algorithm. Latent Dirichlet Allocation (LDA) [4] is a topic modeling algorithm which uses Bayesian statistical methods to model how the documents, passed in as input in the BoW form, are generated. This means that LDA does not take into account the order of words.
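As an illustration, the preprocessing pipeline just described might be sketched as follows with NLTK and Gensim. This is our own minimal sketch, not the authors' code; in particular, the function name `clean_abstract` and the variable `abstracts` (the list of raw abstract strings from the WoS export) are assumptions.

```python
# Minimal sketch (ours) of the preprocessing pipeline described above.
import re
from nltk.corpus import stopwords            # requires nltk.download("stopwords")
from nltk.stem import WordNetLemmatizer      # requires nltk.download("wordnet")
from gensim.models import Phrases
from gensim.corpora import Dictionary

STOPWORDS = set(stopwords.words("english")) | {"theory", "method"}  # plus custom words
lemmatizer = WordNetLemmatizer()

def clean_abstract(text):
    # keep $...$ content only when it contains an underscore (chemical formulas)
    text = re.sub(r"\$([^$]*)\$",
                  lambda m: m.group(1) if "_" in m.group(1) else " ", text)
    text = re.sub(r"[^0-9A-Za-z_\s]", " ", text)       # drop non-alphanumerics
    tokens = [t.lower() for t in text.split()]
    tokens = [t for t in tokens
              if t not in STOPWORDS and len(t) > 1 and not t.isdigit()]
    return [lemmatizer.lemmatize(t) for t in tokens]

texts = [clean_abstract(a) for a in abstracts]
bigram = Phrases(texts, min_count=5)          # a second Phrases pass would add trigrams
texts = [bigram[t] for t in texts]

dictionary = Dictionary(texts)
dictionary.filter_extremes(no_below=2, no_above=0.2)   # >=2 documents, <20% of documents
bow = [dictionary.doc2bow(t) for t in texts]
```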
More precisely, an LDA model creates a set of topics where each topic is a probability distribution over a corpus of words drawn from the distribution \(\beta_{k}\sim\text{Dirichlet}(\eta)\), and each document is a probability distribution over topics drawn from \(\theta_{m}\sim\text{Dirichlet}(\alpha)\). Here, we define \(M\) as the total number of documents, \(N\) as the total number of words in a document \(m\), and \(k\) as the number of topics. As shown in Figure 1, LDA spans over three layers. The parameters \(\alpha\) and \(\eta\) are corpus-level parameters, which means that they are sampled only once per corpus. \(\theta\) denotes document-level variables, sampled once per document. Finally, variables \(z_{n}\) and \(w_{n}\) are at word level: they are sampled once for each word in each document. \(w_{n}\) is also the only observable variable, picked from the bag of words given as input. For topic modeling, we implemented a Latent Dirichlet Allocation (LDA) algorithm with the Python library Gensim [11], which already has a handy set of functions to create, evaluate and visualize LDA models. We used two bags of words (BoWs): one with all documents in our dataset from 1990 to 2019, and one with only documents published in 2020. The 2020 corpus was used as a test set. We created 16 models with a different number of topics (\(k\)), from 5 to 80, while keeping all other parameters at their default values. For this analysis, we chose to tune the remaining parameters for 15, 25, 35 and 45 topics. All models were evaluated with the \(C_{v}\) coherence score [12], which is the coherence score with the strongest correlation to human rankings. We then ran 480 models to tune hyperparameters including \(\alpha\) and \(\eta\), for each of these values of \(k\). For each value of \(k\), the model that gives the highest \(C_{v}\) score is used through the rest of this article. To speed up training, we ran the parameter tuning on several cores. We should note that the stochastic optimization process used in online LDA includes the randomization of data partition, which makes it impossible to ensure exact reproducibility if coupled with parallelization. To discover trends and interests, we relate topics to the bibliometric attributes of our publications. We assign topics by considering the probabilities returned by the LDA model as weights. We create a matrix \(A\) where each row represents a document, and each column represents a topic. The elements of this matrix are the probabilities for a given topic to be assigned to a given document. This matrix is particularly useful for the analyses made of topic distributions over countries, journals and years. This way, when computing the number of publications per topic per country, for example, we can use those weights to perform a more precise analysis. For these analyses, we create a matrix where each row is a topic and each column is a country. If a document was created in the USA, for example, then each topic would add its respective weight for this document in the "USA" column. These analyses are also all normalized, which means each value in the matrix is divided by the sum of all values in the \(A\) matrix, which is equal to the total number of documents.

## 3 Results

Here, we describe our dataset, experiments and results. To create a dataset of research papers focused on DFT, we used the graphical interface "Web of Science" (WOS) and downloaded all articles citing some of the most commonly used computational packages to study crystalline systems.
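Before moving on, the model-selection and weighting steps described in the Methodology section can be sketched as follows, building on the preprocessing snippet above. The hyperparameter grid below is illustrative; the authors' exact search values are not given in the paper.

```python
# Sketch (ours): model selection by C_v coherence and the weighted matrix A.
import numpy as np
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def cv_score(lda, texts, dictionary):
    """C_v coherence of a trained LDA model (higher is better)."""
    cm = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                        coherence="c_v")
    return cm.get_coherence()

best = {}
for k in (15, 25, 35, 45):
    for alpha in ("symmetric", "asymmetric", 0.1):     # illustrative grid
        for eta in (None, 0.1, 0.5):
            lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=k,
                           alpha=alpha, eta=eta, random_state=0)
            score = cv_score(lda, texts, dictionary)
            if k not in best or score > best[k][0]:
                best[k] = (score, lda)

# Document-topic matrix A: A[i, t] = probability of topic t in document i.
lda = best[35][1]
A = np.zeros((len(bow), 35))
for i, doc in enumerate(bow):
    for topic, prob in lda.get_document_topics(doc, minimum_probability=0.0):
        A[i, topic] = prob

topic_share = A.sum(axis=0) / A.sum()   # weighted document distribution over topics
```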
Figure 1: Graphical representation of a LDA model [4]

We acquired all papers from the first citation up to 2019 for our training set, and all citations in the year 2020 for our test set [6].

### Distribution of documents and topics

Figure 2 shows the distribution of documents in our dataset over topics. This means that it displays how many documents contain each topic. Note that we use the probabilities of a topic appearing in a document as weights, as described in the Methodology section. We add up the weights for all documents in each topic, and divide the results by the total number of documents. The figure gives us an idea of how well distributed the topics identified from each model are, and which topic is more common within the documents. We can see that the model with 25 topics is extremely unbalanced, with only 4 topics accounting for up to 80% of the documents. Next, the LDA models for \(k=15\) and \(k=45\) seem to be somewhat well distributed, as no topic is contained in more than 18% and 11% of the documents, respectively. This can easily be deduced via the y-axis of the histograms, which are also a quick way to get a general idea of a distribution: if the y-axis range is larger, it means that at least one topic takes a bigger part of the documents, which means the distribution is less uniform. Similarly, we can study the range of the attributed normalized number of documents. The model with 25 topics has the biggest range at 0.255, followed by 15 topics at 0.175 and 45 topics at 0.105.

Figure 2: Document distribution over topics (for publications from 1990-2019)

The smallest range, and hence the model that has a more uniform distribution, is for \(k=35\) with 0.086. Indeed, to get 90% of the documents, it takes the first 8 topics (or 53% of the topics) with the highest proportion for \(k=15\), 24% of the topics for \(k=25\), 57% for \(k=35\) and 37% for \(k=45\). Moreover, the model with 35 topics is also the one that has the highest minimum percentage of documents. The diversity in the models for \(k=35\) and \(k=45\) also seems to imply that 15 topics is not enough for our corpus. In fact, we find evidence for topic specialization that further confirms the benefits of a higher number of topics. For example, in the \(k=15\) model, Topic 3 contains the following words: "formation", "activity", "hydrogen", "metal", "reaction", "mechanism", "process", "catalyst", "oxidation", and "li". We can map this topic to two different topics in the \(k=35\) model: it shares the words "activity", "metal", "reaction", "catalyst", "oxidation" with Topic 6 and the words "reaction", "formation", "mechanism", "process" with Topic 27. We can distinctly see that it splits into two categories, one more focused on specific processes and applications (Topic 6) and one more general (Topic 27). We can conclude from these analyses that the most balanced model is for \(k=35\). For these two reasons, we will mostly focus on this particular LDA model and its results in the rest of this section.

### Closer look at the Topics

Figure 3 shows the word clouds for the 35 topics in our LDA model. Each word cloud represents the 10 words with the highest probability in the topic. The size of each word is proportional to its probability within the topic. Bigger words are terms with a higher probability and a bigger impact in the topic. We also added topic names to give a quick idea of what each word cloud represents. Even though some topics overlap, most topics are rather independent and clearly defined.
For example, in Topic 13, we can find words relating to magnetic problems, while Topic 7 is more about hydrogen and hydrides. Hydrogen is not a magnetic atom, so those two subjects are not expected to be clustered together, and indeed they were not: they are quite different. Topic 1 on alloys, however, can be related to magnetism, as some alloy metals are magnetic, and yet it is once again kept separate from the magnetism topic (Topic 13), which shows how good the LDA model is at detecting different word relationships. Topic 14 is another good example of terms that could apply to many different fields, with words such as "data", "simulation", "model" or "approach", but that are associated with each other in a unique topic. We can also analyze the topics of our model by visualizing them with the t-SNE reduction method [7]. Figure 4 displays the data reduction to 2 dimensions, where each point is a document, positioned depending on its similarity with other documents, and each color is its dominant topic. Note that some topics are not represented, as they are not a dominant topic in any document; this is the case for Topics 0 and 29, for example. Since the topics are well distributed over the documents, we can see a lot of topics being represented, with no single topic overshadowing the entire map. We can also notice that some topics are more dispersed, such as the subject of new methods (e.g., Topic 31). That is expected, as researchers are developing new processes and data simulations in many different fields. However, other topics are clustered and clearly defined, like the first topic on alloying and crystal formation. This figure also helps visualize the similarity between topics. The closer two topics are, the more interconnected they are.

Figure 3: Top 10 words making up each topic in the LDA model for \(k\) = 35

For example, Topic 27 (energy barriers/reaction paths/diffusion/activation processes) is close to Topic 6 (catalysis/oxidation) and Topic 3 (polymers/molecular properties) but is on the opposite side of Topics 1 (alloying/crystal formation) and 13 (magnetic properties). Indeed, molecular properties and activation processes like catalysis are chemistry related, while magnetism and alloying are more often studied in physics.

Figure 4: T-SNE reduction for the LDA model for \(k=35\)

Figure 5: Document distribution over topics, for \(k\)=35 topics. Left: 1990-2019 corpus. Right: 2020 corpus

## 4 Temporal evolution of topics

Figure 5 compares the distribution of documents over topics, for the 1990-2019 corpus on the left (which was already shown in Figure 2), and for the 2020 corpus on the right. The two plots are similar, and both display a good distribution of the documents. While Topic 10 (temperature/pressure crystal phase transitions) and Topic 14 (atomistic modelling/classical potential/classical molecular dynamics) were the most common subjects up to 2019, a number of other topics caught up to them in 2020, including Topic 31 (development of new methods), which now has the highest number of assigned documents. This could also mean that while Topic 8 (materials growth/interfaces/electron transport) and Topic 10 were very important up to 2019, and still are, Topic 4 (monolayers/topological properties), Topic 24 (electrochemical processes/lithium batteries) and Topic 31 (development of new methods) are gaining more momentum recently. The similarity between the two plots also indicates that our model is generalizing correctly and can efficiently attribute topics to new documents.
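The attribution of topics to the unseen 2020 corpus can be sketched as follows, reusing the dictionary, bigram model, and trained LDA model from the earlier snippets; `abstracts_2020` is an assumed variable holding the raw 2020 abstracts.

```python
# Sketch (ours): attribute topics to the 2020 test set with the 1990-2019 model.
bow_2020 = [dictionary.doc2bow(bigram[clean_abstract(a)]) for a in abstracts_2020]
A_2020 = np.zeros((len(bow_2020), 35))
for i, doc in enumerate(bow_2020):
    for topic, prob in lda.get_document_topics(doc, minimum_probability=0.0):
        A_2020[i, topic] = prob
share_2020 = A_2020.sum(axis=0) / A_2020.sum()   # right panel of Figure 5
```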
We computed, for each topic independently, the normalized and weighted number of documents per year. Every year, an exponentially growing number of DFT-related articles is being published, and consequently, almost all topics also grew exponentially.

Figure 6: Normalized and weighted number of documents over the years, per topic for \(k=35\). Number inside each plot (top left) denotes the best fitting power factor.

To compare them, we calculated the power law factor of each curve since 2000. Even though the evolution of some topics is better fitted with an exponential curve, the power law function yields the highest mean \(r^{2}\) over all topics. The results are shown in Figure 6, where the number in the top left corner of each subplot is the best-fitting power factor for that topic. We chose to only consider publications after 2000, as this is the period in which DFT software really started to get globally used. A total of nine topics have a power factor lower than 1, and while they still grew in importance since 2000, a few of them are starting to stabilize and even lose some interest from the DFT community. For example, Topic 18 on DFT schemes and Topic 19 on nanotubes, which have the lowest factors recorded, have not gained any new momentum in recent years, and Topic 18 also saw a decrease in publications in 2017 for the first time ever. On the contrary, Topic 24, related to electrochemical processes and lithium batteries, as well as Topic 4 (monolayers/topological properties), Topic 6 (catalysis/oxidation) and Topic 31 (development of new methods), are growing at a significantly high rate and seem to be among the most important subjects in the field of DFT today. Other subjects are all developing at intermediate rates, representing the diversity of DFT and how important the methodology is for different topics. We report in Figure 7 the two topics with the smallest and highest growth out of the 35 in our model. We fitted a power law and an exponential curve to both of the curves and report the factors in the top left corner. Topic 24 (electrochemical processes/lithium batteries) is an example of a growth better fitted with an exponential function, getting an \(r^{2}\) of 0.998, even though a power law function also gets a high \(r^{2}\) score of 0.988. On top of the important difference in growth, we can also notice how "bumpy" the evolution of Topic 18 (DFT schemes) is. The many irregularities, compared to the smooth curve of Topic 24, might also be a sign of waning (or at least, inconsistent) interest.

Figure 7: Comparison of the two topics with the lowest and highest growth, fit to the best exponential curve and power-law curve. Left: Topic 18 with lowest power law factor. Right: Topic 24 with highest power law factor.

Figure 8 also shows the growth of topics over the years, in the form of a heat map. We notice that no important increase happened before 2000, which confirms our choice to study the power factors only after 2000. Moreover, most topics only start gaining momentum after at least 2005. It is interesting to note that the three topics with the highest power factors (Topics 4, 6 and 24) grew extremely fast in a few years, which is the reason for the high factor. Topic 24 especially only started to differentiate itself from the others in 2018. It is also interesting that, out of those three topics, two of them (Topics 6 and 24) are not part of the topics with the highest proportion of documents.
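Returning to the fits reported in Figures 6 and 7, the power-law and exponential factors can be computed with a standard least-squares fit. The following SciPy-based sketch is our own illustration, not the authors' code; `counts_per_year` is an assumed mapping from a topic index to its yearly weighted document counts since 2000.

```python
# Sketch (ours) of the power-law/exponential fits behind Figures 6 and 7.
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

def exponential(x, a, b):
    return a * np.exp(b * x)

def fit_and_score(f, years, counts):
    """Fit f to the yearly counts; return the fitted factor b and the r^2 score."""
    x = years - years.min() + 1.0              # shift so that x starts at 1
    popt, _ = curve_fit(f, x, counts, maxfev=10000)
    ss_res = np.sum((counts - f(x, *popt)) ** 2)
    ss_tot = np.sum((counts - counts.mean()) ** 2)
    return popt[1], 1.0 - ss_res / ss_tot

years = np.arange(2000, 2020)
b_pow, r2_pow = fit_and_score(power_law, years, counts_per_year[24])
b_exp, r2_exp = fit_and_score(exponential, years, counts_per_year[24])
```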
However, given their high power factors, we can only predict that Topics 6 and 24 will become more important in the next few years. Similarly, by looking at the trajectory of some subjects such as Topics 1 or 17, we can confidently forecast that the interest in those subfields will grow in the coming years. The 2020 column in this figure seems to agree with those predictions, as Topics 1, 4, 6, 17 and 24 all gained in proportion during the last year. On the other hand, we can also easily determine which topics are stabilizing and are losing momentum. For example, Topic 3 (polymers/molecular properties) and Topic 18 (DFT schemes/approximations in DFT) have had interest from the scientific community for at least a decade and yet are not growing as fast as others. It is interesting to compare the evolution of Topic 14 and Topic 31, as they are related to each other. Topic 14 contains words such as "data", "model" and "simulation", which represent the community's effort to incorporate code, machine learning and new computing methods into their work. This topic has been growing steadily over the years, as it becomes a bigger part of today's experiments. Secondly, Topic 31, relating to the improvement and optimization of code and processes, only started gaining momentum in 2014, about 10 years after Topic 14. However, its growth was faster, which places both of the topics at the top of the most published subjects in 2020 in DFT. We can also mention the evolution of Topic 4, related to graphene and monolayers. It started to grow a few years after 2010, which is the year that the Nobel Prize in physics was awarded to Andre Geim and Kostya Novoselov for their work regarding the two-dimensional material graphene.

Figure 8: Normalized and weighted number of documents over the years, per topic, for \(k\) = 35

## 5 Trends in citations

Looking at the number of citations per year, per topic in Figure 9, we notice the same trends as noted in the previous figure, only pushed a few years back. Indeed, papers published today are citing older research articles, so a rise in publications in 2019 means a rise in citations of older papers, typically at least 3 years prior. This gap is surely due to the research process of reading the literature, putting together a new idea and hypothesis, gathering data and running experiments, and finally drawing conclusions and then publishing.

Figure 9: Normalized and weighted number of citations over the years, per topic, for \(k\) = 35

Hence, looking at the trends of citations over the years might give us more insight and more predictive value than only the number of published papers. For topics that are only starting to attract more interest, like Topics 6 and 24, we can trace back the beginning of the trend to the late 2000s or early 2010s, whereas topics that have been in place for longer, such as Topics 8, 10 or 14, can be attributed to discoveries in the late 1990s.

### Spatial evolution of topics

We also studied the spatial evolution of topics in DFT over the years, especially by country. To do so, we identify countries using the affiliations of all authors. However, we only count a country once for each paper. Figure 10 divides the heat maps of topics per country over four different time periods. Before 2005, the USA was dominating the field of DFT and published the highest number of publications in a larger number of topics. Then, the proportions slowly shifted to China, which now generates more articles, and diversified its interests.
China started to get involved in DFT after 2005 through the subject of crystal phase transitions (Topic 10). Pressure crystal phase transitions is the main strength of DFT and one of the most active subfields, which explains why it would be the first topic to attract interest. Topic 10 has stayed an important part of work in China until now, as it was still one of the main published subjects by authors from China between 2015 and 2019, along with Topics 4 and 12. Nowadays, the USA still contributes to many different topics in DFT, and grew interest in Topic 14, but decreased its number of publications in Topic 5 (adsorption/diffusion). We can reasonably predict that China will soon dominate the field of DFT, as its number of publications will only keep growing. Figure 11, representing the normalized distribution of documents per topic over countries for the year 2020, confirms the influence of China in DFT. Moreover, even though most software packages counted in this work are developed in Europe, they are most used in China and the USA. Germany, the country with the third-highest number of publications and country of origin of Turbomole, the sixth most used code in our dataset, is barely even competing with China and the USA after 2005. Similarly, the impact of the United Kingdom and France kept decreasing over the years, even though they were very active in several different topics before 2005.

Figure 10: Normalized and weighted number of documents per country (only top 15 shown), per topic, in different time periods, for \(k\) = 35

Figure 11: Normalized and weighted number of documents per country (only top 15 shown), per topic, for articles published in 2020, for \(k\) = 35

## 6 Topics distribution by journals and publishers

Similarly to Figure 10, Figure 12 shows the normalized number of publications (computed with the probabilities as weights) per topic over journals, dividing the papers over four 5-year periods based on their publication year. In the same manner, these plots express the diversification of DFT publications over different journals, while still highlighting the importance of a very small number of prominent members. However, while Figure 10 displays signs of diversification amongst countries as early as 2005, most papers were still published in Physical Review B until much more recently, and no other journal seems to be taking over its number of publications. The 15 journals displayed have the highest number of publications and are
The Journal of Chemical Physics focuses on Topic 14 and 18, Journal of Physical Chemistry C contributes more to Topics 5, 6, 8 and a few others while Physical Chemistry Chemical Physics has a more diverse range of subjects. Hence, it seems that the Journal of Chemical Physics contributes more to methodologies and not material characteristics, judging by the words present in Topics 14 and 18 (modeling/approximations). Even though physics researchers mostly publish their articles in one particular journal, chemists have more options available depending on their interests. Figure 13 seem to confirm our deductions from the previous figure. However, even though journals publish papers in a lot of different topics, they seem to be cited for only a few, which might steer their editorial agenda in the future. In fact, the four topics most cited in Physical Review B, are also the four topics most published in the same journal. With such hypothesis we could predict a raise in the number of papers published in Topics 16 and 27, as they are more cited and published, as of 2019. We should note that even tough the values are normalized, a few highly cited articles might skew the results. Figure 14 is similar to Figure 12, but the x-axis shows publishers (and not journals). The other difference is that we created this plot with only articles published in open-access journals. It is interesting to see how the two figures have different interests. For example, open-access journals seem to have a strong interest in Topic 31 while subscription journals in Figure 12 only published a small amount of their articles relating to this subject. The other three most important topics in open-access journals are Topics 4, 8 and 10 which are also well published in paid journals but to different levels. While Topics 4 and 8 grew in proportion, Topics 10, 13 and 14 all saw their share of documents decrease in open-access publishers, even though they still remain relevant. Moreover, it seems that open-access journals are more focused on a few topics, instead of spreading to many different subjects, like paid journals. We can also notice the rather good correlation with the evolution of topics over time as the most published papers in open-access journals are also the ones that are the most published in 2020 (see Figure 8). ## 7 Discussion Bibliometrics and topic modeling are powerful tools that can be used to study and evaluate the evolution and impact of any given field. In this paper, we developed several analyses related to the evolution of topics with emphasis on the specific field of DFT. Here, evolution was considered in terms of time (over the years), and space (across countries), and also with respect to journals, publishers, institutions, and subjects based on information from over 120,000 papers in Density Functional Theory. We find that automatically classifying papers based on their abstract is a promising option for improved, faster, and more objective categorization. In fact, most of the 35 topics resulting from our LDA model Figure 13: Normalized and weighted number of citations per topic and per journal (only top 15 shown), with only open access journals considered, for \(k=35\) were consistent with our document corpus and could be attributed a name. Even though most packages are developed in Europe, they are mostly used in the USA and China, that are dominating the amount of publications created in the DFT field. 
While the two countries share interests in a lot of different subjects, American scientists work with many of the most important DFT packages, while researchers in China mostly use VASP, which is a licensed package. This difference might come from the intrinsic set of tools each software package has, and the subfields they serve the most. Indeed, China seems to direct most of its work to materials science and applications, with a focus on 2D materials (Topic 4), chemical reactions (Topic 6) or batteries (Topic 24), while researchers in the USA published most of their work in simulations (Topic 14) and new methodologies (Topic 31). Moreover, two of the most recurrent topics in our analysis, which receive the most interest from influential journals and countries, are related to code development, simulations and modeling. As our work focuses on computational packages, it is not surprising that some papers focus on the methods and processes of software. However, the large amount of interest in these subjects suggests a recent and strong effort in the physics and chemistry fields to develop and optimize code packages and libraries.

Figure 14: Normalized and weighted number of documents per topic and per publisher (only top 10 shown), with only open access journals considered, for \(k\) = 35.

The topic modeling done in this article was based on the abstracts of papers and already resulted in coherent and clear topics. Future work could include replicating our methodology on the full text of all 120,000 papers in our dataset. Further, the approach used is quite general, and can easily be adapted to apply to other fields of study, beyond DFT.

## Additional Information

The authors declare no competing interests.

## Data Availability

The dataset that supports the findings of this study is available in Figshare with the identifier [https://doi.org/10.6084/m9.figshare.12494654.v2](https://doi.org/10.6084/m9.figshare.12494654.v2).
2308.05220
Visual Aspects of Gaussian Periods and Analogues
Gaussian periods have been studied for centuries in the realms of number theory, field theory, cryptography, and elsewhere. However, it was only within the last decade or so that they began to be studied from a visual perspective. By plotting Gaussian periods in the complex plane, various interesting and insightful patterns emerge, leading to a number of conjectures and theorems about their properties. In this paper, we offer a description of Gaussian periods, along with examples of the structure that can occur when plotting them in the complex plane. In addition to this, we offer two ways in which this study can be generalized to other situations -- one relating to supercharacter theory, the other relating to class field theory -- along with discussions and visual examples of each. We end the paper by including some code for readers to generate images on their own.
Samantha Platt
2023-08-09T20:53:47Z
http://arxiv.org/abs/2308.05220v3
# Visual aspects of Gaussian periods and analogues

###### Abstract.

Gaussian periods have been studied for centuries in the realms of number theory, field theory, cryptography, and elsewhere. However, it was only within the last decade or so that they began to be studied from a visual perspective. By plotting Gaussian periods in the complex plane, many interesting and insightful patterns can be seen, leading to various conjectures and theorems about their properties. In this paper, we offer a description of Gaussian periods, along with examples of the structure that can occur when plotting them in the complex plane. In addition to this, we offer two ways in which this study can be generalized to other situations--one relating to supercharacter theory, the other relating to class field theory--along with discussions and visual examples of each.

\({}^{\dagger}\)Partially supported by the Paul and Harriet Civin Memorial Graduate Student Award and the E. M. Johnson Memorial Scholarship.

However, another (seemingly less explored) framework is that of number theory and class field theory. In particular, the well-known result of Kronecker and Weber states that every finite abelian extension of the rational numbers is contained in a cyclotomic field. From this perspective, one then wonders about generalizing Gaussian periods to other base fields, the behavior of these generalizations, and what sort of insight this study can provide. It is the goal of this paper to continue the visual exploration of Gaussian periods, as well as to begin and to motivate the visual exploration of certain generalizations from various perspectives.

### Acknowledgements

The author would like to thank the following people for their help at various points of this project. First, we thank Ellen Eischen for the original idea for the project and for the suggestions and advice throughout. We thank Benjamin Young for his serendipitous observations about the project, which led to interesting insights. We would also like to thank David Lowry-Duda, John Voight, and Joseph Silverman for their helpful conversations and suggestions about the implementation of coding ideas, which took place mainly at the Algorithmic Number Theory Symposium in 2022 and at an MSRI workshop in winter 2023. We would also like to thank April Wade for her extensive help with the code itself, especially with helping the code run efficiently.

## 2. Definitions and Motivations

We begin by clarifying the definition of Gaussian periods which we will be using. We note that throughout this paper, we define \(e(x):=e^{2\pi ix}\).

**Definition 1**.: Let \(n\) be an integer, and let \(\omega\) be an integer coprime to \(n\). Thus \(\omega\in(\mathbb{Z}/n\mathbb{Z})^{\times}\), and so we let \(d\) be the multiplicative order of \(\omega\) modulo \(n\); that is, \(d\) is the smallest positive integer such that \(\omega^{d}\equiv 1\bmod n\). For an integer \(k\) (and using notation similar to [9]), we define the following map: \[\eta_{n,\omega}:\mathbb{Z}/n\mathbb{Z}\to\mathbb{C},\qquad\quad\eta_{n,\omega }(k):=\sum_{j=0}^{d-1}e\left(\frac{\omega^{j}k}{n}\right).\] We call \(\eta_{n,\omega}(k)\) a _Gaussian period of modulus \(n\) and generator \(\omega\)_; note that we do not require \(k\) to be relatively prime to \(n\) in our definition.
Additionally, we call \(G(n,\omega):=\operatorname{img}(\eta_{n,\omega})\) the _Gaussian period plot of modulus \(n\) and generator \(\omega\)_ (or simply the _Gaussian period plot_ where \(n\) and \(\omega\) are clear from context). In Figure 1, we provide examples of Gaussian period plots for various choices of \(n\) and \(\omega\). Two Gaussian periods \(\eta_{n,\omega}(k)\) and \(\eta_{n,\omega}(k^{\prime})\) have the same color if \(k\equiv k^{\prime}\bmod c\), where \(c\) is a chosen color modulus. We won't focus too much on the color scheme in this paper, but the reader can refer to [9] and section 3 of [10] for more information. One might now wonder whether Gaussian period plots can be described or explained mathematically, as they clearly seem to have some sort of structure and symmetry to them. While there are some cases in which such a description is still unknown, there are many in which we do indeed have an explanation. We offer one such explanation, proved by Duke, Garcia, and Lutz [8], as motivation.

**Theorem 2**.: _Let \(n=p^{a}\), where \(p\) is an odd prime. Choose \(\omega\in(\mathbb{Z}/p^{a}\mathbb{Z})^{\times}\) so that it has multiplicative order \(d\) dividing \(p-1\). Let \(\Phi_{d}(x)\) denote the \(d\)-th cyclotomic polynomial. Then the Gaussian period plot \(G(n,\omega)\) is contained in the image of the Laurent polynomial function \(g_{d}:\mathbb{T}^{\varphi(d)}\to\mathbb{C}\) defined by_ \[g_{d}(z_{1},z_{2},\ldots,z_{\varphi(d)})=\sum_{j=0}^{d-1}\prod_{m=0}^{\varphi( d)-1}z_{m+1}^{c_{mj}},\] _where the constants \(c_{mj}\) are defined by the following relations:_ \[x^{j}\equiv\sum_{m=0}^{\varphi(d)-1}c_{mj}x^{m}\text{ mod }\Phi_{d}(x).\] _Moreover, for a fixed \(d\), as \(p^{a}\) tends to infinity (where \(p\), \(a\), and the choice of \(\omega\) are allowed to vary, but where \(d\) always divides \(p-1\)), every nonempty open disc contained in \(\text{img}(g_{d})\) eventually contains points in \(G(n,\omega)\). In other words, \(\text{img}(g_{d})\) is "filled out" by Gaussian periods as \(n\) goes to infinity, assuming the necessary congruence relations._

Figure 1. Examples of Gaussian period plots for various choices of \(n\) and \(\omega\)

It is interesting to note that in the case where \(d\) is itself also a prime, the image \(\operatorname{img}(g_{d})\) becomes a \(d\)_-sided hypocycloid_. We provide examples of this phenomenon (for \(d\) both prime and composite) in Figure 2. Additionally, it may be of interest to the reader to compute \(g_{d}\) for various values of \(d\); to this end, we wrote some code in Sage that generates \(g_{d}\), which is included in the GitHub link provided in section 6.

### Supercharacter Theory

As mentioned previously, there are two main perspectives which this paper will use when viewing Gaussian periods and their analogs: supercharacter theory and class field theory. We start by defining supercharacter theory.

**Definition 3**.: Let \(G\) be a finite group with identity \(1\), let \(\mathcal{K}\) be a partition of \(G\), and let \(\mathcal{X}\) be a partition of the set of irreducible characters of \(G\). Then \((\mathcal{X},\mathcal{K})\) is a _supercharacter theory_ for \(G\) if the following hold:

* \(\{1\}\in\mathcal{K}\)
* \(|\mathcal{X}|=|\mathcal{K}|\)
* For each \(X\in\mathcal{X}\), the function \(\sigma_{X}=\sum_{\chi\in X}\chi(1)\chi\) is constant on each \(K\in\mathcal{K}\).

The elements \(K\in\mathcal{K}\) are called _superclasses_, and the functions \(\sigma_{X}\) are called _supercharacters_.
Figure 2. Examples of Duke–Garcia–Lutz Theorem for various values of \(d\)

Using the above definition, let \(G=\mathbb{Z}/n\mathbb{Z}\). For every \(x\in G\), there exists an irreducible character \(\chi_{x}\) such that \(\chi_{x}(y)=e\left(\frac{xy}{n}\right)\) for every \(y\in G\), and these characters describe all of the irreducible characters of \(G\). For the cyclic subgroup \(\langle\omega\rangle\subseteq(\mathbb{Z}/n\mathbb{Z})^{\times}\), let \(\mathcal{K}\) be the partition of \(G\) corresponding to the orbits of the action \(a\cdot x=ax\) for \(a\in\langle\omega\rangle\). Additionally, let \(\mathcal{X}\) be the partition of the irreducible characters of \(G\) corresponding to the action \(a\cdot\chi_{x}=\chi_{a^{-1}x}\). One can then check that \(\mathcal{X}\) and \(\mathcal{K}\) are compatible as in the definition given above, and so \((\mathcal{X},\mathcal{K})\) define a supercharacter theory on \(\mathbb{Z}/n\mathbb{Z}\). Thus the Duke-Garcia-Lutz Theorem, for example, gives a geometric description of the values which show up in a "supercharacter table" for \((\mathcal{X},\mathcal{K})\).

_Remark 4_.: It should be noted that the above construction of the partitions \(\mathcal{X}\) and \(\mathcal{K}\) works for _any_ subgroup \(\Gamma\subseteq(\mathbb{Z}/n\mathbb{Z})^{\times}\) and not just the cyclic subgroups as we have described them. In this paper, however, we mainly restrict ourselves to the case in which \(\Gamma\) is cyclic, even though we believe using non-cyclic subgroups of \((\mathbb{Z}/n\mathbb{Z})^{\times}\) to achieve a supercharacter theory hasn't received much attention and is something worth exploring.

_Remark 5_.: Supercharacter theory was first described axiomatically by Diaconis and Isaacs in 2008 [7]. As mentioned in [10], supercharacter theory has been used in the study of a variety of objects, including the Hopf algebra of symmetric functions of non-commuting variables, random walks on upper triangular matrices, and Ramanujan sums.

### Class Field Theory

_Remark 6_.: It is worth noting here that we won't be describing class field theory in great detail in this paper, as we will be providing only a general overview for motivation. Because of this, it isn't expected that the reader know these details in order to follow any results; the reader may need to take certain statements as a black box, but we believe that very little will be lost in doing so. For further reading about class field theory, we recommend [3].

Given a base field \(K\), the main goal of class field theory is to describe all finite abelian extensions of \(K\) using its local properties. In most cases, class field theory allows us to compute the Galois groups of these field extensions fairly easily; however, explicitly finding the fields corresponding to such Galois groups is often a much harder task. In fact, this is the subject of Hilbert's 12th problem, which asks which algebraic numbers must be adjoined to \(K\) in order to generate all its abelian extensions. The answer to Hilbert's 12th problem is known in only very few cases. For example, when \(K=\mathbb{Q}\), the Kronecker-Weber Theorem states that every finite abelian extension of \(\mathbb{Q}\) is contained in some finite cyclotomic extension of \(\mathbb{Q}\). Thus the roots of unity are the algebraic numbers needed to generate abelian extensions of \(\mathbb{Q}\).
Another case in which the answer is known (and which is a source of exploration later in this paper) is the case in which \(K\) is a quadratic imaginary field, where the well-known theory of complex multiplication tells us that all finite abelian extensions of \(K\) are generated by adjoining certain values of the modular \(j\)-function along with coordinates of torsion points of elliptic curves. With this in mind, we offer the following definitions from class field theory.

**Definition 7**.: Let \(K\) be a number field, and let \(\mathcal{O}_{K}\) be its ring of integers. Given an ideal \(\mathfrak{m}\subseteq\mathcal{O}_{K}\) called the _modulus_, we obtain a _ray class group of modulus_ \(\mathfrak{m}\), which will be denoted \(Cl_{K}(\mathfrak{m})\) in this paper. For each ray class group of modulus \(\mathfrak{m}\), there exists a _ray class field of modulus_ \(\mathfrak{m}\), denoted \(K[\mathfrak{m}]\), whose Galois group \(\operatorname{Gal}(K[\mathfrak{m}]/K)\) is isomorphic to the ray class group \(Cl_{K}(\mathfrak{m})\) and whose set of primes that ramify over \(K\) are only those which divide \(\mathfrak{m}\). In the special case where \(\mathfrak{m}=(1)\), we call \(K[1]\) the _Hilbert class field_, whose Galois group \(\operatorname{Gal}(K[1]/K)\) is isomorphic to the ideal class group \(Cl_{K}(1)\) of \(K\).

In addition to the above definition, we make two important notes about ray class groups and ray class fields. First, if \(\mathfrak{n}\) is an ideal dividing \(\mathfrak{m}\), then \(Cl_{K}(\mathfrak{n})\) is a natural quotient of \(Cl_{K}(\mathfrak{m})\) and \(K[\mathfrak{n}]\subseteq K[\mathfrak{m}]\). One important implication of this is that the Hilbert class field is an intermediate field extension for every modulus \(\mathfrak{m}\); that is, \(K\subseteq K[1]\subseteq K[\mathfrak{m}]\) for every \(\mathfrak{m}\). Additionally, it is worth noting an alternate description of ray class fields. Rather than viewing them through the correspondence provided above, one can instead characterize the ray class field of modulus \(\mathfrak{m}\) as the maximal abelian extension of \(K\) whose ramification is bounded by \(\mathfrak{m}\) (in particular, it is ramified only at the primes dividing \(\mathfrak{m}\)). These two characterizations turn out to be equivalent, and we will be using them interchangeably in our discussion.

_Example 1_.: In the case where \(K=\mathbb{Q}\), the ring of integers is simply \(\mathbb{Z}\). The ideal class group of \(\mathbb{Q}\) is trivial, and so the Hilbert class field is \(\mathbb{Q}\) itself. Also, given an ideal \((m)\subseteq\mathbb{Z}\), the ray class group of modulus \(m\) is isomorphic to \((\mathbb{Z}/m\mathbb{Z})^{\times}\), and the corresponding ray class field is \(\mathbb{Q}(\mu_{m})\), where \(\mu_{m}\subseteq\mathbb{C}^{\times}\) denotes the subgroup of \(m\)-th roots of unity.

Returning to the discussion of Gaussian periods, note that \(e\left(\frac{k}{n}\right)\) is an \(n\)-th root of unity for every integer \(k\). Thus every summand in \(\eta_{n,\omega}(k)\) sits inside the \(n\)-th cyclotomic field \(\mathbb{Q}[n]=\mathbb{Q}(\mu_{n})\). Since \(\omega\) is assumed to be in \((\mathbb{Z}/n\mathbb{Z})^{\times}\cong\operatorname{Gal}(\mathbb{Q}[n]/ \mathbb{Q})\), then the Gaussian periods of modulus \(n\) and generator \(\omega\) are sums over the Galois action of the cyclic subgroup \(\langle\omega\rangle\subseteq(\mathbb{Z}/n\mathbb{Z})^{\times}\).
In particular, this implies that \(\eta_{n,\omega}(k)\in\mathbb{Q}[n]^{\langle\omega\rangle}\), the subfield of \(\mathbb{Q}[n]\) fixed by the action of \(\langle\omega\rangle\). In fact, Gaussian periods are not only _contained_ in subfields of ray class fields, but they are _generators_ of those subfields. These subfields are important objects of study in number theory, as they are still abelian extensions of \(\mathbb{Q}\) ramified only at the primes dividing the modulus of the ray class field; they just aren't the maximal such extension. With this perspective in mind, we see that the Duke-Garcia-Lutz Theorem shows how these generators are distributed in \(\mathbb{C}\) and gives a description of their asymptotic behavior.

## 3. Observations and Generalizations: Supercharacter Theory Perspective

In this section, we explore certain aspects of the Duke-Garcia-Lutz Theorem from the supercharacter theory perspective, as well as prove a generalization of this theorem. As the Duke-Garcia-Lutz Theorem will be discussed at length in this section, we recommend the reader review the theorem above.

### Gaussian Periods as Trace Maps

We first explore some observations made by the authors in sections 5 and 6 in [10]. These observations relate to Gaussian period plots of modulus \(n\) and generator \(\omega\) in the setting of the Duke-Garcia-Lutz Theorem. In particular, they discuss the Laurent polynomials \(g_{d}\) showing up in the theorem, where \(d\) is the multiplicative order of \(\omega\). They start by noting that when \(d\) is a prime, \(g_{d}\) has an easily described form. Since the cyclotomic polynomial \(\Phi_{d}(x)=x^{d-1}+x^{d-2}+\cdots+x+1\) when \(d\) is prime, we get the following: \[g_{d}(z_{1},\ldots,z_{d-1})=z_{1}+\cdots+z_{d-1}+\frac{1}{z_{1}z_{2}\cdots z_{d -1}}.\] Because each \(z_{i}\) is an element of \(\mathbb{T}\) (i.e. a complex number on the unit circle), we can view the sum on the right-hand side as being the trace of some \(d\times d\) special unitary matrix. Indeed, the trace of a matrix is the sum of its eigenvalues, and the eigenvalues of any special unitary matrix are on the unit circle and multiply to \(1\). Thus any special unitary matrix \(U\in\operatorname{SU}(d)\) has eigenvalues \(\{\lambda_{1},\ldots,\lambda_{d}\}\), where the \(\lambda_{i}\) are allowed to be arbitrary elements of the unit circle for \(1\leq i\leq d-1\) and where \[\lambda_{d}=\frac{1}{\prod_{i=1}^{d-1}\lambda_{i}}\] so that the determinant of \(U\) is equal to \(1\). Thus for any \(U\in\mathrm{SU}(d)\), the trace of \(U\) is given as some output of \(g_{d}\). There is, in fact, a little more to this story. In writing the code needed for plotting Gaussian periods, it seems that everyone (including this author) decided simply to plot all of the Gaussian periods at once. That is, given an \(n\) and \(\omega\), most algorithms that were written would compute all of the Gaussian periods, plot them all at once, and then return the result. However, when one plots the Gaussian periods \(\eta_{n,\omega}(k)\) in batches instead of all at once, an interesting behavior takes place. This behavior was first discovered (after a happy coding mishap) by Benjamin Young, and we formalize it below.
Given a modulus \(n\) and element \(\omega\) of multiplicative order \(d\) in \((\mathbb{Z}/n\mathbb{Z})^{\times}\), choose a constant \(C\) that is small relative to \(n\) (the author recommends \(C\approx\sqrt{n}\), though sometimes the behavior is more clear with smaller or larger \(C\)). We then create \(\lceil n/C\rceil\) "frames" by plotting the Gaussian periods \(\eta_{n,\omega}(k)\) in batches of size approximately \(C\). That is, the \(e\)-th frame will be the plotted image of \(\eta_{n,\omega}(k)\) for all \(0\leq k<e\cdot C\). In this way, every frame adds approximately \(C\) new points to the Gaussian period plot. Stringing these frames together gives us an animation showing how the Gaussian periods fill the Gaussian period plot \(G(n,\omega)\) as \(k\) increases.

_Example 2_.: We consider the \(d\)-sided hypocycloid case from the Duke-Garcia-Lutz Theorem; that is, \(n\) is a power of an odd prime and \(d\) is itself also a prime, and so \(G(n,\omega)\) is contained within the image of a hypocycloid with \(d\) sides. If we instead plot these Gaussian periods in frames using the constant \(C=\sqrt{n}\), it becomes apparent that a \((d-1)\)-sided hypocycloid rolls counterclockwise along the inner boundary of the \(d\)-sided hypocycloid. So, for example, if \(d=3\), then a \(2\)-sided hypocycloid (i.e. a straight line) rolls along the inside of a \(3\)-sided hypocycloid, and when \(d=5\), a \(4\)-sided hypocycloid rolls along the inside of a \(5\)-sided hypocycloid. We include some still frames of this phenomenon in Figure 3.

The behavior above can be described more generally by looking at the following homomorphism:

\[\varphi:\mathrm{SU}(m-1)\times\mathrm{U}(1)\hookrightarrow\mathrm{SU}(m),\hskip 28.452756pt(U,e^{i\theta})\mapsto\mathrm{diag}(e^{-i\theta}U,e^{i(m-1)\theta}),\]

where \(\mathrm{diag}(e^{-i\theta}U,e^{i(m-1)\theta})\) represents the block-diagonal matrix with the \((m-1)\times(m-1)\) block \(e^{-i\theta}U\) and \(1\times 1\) block \(e^{i(m-1)\theta}\). Thus we see that \(\mathrm{Tr}(\varphi(U,e^{i\theta}))=e^{-i\theta}\mathrm{Tr}(U)+e^{i(m-1)\theta}\). Since \(U\in\mathrm{SU}(m-1)\), \(\mathrm{Tr}(U)\) is contained in an \((m-1)\)-sided hypocycloid. Thus, if we vary \(U\in\mathrm{SU}(m-1)\) and \(\theta\in[0,2\pi)\), we get a filled-in \((m-1)\)-sided hypocycloid rolling along the inside of an \(m\)-sided hypocycloid.

Figure 3. A \(4\)-sided hypocycloid rolling along the inside of a \(5\)-sided hypocycloid when \(n=11^{5}\) and \(\omega=37107\)

The relationship between this homomorphism and Example 2 is simple enough to see. For a given prime power \(n\), a prime \(d\), \(\omega\) of order \(d\), and \(k\in(\mathbb{Z}/n\mathbb{Z})\), recall the definition of a Gaussian period:

\[\eta_{n,\omega}(k)=e^{\frac{2\pi ik}{n}}+e^{\frac{2\pi i\omega k}{n}}+\cdots+e^{\frac{2\pi i\omega^{d-1}k}{n}}.\]

If we let \(\theta=\frac{2\pi k}{(d-1)n}\), then we have the following:

\[\eta_{n,\omega}(k)=e^{i(d-1)\theta}+e^{-i\theta}\left(e^{i\theta((d-1)\omega+1)}+\cdots+e^{i\theta((d-1)\omega^{d-1}+1)}\right).\]

One can then verify that

\[\prod_{j=1}^{d-1}e^{i\theta((d-1)\omega^{j}+1)}=1,\]

and so \(e^{i\theta((d-1)\omega+1)}\),..., \(e^{i\theta((d-1)\omega^{d-1}+1)}\) are the eigenvalues of some matrix \(U\in\mathrm{SU}(d-1)\). Thus \(\eta_{n,\omega}(k)=e^{i(d-1)\theta}+e^{-i\theta}\mathrm{Tr}(U)\), which shows that \(\eta_{n,\omega}(k)\) is the trace of some matrix in the image of \(\varphi\).
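This last identity is easy to test numerically. Below is a minimal sketch (our own illustration, assuming only `numpy`; the parameters are those of Figure 3, and the specific values of \(k\) are arbitrary) comparing a directly computed Gaussian period with the decomposition \(e^{i(d-1)\theta}+e^{-i\theta}\mathrm{Tr}(U)\):

```python
import numpy as np

n, w, d = 11**5, 37107, 5  # the parameters of Figure 3; w has order d = 5 mod n

def gaussian_period(k):
    """Direct computation of eta_{n,w}(k) = sum_j e(w^j * k / n)."""
    return sum(np.exp(2j * np.pi * ((pow(w, j, n) * k) % n) / n) for j in range(d))

for k in [1, 17, 4242]:
    theta = 2 * np.pi * k / ((d - 1) * n)
    # The claimed eigenvalues e^{i*theta*((d-1)*w^j + 1)}, j = 1, ..., d-1, of U
    eigenvalues = [np.exp(1j * theta * ((d - 1) * pow(w, j, n) + 1))
                   for j in range(1, d)]
    trace_U = sum(eigenvalues)
    assert np.isclose(np.prod(eigenvalues), 1)  # det(U) = 1, so U lies in SU(d-1)
    assert np.isclose(gaussian_period(k),
                      np.exp(1j * (d - 1) * theta) + np.exp(-1j * theta) * trace_U)
```

Both assertions simply re-verify the algebra above, with the first one checking that the proposed eigenvalues do multiply to \(1\).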
Additionally, note that \(\theta=\frac{2\pi k}{(d-1)n}\) increases as \(k\) increases, which is why the \((d-1)\)-sided hypocycloid rolls smoothly counterclockwise along the inner boundary of the \(d\)-sided hypocycloid. We summarize the above observation as a proposition.

**Proposition 8**.: _Let \(n=p^{a}\) be a power of a prime, \(d\) a prime dividing \(p-1\), and \(\omega\in(\mathbb{Z}/n\mathbb{Z})^{\times}\) an element of order \(d\). Let \(k\in\mathbb{Z}/n\mathbb{Z}\). Then the value of the \(k\)-th Gaussian period of modulus \(n\) and generator \(\omega\) is contained in a \((d-1)\)-sided hypocycloid centered at \(e\left(\frac{k}{n}\right)\) and rotated by a factor of \(e\left(\frac{k}{(d-1)n}\right)\). That is,_

\[\eta_{n,\omega}(k)\in\left\{e\left(\frac{k}{n}\right)+h\cdot e\left(\frac{k}{(d-1)n}\right)\text{ : }h\in H_{d-1}\right\},\]

_where \(H_{d-1}\) represents the filled-in \((d-1)\)-sided hypocycloid centered at the origin in the complex plane._

One might now be wondering what occurs in the case where \(d\) is not itself prime. Unfortunately, the geometrical shapes that one gets are much harder to describe succinctly, though the general behavior of "a smaller shape rolling counterclockwise along the boundary" seems to hold based on experimentation. But again, describing the "smaller shape" remains as elusive as describing the overall shape of the Gaussian period plots in these cases. It is our hope that someone more knowledgeable about this type of geometry might be able to offer some insight.

### Generalization of Duke-Garcia-Lutz

We now return to viewing Gaussian periods as values of a supercharacter theory. Recall that the supercharacter theory used in Gaussian periods is constructed using a cyclic subgroup \(\langle\omega\rangle\subseteq(\mathbb{Z}/n\mathbb{Z})^{\times}\), where the supercharacters and superclasses are defined using a compatible action of \(\langle\omega\rangle\) on the group \(\mathbb{Z}/n\mathbb{Z}\) and its group of characters. We can generalize this setting in the following way. Let \(G=(\mathbb{Z}/n\mathbb{Z})^{m}\). The irreducible characters of \(G\) are simply products of irreducible characters on \(\mathbb{Z}/n\mathbb{Z}\); in particular, given \(\mathbf{x}=(x_{1},\ldots,x_{m})\in G\), we obtain an irreducible character \(\chi_{\mathbf{x}}\) such that for \(\mathbf{y}=(y_{1},\ldots,y_{m})\in G\), we have the following:

\[\chi_{\mathbf{x}}(\mathbf{y})=\chi_{x_{1}}(y_{1})\cdot\chi_{x_{2}}(y_{2})\cdots\chi_{x_{m}}(y_{m})=e\left(\frac{x_{1}y_{1}+\cdots+x_{m}y_{m}}{n}\right).\]

The analogue of \((\mathbb{Z}/n\mathbb{Z})^{\times}\) in this setting is the set of automorphisms \(\mathrm{Aut}(G)\), which is isomorphic to \(\mathrm{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\). So we choose a matrix \(A\in\mathrm{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) of order \(d\), and take the cyclic subgroup \(\Gamma:=\langle A\rangle\). We then define the (right) action of \(\Gamma\) on \(G\) to be \(\gamma\cdot\mathbf{x}=\gamma^{T}\mathbf{x}\), where \(\gamma\in\Gamma\), \(\gamma^{T}\) is the transpose of the matrix \(\gamma\), and \(\mathbf{x}\in G\) (viewed as a column vector). Let \(\mathcal{K}\) be the partition of \(G\) corresponding to the orbits of this action. Using the same notation, we define the (right) action of \(\Gamma\) on the irreducible characters \(\hat{G}\) to be given by \(\gamma\cdot\chi_{\mathbf{x}}=\chi_{\gamma^{-1}\mathbf{x}}\) for \(\chi_{\mathbf{x}}\in\hat{G}\).
Let \(\mathcal{X}\) be the partition of \(\hat{G}\) corresponding to the orbits of this action. We can then check that \((\mathcal{X},\mathcal{K})\) defines a supercharacter theory on \((\mathbb{Z}/n\mathbb{Z})^{m}\). To check that \((\mathcal{X},\mathcal{K})\) is a supercharacter theory on \((\mathbb{Z}/n\mathbb{Z})^{m}\), it is easy enough to check that \(\{0\}\in\mathcal{K}\) and that \(|\mathcal{X}|=|\mathcal{K}|\) (for the second one, note that the orbits in these partitions are the orbits of \(\Gamma\) acting through \(A^{T}\) and through \(A^{-1}\), and these two actions have the same number of orbits). We then need to show that for \(X\in\mathcal{X}\), the supercharacter function \(\sigma_{X}=\sum_{\chi\in X}\chi(0)\chi\) is constant on every \(K\in\mathcal{K}\).

First, note that an element \(X\in\mathcal{X}\) is the \(\Gamma\)-orbit of some character \(\chi_{\mathbf{x}}\in\hat{G}\). Thus \(X=\{\chi_{\mathbf{x}},\chi_{A^{-1}\mathbf{x}},\ldots,\chi_{A^{-(d-1)}\mathbf{x}}\}\). Now let \(K\in\mathcal{K}\), and note that if \(\mathbf{k}\), \(\mathbf{k}^{\prime}\in K\), then there exists an exponent \(j\in\{0,1,\ldots,d-1\}\) such that \(\mathbf{k}=(A^{T})^{j}\mathbf{k}^{\prime}\). We then have the following:

\[\sigma_{X}(\mathbf{k})=\chi_{\mathbf{x}}(\mathbf{k})+\chi_{A^{-1}\mathbf{x}}(\mathbf{k})+\cdots+\chi_{A^{-(d-1)}\mathbf{x}}(\mathbf{k})\]
\[=e\left(\frac{\mathbf{x}\cdot\mathbf{k}}{n}\right)+e\left(\frac{(A^{-1}\mathbf{x})\cdot\mathbf{k}}{n}\right)+\cdots+e\left(\frac{(A^{-(d-1)}\mathbf{x})\cdot\mathbf{k}}{n}\right).\]

Let \(i\in\{0,1,\ldots,d-1\}\), and note that by the properties of the dot product, we have that

\[(A^{i}\mathbf{x})\cdot((A^{T})^{j}\mathbf{k})=(A^{i}\mathbf{x})\cdot((A^{j})^{T}\mathbf{k})=(A^{j}A^{i}\mathbf{x})\cdot\mathbf{k}.\]

We note that it is this interaction with the dot product that required us to define the action of \(\Gamma\) on \(G\) using the transpose. We then have the following:

\[\sigma_{X}(\mathbf{k}^{\prime})=e\left(\frac{(A^{j}\mathbf{x})\cdot\mathbf{k}}{n}\right)+e\left(\frac{(A^{j-1}\mathbf{x})\cdot\mathbf{k}}{n}\right)+\cdots+e\left(\frac{(A^{j-(d-1)}\mathbf{x})\cdot\mathbf{k}}{n}\right)\]
\[=e\left(\frac{\mathbf{x}\cdot\mathbf{k}}{n}\right)+e\left(\frac{(A^{-1}\mathbf{x})\cdot\mathbf{k}}{n}\right)+\cdots+e\left(\frac{(A^{-(d-1)}\mathbf{x})\cdot\mathbf{k}}{n}\right),\]

where the second equality comes from reordering the summands. Thus we have that \(\sigma_{X}(\mathbf{k})=\sigma_{X}(\mathbf{k}^{\prime})\), and so \(\sigma_{X}\) is constant on \(K\). Thus \((\mathcal{X},\mathcal{K})\) defines a supercharacter theory on \((\mathbb{Z}/n\mathbb{Z})^{m}\).

Now, each irreducible character on \(G\) is given by a choice of vector \(\mathbf{x}\in(\mathbb{Z}/n\mathbb{Z})^{m}\). In the case \(m=1\) (i.e. the case of Gaussian periods), we've seen that our choice of matrix \(A\) simply describes scalar multiplication. In order to get the vectors \(\mathbf{x}\) used for the irreducible characters \(\chi_{\mathbf{x}}\), we used the set of vectors given by the orbit \(\langle A\rangle\cdot 1\), where \(1\) is the identity element in \((\mathbb{Z}/n\mathbb{Z})^{\times}=\mathrm{GL}_{1}(\mathbb{Z}/n\mathbb{Z})\). As noted in Proposition 2.2 of [8], if we instead choose any other nonzero \(r\in\mathbb{Z}/n\mathbb{Z}\) for the orbit \(\langle A\rangle\cdot r\), the corresponding plot of supercharacter values can be embedded within the plot corresponding to \(r=1\). This then justifies our use of the orbit \(\langle A\rangle\cdot 1\) to get our characters.
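Before moving to the choice of characters for \(m>1\), we note that the compatibility checks above are concrete enough to verify by computer in small cases. The following minimal sketch (our own toy code, assuming only `numpy`; the specific values of \(n\), \(m\), and \(A\) are hypothetical choices) builds both orbit partitions for \(n=7\), \(m=2\), and \(A\) the companion matrix of \(\Phi_{3}(x)\), and confirms that \(|\mathcal{X}|=|\mathcal{K}|\) and that each \(\sigma_{X}\) is constant on every superclass:

```python
import numpy as np
from itertools import product

n, m, d = 7, 2, 3
A = np.array([[0, -1], [1, -1]]) % n          # companion matrix of Phi_3(x); A^3 = I
A_inv = np.linalg.matrix_power(A, d - 1) % n  # A^{-1} = A^{d-1} since A^d = I

def orbit(M, x):
    """Orbit of the column vector x under repeated multiplication by M, mod n."""
    seen, y = [], tuple(x)
    while y not in seen:
        seen.append(y)
        y = tuple((M @ np.array(y)) % n)
    return frozenset(seen)

G = [np.array(v) for v in product(range(n), repeat=m)]
superclasses = {orbit(A.T, x) for x in G}     # the partition K: orbits under A^T
char_orbits = {orbit(A_inv, x) for x in G}    # the partition X: orbits under A^{-1}
assert len(superclasses) == len(char_orbits)  # |X| = |K|

def sigma(X, k):
    """Supercharacter value sigma_X(k) = sum over chi_x in X of e(x . k / n)."""
    return sum(np.exp(2j * np.pi * (np.dot(x, k) % n) / n) for x in X)

for X in char_orbits:                         # sigma_X is constant on every K
    for K in superclasses:
        values = [sigma(X, np.array(k)) for k in K]
        assert np.allclose(values, values[0])
```

This particular \(A\) also satisfies \(\Phi_{3}(A)=0\), so it falls under the hypotheses of the theorem proved below.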
Analogously for the case where \(m>1\), the vectors used for the irreducible characters will be the set of vectors given by the orbit \(\langle A\rangle\cdot\mathbf{1}\), where \(\mathbf{1}=(1,1,\ldots,1)^{T}\in(\mathbb{Z}/n\mathbb{Z})^{m}\) is viewed as a column vector. For \(\mathbf{x}\in(\mathbb{Z}/n\mathbb{Z})^{m}\), we define the following supercharacter: \[\theta_{n,m,A}:(\mathbb{Z}/n\mathbb{Z})^{m}\to\mathbb{C},\qquad\quad\theta_{n,m,A}(\mathbf{x}):=\sum_{j=0}^{d-1}e\left(\frac{A^{j}\bullet\mathbf{x}}{n} \right),\] where \(A^{j}\bullet\mathbf{x}\) represents \((A^{j}\mathbf{1})\cdot\mathbf{x}\); that is, the dot product between \(A^{j}\mathbf{1}\) and \(\mathbf{x}\). We provide examples of these supercharacter theory plots in Figure 4. Our goal is to prove the following generalization of the DGL Theorem. **Theorem 9**.: _Let \(n\in\mathbb{Z}_{\geq 2}\) and \(m\in\mathbb{Z}_{\geq 1}\). Suppose \(d\mid(\#\mathit{GL}_{m}(\mathbb{Z}/n\mathbb{Z}))\), and choose a matrix \(A\in\mathit{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) of order \(d\) such that \(\Phi_{d}(A)=0\) in \(\mathit{Mat}_{m}(\mathbb{Z}/n\mathbb{Z})\), where \(\Phi_{d}\) is the \(d\)-th cyclotomic polynomial. Let \(\theta_{n,m,A}:(\mathbb{Z}/n\mathbb{Z})^{m}\to\mathbb{C}\) be the cyclic supercharacter corresponding to \(A\). Then the image of \(\theta_{n,m,A}\) is contained in the image of the Laurent polynomial function \(g_{d}:\mathbb{T}^{\varphi(d)}\to\mathbb{C}\) defined by the following:_ \[g_{d}(z_{1},z_{2},\dots,z_{\varphi(d)})=\sum_{k=0}^{d-1}\prod_{j=0}^{\varphi(d)- 1}z_{j+1}^{c_{jk}},\] _where the \(c_{jk}\) are given by the relations_ \[x^{k}\equiv\sum_{j=0}^{\varphi(d)-1}c_{jk}x^{j}\text{ mod }\Phi_{d}(x).\] _Additionally, for a fixed order \(d\), and as \(n\) tends to infinity with the property that there exists a matrix \(A\in\text{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) such that \(\Phi_{d}(A)=0\text{ mod }n\) (where \(n\) and \(A\) are allowed to vary), every nonempty open disk in the image of \(g_{d}\) eventually contains points in the image of \(\theta_{n,m,A}\). In other words, the image of \(g_{d}\) is "filled out" as the modulus grows without bound, assuming some conditions on \(n\)._ Before we begin the proof of Theorem 9, we make some necessary detours. First, a brief remark about the areas in which this generalizes the DGL Theorem. _Remark 10_.: There are two directions in which this theorem generalizes the original DGL Theorem. First, it looks at the group \((\mathbb{Z}/n\mathbb{Z})^{m}\) for any \(m\geq 1\), rather than just the case \(m=1\). Second, it allows for composite moduli \(n\), with the restriction that there is some matrix \(A\in\text{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) of order \(d\) such that \(\Phi_{d}(A)=0\text{ mod }n\). To see why this new condition on \(n\) truly is a generalization of the original DGL Theorem, recall that the DGL Theorem assumes the modulus is \(p^{e}\) for some power of a prime \(p\). The group being studied is then \(\mathbb{Z}/p^{e}\mathbb{Z}\), and the group of automorphisms is \(\text{GL}_{1}(\mathbb{Z}/p^{e}\mathbb{Z})\cong(\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\). When \(p\) is odd, \((\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\) is cyclic. The condition that \(d\mid(p-1)\) and \(\omega\in(\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\) has order \(d\) in the DGL Theorem turns out to be equivalent to the condition that \(d\mid(\#\text{GL}_{1}(\mathbb{Z}/p^{e}\mathbb{Z}))\) and \(\Phi_{d}(\omega)=0\text{ mod }p^{e}\). 
This is because \((\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\) is cyclic of size \((p-1)p^{e-1}\), and an element \(\omega\) has order \(d\mid(p-1)\) if and only if \(\Phi_{d}(\omega)=0\text{ mod }p^{e}\).

Note that the discussion above hints at another possible generalization of the DGL Theorem. The fact that \(A\in\text{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) has order \(d\) implies that \(x^{d}-1\) is a multiple of the minimal polynomial of \(A\). Since \(\text{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) isn't necessarily cyclic when \(m>1\) or when \(n\) isn't a power of a prime, we are no longer guaranteed that \(\Phi_{d}(x)\) is a multiple of the minimal polynomial for \(A\). One might then wonder if we can say anything about the behavior of \(\theta_{n,m,A}\) when \(\Phi_{d}(A)\neq 0\text{ mod }n\). It turns out that there is something that can be said about the general shape of \(\theta_{n,m,A}\), but that the asymptotic behavior no longer necessarily holds. We discuss this more in Remark 13 after the proof.

Figure 4. Examples of supercharacter theory plots for a variety of \(m\), \(n\), and \(A\).

We now work toward the proof of Theorem 9, which has two main parts. First, we must show that the image of \(\theta_{n,m,A}\) is in fact contained in the image of \(g_{d}\), which will come directly from the fact that \(\Phi_{d}(A)=0\ \mathrm{mod}\ n\). And second, we must show that the image of \(g_{d}\) is filled out asymptotically as described. In order to describe the asymptotic behavior of these cyclic supercharacters, we need the following concept.

**Definition 11**.: Let \(s\) be a positive integer and \(\Lambda\) a finite subset of \(\mathbb{R}^{s}\). We write

\[\hat{\Lambda}=\left\{(\lambda_{1}-\lfloor\lambda_{1}\rfloor,\ldots,\lambda_{s}-\lfloor\lambda_{s}\rfloor)\in[0,1)^{s}\,:\,(\lambda_{1},\ldots,\lambda_{s})\in\Lambda\right\}.\]

The _discrepancy_ of the set \(\Lambda\) is

\[\sup_{B}\bigg{|}\frac{\#(B\cap\hat{\Lambda})}{\#\hat{\Lambda}}-\mathrm{vol}(B)\bigg{|},\]

where the supremum is taken over all boxes \(B=[a_{1},b_{1})\times\cdots\times[a_{s},b_{s})\subseteq[0,1)^{s}\) and \(\mathrm{vol}(B)\) denotes the volume of \(B\). We say that a sequence \((\Lambda_{t})_{t=1}^{\infty}\) of finite subsets of \(\mathbb{R}^{s}\) is _uniformly distributed mod 1_ if the discrepancy of \(\Lambda_{t}\) goes to zero as \(t\to\infty\).

We now offer Weyl's criterion (stated as Lemma 1 in [10]) for determining if a sequence is uniformly distributed mod 1, something that will end up being a critical lemma in our proof of Theorem 9.

**Lemma 12**.: _A sequence \((\Lambda_{t})_{t=1}^{\infty}\) of finite subsets of \(\mathbb{R}^{s}\) is uniformly distributed mod 1 if and only if for every nonzero \(\mathbf{v}\in\mathbb{Z}^{s}\), we have_

\[\lim_{t\to\infty}\frac{1}{\#\Lambda_{t}}\sum_{\mathbf{u}\in\Lambda_{t}}e(\mathbf{u}\cdot\mathbf{v})=0,\]

_where \(\mathbf{u}\cdot\mathbf{v}\) denotes the usual dot product._

We are now ready to prove Theorem 9.

Proof.: First, we show that the image of \(\theta_{n,m,A}\) is in fact contained in the image of \(g_{d}\). Let \(n\), \(d\), and \(A\) have the properties as stated in the theorem. Since we assume that \(\Phi_{d}(A)=0\ \mathrm{mod}\ n\), then for \(k\in\{0,1,\ldots,d-1\}\), we obtain the relations

\[A^{k}\equiv\sum_{j=0}^{\varphi(d)-1}c_{jk}A^{j}\ \mathrm{mod}\ n,\]

where the \(c_{jk}\) are the constants mentioned in the statement of the theorem.
Then for any \(\mathbf{v}\in(\mathbb{Z}/n\mathbb{Z})^{m}\), we now have the following:

\[\theta_{n,m,A}(\mathbf{v})=\sum_{k=0}^{d-1}e\left(\frac{A^{k}\bullet\mathbf{v}}{n}\right)=\sum_{k=0}^{d-1}e\left(\frac{\sum_{j=0}^{\varphi(d)-1}c_{jk}A^{j}\bullet\mathbf{v}}{n}\right)=\sum_{k=0}^{d-1}\prod_{j=0}^{\varphi(d)-1}e\left(\frac{A^{j}\bullet\mathbf{v}}{n}\right)^{c_{jk}}.\]

Since \(e(x)\) is contained in \(\mathbb{T}\) for any \(x\in\mathbb{R}\), and since \(\frac{A^{j}\bullet\mathbf{v}}{n}\in\mathbb{R}\) for all \(j\), we see that the image of \(\theta_{n,m,A}\) is indeed contained in the image of \(g_{d}\) as defined above.

We now need to show that the image of \(\theta_{n,m,A}\) fills out the image of \(g_{d}\) as \(n\to\infty\). To do this, we need to prove that the roots of unity showing up in the supercharacter sums get asymptotically close to any element \((z_{1},\ldots,z_{\varphi(d)})\) in the domain of \(g_{d}\). We show this by using Weyl's criterion to prove that the exponents of those roots of unity are uniformly distributed mod 1. We start by indexing our sets \(\Lambda_{n}\) in the following way. Let \(\mathcal{N}\subseteq\mathbb{N}\) be the set of all positive integers \(n\) such that \(\mathrm{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) has a matrix \(A\) of order \(d\) such that \(\Phi_{d}(A)=0\ \mathrm{mod}\ n\). Create a sequence \((n_{i})_{i=1}^{\infty}\) using all the elements of \(\mathcal{N}\), indexed so that \(n_{1}<n_{2}<n_{3}<\cdots\). Additionally, let \(A_{i}\) denote our choice of matrix of order \(d\) modulo \(n_{i}\). We define the sets

\[\Lambda_{n_{i}}=\left\{\frac{1}{n_{i}}\left(A_{i}^{0}\bullet\mathbf{x},A_{i}^{1}\bullet\mathbf{x},\ldots,A_{i}^{\varphi(d)-1}\bullet\mathbf{x}\right)\in[0,1)^{\varphi(d)}\,:\,\mathbf{x}\in(\mathbb{Z}/n_{i}\mathbb{Z})^{m}\right\},\]

where \(A_{i}^{j}\bullet\mathbf{x}\) is implicitly thought of as being modulo \(n_{i}\). We need to show that \((\Lambda_{n_{i}})_{i=1}^{\infty}\) is uniformly distributed mod \(1\). Thus, using Lemma 12, we need to show that for any nonzero vector \(\mathbf{v}\in\mathbb{Z}^{\varphi(d)}\), the following is true:

\[\lim_{i\to\infty}\frac{1}{\#\Lambda_{n_{i}}}\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=0.\]

Here and below, we count the points of \(\Lambda_{n_{i}}\) with multiplicity, one point for each \(\mathbf{x}\in(\mathbb{Z}/n_{i}\mathbb{Z})^{m}\), so that \(\#\Lambda_{n_{i}}=n_{i}^{m}\) and we need to show that

\[\lim_{i\to\infty}\frac{1}{n_{i}^{m}}\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=0.\]

In order to prove this, first let us consider the vectors \(A_{i}^{j}\cdot\mathbf{1}\) more closely. For each power \(j\in\{0,1,\ldots,\varphi(d)-1\}\), let us write \(A_{i}^{j}=:(a_{bc,i}^{j})_{1\leq b,c\leq m}\). Then we have the following:

\[A_{i}^{j}\cdot\begin{pmatrix}1&1&\cdots&1\end{pmatrix}^{T}=\begin{pmatrix}\sum_{c=1}^{m}a_{1c,i}^{j}&\sum_{c=1}^{m}a_{2c,i}^{j}&\cdots&\sum_{c=1}^{m}a_{mc,i}^{j}\end{pmatrix}^{T}.\]

That is, if \(A_{i}^{j}\cdot\mathbf{1}=(w_{1,i}^{j},w_{2,i}^{j},\ldots,w_{m,i}^{j})^{T}\), then \(w_{k,i}^{j}\) is the sum of the elements in the \(k\)-th row of the matrix \(A_{i}^{j}\). Let us write \(\mathbf{w}_{i}^{j}:=A_{i}^{j}\cdot\mathbf{1}\) in order to simplify notation. Returning to the computation at hand, let \(\mathbf{v}=(v_{0},\ldots,v_{\varphi(d)-1})\) be any nonzero vector in \(\mathbb{Z}^{\varphi(d)}\).
Then we have the following:

\[\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=\sum_{\mathbf{x}\in(\mathbb{Z}/n_{i}\mathbb{Z})^{m}}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{(\mathbf{w}_{i}^{j}\cdot\mathbf{x})v_{j}}{n_{i}}\right)\]
\[=\sum_{x_{1},\ldots,x_{m}=0}^{n_{i}-1}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{(w_{1,i}^{j}x_{1}+w_{2,i}^{j}x_{2}+\cdots+w_{m,i}^{j}x_{m})v_{j}}{n_{i}}\right)\]
\[=\left[\sum_{x_{1}=0}^{n_{i}-1}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{w_{1,i}^{j}x_{1}v_{j}}{n_{i}}\right)\right]\cdots\left[\sum_{x_{m}=0}^{n_{i}-1}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{w_{m,i}^{j}x_{m}v_{j}}{n_{i}}\right)\right].\]

For \(\ell\in\{1,\ldots,m\}\), we define \(\alpha_{\ell,i}:=\sum_{j=0}^{\varphi(d)-1}w_{\ell,i}^{j}\cdot v_{j}\). Additionally, define \(r_{\ell,i}:=\frac{n_{i}}{\gcd(n_{i},\alpha_{\ell,i})}\) so that for any \(x_{\ell}\in\{0,1,\ldots,n_{i}-1\}\), we can decompose \(x_{\ell}=k_{\ell}r_{\ell,i}+s_{\ell}\) for some \(k_{\ell}\in\{0,1,\ldots,\frac{n_{i}}{r_{\ell,i}}-1\}\) and some \(s_{\ell}\in\{0,1,\ldots,r_{\ell,i}-1\}\). Then we have the following:

\[\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=\left[\sum_{x_{1}=0}^{n_{i}-1}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{w_{1,i}^{j}x_{1}v_{j}}{n_{i}}\right)\right]\cdots\left[\sum_{x_{m}=0}^{n_{i}-1}e\left(\sum_{j=0}^{\varphi(d)-1}\frac{w_{m,i}^{j}x_{m}v_{j}}{n_{i}}\right)\right]\]
\[=\left[\sum_{x_{1}=0}^{n_{i}-1}e\left(\frac{\alpha_{1,i}x_{1}}{n_{i}}\right)\right]\cdots\left[\sum_{x_{m}=0}^{n_{i}-1}e\left(\frac{\alpha_{m,i}x_{m}}{n_{i}}\right)\right]\]
\[=\left[\sum_{k_{1}=0}^{\frac{n_{i}}{r_{1,i}}-1}\sum_{s_{1}=0}^{r_{1,i}-1}e\left(\frac{k_{1}\alpha_{1,i}}{\gcd(n_{i},\alpha_{1,i})}+\frac{\alpha_{1,i}s_{1}}{n_{i}}\right)\right]\cdots\left[\sum_{k_{m}=0}^{\frac{n_{i}}{r_{m,i}}-1}\sum_{s_{m}=0}^{r_{m,i}-1}e\left(\frac{k_{m}\alpha_{m,i}}{\gcd(n_{i},\alpha_{m,i})}+\frac{\alpha_{m,i}s_{m}}{n_{i}}\right)\right]\]
\[=\left[\sum_{k_{1}=0}^{\frac{n_{i}}{r_{1,i}}-1}\sum_{s_{1}=0}^{r_{1,i}-1}e\left(\frac{\alpha_{1,i}s_{1}}{n_{i}}\right)\right]\cdots\left[\sum_{k_{m}=0}^{\frac{n_{i}}{r_{m,i}}-1}\sum_{s_{m}=0}^{r_{m,i}-1}e\left(\frac{\alpha_{m,i}s_{m}}{n_{i}}\right)\right]\]
\[=\frac{n_{i}^{m}}{r_{1,i}\cdot r_{2,i}\cdots r_{m,i}}\left[\sum_{s_{1}=0}^{r_{1,i}-1}e\left(\frac{\alpha_{1,i}s_{1}}{n_{i}}\right)\right]\cdots\left[\sum_{s_{m}=0}^{r_{m,i}-1}e\left(\frac{\alpha_{m,i}s_{m}}{n_{i}}\right)\right]\]
\[=\begin{cases}n_{i}^{m}&\text{if $n_{i}\mid\alpha_{\ell,i}$ for all $\ell\in\{1,2,\ldots,m\}$,}\\ 0&\text{otherwise.}\end{cases}\]

Note that the final equality comes from a basic fact about sums of roots of unity: \(e(\alpha_{\ell,i}/n_{i})\) is a root of unity of order exactly \(r_{\ell,i}\), so the inner sum \(\sum_{s_{\ell}=0}^{r_{\ell,i}-1}e(\alpha_{\ell,i}s_{\ell}/n_{i})\) equals \(0\) whenever \(r_{\ell,i}>1\), and equals \(1\) when \(r_{\ell,i}=1\), i.e., when \(n_{i}\) divides \(\alpha_{\ell,i}\) (making every exponent an integer). From here, we will show that there are at most finitely many \(i\) such that \(n_{i}\) divides \(\alpha_{\ell,i}\) for all \(\ell\in\{1,2,\ldots,m\}\).
First, define a polynomial \(f_{\mathbf{v}}(x)=v_{0}+v_{1}x+\cdots+v_{\varphi(d)-1}x^{\varphi(d)-1}\), and note that when we input the matrix \(A_{i}\), we get the following:

\[f_{\mathbf{v}}(A_{i})=v_{0}A_{i}^{0}+v_{1}A_{i}^{1}+\cdots+v_{\varphi(d)-1}A_{i}^{\varphi(d)-1}\]
\[=v_{0}\begin{pmatrix}a_{11,i}^{0}&\cdots&a_{1m,i}^{0}\\ \vdots&\ddots&\vdots\\ a_{m1,i}^{0}&\cdots&a_{mm,i}^{0}\end{pmatrix}+v_{1}\begin{pmatrix}a_{11,i}^{1}&\cdots&a_{1m,i}^{1}\\ \vdots&\ddots&\vdots\\ a_{m1,i}^{1}&\cdots&a_{mm,i}^{1}\end{pmatrix}+\cdots+v_{\varphi(d)-1}\begin{pmatrix}a_{11,i}^{\varphi(d)-1}&\cdots&a_{1m,i}^{\varphi(d)-1}\\ \vdots&\ddots&\vdots\\ a_{m1,i}^{\varphi(d)-1}&\cdots&a_{mm,i}^{\varphi(d)-1}\end{pmatrix}\]
\[=\begin{pmatrix}\sum_{j=0}^{\varphi(d)-1}v_{j}a_{11,i}^{j}&\cdots&\sum_{j=0}^{\varphi(d)-1}v_{j}a_{1m,i}^{j}\\ \vdots&\ddots&\vdots\\ \sum_{j=0}^{\varphi(d)-1}v_{j}a_{m1,i}^{j}&\cdots&\sum_{j=0}^{\varphi(d)-1}v_{j}a_{mm,i}^{j}\end{pmatrix}.\]

Note that if we sum the entries in the topmost row, we get the following:

\[\sum_{j=0}^{\varphi(d)-1}v_{j}\left(\sum_{c=1}^{m}a_{1c,i}^{j}\right)=\sum_{j=0}^{\varphi(d)-1}v_{j}w_{1,i}^{j}=\alpha_{1,i}.\]

Analogously, for any \(\ell\in\{1,\ldots,m\}\), the sum of the entries in the \(\ell\)-th row is equal to \(\alpha_{\ell,i}\). We can write this observation succinctly as the following identity of \(m\times 1\) matrices:

\[f_{\mathbf{v}}(A_{i})\begin{pmatrix}1&1&\cdots&1\end{pmatrix}^{T}=\begin{pmatrix}\alpha_{1,i}&\alpha_{2,i}&\cdots&\alpha_{m,i}\end{pmatrix}^{T}.\]

Consider now the cyclotomic polynomial \(\Phi_{d}(x)\). Since \(f_{\mathbf{v}}(x)\) is a nonzero polynomial of degree at most \(\varphi(d)-1\), and since \(\Phi_{d}(x)\) is irreducible over \(\mathbb{Q}\) of degree \(\varphi(d)\), it must be true that \(\gcd(f_{\mathbf{v}}(x),\Phi_{d}(x))=1\) in \(\mathbb{Q}[x]\). Thus there exist polynomials \(P(x),Q(x)\in\mathbb{Q}[x]\) such that \(P(x)f_{\mathbf{v}}(x)+Q(x)\Phi_{d}(x)=1\). By clearing denominators, we then obtain polynomials \(R(x),S(x)\in\mathbb{Z}[x]\) such that \(R(x)f_{\mathbf{v}}(x)+S(x)\Phi_{d}(x)=t\) for some fixed nonzero integer \(t\). Note that the above equality holds for the polynomial \(f_{\mathbf{v}}(x)\) and depends only on the choice of \(d\) and the vector \(\mathbf{v}\in\mathbb{Z}^{\varphi(d)}\); that is, the equality does not depend on the choice of \(n_{i}\) or \(A_{i}\). So, in particular, when we input any chosen \(A_{i}\) with the properties mentioned in the statement of the theorem, we find that \(R(A_{i})f_{\mathbf{v}}(A_{i})+S(A_{i})\Phi_{d}(A_{i})=t\cdot I_{m}\), where \(I_{m}\) is the \(m\times m\) identity matrix. Since \(\Phi_{d}(A_{i})=0\) mod \(n_{i}\), we then find that \(R(A_{i})f_{\mathbf{v}}(A_{i})\equiv t\cdot I_{m}\) mod \(n_{i}\). Now, \(R(A_{i})\) is an \(m\times m\) matrix, so let \(R(A_{i})=(\rho_{bc,i})_{1\leq b,c\leq m}\).
Then, using the congruence above and multiplying both sides on the right by the matrix \(\begin{pmatrix}1&1&\cdots&1\end{pmatrix}^{T}\), we get the following result:

\[R(A_{i})f_{\mathbf{v}}(A_{i})\begin{pmatrix}1&\cdots&1\end{pmatrix}^{T}\equiv\begin{pmatrix}t&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&t\end{pmatrix}\begin{pmatrix}1&\cdots&1\end{pmatrix}^{T}\text{ mod }n_{i}\]
\[\begin{pmatrix}\rho_{11,i}&\cdots&\rho_{1m,i}\\ \vdots&\ddots&\vdots\\ \rho_{m1,i}&\cdots&\rho_{mm,i}\end{pmatrix}\begin{pmatrix}\alpha_{1,i}&\cdots&\alpha_{m,i}\end{pmatrix}^{T}\equiv\begin{pmatrix}t&\cdots&t\end{pmatrix}^{T}\text{ mod }n_{i}\]
\[\begin{pmatrix}\sum_{\ell=1}^{m}\rho_{1\ell,i}\alpha_{\ell,i}&\sum_{\ell=1}^{m}\rho_{2\ell,i}\alpha_{\ell,i}&\cdots&\sum_{\ell=1}^{m}\rho_{m\ell,i}\alpha_{\ell,i}\end{pmatrix}^{T}\equiv\begin{pmatrix}t&\cdots&t\end{pmatrix}^{T}\text{ mod }n_{i}\]

Thus if \(n_{i}\) divides \(\alpha_{\ell,i}\) for every \(\ell\), then every entry of the matrix on the left-hand side is divisible by \(n_{i}\). This then implies \(n_{i}\) must also divide \(t\). However, as stated previously, \(t\) is a nonzero integer which is fixed for all choices of \(n_{i}\) and \(A_{i}\). Since there are at most finitely many \(i\) such that \(n_{i}\) divides \(t\), there can be at most finitely many \(i\) such that \(n_{i}\) divides \(\alpha_{\ell,i}\) for all \(\ell\). Choose \(N\) to be the largest integer such that \(n_{N}\) divides \(\alpha_{\ell,N}\) for all \(\ell\), letting \(N=0\) if there is no such integer. Then for all \(i>N\), we have shown that \(\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=0\). Hence, we have shown that

\[\lim_{i\rightarrow\infty}\frac{1}{\#\Lambda_{n_{i}}}\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=0.\]

Thus the sequence \((\Lambda_{n_{i}})_{i=1}^{\infty}\) is uniformly distributed mod \(1\) by Weyl's criterion, finishing our proof of Theorem 9.

As discussed after the statement of Theorem 9, one might wonder if we can describe the behavior of \(\theta_{n,m,A}\) when \(\Phi_{d}(A)\neq 0\) mod \(n\). We now return to this discussion with the following remark.

_Remark 13_.: Since \(x^{d}-1\in\mathbb{Z}[x]\) decomposes as

\[x^{d}-1=\prod_{k|d}\Phi_{k}(x),\]

the fact that \(A^{d}-I=0\) mod \(n\) implies that \(\Phi_{k_{1}}(A)\Phi_{k_{2}}(A)\cdots\Phi_{k_{\ell}}(A)=0\) mod \(n\) for some \(k_{i}\) dividing \(d\). Let \(\mu(x)\in\mathbb{Z}[x]\) be the polynomial with the smallest degree of all polynomials dividing \(x^{d}-1\) that are \(0\) mod \(n\) when evaluated at \(A\). If \(\mu(x)\) is irreducible over \(\mathbb{Z}\), then \(\mu=\Phi_{d}\) since \(A\) has order \(d\), and this fits the situation of the theorem. Then, to rephrase the question from before, can we generalize Theorem 9 to situations where \(\mu\) isn't irreducible over \(\mathbb{Z}\)? It turns out that Theorem 9 doesn't generalize to situations when \(\mu\) is reducible. At least, the asymptotic filling-out behavior no longer holds, though we can still describe the general shape of the supercharacter values.
For example, given \(\mu(x)\mid(x^{d}-1)\), we can define an analogous Laurent polynomial \(g_{\mu}:\mathbb{T}^{\deg(\mu)}\to\mathbb{C}\) given by the following:

\[g_{\mu}(z_{1},\ldots,z_{\deg(\mu)})=\sum_{k=0}^{d-1}\prod_{j=0}^{\deg(\mu)-1}z_{j+1}^{b_{jk}},\]

where the \(b_{jk}\) are given by the relations

\[x^{k}\equiv\sum_{j=0}^{\deg(\mu)-1}b_{jk}x^{j}\bmod\mu(x).\]

Then, using the same reasoning as in the proof of Theorem 9, we see that the image of \(\theta_{n,m,A}\) is contained in the image of \(g_{\mu}\). However, it is no longer necessary that the image of \(g_{\mu}\) be filled out asymptotically as \(n\to\infty\). Since Weyl's criterion gives an equivalent condition for when the image of \(g_{\mu}\) is filled out asymptotically, we note, following the notation in the proof of Theorem 9, that for any \(\mathbf{v}\in\mathbb{Z}^{\deg(\mu)}\), we have

\[\lim_{i\to\infty}\frac{1}{\#\Lambda_{n_{i}}}\sum_{\mathbf{u}\in\Lambda_{n_{i}}}e(\mathbf{u}\cdot\mathbf{v})=0\]

if and only if \(f_{\mathbf{v}}(A_{i})\mathbf{1}=0\bmod n_{i}\) for at most finitely many \(i\). As a counterexample, consider the case where \(m=6\), \(d=15\), and \(A\) is the companion matrix to the polynomial \(\Phi_{3}(x)\Phi_{5}(x)\) (so that \(\mu=\Phi_{3}\Phi_{5}\) is reducible). We can then choose the vector \(\mathbf{v}=(1,1,1,1,1,0)\in\mathbb{Z}^{\deg(\mu)}\) so that \(f_{\mathbf{v}}(x)=\Phi_{5}(x)\). One can compute directly that

\[f_{\mathbf{v}}(A)\mathbf{1}=\begin{pmatrix}0&0&0&0&0&0\end{pmatrix}^{T},\]

where the multiplication is happening over \(\mathbb{Z}\). Since \(f_{\mathbf{v}}(A)\mathbf{1}=0\) over \(\mathbb{Z}\), we have \(f_{\mathbf{v}}(A)\mathbf{1}=0\bmod n\) for any integer \(n\). Thus \(g_{\mu}\) does _not_ get filled out asymptotically. Theorem 9 thus seems to be the most general version of the original DGL Theorem that is possible. To conclude this section, we include some examples of the phenomenon described in Theorem 9 in Figure 5.

Figure 5. Examples of Theorem 9

## 4. Observations and Generalizations: Class Field Theory Perspective

We now wish to step away from supercharacter theory and return to Gaussian periods, this time viewing them from the perspective of class field theory. As a reminder, we stated in section 2.2 that the ray class field for \(\mathbb{Q}\) of modulus \((n)\subseteq\mathbb{Z}\) is the cyclotomic field \(\mathbb{Q}(\mu_{n})\), where \(\mu_{n}\subseteq\mathbb{C}^{\times}\) is the subgroup of \(n\)-th roots of unity. The ray class group of modulus \((n)\) is isomorphic to \((\mathbb{Z}/n\mathbb{Z})^{\times}\), and we can choose an element \(\omega\in(\mathbb{Z}/n\mathbb{Z})^{\times}\) to get a cyclic subgroup \(\langle\omega\rangle\subseteq(\mathbb{Z}/n\mathbb{Z})^{\times}\). From this perspective, Gaussian periods correspond to elements that generate the subfield \(\mathbb{Q}(\mu_{n})^{\langle\omega\rangle}\subseteq\mathbb{Q}(\mu_{n})\) fixed by the action of this subgroup. Now, this is the whole story for the rational field \(\mathbb{Q}\), but we would like to generalize this to other base fields. Ostensibly, this story is simple enough to recreate: take a number field \(K\) and its ray class field \(K[\mathfrak{m}]\) and ray class group \(Cl_{K}(\mathfrak{m})\) of modulus \(\mathfrak{m}\), choose an element \(\omega\in Cl_{K}(\mathfrak{m})\), and then use the analogous Galois action of \(\langle\omega\rangle\) to generate elements of the fixed subfield \(K[\mathfrak{m}]^{\langle\omega\rangle}\).
However, in order to recreate this story _explicitly_, one needs to have explicit elements of the ray class fields \(K[\mathfrak{m}]\), which, as stated previously, is a problem to which we have very few answers. As of the writing of this paper, there are two main classes of fields other than \(\mathbb{Q}\) for which we have explicit descriptions of their ray class fields. The first is quadratic imaginary fields (fields of the form \(\mathbb{Q}(\sqrt{-d})\) for positive square-free \(d\)), for which the theory of complex multiplication gives us our answer. The second is the much more recent case of totally real fields, in which Dasgupta and Kakde [6] showed that the ray class fields can be generated using Brumer-Stark units. For the rest of this section, we will explore the case of quadratic imaginary fields, starting with a (very) brief overview of elliptic curves and the theory of complex multiplication. For further reading on elliptic curves and complex multiplication, we recommend [15] and [16] (Chapter II in particular).

### Elliptic Curves and Complex Multiplication

Our goal is to describe the construction of ray class fields of quadratic imaginary fields, along with some theorems that will be used later. We assume some basic knowledge about elliptic curves, and so we start with the definition of complex multiplication.

**Definition 14**.: Let \(E\) be an elliptic curve defined over \(\mathbb{C}\). Then \(E\) is said to have _complex multiplication_ (CM) if \(\operatorname{End}(E)\) is isomorphic to a quadratic imaginary order \(\mathcal{O}\). If \(\mathcal{O}\subseteq K\), then we say that \(E\) has CM by \(K\).

Elliptic curves with CM have many special properties. For this paper, however, we will only be focusing on CM and its relation to abelian extensions of quadratic imaginary fields. With this in mind, we start with the following theorem.

**Theorem 15**.: _Let \(K=\mathbb{Q}(\sqrt{-d})\) be a quadratic imaginary field. Let \(\mathcal{E}(K)\) denote the set of elliptic curves with CM by \(K\), up to isomorphism. Then \(\mathcal{E}(K)\) is finite, and there exists a bijection between \(\mathcal{E}(K)\) and the ideal class group \(Cl_{K}(1)\)._

Recall that the Hilbert class field \(K[1]\) has degree \([K[1]:K]=\#Cl_{K}(1)\) over \(K\). We will be using the fact that there is a bijection between \(Cl_{K}(1)\) and \(\mathcal{E}(K)\) to obtain generators of the Hilbert class field, but before we can do that, we must first describe a different characterization of elliptic curves over \(\mathbb{C}\).

**Theorem 16** (Uniformization Theorem).: _Let \(E\) be an elliptic curve over \(\mathbb{C}\). Then there exists a \(\mathbb{Z}\)-lattice \(\Lambda\subseteq\mathbb{C}\), unique up to homothety, such that \(E\) is isomorphic to \(\mathbb{C}/\Lambda\) via the complex analytic isomorphism_

\[\phi:\mathbb{C}/\Lambda\to E,\qquad\quad\phi(z)=(\wp(z;\Lambda),\wp^{\prime}(z;\Lambda)),\]

_where \(\wp\) is the Weierstrass \(\wp\)-function defined to be the following:_

\[\wp(z;\Lambda)=\frac{1}{z^{2}}+\sum_{\lambda\in\Lambda\setminus\{0\}}\left(\frac{1}{(z-\lambda)^{2}}-\frac{1}{\lambda^{2}}\right).\]

The Weierstrass \(\wp\)-function will be important to us later for computational purposes, but for now we wish to focus on the fact that choosing an elliptic curve (up to isomorphism) is equivalent to choosing a lattice \(\Lambda\subseteq\mathbb{C}\) (up to homothety). We are now ready to describe how to obtain the generators of the Hilbert class fields of quadratic imaginary fields.
**Theorem 17**.: _Let \(K\) be a quadratic imaginary field. Let \(E_{1},\ldots,E_{\ell}\) be representatives of all the isomorphism classes of elliptic curves with CM by \(K\). Let \(\Lambda_{1},\ldots,\Lambda_{\ell}\) be the lattices in \(\mathbb{C}\) such that \(E_{i}\cong\mathbb{C}/\Lambda_{i}\), and write \(\Lambda_{i}=\mathbb{Z}+\mathbb{Z}\tau_{i}\), where \(\tau_{i}\) is in the upper half-plane. There exists a weight 0 modular function \(j\) such that the Hilbert class field \(K[1]\) is obtained by adjoining the values \(j(\tau_{1}),\ldots,j(\tau_{\ell})\). In fact, the \(j(\tau_{i})\) are all algebraic conjugates, and so \(K[1]=K(j(\tau_{i}))\) for any choice of \(i\in\{1,\ldots,\ell\}\)._

_Remark 18_.: It is common to abuse notation for the function \(j\) by allowing elliptic curves as inputs rather than elements of the upper half-plane. That is, if \(E\) is an elliptic curve which is isomorphic to \(\mathbb{C}/\Lambda\) for some lattice \(\Lambda\subseteq\mathbb{C}\), then we understand \(j(E)\) to mean \(j(\tau)\), where \(\Lambda=\mathbb{Z}+\mathbb{Z}\tau\) with \(\tau\) in the upper half-plane. In this way, we can write the result of the theorem above as saying that \(K[1]=K(j(E))\), where \(E\) is any elliptic curve with CM by \(K\).

Since we can now construct the Hilbert class fields of quadratic imaginary fields, we would now like to construct abelian extensions that allow ramification at certain primes. To that end, we require the following definitions.

**Definition 19**.: Let \(E\) be an elliptic curve over \(\mathbb{C}\) with CM by a quadratic imaginary field \(K\). Let \(\mathfrak{m}\subseteq\mathcal{O}_{K}\) be an ideal. We define the \(\mathfrak{m}\)_-torsion subgroup of \(E\)_ to be

\[E[\mathfrak{m}]=\{t\in E:\,[\alpha]t=0\text{ for every }\alpha\in\mathfrak{m}\},\]

where \(0\) represents the identity element of \(E\) and \([\alpha]\) represents the endomorphism action of \(\alpha\). Note that in the special case where \(\mathfrak{m}=(m)\) is generated by an integer, \(E[m]\cong\mathbb{Z}/m\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}\) due to the fact that \(E\cong\mathbb{C}/\Lambda\).

**Definition 20**.: Let \(E\) be an elliptic curve over \(\mathbb{C}\) with CM by a quadratic imaginary field \(K\). Suppose \(E\) is defined by the equation \(y^{2}=x^{3}+Ax+B\). Then we define a _Weber function_ to be a finite map \(h:E\to E/\mathrm{Aut}(E)\). For our purposes, we will use the following Weber function [16] (II.5.5.1):

\[h(x,y)=\begin{cases}x&AB\neq 0,\\ x^{2}&B=0,\\ x^{3}&A=0.\end{cases}\]

_Remark 21_.: The two special cases of \(A=0\) and \(B=0\) correspond to the cases where \(E\) has CM by \(\mathbb{Q}(\sqrt{-3})\) and \(\mathbb{Q}(\sqrt{-1})\). These are the only quadratic imaginary fields which contain roots of unity other than \(-1\) and \(1\) (i.e. where \(\mathrm{Aut}(E)\) is strictly larger than \(\{\pm 1\}\)), and it is for this reason that their Weber functions are different. However, in most cases, one can see that the Weber function is simply \(x\)-coordinate projection.

While Weber functions might currently seem out of the blue, they are important for the class field theory to work out correctly. We now use all of this to construct the ray class fields of quadratic imaginary fields.

**Theorem 22**.: _Let \(K\) be a quadratic imaginary field with ring of integers \(\mathcal{O}_{K}\). Let \(\mathfrak{m}\subseteq\mathcal{O}_{K}\) be an ideal.
Let \(E\) be an elliptic curve over \(\mathbb{C}\) with CM by \(K\), and let \(h:E\to E/\text{Aut}(E)\) be a Weber function. Then the field_

\[K(j(E),h(E[\mathfrak{m}]))\]

_is the ray class field of modulus \(\mathfrak{m}\)._

_Remark 23_.: The result of the above theorem can be stated in the following way. Start with a quadratic imaginary field \(K\), a modulus \(\mathfrak{m}\subseteq\mathcal{O}_{K}\), and an elliptic curve \(E\) with CM by \(K\). In order to obtain the ray class field \(K[\mathfrak{m}]\), one must first adjoin the value \(j(E)\) to \(K\), followed by adjoining the \(x\)-coordinates (in most cases) of the \(\mathfrak{m}\)-torsion points of \(E\). Thus, for quadratic imaginary fields, the analogue of the roots of unity consists of certain \(j\)-values and torsion points on elliptic curves.

To continue the last point of Remark 23, we would like to note that the roots of unity are quite simple to describe geometrically, as they are simply points lying on the unit circle in \(\mathbb{C}\). However, the \(j\)-values and \(\mathfrak{m}\)-torsion points of elliptic curves are more complicated to describe geometrically. In fact, to the author's knowledge, it seems that there hasn't been much study of even just elliptic curve torsion points from a graphical perspective. One reason this might be the case is that elliptic curves inherently live in a four-dimensional \(\mathbb{R}\)-vector space, which is notoriously difficult to represent graphically (to say the least). However, since our study of elliptic curve torsion points will focus almost exclusively on just the \(x\)-coordinates, we thought it might be of interest to generate images of the torsion points' \(x\)- and \(y\)-coordinates separately in the complex plane. To this end, we offer examples of these images in Figure 6. The first few images have no special properties other than their coloring, which is simply the coloring that Python automatically applies to scatter plots. The next few images take inspiration from [11]: we size the torsion points inversely by their additive order in the torsion group; that is, the smaller the order of the torsion point, the larger the dot. We will discuss our computational methods for these images and others later in this section.

### The Galois Group and Its Action

Our goal now is to compute the ray class group of modulus \(\mathfrak{m}\) for a quadratic imaginary field \(K\) and then to use that group to get a Galois action on the ray class field. Now, recall that the ideal class group \(Cl_{K}(1)\) is a quotient of \(Cl_{K}(\mathfrak{m})\) for any modulus \(\mathfrak{m}\). Thus, in order to compute the full ray class group of modulus \(\mathfrak{m}\), we also need to be able to compute the ideal class group for any quadratic imaginary field \(K\). This part of the computation can be rather complex, and its difficulty depends on the base field. That is, computing the full group \(Cl_{K}(\mathfrak{m})\cong\text{Gal}(K[\mathfrak{m}]/K)\) can be quite difficult. However, computing the subgroup \(\text{Gal}(K[\mathfrak{m}]/K[1])\) proves to be much easier using class field theory. The following proposition records this result.

**Proposition 24**.: _Let \(K\) be a quadratic imaginary field and \(\mathfrak{m}\subseteq\mathcal{O}_{K}\) be an ideal. Let \(K[\mathfrak{m}]\) be the ray class field for \(K\) of modulus \(\mathfrak{m}\), and let \(K[1]\) be its Hilbert class field. Let \(\rho\) be a primitive third root of unity.
Then the field extension \(K[1]\subseteq K[\mathfrak{m}]\) has Galois group_

\[\text{Gal}(K[\mathfrak{m}]/K[1])\cong\left(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K}\right)^{\times}/\mathcal{O}_{K}^{\times},\]

_where \(\mathcal{O}_{K}^{\times}\) is viewed here as the cyclic multiplicative group \(\{\pm 1,\pm i\}\), \(\{\pm 1,\pm\rho,\pm\bar{\rho}\}\), or \(\{\pm 1\}\), depending on whether \(K\) is \(\mathbb{Q}(\sqrt{-1})\), \(\mathbb{Q}(\sqrt{-3})\), or \(\mathbb{Q}(\sqrt{-d})\) for other positive square-free \(d\), respectively._

Proof.: First, using class field theory, we know that \(\text{Gal}(K[\mathfrak{m}]/K)\cong\mathbb{A}_{K}^{\times}/U_{\mathfrak{m}}K^{\times}\), where \(\mathbb{A}_{K}^{\times}\) is the group of ideles and

\[U_{\mathfrak{m}}=\mathbb{C}^{\times}\cdot\prod_{v<\infty,v\nmid\mathfrak{m}}\mathcal{O}_{K,v}^{\times}\cdot\prod_{v<\infty,v\mid\mathfrak{m}}(1+\mathfrak{p}_{v}^{\text{ord}_{v}(\mathfrak{m})}\mathcal{O}_{K,v}),\]

where \(\mathcal{O}_{K,v}\) is the completion of \(\mathcal{O}_{K}\) with respect to the place \(v\) and where \(\mathfrak{p}_{v}\) is the prime ideal corresponding to the place \(v\). Similarly, we have that \(\text{Gal}(K[1]/K)\cong\mathbb{A}_{K}^{\times}/U_{1}K^{\times}\). By Galois theory, we know that \(\operatorname{Gal}(K[\mathfrak{m}]/K[1])\) is a subgroup of \(\operatorname{Gal}(K[\mathfrak{m}]/K)\) and that \(\operatorname{Gal}(K[1]/K)\cong\operatorname{Gal}(K[\mathfrak{m}]/K)/\operatorname{Gal}(K[\mathfrak{m}]/K[1])\). Thus there exists a homomorphism \(\varphi:\operatorname{Gal}(K[\mathfrak{m}]/K)\twoheadrightarrow\operatorname{Gal}(K[1]/K)\) such that \(\ker(\varphi)=\operatorname{Gal}(K[\mathfrak{m}]/K[1])\). In order to compute \(\operatorname{Gal}(K[\mathfrak{m}]/K[1])\), we start by constructing such a homomorphism and computing its kernel. Define the following homomorphism:

\[\psi:\mathbb{A}_{K}^{\times}/U_{\mathfrak{m}}K^{\times}\to\mathbb{A}_{K}^{\times}/U_{1}K^{\times},\qquad(a_{v})_{v}\,U_{\mathfrak{m}}K^{\times}\mapsto(a_{v})_{v}\,U_{1}K^{\times}.\]

Figure 6. Coordinates of \(400\)-torsion points of elliptic curves with CM by \(K\)

Note that \(U_{\mathfrak{m}}\subseteq U_{1}\), and so this map is surjective. Additionally, the elements of \(\ker(\psi)\) are cosets of \(U_{\mathfrak{m}}K^{\times}\) that are contained in \(U_{1}K^{\times}\). Thus \(\ker(\psi)=U_{1}K^{\times}/U_{\mathfrak{m}}K^{\times}\), and so \(\operatorname{Gal}(K[\mathfrak{m}]/K[1])\cong U_{1}K^{\times}/U_{\mathfrak{m}}K^{\times}\). We now let \(G=U_{1}\) and \(H=U_{\mathfrak{m}}K^{\times}\). Then \(U_{1}K^{\times}/U_{\mathfrak{m}}K^{\times}=GH/H\), and by an isomorphism theorem we have that \(GH/H\cong G/(G\cap H)\). Note that \(G\cap H=U_{1}\cap(U_{\mathfrak{m}}K^{\times})\). An element of \(U_{\mathfrak{m}}K^{\times}\) is of the form \((a_{v}\alpha)_{v}\) for \((a_{v})_{v}\in U_{\mathfrak{m}}\) and \(\alpha\in K^{\times}\). Since this element must also be in \(U_{1}=\mathbb{C}^{\times}\cdot\prod_{v<\infty}\mathcal{O}_{K,v}^{\times}\), \(\alpha\) must be in \(\mathcal{O}_{K,v}^{\times}\) for every place \(v\). This happens if and only if \(\alpha\in\mathcal{O}_{K}^{\times}\). Then, since \(U_{\mathfrak{m}}\subseteq U_{1}\), we have that \(U_{1}\cap(U_{\mathfrak{m}}K^{\times})=U_{\mathfrak{m}}\mathcal{O}_{K}^{\times}\).
Thus we have the following:

\[U_{1}K^{\times}/U_{\mathfrak{m}}K^{\times}\cong U_{1}/(U_{1}\cap(U_{\mathfrak{m}}K^{\times}))=U_{1}/U_{\mathfrak{m}}\mathcal{O}_{K}^{\times}.\]

Using the definitions of \(U_{1}\) and \(U_{\mathfrak{m}}\), we then have the following:

\[U_{1}/U_{\mathfrak{m}}\mathcal{O}_{K}^{\times}\cong\left(\prod_{v<\infty,v|\mathfrak{m}}\mathcal{O}_{K,v}^{\times}\right)/\left(\prod_{v<\infty,v|\mathfrak{m}}1+\mathfrak{p}_{v}^{\operatorname{ord}_{v}(\mathfrak{m})}\mathcal{O}_{K,v}\right)\mathcal{O}_{K}^{\times}.\]

By the Chinese Remainder Theorem, we need only compute \(\mathcal{O}_{K,v}^{\times}/(1+\mathfrak{p}_{v}^{\operatorname{ord}_{v}(\mathfrak{m})}\mathcal{O}_{K,v})\) for each place \(v\mid\mathfrak{m}\). For a fixed place \(v\) and a fixed \(k\in\mathbb{N}\), we construct a map \(f:\mathcal{O}_{K,v}^{\times}\to(\mathcal{O}_{K,v}/\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})^{\times}\) sending a unit \(u\mapsto u\bmod\mathfrak{p}_{v}^{k}\). This is a homomorphism, and we claim it is surjective. Given an element \(u\in(\mathcal{O}_{K,v}/\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})^{\times}\), we can write \(u=a_{0}+a_{1}\pi+\cdots+a_{k-1}\pi^{k-1}\), where \(\pi\) is a uniformizer of \(\mathfrak{p}_{v}\), \(a_{i}\in\mathcal{O}_{K,v}/\mathfrak{p}_{v}\), and \(a_{0}\neq 0\); if \(a_{0}=0\), then we would have that \(u\equiv 0\bmod\mathfrak{p}_{v}\), which is a contradiction since \(u\) is a unit. Then, since \(a_{0}\neq 0\), there exists an element \(\tilde{u}\in\mathcal{O}_{K,v}^{\times}\) such that \(\tilde{u}=a_{0}+a_{1}\pi+\cdots+a_{k-1}\pi^{k-1}\) which maps to \(u\). Now, note that the kernel of \(f\) consists of the elements \(u\in\mathcal{O}_{K,v}^{\times}\) such that \(u\equiv 1\bmod\mathfrak{p}_{v}^{k}\). That is, \(u\in(1+\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})\). Thus we have that \(\ker(f)=(1+\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})\), and so we have that

\[\mathcal{O}_{K,v}^{\times}/(1+\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})\cong(\mathcal{O}_{K,v}/\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})^{\times}.\]

And by a well-known fact about completions, we also have that

\[(\mathcal{O}_{K,v}/\mathfrak{p}_{v}^{k}\mathcal{O}_{K,v})^{\times}\cong(\mathcal{O}_{K}/\mathfrak{p}_{v}^{k}\mathcal{O}_{K})^{\times}.\]

Thus we have the following:

\[U_{1}/U_{\mathfrak{m}}\mathcal{O}_{K}^{\times}\cong\left(\prod_{v<\infty,v|\mathfrak{m}}(\mathcal{O}_{K}/\mathfrak{p}_{v}^{\operatorname{ord}_{v}(\mathfrak{m})}\mathcal{O}_{K})^{\times}\right)/\mathcal{O}_{K}^{\times}\cong(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times},\]

where the last isomorphism comes from reversing the direction of the Chinese Remainder Theorem.

Now that we know the structure of \(\operatorname{Gal}(K[\mathfrak{m}]/K[1])\), we need to determine its action on \(K[\mathfrak{m}]\) in a computable way (for our purposes at least). Since \(K[\mathfrak{m}]=K(j(E),h(E[\mathfrak{m}]))\) and \(K[1]=K(j(E))\), the action of \(\operatorname{Gal}(K[\mathfrak{m}]/K[1])\) on \(K[\mathfrak{m}]\) permutes only the elliptic curve torsion points while leaving the \(j\)-values fixed. In other words, we want to determine the action of the group \((\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times}\) on the set \(h(E[\mathfrak{m}])\). We first need an elliptic curve with CM by \(K\).
Recall that choosing an elliptic curve is equivalent to choosing a lattice in \(\mathbb{C}\) by the Uniformization Theorem, and so the simplest way to choose an elliptic curve with CM by \(K\) is to choose the lattice generated by \(\mathcal{O}_{K}\). This lattice will automatically be closed under multiplication (i.e. closed under homothety) by elements of \(\mathcal{O}_{K}\). Let \(\alpha\) be the quadratic imaginary integer such that \(\alpha\) is in the fundamental domain (in the modular forms sense) and such that \(\mathcal{O}_{K}=\mathbb{Z}[\alpha]\), and so our lattice of choice is \(\Lambda:=\mathbb{Z}+\mathbb{Z}\alpha\). To be more explicit, if \(K=\mathbb{Q}(\sqrt{-d})\), then

\[\alpha=\begin{cases}\sqrt{-d}&d\equiv 1,2\bmod 4,\\ \frac{-1+\sqrt{-d}}{2}&d\equiv 3\bmod 4.\end{cases}\]

Note that in the second case, we could have equivalently chosen \(\alpha=\frac{1+\sqrt{-d}}{2}\); we chose the element we did so that \(\alpha\) is a third root of unity in the case \(d=3\). We now have an elliptic curve \(E\cong\mathbb{C}/\Lambda\) that has CM by \(K\). Note that a point \((x,y)\) on our elliptic curve can be found by taking a complex number \(z\in\mathbb{C}/\Lambda\) and then using the Weierstrass \(\wp\)-function to get \((\wp(z;\Lambda),\wp^{\prime}(z;\Lambda))\). Because of this, we will be using \(\mathbb{C}/\Lambda\) for the majority of our calculations, as it is much simpler to find torsion points, add two elements, and compute the Galois action on \(E\) in this setting than when viewing \(E\) as a subset of \(\mathbb{P}^{2}\), even though the latter viewpoint is the one needed in the end. Additionally, we will restrict our attention to the case in which \(\mathfrak{m}=(m)\) is an ideal generated by an integer. The analogue of Gaussian periods we're using allows any integral ideal \(\mathfrak{m}\), but we believe it is more intuitive and insightful to discuss only the \(\mathfrak{m}=(m)\) case for the time being.

To determine the \(m\)-torsion points of \(E\), we must find the complex numbers \(z\) such that \(mz\in\Lambda\). If we restrict \(z\) to the fundamental parallelogram, then an \(m\)-torsion point \(z\) must be of the form \(z=\frac{1}{m}(a+b\alpha)\), where \(a,b\in\{0,1,\ldots,m-1\}\). This gives all \(m^{2}\) of the \(m\)-torsion points, matching the remark made at the end of Definition 19. Note that \(z\) can also be viewed as an element of \(\mathcal{O}_{K}/m\mathcal{O}_{K}\). We now define the Galois action. Let \(\beta\in(\mathcal{O}_{K}/m\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times}\), so that we can write \(\beta=c+d\alpha+\mathcal{O}_{K}^{\times}\) for \(c,d\in\mathbb{Z}/m\mathbb{Z}\). Let \(\rho\in h(E[m])\); that is, \(\rho\) is an element of \(K[m]\). Since \(\rho\) is a power of the \(x\)-coordinate of an \(m\)-torsion point on \(E\), there exist \(a,b\in\mathbb{Z}/m\mathbb{Z}\) and a power \(e\in\{1,2,3\}\) such that \(\rho=\wp(z;\Lambda)^{e}\), where \(z=\frac{1}{m}(a+b\alpha)\). The action of \(\beta\) on \(\rho\) is given by \([\beta]\cdot\rho=\wp(\beta z;\Lambda)^{e}\), where \(\beta z\) is the standard multiplication of complex numbers and where we view \(\beta\) as simply \(c+d\alpha\) instead of as a coset.

_Remark 25_.: For computational purposes, it is best if we view \(z\) and \(\beta\) as matrices in \(\operatorname{Mat}_{2}(\mathbb{Z}/m\mathbb{Z})\), where we represent \(\alpha\) with the companion matrix of its minimal polynomial.
More explicitly, we use the matrix

\[C_{\alpha}=\begin{cases}\begin{pmatrix}0&-d\\ 1&0\end{pmatrix}&d\equiv 1,2\bmod 4,\\ \begin{pmatrix}0&-\frac{d+1}{4}\\ 1&-1\end{pmatrix}&d\equiv 3\bmod 4,\end{cases}\]

to represent \(\alpha\), i.e., the companion matrix of the minimal polynomial \(x^{2}+d\) or \(x^{2}+x+\frac{d+1}{4}\) of \(\alpha\), respectively. Then we can view \(\mathcal{O}_{K}/m\mathcal{O}_{K}\) as a subring embedded into \(\operatorname{Mat}_{2}(\mathbb{Z}/m\mathbb{Z})\) via the map sending \(\alpha\mapsto C_{\alpha}\) and \(1\mapsto I\), the \(2\times 2\) identity matrix. Thus an element \(\gamma\in\mathcal{O}_{K}/m\mathcal{O}_{K}\) is given by \(\gamma=aI+bC_{\alpha}\) for some \(a,b\in\mathbb{Z}/m\mathbb{Z}\), and \(\gamma\in(\mathcal{O}_{K}/m\mathcal{O}_{K})^{\times}\) if and only if \(\det(\gamma)\) is relatively prime to \(m\).

### Computation and Examples

Our goal here is to explicitly describe the analogue of Gaussian periods for quadratic imaginary fields and to provide some examples of these computations. We start with a definition.

**Definition 26**.: Let \(K\) be a quadratic imaginary field. Choose \(\alpha\in K\) such that \(\mathcal{O}_{K}=\mathbb{Z}[\alpha]\) and \(\alpha\) is in the upper half-plane. Let \(E\) be the elliptic curve isomorphic to \(\mathbb{C}/\Lambda\), where \(\Lambda=\mathbb{Z}+\mathbb{Z}\alpha\), and so \(E\) has CM by \(K\). Choose \(A\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times}\), and let \(r\) denote the order of \(A\) in this group. Let \(\wp(z):=\wp(z;\Lambda)\) represent the Weierstrass \(\wp\)-function. Let \(z\in\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K}\), represented as an element of \(\mathbb{C}/\Lambda\). Then we define the following map:

\[\eta_{K,\mathfrak{m},A}:\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K}\to\mathbb{C},\qquad\qquad\eta_{K,\mathfrak{m},A}(z)=\sum_{j=0}^{r-1}\wp(A^{j}z).\]

We call \(\eta_{K,\mathfrak{m},A}(z)\) a _ray class field period_ (RCFP) of modulus \(\mathfrak{m}\) and generator \(A\). Additionally, we define \(R(K,\mathfrak{m},A):=\operatorname{img}(\eta_{K,\mathfrak{m},A})\) to be the _ray class field period plot_ (RCFP plot) of modulus \(\mathfrak{m}\) and generator \(A\).

_Remark 27_.: In the above definition, one can choose \(A\) by first choosing an element \(B\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}\) and computing the cyclic subgroup \(\langle B\rangle\). If \(\langle B\rangle\cap\mathcal{O}_{K}^{\times}\) is strictly larger than the trivial group, then one can choose \(A\) by adjusting \(B\): by setting \(A=B^{e}\) for an exponent \(e\) which allows \(\langle A\rangle\) to avoid \(\mathcal{O}_{K}^{\times}\), or by setting \(A=B\) and then dividing the order of \(B\) by the size of \(\langle B\rangle\cap\mathcal{O}_{K}^{\times}\) in order to find the order of \(A\). The values of \(\eta_{K,\mathfrak{m},A}\) will be the same in either case.

In Figure 7, we provide some examples of RCFP plots for various choices of fields \(K\), ideals \(\mathfrak{m}=(m)\), and elements \(A\). Note that in these examples, we use fields with class number \(1\) to avoid the issue of computing the Hilbert class field and ideal class group as discussed above. It should be noted that the scales for the real and imaginary axes vary in these images. We do this because the values of \(\wp\) vary widely for different choices of \(K\), \(\mathfrak{m}\), and \(z\); in particular, as the norm of the modulus \(\mathfrak{m}\) increases, the values of the RCFPs get farther from the origin (and so the points "go off to infinity" in a sense).
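To make Definition 26 concrete, here is a rough computational sketch (our own illustration code with hypothetical parameter choices, assuming only `numpy`); the lattice sum defining \(\wp\) is simply truncated, so the outputs are coarse approximations suitable only for qualitative experiments, and a serious implementation would instead use theta-function expansions or a computer algebra system:

```python
import numpy as np
from itertools import product

# K = Q(sqrt(-7)): O_K = Z[alpha] with alpha^2 = -alpha - 2; the units are just +-1
alpha = (-1 + 1j * np.sqrt(7)) / 2
m = 5    # the modulus (m) = (5); 5 is inert in K, so O_K/5O_K is a field
N = 30   # truncation radius for the lattice sum defining wp (crude!)

def mul(u, v):
    """Product of u = (a, b) and v = (c, d), representing a + b*alpha, in O_K/mO_K."""
    (a, b), (c, d) = u, v
    return ((a * c - 2 * b * d) % m, (a * d + b * c - b * d) % m)

def order_mod_units(A):
    """Order of the unit A in (O_K/mO_K)^x / {+-1}."""
    x, r = A, 1
    while x not in [(1, 0), (m - 1, 0)]:
        x, r = mul(x, A), r + 1
    return r

def wp(z):
    """Truncated lattice sum for the Weierstrass p-function of Z + Z*alpha."""
    total = 1 / z**2
    for a, b in product(range(-N, N + 1), repeat=2):
        if (a, b) != (0, 0):
            lam = a + b * alpha
            total += 1 / (z - lam)**2 - 1 / lam**2
    return total

def rcfp(A, z):
    """eta_{K,(m),A}(z): sum of wp over the orbit of the torsion point z under <A>."""
    value, x = 0j, z
    for _ in range(order_mod_units(A)):
        a, b = x                  # x represents the torsion point (a + b*alpha)/m
        value += wp((a + b * alpha) / m)
        x = mul(A, x)
    return value

A = (2, 1)  # a unit mod 5: its norm 2^2 - 2*1 + 2*1^2 = 4 is coprime to 5
print(order_mod_units(A), rcfp(A, (1, 1)))
```

Even in this crude form, one can reproduce the issue just mentioned: as the norm of the modulus grows, the raw outputs drift away from the origin.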
This problem did not exist in the Gaussian period setting since all roots of unity lie on the unit circle and so were bounded in size. We would like the option of dealing with this issue by rescaling the RCFPs in some "nice" way. We would like to map our points to the unit disc so that all the images are uniform in size, and the "nice" property we would like is that our mapping be conformal. Unfortunately, there are no conformal mappings from \(\mathbb{C}\) to the unit disc (by Liouville's theorem, such a map would be a bounded entire function and hence constant), and so we decided to make do with a less nice rescaling of points. If \(w\in R(K,\mathfrak{m},A)\), we rescale using the map \[w\mapsto\frac{w}{|w|+\sqrt[4]{\mathrm{Nm}(\mathfrak{m})}},\] where Nm is the norm map from \(K\) to \(\mathbb{Q}\). In the case where \(\mathfrak{m}=(m)\), the map is just \(w\mapsto\frac{w}{|w|+\sqrt{m}}\). Additionally, we would like to comment on the fourth root of the norm; this was an ad hoc choice that seemed to generate images whose points were both close enough and far away enough to see patterns. We lack knowledge in this area, so there could very well be a rescaling map that handles this issue better. We provide some examples of RCFP plots using this map in Figure 8. ### Observations There are many observations that can be made about these RCFP plots, though these patterns seem difficult to explain for reasons that will be discussed shortly. We begin by exploring what happens in the analogous situation to the Duke-Garcia-Lutz Theorem and Theorem 9. That is, we choose a quadratic imaginary field \(K=\mathbb{Q}(\sqrt{-d})\) and a modulus \(m=p^{e}\), where \(p\) is a rational prime and \(e\) is a positive exponent. We then choose an element \(A\) of the Galois group such that the multiplicative order of \(A\) mod \(p^{e}\) divides \(p^{2}-1\) when \(p\) is inert or divides \(p-1\) when \(p\) is split or ramified (these are the orders of the unit groups of the residue fields). We then want to determine the behavior of these RCFP plots as \(p^{e}\) increases. We provide examples of this situation in Figure 9. Note that for these examples, we use the field \(K=\mathbb{Q}(\sqrt{-7})\) and we fix the order of \(A\) to be \(3\). One notes that there appear to be certain areas where points seem to accumulate more densely, and this pattern seems to be more pronounced in the case when \(p\) is inert (when \(p=5\) for these examples) or split (\(p=919\)) compared to the ramified case (\(p=7\)). Additionally, these accumulation areas appear to show up at regular intervals, though the number of such areas doesn't seem to have any obvious correlation with the primes showing up in the modulus. The fact that the accumulation points don't seem to correlate with the modulus is something we might expect if we view this as an analogue to the DGL Theorem (recall that the important property in the DGL Theorem was the order of the chosen element, not the modulus itself). For the next situation, we again choose a modulus \(m=p^{e}\), but instead of choosing \(A\) such that the order of \(A\) is prime to \(p\), we instead choose \(A\) so that \(\mathrm{order}(A)=p^{b}\) for some \(b<e\). Note that in the Gaussian period setting, this situation leads to a Gaussian period plot that is a circle of radius \(p^{b}\) along with a point at the origin. The situation for RCFP plots is a bit more interesting. We provide examples of this in Figure 10, where we again use the field \(K=\mathbb{Q}(\sqrt{-7})\).
Also, note that we used the map mentioned previously which scales all the points to the unit disc; we did this because we believe it makes the patterns more visible in this situation. The images in this situation are rather striking, as there tend to be easily identifiable lines showing up, along with some additional patterns. In fact, similarly to the previous situation we looked at, there again seem to be areas where points accumulate more densely; however, in this situation, these accumulation areas have a much more obvious spiral pattern appearing. We've taken to calling these areas "gravitational centers," as they are reminiscent of images of galaxies. Another observation to make is that--unlike the Duke-Garcia-Lutz situation discussed previously--there seems to be a correlation between the number of these gravitational centers and the primes in the modulus. For example, one notes that when \(m=5^{4}\) and \(\mathrm{order}(A)=5\) in Figure 10(a), one can make out five gravitational centers toward the middle of the plot that are red, green, and purple, as well as five gravitational centers that are primarily blue. This pattern seems to continue as the points get further away from the origin, but it's difficult to make out the fine details because our rescaling map squishes the points together in a way that muddles the spirals. Another example of this phenomenon is when \(m=3^{6}\) in Figure 10(b), where there are three gravitational centers toward the middle that are primarily blue, red, and green. Another thing to note here is that if \(A\) and \(B\) are two elements of the Galois group with the same multiplicative order \(p^{b}\), then their respective RCFP plots are not necessarily the same. This is different from the Gaussian period case, where the choice of \(\omega\) with order \(p^{b}\) mod \(p^{e}\) always gave the same Gaussian period plot. This is because \((\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\) is cyclic while \((\mathcal{O}_{K}/p^{e}\mathcal{O}_{K})^{\times}\) isn't cyclic when \(p\) splits or when \(e>1\). In fact, when \(K\) has class number \(1\), we have the following isomorphism as abstract abelian groups: \[(\mathcal{O}_{K}/p^{e}\mathcal{O}_{K})^{\times}\cong\begin{cases}\mathbb{F}_{p ^{2}}^{\times}\times(\mathbb{Z}/p^{e-1}\mathbb{Z}\times\mathbb{Z}/p^{e-1} \mathbb{Z})&\text{$p$ is inert},\\ (\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\times(\mathbb{Z}/p^{e}\mathbb{Z})^{ \times}&\text{$p$ splits},\\ \mathbb{F}_{p}^{\times}\times(\mathbb{Z}/p^{e}\mathbb{Z}\times \mathbb{Z}/p^{e-1}\mathbb{Z})&\text{$p$ is ramified}.\end{cases}\] We provide an example of this in Figure 11, where \(K=\mathbb{Q}(\sqrt{-7})\), \(m=5^{4}\), and where each choice of \(A\) has order \(5\), but the RCFP plots are different in all three cases. Note in particular that the differences between these plots are more substantial than a simple change of coloring. For example, note the behavior of the points along the positive real axis; for Figure 11(a), the string of points stays entirely on the real axis, while in Figures 11(b) and 11(c), the string of points veers off toward the upper or lower half-plane, respectively. ### Obstacles Thus far in this section, we have been avoiding a discussion of RCFP plots in the same rigorous way that we treated Gaussian periods. The reason for this is due almost entirely to the relative complexity of computing the Galois action on ray class fields of quadratic imaginary fields compared to computing the Galois action on cyclotomic fields.
For the cyclotomic case, the action of the Galois group can be described concretely as a function on roots of unity. In particular, an element \(\sigma_{\omega}\in\operatorname{Gal}(\mathbb{Q}(\zeta_{n})/\mathbb{Q})\) corresponding to \(\omega\in(\mathbb{Z}/n\mathbb{Z})^{\times}\) can be described as a function on roots of unity that takes \(\zeta_{n}^{a}\mapsto\zeta_{n}^{\omega a}\). This function is easily described, and the arithmetic dynamics of this function is (mostly) easily studied. However, for the quadratic imaginary case, an element \(\sigma_{A}\in\operatorname{Gal}(K[\mathfrak{m}]/K[1])\) corresponds to an element \(A\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times}\). If \(x\) is the \(x\)-coordinate of an \(\mathfrak{m}\)-torsion point of an elliptic curve with CM by \(K\) (when \(K\) is not \(\mathbb{Q}(i)\) or \(\mathbb{Q}(\zeta_{3})\)), then \(x=\wp(z_{x};\mathcal{O}_{K})\) for some \(z_{x}\in\mathbb{C}/\mathcal{O}_{K}\), where \(\wp\) is the Weierstrass \(\wp\)-function. We've described the action of \(A\) on \(x\) as multiplication \(z_{x}\mapsto Az_{x}\) in \(\mathbb{C}/\mathcal{O}_{K}\), followed by mapping back to \(\mathbb{C}\) via \(\wp\). That is, the action of \(A\) on \(x\) is given by \(A\cdot x=\wp(Az_{x};\mathcal{O}_{K})\). The question then arises whether or not we can describe this action in a "nice" way--i.e. where the arithmetic dynamics can be studied somewhat easily. Unfortunately, that appears not to be the case. For \(A\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}/\mathcal{O}_{K}^{\times}\), define a function \(f_{A}:\mathbb{C}\to\mathbb{C}\) such that \(x\mapsto\wp(Az_{x};\mathcal{O}_{K})\). Then \(f_{A}\) is an elliptic function relative to the lattice \(\mathcal{O}_{K}\). In fact, \(f_{A}\) is an even elliptic function, and so \(f_{A}(x)\) is actually a rational function in \(x=\wp(z_{x};\mathcal{O}_{K})\) by the well-known result that even elliptic functions can be written as rational functions in \(\wp\) (see, for example, Theorem VI.3.2 in [15]). Note that RCFPs were defined as \(\eta_{K,\mathfrak{m},A}(z_{x})=\sum_{j=0}^{r-1}\wp(A^{j}z_{x};\mathcal{O}_{ K})\), and since \(\wp(A^{j}z_{x};\mathcal{O}_{K})=f_{A}^{(j)}(x)\) (the composition of \(f_{A}\) with itself \(j\) times), we can rewrite RCFPs in the following way: \[\eta_{K,\mathfrak{m},A}:h(E[\mathfrak{m}])\to\mathbb{C},\qquad\qquad\eta_{K, \mathfrak{m},A}(x)=\sum_{j=0}^{r-1}f_{A}^{(j)}(x).\] Thus, if we want to study RCFP plots, then we need to study the arithmetic dynamics of the rational functions \(f_{A}\). In particular, studying the arithmetic dynamics of \(f_{A}\) requires computing \(f_{A}(x)\) explicitly. If \(A\) is just an integer, then \(f_{A}(x)\) can be built out of the standard division polynomials for the elliptic curve isomorphic to \(\mathbb{C}/\mathcal{O}_{K}\) (i.e. polynomials whose roots are the \(A\)-torsion points). These are defined recursively and depend on the choice of elliptic curve (see Exercise 3.7 in [15]). For general \(A\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}\), computing \(f_{A}\) becomes more difficult since computing the generalized division polynomials becomes more difficult. Algorithms do exist for computing these polynomials: to the author's knowledge, Satoh [14] in 2004 was the first, followed by an optimization of Küçüksakallı [12] in 2015. Regardless, computing these division polynomials remains difficult, and the study of the arithmetic dynamics of the resulting \(f_{A}\) even more so.
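Even without the rational functions \(f_{A}\) in hand, the values \(\eta_{K,\mathfrak{m},A}(z)\) themselves can be approximated numerically straight from Definition 26, since the Galois action is just complex multiplication in \(\mathbb{C}/\Lambda\). The following is a minimal illustrative sketch, not the code linked in Section 6; the truncation bound \(N\) and all helper names are our own ad hoc choices, and the direct lattice sum for \(\wp\) converges slowly, so this is adequate only for rough plotting.

```python
# Minimal numerical sketch of Definition 26 (not the author's code).
# K = Q(sqrt(-d)), Lambda = Z + Z*alpha, principal modulus (m).
import math

def alpha_for(d):
    # alpha as chosen in the text: sqrt(-d), or (-1 + sqrt(-d))/2.
    if d % 4 in (1, 2):
        return complex(0.0, math.sqrt(d))
    return complex(-0.5, math.sqrt(d) / 2)

def reduce_mod_lattice(z, alpha):
    # Write z = x + y*alpha (x, y real) and take fractional parts, so z
    # lands in the fundamental parallelogram; wp is Lambda-periodic.
    y = z.imag / alpha.imag
    x = z.real - y * alpha.real
    return (x % 1.0) + (y % 1.0) * alpha

def wp(z, alpha, N=40):
    # Weierstrass wp via its defining sum, truncated at |a|, |b| <= N.
    # The truncated tail decays slowly, so use this only for rough plots;
    # z must not lie in the lattice (i.e., z is a nontrivial point).
    z = reduce_mod_lattice(z, alpha)
    total = 1 / z**2
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            if a == 0 and b == 0:
                continue
            w = a + b * alpha
            total += 1 / (z - w)**2 - 1 / w**2
    return total

def rcfp(a, b, m, A, r, d):
    # eta_{K,(m),A}(z) for the nontrivial torsion point z = (a+b*alpha)/m,
    # where A is passed as a complex number c + d'*alpha and r is the
    # order of A in (O_K/mO_K)^x / O_K^x (computed separately, e.g. with
    # the matrix representation of Remark 25).
    alpha = alpha_for(d)
    z = (a + b * alpha) / m
    return sum(wp(z * A**j, alpha) for j in range(r))
```

For instance, the points of a plot like those in Figure 7 could be approximated by evaluating `rcfp(a, b, m, A, r, d)` over all pairs \((a,b)\); replacing the truncated sum with theta-function formulas for \(\wp\) would be the natural next step if serious accuracy were needed.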
It is the author's hope that this paper might spark some interest in studying the rational functions \(f_{A}\) more explicitly, as studying these functions might prove as fruitful as studying the analogous Galois action for cyclotomic fields. ## 5. Questions and Strategies As we have seen, there are many striking and beautiful visual properties of Gaussian periods and their analogs. While this and previous papers have seen and been able to explain a few of these, there are many, many unanswered questions left. With this in mind, we conclude this article with some open questions that arose while studying these sums, along with some strategies and techniques to aid further study. While some of these questions and strategies have already been mentioned in previous sections, we thought it convenient to consolidate them here. ### Questions 1. What happens to the values of the supercharacter theories and RCFPs when using the action of a non-cyclic subgroup of the automorphism group? 2. Can anything interesting be said about Gaussian periods when restricting \(\eta_{n,\omega}\) to \(k\in(\mathbb{Z}/n\mathbb{Z})^{\times}\) (that is, only the values relatively prime to \(n\))? 3. Similarly, can anything interesting be said about RCFPs when restricting \(\eta_{K,\mathfrak{m},A}\) to \(z\in(\mathcal{O}_{K}/\mathfrak{m}\mathcal{O}_{K})^{\times}\)? 4. Can the behavior of supercharacter theory values be succinctly described for composite moduli \(n=p_{1}^{e_{1}}\cdots p_{\ell}^{e_{\ell}}\) when the order of the matrix \(A\bmod p_{i}^{e_{i}}\) is different for different choices of \(i\)? (Compare this to Theorem 9, where we assumed the order of \(A\bmod p_{i}^{e_{i}}\) must always be \(d\) in order for \(\Phi_{d}(A)\) to be \(0\bmod n\).) 5. Can the geometry of \(\operatorname{img}(g_{d})\) be described succinctly for general \(d\) and not just prime \(d\)? On this note, can something be said about these images as the traces of some subgroup of \(\operatorname{SU}(d)\)? 6. Is it possible to study the rational functions \(f_{A}\) described in section 4.5 in a general way? If so, can the arithmetic dynamics of \(f_{A}\) be described? 7. Why do the patterns mentioned in section 4.4 show up in RCFP plots? This is related to the previous question, though not necessarily reliant upon it. ### Strategies There are a few strategies and heuristics that we have found helpful in our studies, and so we offer a brief overview of what we think to be the important ones. From a theoretical perspective, we have found it the most fruitful to start by finding the prime factorization of the modulus \(n\) and then using the Chinese Remainder Theorem to determine the behavior of \(\omega\) (for Gaussian periods) or \(A\) (for more general cyclic supercharacters and for RCFPs) modulo prime powers. In fact, Theorem 9 originated partly with the observation that the conclusion of the DGL Theorem was true for \(n=p_{1}^{e_{1}}\cdots p_{\ell}^{e_{\ell}}\) as long as \(\omega\) had order \(d\) modulo \(p_{i}^{e_{i}}\) for every \(i\). The reader might consult Section 3 of [13] to see how this sort of strategy has been utilized previously. From a computational perspective, the main issues that one constantly brushes up against are computing power and time. An astute reader might have already noticed that our choices for moduli in our examples are often fairly small, especially outside of the few examples of Gaussian period plots where some moduli were allowed to be seven or eight digits long.
This is, of course, almost entirely due to the quickly increasing number of computations that are needed to generate these plots as the modulus grows larger. For example, generating the cyclic supercharacter plots for \(G=(\mathbb{Z}/n\mathbb{Z})^{m}\) requires (approximately) \(n^{m}\) computations. So even for modest choices of \(n\) and even if \(m=2\), the amount of time needed to generate these plots can get out of hand quite quickly. Because of this, we are often quite limited in the choices of modulus with which we are allowed to experiment. For this reason, it is often good practice to construct \(n\) oneself by choosing certain primes in the factorization rather than choosing an \(n\) at random. One then has an easier time finding elements in the automorphism group with the desired properties. Since the desired properties often include the order of the element in the automorphism group, we offer a few basic observations that might be useful if one hasn't worked much with these groups. * All the automorphism groups arising in the study of Gaussian periods and the cyclic supercharacters described in Section 3.2 are \(\operatorname{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\). If \(n=p_{1}^{e_{1}}\cdots p_{\ell}^{e_{\ell}}\), one can use the Chinese Remainder Theorem to decompose this group as \[\operatorname{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\cong\operatorname{GL}_{m}( \mathbb{Z}/p_{1}^{e_{1}}\mathbb{Z})\times\cdots\times\operatorname{GL}_{m}( \mathbb{Z}/p_{\ell}^{e_{\ell}}\mathbb{Z}).\] The size of \(\operatorname{GL}_{m}(\mathbb{Z}/p^{e}\mathbb{Z})\) for some prime \(p\) is \[p^{(e-1)m^{2}}(p^{m}-1)(p^{m}-p)\cdots(p^{m}-p^{m-1}),\] and so the order of any element in this group must divide this quantity. * As for finding the possible orders of elements in \((\mathcal{O}_{K}/p^{e}\mathcal{O}_{K})^{\times}\) for RCFPs, we refer the reader to the isomorphism presented toward the end of Section 4.4. Also, determining whether a prime \(p\in\mathbb{Z}\) ramifies, splits, or is inert in \(\mathcal{O}_{K}=\mathbb{Z}[\alpha]\) amounts to determining whether the minimal polynomial for \(\alpha\) has double roots, distinct roots, or is irreducible mod \(p\) (respectively). * Finding an element \(A\in\operatorname{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) of order \(d\) such that \(\Phi_{d}(A)=0\bmod n\) is often laborious, and we haven't found a method much better than simply letting Sage run through all the matrices in \(\operatorname{Mat}_{m}(\mathbb{Z}/n\mathbb{Z})\), checking each matrix's multiplicative order (when its determinant is relatively prime to \(n\)), and then testing whether it satisfies \(\Phi_{d}\). However, we have noticed that such an element exists in \(\operatorname{GL}_{m}(\mathbb{Z}/p^{e}\mathbb{Z})\) only when \(d\) divides \(\#\operatorname{GL}_{m}(\mathbb{Z}/p\mathbb{Z})=(p^{m}-1)(p^{m}-p)\cdots(p^{m }-p^{m-1})\), and so such an element exists in \(\operatorname{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) only when \(d\mid\#\operatorname{GL}_{m}(\mathbb{Z}/p\mathbb{Z})\) for every \(p\mid n\). Of course, the converse is not necessarily true, as (for example) one can verify computationally that there is no \(A\in\operatorname{GL}_{2}(\mathbb{Z}/25\mathbb{Z})\) which satisfies \(\Phi_{5}\). * Otherwise, when trying to find an element \(A\in\operatorname{GL}_{m}(\mathbb{Z}/n\mathbb{Z})\) of order \(d\) that doesn't need to satisfy \(\Phi_{d}\), we often found it to be faster simply to find the order of some random matrix \(B\) (say the order is \(c\)), determine if \(d\) divides \(c\), and then compute \(B^{c/d}\) (a minimal sketch of this heuristic appears at the end of Section 6).
* When trying to find elements \(A\in\operatorname{GL}_{m}(\mathbb{Z}/p^{e}\mathbb{Z})\) of order \(p^{a}\) (that don't need to satisfy \(\Phi_{p^{a}}\)), one can verify using the Binomial Theorem that if \(B\in\operatorname{Mat}_{m}(\mathbb{Z}/p^{a}\mathbb{Z})\) and \(A=I+p^{e-a}B\) is invertible, then \(A\) will have order dividing \(p^{a}\). When \(m=1\), we can explicitly write out all the elements \(\omega\in(\mathbb{Z}/p^{e}\mathbb{Z})^{\times}\) of order \(p^{a}\) as being of the form \(\omega=1+p^{e-a}\beta\), where \(\beta\in(\mathbb{Z}/p^{a}\mathbb{Z})^{\times}\). * The above statement similarly works for the automorphism group used in RCFPs. That is, if \(\beta\in(\mathcal{O}_{K}/p^{a}\mathcal{O}_{K})^{\times}\) and \(A=1+p^{e-a}\beta\) is an element of \((\mathcal{O}_{K}/p^{e}\mathcal{O}_{K})^{\times}\), then \(A\) will have order \(p^{a}\). While many of these observations may seem obvious to some, we felt they might help lower the barrier to entry for those interested in exploring these interesting phenomena on their own. ## 6. Code Most of the code used to generate these images was written in Python, though the code for the animations (explained in section 3.1) and the generation of the Laurent polynomials \(g_{d}\) (explained in Theorem 2 and section 3.1) was written in Sage for convenience reasons. Many algorithmic aspects of computing elliptic curve torsion points were based on the algorithms in [4, 5], and these are often cited in the comments of the code itself. Readers may access our code at the following GitHub link: [https://github.com/SamanthaPlatt/GaussianPeriodsandAnaloguesCode](https://github.com/SamanthaPlatt/GaussianPeriodsandAnaloguesCode)
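As a complement to that repository, here is a minimal illustrative sketch (ours, not taken from the repository) of the order-finding heuristic from Section 5.2: pick a random invertible matrix \(B\), compute its multiplicative order \(c\) by brute force, and when \(d\mid c\) return \(B^{c/d}\), which has order exactly \(d\). The search bounds and all names are ad hoc choices, and one would still need to test \(\Phi_{d}\) separately when that condition is required.

```python
# Sketch (ours) of the Section 5.2 heuristic in GL_2(Z/nZ); matrices are
# stored as nested tuples ((a, b), (c, d)) and all bounds are ad hoc.
import random
from math import gcd

def mat_mul(X, Y, n):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return (((a*e + b*g) % n, (a*f + b*h) % n),
            ((c*e + d*g) % n, (c*f + d*h) % n))

def mat_pow(X, k, n):
    # Binary exponentiation mod n.
    R = ((1, 0), (0, 1))
    while k:
        if k & 1:
            R = mat_mul(R, X, n)
        X = mat_mul(X, X, n)
        k >>= 1
    return R

def mat_order(B, n, bound=10**6):
    # Brute-force multiplicative order of an invertible matrix B.
    I, P = ((1, 0), (0, 1)), B
    for c in range(1, bound + 1):
        if P == I:
            return c
        P = mat_mul(P, B, n)
    raise ValueError("order exceeds search bound")

def element_of_order(d, n, tries=1000):
    # Random search: if B has order c and d | c, then B^(c/d) has order d.
    for _ in range(tries):
        B = tuple(tuple(random.randrange(n) for _ in range(2))
                  for _ in range(2))
        det = (B[0][0] * B[1][1] - B[0][1] * B[1][0]) % n
        if gcd(det, n) != 1:
            continue  # not invertible mod n
        c = mat_order(B, n)
        if c % d == 0:
            return mat_pow(B, c // d, n)
    return None  # nothing found within the given number of tries
```

For example, `element_of_order(3, 49)` searches for a matrix of order \(3\) in \(\operatorname{GL}_{2}(\mathbb{Z}/49\mathbb{Z})\); the same idea carries over to \((\mathcal{O}_{K}/m\mathcal{O}_{K})^{\times}\) by restricting the random search to matrices of the form \(aI+bC_{\alpha}\) from Remark 25.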
2304.13797
Architecting Complex, Long-Lived Scientific Software
Software is a critical aspect of large-scale science, providing essential capabilities for making scientific discoveries. Large-scale scientific projects are vast in scope, with lifespans measured in decades and costs exceeding hundreds of millions of dollars. Successfully designing software that can exist for that span of time, at that scale, is challenging for even the most capable software companies. Yet scientific endeavors face challenges with funding and staffing, and operate in complex, poorly understood software settings. In this paper we discuss the practice of early-phase software architecture in the Square Kilometre Array Observatory's Science Data Processor. The Science Data Processor is a critical software component in this next-generation radio astronomy instrument. We customized an existing set of processes for software architecture analysis and design to this project's unique circumstances. We report on the series of comprehensive software architecture plans that were the result. The plans were used to obtain construction approval in a critical design review with outside stakeholders. We conclude with implications for other long-lived software architectures in the scientific domain, including potential risks and mitigations.
Neil A. Ernst, John Klein, Marco Bartolini, Jeremy Coles, Nick Rees
2023-04-26T19:29:26Z
http://arxiv.org/abs/2304.13797v1
# Architecting Complex, Long-Lived Scientific Software ###### Abstract Software is a critical aspect of large-scale science, providing essential capabilities for making scientific discoveries. Large-scale scientific projects are vast in scope, with lifespans measured in decades and costs exceeding hundreds of millions of dollars. Successfully designing software that can exist for that span of time, at that scale, is challenging for even the most capable software companies. Yet scientific endeavors face challenges with funding and staffing, and operate in complex, poorly understood software settings. In this paper we discuss the practice of early-phase software architecture in the Square Kilometre Array Observatory's Science Data Processor. The Science Data Processor is a critical software component in this next-generation radio astronomy instrument. We customized an existing set of processes for software architecture analysis and design to this project's unique circumstances. We report on the series of comprehensive software architecture plans that were the result. The plans were used to obtain construction approval in a critical design review with outside stakeholders. We conclude with implications for other long-lived software architectures in the scientific domain, including potential risks and mitigations. ## 1 Introduction The Square Kilometre Array (SKA) Project is developing two next-generation radio telescopes which, when completed, will be the world's largest radio telescopes and have unprecedented sensitivity. To build and operate the telescopes, seven countries ratified a treaty to form an intergovernmental organization, the Square Kilometre Array Observatory (SKAO) (with nine more countries showing interest in joining). The SKAO's telescopes will be able to answer many fundamental questions, including describing the formation of the first stars in the universe and the nature of gravity (Labate et al., 2019). A critical part of the SKAO will be its software systems, which will control the physical instruments that collect the data, and include an exascale computing system to process and analyze the results. It is therefore critical to carefully plan, design, implement, and operate SKAO's software systems. The telescopes are composed of many inter-related components, each a substantial and novel engineering effort: building the physical arrays and dishes; delivering high-speed data throughput in remote locations; securing the computing environment; maintaining data quality; and finally, managing the humans involved in building, operating, and using the system. The SKAO's telescopes are therefore complex socio-technical systems of systems. To manage this complexity, the system architecture is arranged as an ultra-large-scale system of systems (Feiler et al., 2006), depicted in Figure 1. Quality measures for doing science with the telescopes are similarly lofty, given the data rates and observational goals. The SKA Project has a long timeline compared to many software and computing projects. Although the treaty that formed the SKAO came into effect in January 2021, planning and design of the telescopes started in the 1990s. The Operation Readiness Review, the project milestone when the telescopes are handed over to the operations teams, is currently planned for mid-2027. The operational lifetime of the Observatory is planned to be at least 50 years.
The software architecture designs must therefore recognize two real-world problems, namely: 1) the telescopes will operate in a context that their initial designers could not fully anticipate: for example, faster processors, more efficient software, and improved networking; 2) providing this flexibility for future changes must still respect today's cost, schedule, and technical constraints, such as processing efficiencies. Figure 1: System of systems high level architecture of the SKAO's telescopes. SKA Regional Centres provide local access to science data products. The software architecture challenges faced by SKAO, and in particular, the Science Data Processor component, are the focus of this paper. We explain the software requirements aspects of the SKAO's Science Data Processor, and show how that software fits with the other elements (for example, the signal processing pipeline). We describe how we used and customized existing software architecture analysis techniques to develop the design for the Science Data Processor, and how those same techniques were also used in critical design reviews to produce risk registers representing the 'known unknowns' that the project needed to address. We begin by introducing the scientific background of radio astronomy and SKAO's telescopes needed to understand the paper1. We also introduce our architecture analysis framework, a suite of tools and methods developed at the Software Engineering Institute. Footnote 1: see [https://www.skao.int](https://www.skao.int) for more details. The bulk of this paper reports our experiences applying this framework to the SKAO's telescope software design, framed as a narrative summary. We then explain the current SKA Project context and how the design approaches have held up as iterative development has proceeded. We conclude with four software engineering insights from our work on this project, and explain their implications for both researchers and practitioners. ## 2 Background We introduce both the software architecture techniques used and the details of the organizational context to orient the reader. There is more detail here than in a typical paper, but it is necessary to understand the context of this experience report. ### The Square Kilometre Array Observatory #### 2.1.1 Science background Radio astronomy detects electromagnetic radiation in the centimeter- to meter-wavelength spectrum to interpret the universe. Telescope performance is measured as sensitivity and resolution. In radio astronomy, performance is increased by using more receivers (grouped in an array) or longer baselines (physical separation between receivers). The scale of the SKA Project's telescopes can be appreciated in at least these four dimensions: physical size, data rate, computation, and development organization, as shown in Table 1. Given the development and construction costs, the telescopes are expected to operate for 50 years. #### 2.1.2 Development Organization and Governance During the design phase, the SKAO leadership relied on influence and good-will to coordinate the design work performed by over 200 independent organizations (largely funded by national government grants) organised into nine design consortia, with each consortium responsible for the design of one system _element_. Elements included the hardware components of the dishes and antennae, supercomputers for data processing, and the software to run it all.
As noted above, these telescopes are socio-technical systems of systems, with multiple integration points and interactions among the constituents. The constituents include human systems (e.g., the teams of astronomers, academics, engineers involved in the design), software systems, and hardware systems (e.g., the physical dishes, the infrastructure to support operations such as roads and cabling). Within the scope of this experience report, SKAO's authority consisted primarily of gate reviews for each element at the end of the design phase. Gate reviews are a leading practice on capital-intensive projects like these radio telescopes (NASA, 2020). A critical design review (CDR) demonstrated that the design of an element was mature enough to proceed with physical construction and software implementation. These CDRs were high stakes events--there was no opportunity to repeat a CDR that had unsatisfactory findings. This paper focuses on one system element, the Science Data Processor (SDP). The SDP takes relatively unprocessed data and applies algorithms (as found in, for example, CASA2) to create science data products. During our design process, we uncovered two equally important quality attribute requirements for the SDP: the software must be long-lived and maintainable, in order to support the planned 50-year lifespan of the Observatory; the software must also be able to meet some of the highest performance goals of any extant system, in order to process the data volumes this novel instrument will produce. \begin{table} \begin{tabular}{l l} \hline \hline **Dimension** & **Scale** \\ \hline Physical size & SKA-Mid (South Africa)—100s of dish antennae + digital beamformer. \\ & SKA-Low (Australia)—130,000 antennae grouped into 512 stations using digital beamformer. \\ \hline Data Rate & \(>5\) terabytes per second of raw data and an estimated 300 petabytes per year of processed data (Scaife, 2020). \\ \hline Computation & Central computer for each telescope will have performance of 125 PFLOPs (assuming a computation efficiency of 10\%). \\ \hline Development Organization & 200 independent organizations with minimal top-down authority (discussed in the next subsection). \\ \hline \hline \end{tabular} \end{table} Table 1: SKAO Telescope Scale These qualities are somewhat conflicting, so the design must make tradeoffs to strike an acceptable balance. ### Architecture Analysis Framework Readers familiar with architecture analysis methods developed by the Software Engineering Institute (SEI) can safely skip this section. The methods used in the engagement described in this report are based on the principle that the design of a software architecture should be shaped by quality attributes. Quality attributes are the properties of a system that stakeholders use to judge its quality. The architecture design and analysis process begins by understanding and prioritizing quality attributes, then designing structures that reflect tradeoffs among the quality attributes, and finally analyzing and evaluating the design based on the prioritized quality attributes (Kazman et al., 2012). By themselves, quality attribute names--scalability, performance, availability, etc.--are too broad to act on (Bass et al., 2012). Saying "The telescope software should be scalable" does not tell the architect which part of the system to scale, and what level of scale is sufficient. _Quality attribute scenarios_ can characterize these attributes in a way that is measurable and verifiable.
A quality attribute scenario consists of 6 parts: _source_: an entity that generates a stimulus; _stimulus_: a condition that affects the system; _artifact_: the part of the system that was stimulated by the stimulus; _environment_: the condition under which the stimulus occurred; _response_: the activity that results because of the stimulus; and, _response measure_: the measure by which the system's response will be evaluated. During the engagement reported here, we used the Mission Thread Workshop, ATAM, and ad hoc discussions to elicit quality attributes as scenarios and to prioritize them, and then used the prioritized scenarios to analyze and evaluate the architecture designs. #### 2.2.1 Mission Thread Workshop The Mission Thread Workshop (MTW) (Gagliardi et al., 2013) is a facilitated, stakeholder-centric workshop to elicit and refine end-to-end quality attribute, capability, and engineering considerations for mission threads, focusing on inter-operation among system elements. A mission thread traces the data and control exchanges between elements as the processing of a stimulus event flows through the system. At each step, the architecture and engineering considerations are identified, such as interface provides/requires semantics, failure modes and error handling, resource utilization, and technical and domain-related constraints. The outputs of an MTW are quality attributes and architectural challenges, and identification of gaps in capability, functionality, documentation, and engineering. #### 2.2.2 Architecture Tradeoff Analysis Method The Architecture Tradeoff Analysis Method® (ATAM®) assesses the consequences of architectural decisions in light of quality attributes and business goals (Clements et al., 2002; Kazman et al., 2012). It brings together three groups: a trained evaluation team; the architecture's decision makers (the architect, senior designers, the project manager); and representatives of the architecture's stakeholders. Quality attributes are elicited, represented as quality attribute scenarios, and linked to business or mission goals. Scenarios are prioritized, and high priority scenarios are analyzed by having the architect explain how a system built using the architecture would satisfy the scenario. During the analysis, the evaluation team and stakeholders ask questions to identify _risks_ (potentially problematic architecture decisions), _non-risks_ (good architecture decisions), and _sensitivity points_ and _tradeoffs_ (architecture decisions that have a significant impact on one or more than one quality attribute). #### 2.2.3 Views and Beyond Views and Beyond (V+B) is an approach to documenting a software architecture (Clements et al., 2011). Views are chosen to represent stakeholder concerns and used to document software structures (elements, relations, and properties of each) including interfaces and behavior. Beyond the set of documented views, architects add information to describe the overall architecture rationale and approach, and to map between views. ## 3 Experience Report This section explains how we combined the existing techniques for software design analysis and applied them to the SKAO's telescope problems. We explore how we adapted the delivery of these approaches for the unique characteristics of this system. Our efforts had four distinct phases, shown in Figure 2 and discussed in the sections that follow. Phase 0 was preparatory work. In Phase 1, we conducted system-level scenario analysis using the Mission Thread Workshop.
In Phase 2, we used the ATAM to explore design alternatives for the Science Data Processor (SDP) element. Finally, in Phase 3 we used the ATAM to organize the SDP element CDR. We conclude each section by reflecting on what worked well and what had to be adjusted from each technique. In Section 4, we explain the current state of the SKAO software design, and in Section 5, we derive insights for practitioners and researchers based on our experiences. Figure 2: Steps involved in software design of the SKAO's SDP software element. ### Phase 0: Launching the Engagement As the overall SKAO design phase got underway, the Head of Computing and Software3 was looking to reduce element integration risk. One of the few levers he controlled was specifying the design review process and what artifacts were to be delivered. He decided to use a suite of methodologies and training developed by the Software Engineering Institute (SEI) in order to "gain confidence (primarily through documentation) that all of these pieces of software will fit together and do what we need". This decision was largely based on a peer recommendation, and the SEI's reputation. Footnote 3: At the time of the initial engagement, the first two authors (NE and JK) were researchers with the SEI. NR was Head of Computing and Software at SKAO; and MB was Software Quality Engineer at SKAO. ### Phase 1: Using the Mission Thread Workshop To Identify Challenges The first engagement between the SEI experts and SKA Project Team was a Mission Thread Workshop (MTW). As noted above, an MTW is designed specifically for systems of systems integration. We used the notion of a mission thread to motivate training on architecture documentation using the Views and Beyond (V+B) approach discussed above--when the explanations of processing or interactions at a thread step became complicated, we identified the architecture view that would support the reasoning. The workshop and training took place over seven consecutive days in January 2017. There were more than 30 attendees representing ten countries, from all element design consortia and bringing extensive expertise in radio telescope design and operation. We chose three mission threads--end-to-end system control and data flows--to trace through the envisioned architecture of the telescope as a whole. For example, one thread was "normal operation," describing how one of SKAO's telescopes should work: scheduling a scientist's observing time, pointing the instrument at the desired patch of sky, receiving data, processing the data, and delivering a scientific product (for example, the radio visibility values for that patch of sky over a period of time). At each step in the thread, attendees identified potential risks in the design. In practice, many software integration issues occur at the organizational (consortia) boundaries. In SKA's case this risk was greatest between the design consortia, for example, where data is handed over from initial acquisition to the science data processor. The end-to-end scope of mission threads readily revealed these flawed assumptions. For example, one element design incorrectly assumed that certain processing was applied before it received a data stream. Although this started as a training engagement, working through SKA-specific examples and scenarios during the training proved to be an important way to motivate the approach, and the artifacts produced during the training exercises delivered immediate value from the training investment.
**Outcomes/Next Steps:** This workshop increased the SKA Project team's confidence that the plan and design might be feasible. All consortia members became familiar with the SEI's jargon and approach (scenarios, quality attributes, documentation templates, and so on), which would be important for the upcoming CDRs. There was a new awareness of the details of other components of the project and the project's scope, and participants made personal connections with consortia peers responsible for other components. Each consortium then went back to its own work on detailed designs and preparing for the element's CDR with SKAO. ### Phase 2: Architecture Design and Documentation of the Science Data Processor Component Shortly after the initial Mission Thread Workshop with the element design teams, SEI experts met with the SDP consortium team to provide coaching in the application of the architecture documentation and analysis methods to help the SDP team prepare for their CDR with the SKAO. This phase began with an in-person working meeting in March 2017 between two SEI experts, the principal members of the SDP architecture design team, and observers from SKAO. Techniques from the ATAM (scenarios, utility tree, and prioritization voting) were used to elicit and prioritize quality attributes. The SEI team helped to document other architecture drivers, such as constraints and assumptions. The working meeting then turned to applying the V+B architecture documentation approach. The SDP team identified architecture documentation stakeholders and documentation uses, and chose a set of views to address stakeholder needs. Stakeholders included the SDP implementers and the CDR review panel. The set of views comprised two module decomposition and dependency views (one for the science processing workflows and one for the framework that executes the workflows), two component and connector views (data processing and security), and a data model. Several _beyond views_ documentation artifacts were also defined: a hardware decomposition and dependency view to provide context for deployment and performance analysis, a functional architecture view to provide requirements traceability, and use case views showing how science workflows are created and executed. Final versions of these architecture documents can be found at [https://web.archive.org/web/20221202173301/https://ska-sdp.org/publications/sdp-cdr-closeout-documentation](https://web.archive.org/web/20221202173301/https://ska-sdp.org/publications/sdp-cdr-closeout-documentation). The SDP team had already created a number of documentation artifacts, which were mapped into these chosen views. All the main artifacts that had been produced during the SDP architecture design found a place in the Views and Beyond outline. The SDP team was having difficulty making SDP architecture concerns visible to SKAO. For example, the functional and quality attribute tradeoffs at the interface of the cluster manager and the SDP execution framework were complex and difficult to explain to non-experts. Traceability from business/mission goals to quality attributes and scenarios to architecture tradeoffs helped the SDP team expose the complexity and impact of the architecture decisions. **Outcomes/Next Steps:** The SDP team reorganized their documentation artifacts into the V+B outline created at the working meeting.
There were still many empty or incomplete sections in the outline, and the team then took an artifact-driven approach, performing the design and analysis needed to fill in the gaps. ### Phase 3: Science Data Processor pre-CDR and CDR We then applied the ATAM to evaluate the Science Data Processor (SDP) software as part of the SDP pre-CDR. The SDP design was _unprecedented_ along dimensions that included data input rates, processing throughput, and data product size. SKAO wanted to allow as much time as possible for the architecture to mature, so a pre-CDR review was held in June 2018 to assess the work in progress, with the final CDR deferred to January 2019. The SKAO Head of Computing and Software led the evaluation with support from two SEI experts, one of whom was a certified ATAM Lead Evaluator who ensured that the ATAM processes were applied correctly. The evaluation team comprised four senior staff from SKAO and two external reviewers, recruited from the fields of high-performance computing and signal processing algorithm implementation. The utility tree contained more than 50 scenarios, three of which are shown in Table 2. The 11 highest priority scenarios were analyzed. The evaluation findings produced 57 risks, grouped into 9 risk themes. For example, one risk theme was the need to support data flows needed by certain types of algorithms. **Outcomes/Next Steps:** The SDP architecture team worked to address the risks found in the pre-CDR evaluation. This involved redesign, improved documentation, and prototypes and experiments to demonstrate feasibility. SEI experts continued to provide coaching and advice to the SDP architects, and performed an active review of the nearly-final SDP architecture documentation, checking to see where each of the pre-CDR risks was addressed in the V+B architecture documents. The SDP team was thus well prepared to pass the final CDR, which also used the ATAM, in January 2019. The review team comprised senior SKAO staff (i.e., organizationally independent of the SDP team) and two external reviewers with combined decades of expertise in telescope construction and operation. ### Summary of Changes to Analysis Approaches Methods such as the ATAM and the Quality Attribute Workshop (QAW) are well-defined and the product of years of engagements. However, the unique context of SKAO meant we made certain adaptations. * We integrated the reporting of ATAM risks and risk themes into the existing system engineering lifecycle at SKAO; this meant creating a list of "Observations" based on the documentation, which were captured as Jira tickets. A Major Observation was defined as "anything that has a major impact on the Element requiring architectural changes or affects System Level interfaces, budgets, requirements,...". * The ATAM findings were combined with non-software review findings to produce the overall SDP CDR recommendation. SKAO decided that the SDP passed the CDR, with a number of outstanding action items that included the ATAM risks. * SKAO is a multi-national organization with no direct hierarchy. Meeting times constrained the traditional approach to architecture analysis. For example, the SKAO architecture team drafted a utility tree which was augmented by a very brief elicitation session during the meeting, rather than creating the tree in the meeting itself. * ATAMs and QAWs typically separate internal and external stakeholders. In the case of SKAO, this was not necessary as the stakeholders were all internal.
* The Mission Thread analysis was combined with an education session in a week-long training and elicitation approach. This ensured SKAO participants were familiarized with the broader approach and scenario-based analysis. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Quality** & **Environment** & **Stimulus** & **Response** & **Priority** \\ \hline Modifiability & Operations & A project requires the processing of visibilities using the SDP instance on the SRC platform, using new algorithms developed by the project team. & The SDP at the SRCs support generation of different instances allowing new algorithms and workflows to be combined with existing capabilities, while still permitting other large-scale observing programs. & H,H \\ Security & Operations & Unauthorized access is executed on the SDP & Access is detected and root cause analysis information is available to understand and close problem within 1 week. & H,H \\ Availability & Operations & Regional Data Centre connection is degraded for a long period of time. & Telescope continues to observe with data going to storage, minimizing impact on current schedule. Data catch-up while still operating. & H,H \\ \hline \hline \end{tabular} \end{table} Table 2: Selected scenarios used in the SDP CDR. The Priority reflects mission Importance and technical Difficulty. ## 4 SDP Design Maturation and Ongoing Work at SKAO We now expand on the design decisions taken as part of the analysis process, and describe the current state of software work at SKAO. ### SDP Design Maturation Based on the architecture analysis and quality attribute scenarios recovered, the current Science Data Processor data flow design is shown in Figure 3. Swart et al. (2022) provide a more detailed description of the entire system architecture of the MID instrument. The majority of data comes to a processing center hosting the SDP software from the Central Signal Processor component located near the dishes. This raw visibility information is then processed by the SDP. Other data sources include the Telescope State (e.g., which dishes are dedicated to which tasks) and models of the sky (for correlation and correction). The SDP will manage execution of multiple batch algorithms, perform quality assessment checks, and ultimately store and transfer processed data to long term storage and SKA Regional Centres (SRCs). A big challenge when developing a complex system with next-generation demands is to identify what aspects of the problem are _precedented_ and which are _unprecedented_. Figure 3: UML component diagram showing data flow view of the SDP architecture. The architecture tension is between the real-time processing demands of the data stream from the Central Signal Processor (upper right) and the batch processing and delivery mechanisms, all of which will evolve over the lifespan of the Observatory. For example, the telescope architects made an early decision to base their low-level system control on the Tango middleware framework (Pivetta et al., 2018), because Tango is relatively well-known, and choosing it simplifies subsequent decision-making. Previous instruments and precursors have successfully used Tango, so it is precedented. Indeed, the decision to use Tango continues to be validated (Swart et al., 2022). On the other hand, the design of the Science Data Processor element was unprecedented and wide open, and was unlike other scientific computing environments.
Implementation choices for the SDP have therefore been based on safe-to-fail experiments and prototypes to buy down risk. We now outline some of these approaches. ### Ongoing Work at SKAO SKAO transitioned into the construction phase in July 2021. For the software team at SKAO, the construction phase is when contracts are awarded for software development. The team has focused on preparing for collaborative development, building the tooling, testing, and continuous integration support. This includes using cloud infrastructure (based on Kubernetes) to virtualize the physical instruments to provide robust testing and monitoring insights. For example, the correlator (which synchronizes the raw signals from individual antennae) can be simulated to examine data rates and data stream characteristics. This virtualization will allow the software team to operationalize performance monitoring for handoff to hardware implementors. The team is following a process model based on the Scaled Agile Framework (SAFe) (Bartolini et al., 2020; Klaassen et al., 2020). The initial design documents created during the activities described in §3 were used to successfully complete the CDR for the full system, and to create the legal entity to manage the SKAO. The construction plan added a new short-term milestone called AA0.5 (Array Assembly). This is a small-scale set of physical radio arrays that will allow for end-to-end proof of concept. Although the AA0.5 milestone will not provide stress testing, most of the major software components will be required. The software team used the scenarios and quality attributes described in §3 to determine how to adapt to this new minor increment. A major challenge has been analyzing distributed system failure modes, particularly infrequent anomalies (for example, those occurring in fewer than 1 in 1,000 events). These integration points were a focus in element CDRs, but failure modes between elements (such as between the central signal processing and science data processing) are often hard to characterize until implementation begins. Another challenge for the SKAO staff is managing the large number of staff and contractors responsible for different software elements, with different delivery schedules and capabilities, hosted in many different countries. The team is using behavior-driven testing to align stakeholder expectations with software requirements and specification. **Reflecting On the Value of Architecture Reviews** Since the CDR, several items have emerged as potential challenges that were not addressed in the initial reviews. Many of these are a result of beginning to build an executable artifact. There is a new awareness of the importance of **integration** as a key quality attribute. There is also renewed focus on reuse and use of open source software instead of implementing every solution from scratch. A number of time-consuming concerns did not show up in design reviews. For example, information technology aspects such as figuring out the development stack and versioning (such as which version of the Ubuntu OS to use) consumed significant effort. The socio-technical challenges inherent in software development are not well assessed in the ATAM-based process, which tends to examine artifacts. For example, the skills and experience of implementation teams and contractors are a big factor in technical decision making. The team may not have domain-specific knowledge, or be used to working in a particular paradigm.
There is an ongoing tension between the final, complete SKAO, and the intermediate and short-term steps to get there. The agile focus of three-month release cadences from the adoption of the Scaled Agile Framework (SAFe) means diagrams are updated and redrawn as learning occurs. Upstream, the entire SKAO has major engineering change proposals that periodically impact the software component. For example, the signal processor component originally was co-located with the antenna and dish hardware (in a remote location), and is now closer to major urban environments. Solving today's problem might make the longer-term issue more difficult. The ATAM and scenario approach are not well tailored for shorter, more detailed reviews. Three years on from the initial approval, the SKAO architecture team sees the value more in the process than in the artifacts. The ATAM and related approaches are valuable in providing a framework (the documentation style and scenario focus) that is still used as a foundation for reviews and evolution to this day. There is significant value in just exposing people to a coherent architectural vision, but the vision needs to be re-emphasized to build and maintain a focus. ## 5 Insights and Lessons Learned The observations and experiences from the CDR and design engagements led us to identify **four challenges for long-lived scientific software design**. As well as naming these challenges, we report on how we solved each challenge on this project. We also discuss broader implications for research and practice in other, related software contexts. We use call-out boxes shaded in grey to summarize each of the four sections, highlighting the **problem**, **our solutions**, and **wider practice and research implications**. ### Integrating Systems Engineering and Software Engineering Designing and integrating the massive-scale cyber-physical systems for the SKAO telescopes requires collaboration between systems engineers and software architects. The systems engineers on this project were very experienced, acting as requirements owners and systems analysts. They decomposed system requirements and allocated them to system elements and to components within an element, and later analyzed element designs to verify that the requirements were satisfied. Requirements and interface control document (ICD) management were high priority activities during the telescope design phase to provide evidence that the proposed designs were suitable and to establish the technical contracts between consortia developing the elements. The engagement discussed in §3 occurred while the telescope design team was shifting from a systems engineering perspective--treating the telescope as a collection of interacting physical elements--to a software architecture perspective focused on the interaction between the software within the elements. We observed some friction between design team members as they tried to reconcile these different perspectives. This challenge is not unique to this project. Systems engineering is a broad field and systems engineers may fill many roles on a project (Sheard, 1996). Subsequent studies have examined the relationship between systems engineering and software architecture in the design of large-scale systems (Sheard et al., 2018; Cadavid et al., 2020). The systems engineering perspective treats the telescope as a whole, considering mechanical, electrical, software, and other types of components, and models the system as a hierarchy with each block decomposed into sub-blocks.
While this perspective serves well for physical systems, it does not easily represent software systems where services are often shared across several elements (Maier, 2006). This difference in perspectives is an essential difference between the two disciplines, and remains unresolved in practice (Cadavid et al., 2021). The consortium-style organization of the design teams added challenges. Teams were funded by separate national agencies and had no authority over each other. The SKAO also had limited authority, essentially only able to accept or reject designs at PDR and CDR, and the SKAO was using conformance to ICDs as a primary acceptance criterion. The element-to-element ICDs had two deficiencies: First, while the ICDs specified syntax and protocols, they did not specify behavior such as message sequence constraints or error handling. Second, each ICD focused on a single interface with no artifact representing collaboration among several elements using multiple interfaces. Teams did overcome the systems engineering/software friction that is frequently observed in practice (Cadavid et al., 2020). One possible explanation is that all team members participated enthusiastically in the Mission Thread Workshop and associated training discussed above in Section 2.2.1. The workshop provided a concrete bridge from systems engineering concerns to software architecture concerns and introduced a common language to discuss those concerns. The creation and elaboration of each mission thread produced an artifact that used the existing ICDs for interface details and added the collaboration among elements, both direct and mediated by other elements. The mission thread also provided the context to identify and reason about quality attribute concerns that were important to both systems engineers and software designers, such as availability, scalability, and throughput. **Practice Implications** The discussions during this Mission Thread Workshop were unlike any previous workshops that we have facilitated. Our experience has been that the primary functional thread is already documented or is completely understood by all participants so that any of them could quickly recite the thread steps. Typically, the workshop session focuses on quality attribute considerations at each step. In this case, the primary mission thread (i.e., how the telescope's normal operations should work) had not been documented, and the fragmented authority meant that no one was responsible for developing it. Much of the workshop time was spent developing the basic functional thread steps. As a result, we recommend deciding upfront whether the prime focus of the MTW will be to promote discussion among stakeholders or produce concrete deliverables. **Research Implications** This project shows that further research is needed into the socio-technical interactions between systems engineers and software architects. Another research opportunity is how to move beyond ICD conformance as the design acceptance criterion. Last, trans-disciplinary research is needed into how to structure agreements and incentives in a design context like the SKA Project that has independent teams and limited global authority, and how to operationalize measures of conformance to those agreements. ``` #1 Integrating Systems Eng. and SW Eng. ``` **Problem**: Socio-technical system of systems, with complex HW and SW components. **Solution**: Mission-thread analysis surfaces integration challenges and builds system-wide consensus.
**Practice**: Decide early on the principal deliverables from the workshops. **Research**: Integrating systems engineering outputs like ICDs with SW eng. outputs like design documents. ### Upskilling Domain Experts and Contextualizing Software Experts Large-scale astronomy projects originate with astronomers, not software or systems engineers. There are complex technical challenges--such as processing of raw radio frequency signals into visibility data--that astronomers know better than anyone. At the same time, the typical astronomer has very little software engineering experience, perhaps having attended a workshop like Software Carpentry4. Thus most astronomers' software expertise comes from experience, which leads to a reliance on "how we did it before" and lessons from earlier projects. Furthermore, the radio astronomy community is not large, and so a large fraction of the external reviewers (for example, on a CDR panel) share many of the same experiences with the design team being reviewed. Footnote 4: [https://software-carpentry.org](https://software-carpentry.org) Similar situations are common in high-energy physics, genomics, and other science disciplines. Over time a science community develops highly skilled software designers and engineers who move from project to project, but it can be hard for science projects to match the industry compensation levels needed to retain these experts. At the same time, the domain knowledge is as hard or harder to acquire for software engineers, requiring in this case postgraduate-level knowledge of radio astronomy concepts. Thus, one of the challenges for the SKA Project was to simultaneously upskill radio astronomers in software architecture, and to teach non-astronomers sufficient domain knowledge to allow them to communicate effectively. This domain knowledge transfer problem exists in other industries, but the amount of specialized knowledge required is often smaller, and commercial domains like automotive now have dedicated software teams. Our approach to structuring the engagement was to combine existing training materials (which we had used in many previous engagements) with domain-specific examples. We formed a three-phase approach to bridging domain experts and software experts: in the Analysis phase, the teams identify a problem in their existing system or process; in the Training phase, the consultants help participants learn a method to fix that problem; and finally, in the Practice phase, participants collaboratively fix the problem, with coaching from the instructors. **Practice Implications**: One benefit of recruiting generalists to help with architecture analysis or review is that an "informed ignoramus" can question assumptions or simplify the problem. However, there are limits to what a generalist can contribute. On this project, we (generalists from the SEI) saw cases where the review scope might exclude what we thought were important questions. For example, the need for high levels of resiliency or for rapid failure recovery is less important for this system, since most observations can be restarted and repeated. Thus, on this project it was hard to get designers to pay attention to failure and recovery, including concepts such as cascading failures and looking beyond "normal operation" in scenarios.
These are not interesting to domain experts, who are focused on normal operations and whether the instrument will resolve the important science questions, and less interested in, for example, securing the supercomputer from malware planted by cryptocurrency miners. In its current phase, the SKAO software and engineering teams have realized the value of socializing a system-wide perspective across the entire group. Workshops on use cases and technical leadership have helped build a stronger organizational culture that understands the discipline of software engineering in the context of the wider system. **Research Implications**: There is a perennial challenge of inculcating software expertise and awareness in other disciplines. More importantly, the challenge is also how to bring existing approaches, such as requirements elicitation techniques, into consulting and collaboration exercises. Many of today's upskilling initiatives focus on the most basic software engineering concepts such as version control and testing, but other software concerns, such as release engineering, software design, and requirements engineering, are as important in the early design of large-scale systems. ### Risk Reduction in the Design of Long-Lived Software The SKAO's telescopes have a long temporal scale: a planned lifespan of 50 years and a major upgrade anticipated before the end of that time. This is far longer than the planned design lifespan of most commercial software systems5. For example, Google has developed three different file system architectures since its founding 23 years ago. Footnote 5: although some may well persist, if unintentionally, that long, for example, COBOL-based government systems In addition to temporal scale, SKAO's telescopes have significant physical scale: SKAO is planned to consume more processing resources (100s of petaFLOPS), bandwidth (8 Tbit/s), and storage (600 PB) than any current system, in order to process raw radio signals into science data products. A key activity in designing for these scales is _risk reduction_: identify the potential risks to project success, and find suitable mitigations for those risks. On this project, past experience and knowledge of radio astronomy make it easier to buy down hardware and physics-based risks: The SKAO's telescopes build on the knowledge from previous telescopes such as ALMA and pathfinding prototypes such as ASKAP6, and on a long history of radio physics knowledge such as Fourier transforms, deconvolution, and calibration. Footnote 6: Australia SKA Pathfinder, [https://www.atnf.csiro.au/projects/askap/](https://www.atnf.csiro.au/projects/askap/) Risks due to scale are harder to mitigate. Testing the functional correctness of scientific software is very difficult. These systems must detect small and ephemeral signals, with confidence levels of 5 standard deviations in the hypothesis tests being common in some domains7. The telescope systems on this project must also contend with testing at scale: software works differently when integrating 100,000 instances of a component working together, compared to integrating 10 instances. Finally, at this scale, low-probability and intermittent hardware and software failures become everyday occurrences. Footnote 7: [https://blogs.scientificamerican.com/observations/five-sigmawhats-that/](https://blogs.scientificamerican.com/observations/five-sigmawhats-that/) It is also conceptually easier to model and identify risks associated with hardware components.
Software is often seen, especially by non-software engineers, as infinitely malleable, weight-free and cost-free. Engineers make implicit assumptions about the evolvability of hardware vs. software (i.e., that Moore's law means an exponential increase in processing speed). Physics risk can also be easier to quantify; we can mathematically model antenna pointing error bounds using the fabrication and assembly tolerances of the physical components. In software, by contrast, such models are significantly harder to construct and analyze. We have good tools for proving algorithmic properties (for example, the running time of a processing algorithm), but these tend to cover a small subset of the overall software system. For example, we can make only rough estimates to quantify the risks of unforeseen bugs, the amount of development effort, or the longevity of key libraries. Using a quality attribute scenario approach in ATAMs and Mission Thread Workshops supports this risk-focused approach; the goal of an ATAM is to identify risk themes in the current design approach. Risks can also arise from the design process itself. This project, for example, is currently using an agile approach with the Scaled Agile Framework (SAFe) (Bartolini et al., 2020); however, this approach is unproven at the SKA Project's scale. These software systems are being designed 10-15 years before production deployment; this means accurate, at-scale feedback will not be available for years, and key design decisions may not be tested in production for decades. Software architecture embodies design decisions that are expensive to change (Booch, 2008); it is possible to rewrite code, but reverting key design choices can be nearly impossible. The SKA Project design teams must live with this risk: there are still unknown unknowns that cannot be addressed or identified, and the only approach to take is to experiment, learn lessons, and adapt. **Practice Implications**: The SKA Project uses four key approaches for risk reduction at scale. First, the roll-out plan calls for the telescopes to be incrementally delivered in four Array Assemblies, with increasing numbers of receptors and validated capabilities; second, development will use a modified incremental and iterative approach to learn lessons by continually delivering some form of working software, even if the software does not go into production; third, designs use open source software where possible, to avoid lock-in and vendor restrictions (for example, the use of Tango middleware); finally, the project is prepared to take ownership when needed, in case the open source project maintainer quits or retires. **Research Implications**: There are several research implications of our risk reduction approach. One is how well large-scale agile, such as SAFe, works compared with earlier methodologies like the Spiral model or iterative and incremental development. Is large-scale agile simply a misnomer? How do you apply agile principles to long-lived projects? Software engineering has long struggled with how to build knowledge and confidence in a design approach before beginning implementation. Currently this is resolved by shortening the cycle time from design to production as much as possible, but in many projects, like SKA, cycle times are necessarily long. Tests in production do not work if the user base does not yet exist, and using synthetic data is not adequate, particularly in scientific data processing.
Model-driven approaches, most recently in the form of digital twins, promise to make such large-scale simulations possible. Facebook, for example, has a digital twin of its entire social graph in order to test new optimizations for its product offerings (Ahlgren et al., 2021). Digital twins are also proposed for mechatronic systems that combine hardware and software components. ``` #3 Risk Reduction for Long-lived Systems ``` **Problem**: 50-year planned lifetime, 10-20 year design timeline. Risk management easier for HW components. **Solution**: Use scenarios in ATAM to focus on risk themes and create risk registers. **Practice**: Use an iterative and adaptive approach focusing on standards and open source software. **Research**: Track & analyze success of large-scale iterative models. Virtualization & twinning may reduce risk of longer increments. ### Use of Scenario-based Methods Many of the architects on this project played the role of designer _and_ domain expert _and_ stakeholder, as eventual users or operators of the system. This led to the need to represent and distinguish diverse types of concerns, for example, technology tradeoffs, physics limitations, and quality attributes. In this large and distributed project, clearly communicating such concerns across design teams was a challenge. While software architects spend a significant portion of their time communicating (Kruchten, 2008), the relative inexperience of the architects and the diversity of the audience made the situation here challenging. For example, as discussed earlier, the SDP team had difficulty making core tradeoffs (such as at interface boundaries) visible to reviewers. Scenarios, described in §2.2, became an important tool for communication. The template leads an inexperienced architect to clearly frame an architectural concern, showing the need to make a decision or tradeoff, and providing a yardstick to measure each alternative. Each scenario was mapped to a business or mission goal, which defined the significance of the concern. The broad training discussed in §3.2 ensured that other designers would understand the framing of concerns as scenarios. In the SDP example, the team created scenarios to illustrate the impact of the tradeoffs, mapped each scenario to operational and science goals, and then used the scenarios to frame the tradeoffs to SKAO and reach consensus on technology decisions. The scenario-based MTW and ATAM methods embody fundamental steps: identify business/mission goals, elicit scenarios and map each to goals, prioritize and analyze the ability of the architecture to satisfy high-priority scenarios, identify risks, and feed back findings to iterate the steps. These steps were performed to communicate the architecture, as described in this section, and to evaluate the architecture, as described in §3.4. Although scenarios were a valuable tool to improve communication, the inexperienced architects still had to triage, prioritize, and make innumerable decisions. **Practice Implications**: Scenarios embody a common, accessible narrative that resonates with many stakeholders. However, prioritizing and triaging these scenarios is largely qualitative and highly dependent on who is surveyed. **Research Implications**: Methods are needed to guide less experienced architects, in particular guidance on how to sequence design decisions.
This project's context is particularly demanding, with several years between separate design and construction phases, creating a long feedback cycle before final evidence of design quality is available. Principles are needed to guide the breadth and depth of the analysis and evaluation activities, to make the process less dependent on the expertise of the analysts. ``` #4 Use of Scenario-based Methods ``` **Problem**: Inexperienced architects with diverse concerns. **Solution**: Drive communication via scenarios. **Practice**: Prioritization still a challenge. **Research**: Sequencing design decisions; shorten design quality feedback; reduce process dependence on experts. ## 6 Limitations As a single-case study, the work described here is specific to the context and circumstances of the SKAO project, as well as the expertise of the people involved. However, the general approach of applying scenarios to architecture design and analysis has been established in other settings. We feel the specific details of radio astronomy have several lessons for other software systems that feature high data volumes, long life spans, and complex organizational structures. Similarly, while we know the SEI approaches best (MTW, ATAM, etc.), other risk-driven, scenario-based approaches should work as well, such as the Rational Unified Process (Kruchten, 2004). None have the lengthy track record of the ones we describe here, however. Since the SKAO's telescopes are not yet operational, the designs and approach are speculative until realized; we do not claim that the software design is perfect but rather that the design is a reasonable one, and that any risks are acceptable. And, naturally, the design can change as unanticipated factors arise. ## 7 Related Work **Design and Architecture Approaches for Software Systems** have been extensively researched and numerous techniques, tested in practical settings, exist. A good survey of the human aspects is Tang et al. (2017). Maranzano et al. (2005) survey the practice of architecture reviews. For a perspective on system of systems approaches, Klein and van Vliet (2013) is a good place to begin. The ATAM and related work mentioned in this paper are articulated and combined in the book _Software Architecture in Practice_ (Bass et al., 2012). Bellomo et al. (2015) summarize the experiences of applying ATAM across several decades and hundreds of projects, finding _maintainability_ to be a key concern, as was the case for SKAO. Several experience reports describe how large-scale systems are analyzed, such as Rueckert et al. (2019), which looked at decision forces on the architecture of a large industrial control system, and Bucaioni et al. (2021), explaining how to align business goals and quality attributes for over-the-air updates of automotive software. Closely related to this study is the report from Cadavid et al. (2022), which explores system of systems issues in a different component of the SKA telescopes. Like us, they identify integration issues as a major challenge. **Design In Scientific Software**: For a more general background on software engineering for science we refer the reader to Heaton and Carver (2015) and Wilson et al. (2014), and the workshop series SE for Science.8 Footnote 8: [https://se4science.org/workshops/](https://se4science.org/workshops/) Most scientific computing, for example in epidemiology, energy system models, or biology, has important software components but relatively little dedicated architectural effort behind these components.
A significant amount of scientific software is focused on modeling and simulation, for example, of the Earth's climate. SKAO, by contrast, will be primarily a data collection and analysis platform, with results that may feed into scientific models (e.g., of how pulsars are formed). These modeling-oriented projects tend to evolve organically as the models themselves evolve, as described in Easterbrook and Johns (2009), as opposed to dedicated instrument software, as in SKAO, which is typically more structured. These efforts tend to lack dedicated and professionally trained software engineers who are engaged on a long-term (rather than annual contract) basis. Recently the term Research Software Engineer has been coined to describe this role (Prause et al., 2010). Several communities in science have pioneered disciplined _engineering_ of software systems. CERN, the European organization for nuclear research, has run a series of experiments based on world-leading software systems for decades (for example, Azzopardi et al., 2019). Easterbrook and Johns reported on the approach taken in climatology (Easterbrook and Johns, 2009) to build, maintain, and update the Hadley climate model code. On these projects, dedicated software engineers work closely with climate scientists to write the software, typically implemented in Fortran, to make the models efficient and scientifically accurate. In astronomy, the need to manage observation schedules, process large volumes of data, and distribute and analyze that data, has led to software of high quality with maintenance cycles that rival those of large industry leaders like Google or Facebook. The most relevant work consists of experience reports from other telescopes--the community is fairly small and lessons learned are widely shared. For example, the Atacama Large Millimetre Array (ALMA), engaged in similar science to the SKA Project, captured its experiences in reports describing how the software was created, challenges faced, and the approaches taken (Chavan et al., 2012; Marson and Hiriart, 2016). In addition, the ALMA project hosts open source software for managing the telescope scheduling, and many of those lessons (both good and bad) informed the SKAO's telescope designs, in some cases by having ALMA experts advise the project. There are, however, relatively few reports on applying architecture tradeoff analysis in the systematic way described here. In the systems engineering domain, Cadavid et al. (2021) conducted a series of interviews with the Low Frequency Array (LOFAR) radio telescope architects to examine the gap between software and systems engineers. Many of the SKA Project's challenges have been documented as part of the project's open development process (available at skatelescope.org) or in experience reports published in venues of the astronomy community. For example, Baffa et al. (2019) describe the telescope monitoring infrastructure experiments, and Bridger et al. (2017) report on the task scheduling design approach and investigation. In software engineering terminology, these experience reports and prototype experiments on telescope design elements are architecture spikes or tracer bullets (Hunt and Thomas, 1999) that uncover unknowns and buy down risk. ## 8 Conclusion The SKA Observatory is a unique, high-reward endeavor to see farther than humans have before.
Building the Observatory is a socio-technical challenge that requires managing complex system of systems problems, not least of which is the unprecedented scale of the data ingest, processing, and sharing. Hundreds of highly skilled people are involved. Applying tested architectural analysis approaches has reduced the known risks, uncovered some of the unknown risks, and prepared the software aspects of the Observatory for the construction and operations phases. In doing so, we reported on several interesting challenges, such as how to upskill domain experts in software engineering, and how to do software engineering over very long time scales. ## 9 Author Statement **Neil Ernst** Data curation, Formal Analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. **John Klein** Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Writing - original draft, Writing - review & editing. **Marco Bartolini** Data curation, Formal Analysis, Investigation, Project administration, Resources, Validation, Visualization, Writing - original draft, Writing - review & editing. **Nick Rees** Resources, Conceptualization, Funding acquisition, Validation, Writing - review & editing, Supervision. **Jeremy Coles** Writing - review & editing, Validation, Funding acquisition. ## Data Availability Statement The processed data informing some of the above findings are available to download from doi:10.5281/zenodo.7868987. The final documentation of the SDP CDR is available at [https://web.archive.org/web/20221202173301/https://ska-sdp.org/publications/sdp-cdr-closeout-documentation](https://web.archive.org/web/20221202173301/https://ska-sdp.org/publications/sdp-cdr-closeout-documentation). ## 10 Acknowledgements The authors thank the University of Cambridge and STFC for funding the SEI involvement in the SDP and the principals of the SDP design consortia--Paul Alexander in particular--for their continued support. Thanks also to all the participants in the workshops for their time and energy, and to Mary Popeck for her help in the second workshop. NR would also like to thank Peter Braam of Braam Research LLC for suggesting the SEI approach to him in the first place. Copyright 2022 Carnegie Mellon University, Marco Bartolini, Neil Ernst, and Nick Rees. References herein to any specific commercial product, process, or service by trade name, trade mark, manufacturer, or otherwise, do not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute. NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT. Architecture Tradeoff Analysis Method® is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University. DM22-0316
2310.09888
A mirror theorem for non-split toric bundles
We construct an I-function for toric bundles obtained as a fiberwise GIT quotient of a (not necessarily split) vector bundle. This is a generalization of Brown's I-function for split toric bundles and the I-function for non-split projective bundles. In order to prove the mirror theorem, we establish a characterization of points on the Givental Lagrangian cones of toric bundles and prove a mirror theorem for the twisted Gromov-Witten theory of a fiber product of projective bundles. The former result generalizes Brown's characterization for split toric bundles to the non-split case.
Yuki Koto
2023-10-15T17:01:27Z
http://arxiv.org/abs/2310.09888v3
# A mirror theorem for non-split toric bundles ###### Abstract. We construct an \(I\)-function for toric bundles obtained as a fiberwise GIT quotient of a (not necessarily split) vector bundle. This is a generalization of Brown's \(I\)-function for split toric bundles [5] and the \(I\)-function for non-split projective bundles [21]. In order to prove the mirror theorem, we establish a characterization of points on the Givental Lagrangian cones of toric bundles and prove a mirror theorem for the twisted Gromov-Witten theory of a fiber product of projective bundles. The former result generalizes Brown's characterization for split toric bundles [5] to the non-split case. ###### Contents * 1 Introduction * 2 Genus-zero Gromov-Witten theory * 2.1 Gromov-Witten invariant and its variants * 2.2 Givental Lagrangian cone * 2.3 Quantum Riemann-Roch theorem and twisted theory * 3 Toric bundles * 3.1 Construction * 3.2 Cohomology ring * 3.3 Effective curve classes * 3.4 Fixed loci and one-dimensional orbits * 4 Lagrangian cones of toric bundles * 4.1 Characterization theorem * 4.2 Decorated graphs and fixed stable maps * 4.3 Contribution of the \((\alpha,1)\)-type graphs * 4.4 Contribution of the \((\alpha,2)\)-type graphs * 4.5 Proof of Theorem 4.2 * 5 Mirror theorem for a product of projective bundles * 5.1 Statement * 5.2 Big \(J\)-function * 5.3 Quantum Riemann-Roch operator * 5.4 Proof of Theorem 5.1 * 6 Mirror theorem for toric bundles * 6.1 Main theorem * 6.2 Restrictions of \((I^{\lambda}_{V})^{\frown}\) * 6.3 Poles of \(\iota^{*}_{\alpha}(I^{\lambda}_{V})^{\frown}\) * 6.4 Recursion formula of \(\iota^{*}_{\alpha}(I^{\lambda}_{V})^{\frown}\) * A Equivariant Fourier transformation ## 1. Introduction The genus-zero Gromov-Witten theory of a smooth projective variety \(X\) plays a significant role in symplectic geometry, algebraic geometry and mirror symmetry. It can be studied by a _mirror theorem_[13], that is, by finding a convenient point (called an _I-function_) on the Givental Lagrangian cone \(\mathcal{L}_{X}\)[14]. The cone \(\mathcal{L}_{X}\) is a Lagrangian submanifold of an infinite-dimensional symplectic vector space \(\mathcal{H}_{X}\), called the Givental space, and is defined by genus-zero gravitational Gromov-Witten invariants. A mirror theorem for \(X\) enables us to compute genus-zero Gromov-Witten invariants of \(X\) and study quantum cohomology. The \(I\)-function for a smooth (semi-)projective toric variety \(X\)[13, 19, 8] can be explicitly described as a hypergeometric series associated with the fan defining \(X\). A relative version of the toric mirror theorem has also been extensively studied. Let \(\mathbb{L}\) be a free abelian group, \(D_{1},\ldots,D_{N}\) be elements of \(\mathbb{L}^{\vee}=\operatorname{Hom}(\mathbb{L},\mathbb{Z})\) and let \(\omega\in\mathbb{L}^{\vee}\otimes\mathbb{R}\). If the triple \(\mathsf{L}=(\mathbb{L}^{\vee},D,\omega)\) is smooth (in the sense of Definition 3.1), \(\mathsf{L}\) determines an embedding of tori \(\mathbb{K}=\operatorname{Hom}(\mathbb{L}^{\vee},\mathbb{C}^{\times})\hookrightarrow \mathbb{T}=(\mathbb{C}^{\times})^{N}\) and a smooth semi-projective toric variety \(X_{\mathsf{L}}=\mathbb{C}^{N}/\!\!/_{\omega}\mathbb{K}\) with a \(\mathbb{T}\)-action. For vector bundles \(V_{1},\ldots,V_{N}\) over a smooth projective variety \(B\), we write \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) for the toric bundle constructed as a fiberwise GIT quotient \((\bigoplus_{i=1}^{N}V_{i})/\!\!/_{\omega}\mathbb{K}\). 
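For instance (an illustrative special case, recorded here for orientation): taking \(\mathbb{L}=\mathbb{Z}\), \(D_{1}=\cdots=D_{N}=1\in\mathbb{L}^{\vee}\) and \(\omega>0\), the torus \(\mathbb{K}=\mathbb{C}^{\times}\) acts on \(\mathbb{C}^{N}\) with all weights equal to one, and \[X_{\mathsf{L}}=\mathbb{C}^{N}/\!\!/_{\omega}\,\mathbb{C}^{\times}=\mathbb{P}^{N-1},\qquad\mathbb{X}_{\mathsf{L}}(\vec{V})=\Big{(}\bigoplus_{i=1}^{N}V_{i}\Big{)}/\!\!/_{\omega}\,\mathbb{C}^{\times}=\mathbb{P}(V_{1}\oplus\cdots\oplus V_{N}),\] so the (possibly non-split) projective bundles of [21] are recovered as a special case of this construction.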
Note that \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is endowed with the \(\mathbb{T}\)-action induced by the diagonal \(\mathbb{T}\)-action on \(\bigoplus_{i=1}^{N}V_{i}\). When \(V_{1},\ldots,V_{N}\) are all line bundles, we call \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) a _split_ toric bundle. Brown [5] proved the mirror theorem for split toric bundles, which had been conjectured by Elezi [10] for the projective bundle case. Iritani and the author [21] constructed an \(I\)-function for non-split projective bundles. In this paper, we generalize these works to a (possibly non-split) toric bundle \(\mathbb{X}_{\mathsf{L}}(\vec{V})\). **Theorem 1.1** (Theorem 6.1).: _Let \(\mathsf{L}=(\mathbb{L}^{\vee},D,\omega)\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) be as above. We assume that the dual of the bundle \(V=\bigoplus_{i=1}^{N}V_{i}\to B\) is generated by global sections. Let \(\mathbb{T}\) act on \(V\) diagonally, and let \(I^{\lambda}_{V}\) be a point on the \(\mathbb{T}\)-equivariant Givental cone of \(V\) such that \(I^{\lambda}_{V}\) depends polynomially on the equivariant parameters \(\lambda_{1},\ldots,\lambda_{N}\) of \(\mathbb{T}\). Define the \(H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\)-valued function \((I^{\lambda}_{V})^{\frown}\) by_ \[(I^{\lambda}_{V})^{\frown}(z)=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\delta\colon\text{Chern roots of }V_{i}}(u_{i}+\delta+cz)}\cdot I_{V}^{u+D(\ell)z}\] _where \(\tilde{q}\) denotes the (extended) Novikov variable for the fiber and \(u_{i}\in H^{2}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) denotes the \(\mathbb{T}\)-equivariant class associated with \(D_{i}\). Then \(-z(I^{\lambda}_{V})^{\frown}(-z)\) represents a point on the \(\mathbb{T}\)-equivariant Givental Lagrangian cone \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\)._ The \(I\)-function \((I^{\lambda}_{V})^{\frown}\) can be interpreted as a Fourier transform of \(I^{\lambda}_{V}\): there is a conjectural correspondence between the \(\mathbb{T}\)-equivariant Gromov-Witten theory of \(V\) and the Gromov-Witten theory of the GIT quotient, which identifies the equivariant parameter with a quantum connection and a shift operator with an additional Novikov variable. From this conjecture, we can derive a Fourier transform of Givental cones [21, 20]. In Appendix A, we will introduce its \(\mathbb{T}\)-equivariant counterpart and check that the \(I\)-function \((I^{\lambda}_{V})^{\frown}\) coincides with the Fourier transformation of \(I^{\lambda}_{V}\). Using Theorem 1.1, we can derive \(I\)-functions that are already known: Brown's \(I\)-function for split toric bundles [5], the extended \(I\)-function [8] (for toric varieties), and the \(I\)-function for projective bundles [21]. To prove Theorem 1.1, we establish a characterization of points on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) in the style of Givental and Brown, that is, we characterize them in terms of the restrictions to the fixed loci. We write \(F_{\mathsf{L}}\) for the set of \(\mathbb{T}\)-fixed points on \(X_{\mathsf{L}}\). Since the torus \(\mathbb{T}\) acts on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) fiberwise, for any \(\alpha\in F_{\mathsf{L}}\) we can construct the \(\mathbb{T}\)-fixed locus \(\iota_{\alpha}\colon\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\hookrightarrow\mathbb{X}_{\mathsf{L}}(\vec{V})\) by gathering the points corresponding to \(\alpha\) in each fiber, and hence the \(\mathbb{T}\)-fixed loci of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) can be indexed by \(F_{\mathsf{L}}\). It can be easily seen that \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) is a fiber product of projective bundles over \(B\), and \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\cong B\) for any \(\alpha\in F_{\mathsf{L}}\) if \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is a split toric bundle.
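Continuing the projective bundle illustration above: for \(\mathbb{X}_{\mathsf{L}}(\vec{V})=\mathbb{P}(V_{1}\oplus\cdots\oplus V_{N})\), the \(\mathbb{T}\)-fixed points of the fiber \(\mathbb{P}^{N-1}\) are the \(N\) coordinate points, so \(F_{\mathsf{L}}=\{1,\ldots,N\}\) and \[\mathbb{X}_{\mathsf{L}}(\vec{V})_{i}=\mathbb{P}(V_{i})\subset\mathbb{P}(V_{1}\oplus\cdots\oplus V_{N})\qquad(1\leq i\leq N),\] a single projective bundle over \(B\); in the split case each \(V_{i}\) is a line bundle, so that \(\mathbb{P}(V_{i})\cong B\), as stated.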
We can recover a point \(\mathbf{f}\) on the Givental space \(\mathcal{H}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) from its restrictions \(\{\iota_{\alpha}^{*}\mathbf{f}\}_{\alpha\in F_{\mathsf{L}}}\) by the (classical) localization formula [2, 4]: \(\mathbf{f}=\sum_{\alpha\in F_{\mathsf{L}}}\iota_{\alpha*}\big{(}\iota_{\alpha}^{*}\mathbf{f}/e_{\mathbb{T}}(N_{\alpha})\big{)}\), where \(N_{\alpha}\) denotes the normal bundle of \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\). The following theorem provides an equivalent condition for \(\mathbf{f}\) to be a point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) in terms of \(\{\iota_{\alpha}^{*}\mathbf{f}\}_{\alpha\in F_{\mathsf{L}}}\). **Theorem 1.4** (Theorem 4.2).: _Let \(\mathbf{f}\) be a point on the Givental space \(\mathcal{H}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) for the total space \(\mathbb{X}_{\mathsf{L}}(\vec{V})\). The point \(\mathbf{f}(z)\) lies in \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) if and only if \(\{\iota_{\alpha}^{*}\mathbf{f}\}_{\alpha\in F_{\mathsf{L}}}\) satisfies the following three conditions:_ * **(C1)** _for each_ \(\alpha\in F_{\mathsf{L}}\)_, the set of poles of_ \(\iota_{\alpha}^{*}\mathbf{f}(z)\) _as a function in_ \(z\) _is contained in a specific subset of_ \(H_{\mathbb{T}}^{*}(\mathrm{pt},\mathbb{Q})\) _determined by the triple_ \(\mathsf{L}=(\mathbb{L}^{\vee},D,\omega)\)_;_ * **(C2)** _the principal parts of the functions_ \(\iota_{\alpha}^{*}\mathbf{f}(z)\) \((\alpha\in F_{\mathsf{L}})\) _satisfy certain recursion formulas;_ * **(C3)** \(\iota_{\alpha}^{*}\mathbf{f}(z)\) _represents a point on the Givental cone of the fixed locus_ \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) _twisted by the normal bundle and the inverse Euler class._ This is a generalization of Brown's result [5, Theorem 2], which gives the same characterization for split toric bundles. There are also similar characterization results for other varieties/stacks; see [8, 23, 11]. Thanks to this characterization theorem, we can prove Theorem 1.1 by checking that the function \(-z(I^{\lambda}_{V})^{\frown}(-z)\) satisfies the three conditions. Note that we can confirm that the function fulfills **(C1)** and **(C2)** through a direct calculation. The verification of **(C3)** requires another mirror theorem for the twisted Gromov-Witten theory of \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\), a fiber product of projective bundles over \(B\). This is a new issue in the non-split case since \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\cong B\) if \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is a split bundle. Let \(V_{1},\ldots,V_{K},V_{K+1},\ldots,V_{N}\) be vector bundles over a smooth projective variety \(B\), and let \(E\) be the fiber product of the projective bundles \(\mathbb{P}(V_{1}),\ldots,\mathbb{P}(V_{K})\) over \(B\). Let \(\mathbb{L}\cong\mathbb{Z}^{K}\) and let \(D_{1},\ldots,D_{N}\in\mathbb{L}^{\vee}\) be such that \(\{D_{i}\}_{i=1}^{K}\) forms a basis of \(\mathbb{L}^{\vee}\). We set \(\mathcal{W}\) to be the vector bundle \(\bigoplus_{i=K+1}^{N}V_{i}\otimes\mathcal{O}_{E}(D_{i})\) over \(E\) with \(\mathcal{O}_{E}(D_{i})=\mathcal{O}_{\mathbb{P}(V_{1})}(a_{i,1})\boxtimes_{B}\cdots\boxtimes_{B}\mathcal{O}_{\mathbb{P}(V_{K})}(a_{i,K})\), where \(D_{i}=\sum_{j=1}^{K}a_{i,j}D_{j}\). Note that \(\mathcal{W}\) is a fiberwise GIT quotient of \(V=\bigoplus_{i=1}^{N}V_{i}\), and is endowed with the fiberwise \(\mathbb{T}\)-action induced from that on \(V\). The bundle \(\mathcal{W}\) arises as a normal bundle to the fixed loci of toric bundles. The following theorem implies that \(-z(I^{\lambda}_{V})^{\frown}(-z)\) satisfies **(C3)**.
**Theorem 1.5** (Theorem 5.1).: _Assume that the dual of the bundle \(\bigoplus_{i=1}^{N}V_{i}\) is generated by global sections. Let \(I_{V}^{\lambda}\) be as in Theorem 1.1. Define the \(H_{\mathbb{T}}^{\ast}(E)\)-valued function \((I_{V}^{\lambda})^{\widehat{\mathrm{tw}}}\) by_ \[(I_{V}^{\lambda})^{\widehat{\mathrm{tw}}}(z)=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\delta\colon\text{Chern roots of }V_{i}}(u_{i}+\delta+cz)}\cdot I_{V}^{u+D(\ell)z}\] _where \(\tilde{q}\) denotes the (extended) Novikov variable for the fiber, and \(u_{i}\in H_{\mathbb{T}}^{2}(E)\) is defined as_ \[u_{i}=\begin{cases}c_{1}(\mathcal{O}_{\mathbb{P}(V_{i})}(1))&1\leq i\leq K,\\ -\lambda_{i}+\sum_{j=1}^{K}a_{i,j}(u_{j}+\lambda_{j})&K+1\leq i\leq N.\end{cases}\] _Then, \(-z(I_{V}^{\lambda})^{\widehat{\mathrm{tw}}}(-z)\) represents a point on the \((\mathcal{W},e_{\mathbb{T}}^{-1})\)-twisted Givental Lagrangian cone of \(E\)._ **Remark 1.6**.: In Theorem 5.1 (and Section 5), we will use notation different from that used here. There, we write \(W_{i}\) for the bundle denoted \(V_{i}\) (\(K+1\leq i\leq N\)) here, in order to distinguish between the vector bundles \(V_{1},\dots,V_{K}\) giving the variety \(E\) and those \(V_{K+1},\dots,V_{N}\) providing the twist data \((\mathcal{W},e_{\mathbb{T}}^{-1})\). Additionally, we will denote the torus acting on \(V\) as \(\mathbb{T}^{\prime}\) to distinguish it from the torus \(\mathbb{T}\) acting on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\); see Remark 5.3. This result is a straightforward generalization of the mirror theorem for non-split projective bundles [21, Theorem 3.3]. The key ingredients of the proof are the quantum Riemann-Roch theorem [9, Corollary 4] and the well-known fact [24] that Gromov-Witten invariants of the zero locus of a regular section of a convex vector bundle over a variety \(X\) are given by twisted Gromov-Witten invariants of \(X\). The plan of the paper is as follows. In Section 2, we recall the definition of Gromov-Witten invariants, and introduce the non-equivariant/equivariant/twisted Givental cones and the quantum Riemann-Roch theorem. In Section 3, we introduce the notion of split/non-split toric bundles, and summarize the structure of cohomology and the semigroups generated by effective curve classes, which will be needed in the subsequent sections. In Section 4, we establish a characterization theorem (Theorem 4.2) for points on the Lagrangian cone of a toric bundle. In Section 5, we prove a mirror theorem for twisted Gromov-Witten theory of a fiber product of projective bundles over \(B\). In Section 6, we prove the main result (Theorem 6.1) of this paper, that is, a mirror theorem for (possibly non-split) toric bundles. In Appendix A, we briefly explain a Fourier transform of Givental cones, and check that our \(I\)-function coincides with the Fourier transform of the \(I\)-function of a vector bundle. ### Acknowledgements The author is deeply grateful to Hiroshi Iritani for his guidance and enthusiastic support during the writing of this paper. He also would like to thank Yuan-Pin Lee and Fumihiko Sanda for very helpful discussions. This work was supported by JSPS KAKENHI Grant Number 22KJ1717. ## 2. Genus-zero Gromov-Witten theory In this section, we briefly recall the (torus-equivariant/twisted) genus-zero Gromov-Witten theory. We will introduce Gromov-Witten invariants, Givental Lagrangian cones and the quantum Riemann-Roch theorem.
### Gromov-Witten invariant and its variants We recall the definition of Gromov-Witten invariants. We also introduce a torus-equivariant version and a twisted version of these invariants. #### 2.1.1. Gromov-Witten invariants Let \(X\) be a smooth projective variety over \(\mathbb{C}\). For \(d\in H_{2}(X,\mathbb{Z})\) and a non-negative integer \(n\), let \(X_{0,n,d}\) be the moduli space of degree-\(d\) stable maps to \(X\) from genus-zero curves with \(n\) marked points. It is endowed with the evaluation maps \(\mathrm{ev}_{i}\colon X_{0,n,d}\to X\) (\(1\leq i\leq n\)). For \(\alpha_{1},\ldots,\alpha_{n}\in H^{*}(X)\) and \(k_{1},\ldots,k_{n}\in\mathbb{Z}_{\geq 0}\), we define _genus-zero descendant Gromov-Witten invariants_ by \[\Big{\langle}\alpha_{1}\psi^{k_{1}},\ldots,\alpha_{n}\psi^{k_{n}}\Big{\rangle}_{0,n,d}^{X}:=\int_{[X_{0,n,d}]^{\mathrm{vir}}}\prod_{i=1}^{n}\mathrm{ev}_{i}^{*}(\alpha_{i})\psi_{i}^{k_{i}},\] where \([X_{0,n,d}]^{\mathrm{vir}}\in H_{*}(X_{0,n,d},\mathbb{Q})\) is the virtual fundamental class [3] and \(\psi_{i}\) is the \(\psi\)-class, which is the first Chern class of the \(i\)-th universal cotangent line bundle over \(X_{0,n,d}\). #### 2.1.2. Virtual localization Let \(\mathbb{T}\) be an algebraic torus and \(X\) be a smooth projective variety endowed with a \(\mathbb{T}\)-action. Let \(F_{1},\ldots,F_{m}\) be the connected components of \(X_{0,n,d}^{\mathbb{T}}\), and let \(\iota_{F_{i}}\) be the embedding \(F_{i}\hookrightarrow X_{0,n,d}\) for \(1\leq i\leq m\). For \(\xi\in F_{i}\), let \(T^{1}\) be the tangent space and \(T^{2}\) be the obstruction space at \(\xi\). These spaces are naturally equipped with a \(\mathbb{T}\)-action, and the action induces a decomposition of them into the fixed parts \(T^{j,\mathrm{fix}}\) and the moving parts \(T^{j,\mathrm{mov}}\). By collecting the spaces \(T^{j,\mathrm{mov}}\) we obtain the bundle \(\mathcal{T}^{j,\mathrm{mov}}_{F_{i}}\) over \(F_{i}\). We define the _virtual normal bundle of \(F_{i}\)_ as \(N^{\mathrm{vir}}_{F_{i}}=\mathcal{T}^{1,\mathrm{mov}}_{F_{i}}\ominus\mathcal{T}^{2,\mathrm{mov}}_{F_{i}}\). **Theorem 2.1** ([15]).: _In the above setting, we have_ \[[X_{0,n,d}]^{\mathrm{vir}}=\sum_{i=1}^{m}\iota_{F_{i}*}\left(\frac{[F_{i}]^{\mathrm{vir}}}{e_{\mathbb{T}}(N^{\mathrm{vir}}_{F_{i}})}\right).\] _In particular, for any \(\phi\in H^{*}_{\mathbb{T}}(X_{0,n,d},\mathbb{C})\), we have_ \[\int_{[X_{0,n,d}]^{\mathrm{vir}}}^{\mathbb{T}}\phi=\sum_{i=1}^{m}\int_{[F_{i}]^{\mathrm{vir}}}^{\mathbb{T}}\frac{\iota_{F_{i}}^{*}\phi}{e_{\mathbb{T}}(N^{\mathrm{vir}}_{F_{i}})}.\] #### 2.1.3. Equivariant Gromov-Witten invariants We discuss Gromov-Witten invariants in the equivariant setting. Let \(X\) be a smooth variety, and assume that \(X\) is semi-projective, that is, \(X\) is projective over an affine variety. Let \(\mathbb{T}\) be an algebraic torus and consider a \(\mathbb{T}\)-action on \(X\) whose fixed point set is projective. This action naturally induces a \(\mathbb{T}\)-action on the moduli space \(X_{0,n,d}\).
For \(\alpha_{1},\ldots,\alpha_{n}\in H^{*}_{\mathbb{T}}(X)\) and \(k_{1},\ldots,k_{n}\in\mathbb{Z}_{\geq 0}\), we define _\(\mathbb{T}\)-equivariant genus-zero descendant Gromov-Witten invariants_ by \[\Big{\langle}\alpha_{1}\psi^{k_{1}},\ldots,\alpha_{n}\psi^{k_{n}}\Big{\rangle}_{0,n,d}^{X,\mathbb{T}}:=\int_{[X_{0,n,d}]^{\mathrm{vir}}}^{\mathbb{T}}\prod_{i=1}^{n}\mathrm{ev}_{i}^{*}(\alpha_{i})\psi_{i}^{k_{i}}.\] Here we define the right-hand side via the virtual localization formula (Theorem 2.1), and hence it belongs to the fraction field \(\mathrm{Frac}(H^{*}_{\mathbb{T}}(\mathrm{pt}))\). When \(X\) is projective, it can be computed without the localization formula and belongs to \(H^{*}_{\mathbb{T}}(\mathrm{pt})\). #### 2.1.4. Twisted Gromov-Witten invariants Let \(\mathbb{T}\) be an algebraic torus. For any \(\chi\in H^{2}_{\mathbb{T}}(\operatorname{pt},\mathbb{Z})\setminus 0\), we introduce the following four characteristic classes \(e_{\chi},e_{\chi}^{-1},\tilde{e}_{\chi},\tilde{e}_{\chi}^{-1}\): \[e_{\chi}(V)=\sum_{i=0}^{\operatorname{rank}(V)}\chi^{\operatorname{rank}(V)-i}c_{i}(V),\qquad e_{\chi}^{-1}(V)=(e_{\chi}(V))^{-1},\] \[\tilde{e}_{\chi}(V)=\chi^{-\operatorname{rank}(V)}e_{\chi}(V)=\sum_{i=0}^{\operatorname{rank}(V)}\chi^{-i}c_{i}(V),\qquad\tilde{e}_{\chi}^{-1}(V)=(\tilde{e}_{\chi}(V))^{-1}\] where \(V\) is a vector bundle over any topological space. We note that, if \(\mathbb{T}\) acts on \(V\) fiberwise via the character \(\chi\colon\mathbb{T}\to\mathbb{C}^{\times}\), the class \(e_{\chi}(V)\) coincides with the \(\mathbb{T}\)-equivariant Euler class \(e_{\mathbb{T}}(V)\). Let \(X\) be a smooth projective variety which is endowed with a trivial \(\mathbb{T}\)-action. Let \(W_{i}\), \(i=1,\dots,N\), be vector bundles over \(X\) and let \(\mathbf{c}^{i}\), \(i=1,\dots,N\), be one of the characteristic classes \(e_{\chi_{i}},e_{\chi_{i}}^{-1},\tilde{e}_{\chi_{i}},\tilde{e}_{\chi_{i}}^{-1}\) associated with a character \(\chi_{i}\colon\mathbb{T}\to\mathbb{C}^{\times}\). The collection \[(W_{1},\mathbf{c}^{1}),\dots,(W_{N},\mathbf{c}^{N})\] is referred to as _twist data_. We sometimes use the vector symbol \((\vec{W},\vec{\mathbf{c}})\) as an abbreviation for the twist data above. Let \(\pi\colon X_{0,n+1,d}\to X_{0,n,d}\) be the map forgetting the \((n+1)\)-st point and \(\operatorname{ev}_{n+1}\colon X_{0,n+1,d}\to X\) be the \((n+1)\)-st evaluation map. Note that these maps give the universal family of stable maps. For a vector bundle \(W\) over \(X\), we define \[W_{0,n,d}:=\mathbb{R}\pi_{*}\operatorname{ev}_{n+1}^{*}W\in K_{\mathbb{T}}^{0}(X_{0,n,d})\] where \(\mathbb{R}\pi_{*}\) denotes the \(K\)-theoretic pushforward. For \(\alpha_{1},\dots,\alpha_{n}\in H^{*}_{\mathbb{T}}(X)\) and \(k_{1},\dots,k_{n}\in\mathbb{Z}_{\geq 0}\), we define _genus-zero descendant Gromov-Witten invariants twisted by \((W_{1},\mathbf{c}^{1}),\dots,(W_{N},\mathbf{c}^{N})\)_ [9] by \[\left\langle\alpha_{1}\psi^{k_{1}},\dots,\alpha_{n}\psi^{k_{n}}\right\rangle_{0,n,d}^{X,(\vec{W},\vec{\mathbf{c}})}:=\int_{[X_{0,n,d}]^{\operatorname{vir}}}^{\mathbb{T}}\left(\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(\alpha_{i})\psi_{i}^{k_{i}}\right)\cdot\left(\prod_{i=1}^{N}\mathbf{c}^{i}((W_{i})_{0,n,d})\right),\] which take values in \(\operatorname{Frac}(H^{*}_{\mathbb{T}}(\operatorname{pt}))\). We will discuss in Subsection 2.3 the relationship between the twisted theory and the untwisted theory in terms of their Lagrangian cones, which are introduced in the next subsection.
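As a quick illustration of the four characteristic classes introduced above (a direct computation from the definitions, recorded here for convenience): for a bundle \(W\) of rank two, \[e_{\chi}(W)=\chi^{2}+\chi\,c_{1}(W)+c_{2}(W),\qquad\tilde{e}_{\chi}(W)=1+\chi^{-1}c_{1}(W)+\chi^{-2}c_{2}(W),\] and inverting the latter in the \(\chi^{-1}\)-adic sense gives \(\tilde{e}_{\chi}^{-1}(W)=1-\chi^{-1}c_{1}(W)+\chi^{-2}(c_{1}(W)^{2}-c_{2}(W))-\cdots\), a finite expression since the classes \(c_{i}(W)\) are nilpotent.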
Finally, we introduce the notation \((W,e_{\mathbb{T}}^{\pm 1})\). This is useful when we treat the twist data coming from a \(\mathbb{T}\)-vector bundle \(W\). **Definition 2.2**.: Let \(W\) be a vector bundle over \(X\) with a fiberwise \(\mathbb{T}\)-action such that there are no \(\mathbb{T}\)-fixed non-zero vectors. We write \(W=\bigoplus_{i=1}^{N}W_{i}\) for the weight decomposition, and let \(\chi_{i}\in H^{2}_{\mathbb{T}}(\operatorname{pt})\setminus 0\) be the character associated with the \(\mathbb{T}\)-action on \(W_{i}\). We define \((W,e_{\mathbb{T}}^{\pm 1})\) to be the twist data \((W_{1},e_{\chi_{1}}^{\pm 1}),\dots,(W_{N},e_{\chi_{N}}^{\pm 1})\). ### Givental Lagrangian cone Following [14, 9], we introduce a symplecto-geometric formulation of genus-zero Gromov-Witten theory. In this formulation, many properties of Gromov-Witten invariants are translated into geometric properties of a certain Lagrangian cone in an infinite-dimensional symplectic vector space. We will introduce three variations of Lagrangian cones. #### 2.2.1. Non-equivariant case Let \(X\) be a smooth projective variety. Let \((\mathcal{H}_{X},\Omega)\) be a symplectic vector space: \[\mathcal{H}_{X}:=H^{*}(X)[z,z^{-1}][\![\operatorname{Eff}(X)]\!],\qquad\Omega(f,g):=-\operatorname{Res}_{z=\infty}\left(\int_{X}f(-z)\cup g(z)\right)dz\qquad\text{for }f,g\in\mathcal{H}_{X}\] where \(\operatorname{Eff}(X)\subset H_{2}(X,\mathbb{R})\)1 denotes the intersection of the cone generated by effective curve classes and the image of \(H_{2}(X,\mathbb{Z})\to H_{2}(X,\mathbb{R})\), and \(\mathbb{C}[\![\operatorname{Eff}(X)]\!]\) is the completion of \(\mathbb{C}[\operatorname{Eff}(X)]\) with respect to the additive valuation \(v\) induced by the semigroup homomorphism \(\omega\colon\operatorname{Eff}(X)\to\mathbb{Z}\) given by a Kähler form \(\omega\in H^{2}(X,\mathbb{Z})\): Footnote 1: \(\operatorname{Eff}(X)\) contains the image of the semigroup in \(H_{2}(X,\mathbb{Z})\) generated by the effective curve classes under the map \(H_{2}(X,\mathbb{Z})\to H_{2}(X,\mathbb{R})\). In general, \(\operatorname{Eff}(X)\) does not equal the image. \[\omega\colon\operatorname{Eff}(X)\to\mathbb{Z},\qquad d\mapsto\omega(d):=\int_{d}\omega,\qquad v\left(\sum_{d\in\operatorname{Eff}(X)}c_{d}Q^{d}\right)=\min_{d\colon c_{d}\neq 0}\omega(d)\] where \(Q\) denotes a formal variable for the group ring \(\mathbb{C}[\operatorname{Eff}(X)]\) called the _Novikov variable_. There is a standard polarization \(\mathcal{H}_{X}=\mathcal{H}_{+}\oplus\mathcal{H}_{-}\), where \[\mathcal{H}_{+}:=H^{*}(X)[z][\![\operatorname{Eff}(X)]\!],\qquad\mathcal{H}_{-}:=z^{-1}H^{*}(X)[z^{-1}][\![\operatorname{Eff}(X)]\!].\] These are \(\Omega\)-isotropic subspaces, and \(\mathcal{H}_{X}\) can be identified with the cotangent bundle \(T^{*}\mathcal{H}_{+}\). Let \(\mathbf{t}(z)=\sum_{i\geq 0}t_{i}z^{i}\in H^{*}(X)[z]\). We define the _genus-zero descendant potential_ \(\mathcal{F}^{0}_{X}\) as \[\mathcal{F}^{0}_{X}(\mathbf{t}):=\sum_{\begin{subarray}{c}n\geq 0,\,d\in\operatorname{Eff}(X)\\ (n,d)\neq(0,0),(1,0),(2,0)\end{subarray}}\frac{Q^{d}}{n!}\,\langle\mathbf{t}(\psi),\dots,\mathbf{t}(\psi)\rangle^{X}_{0,n,d}\,.\] Under the shift \(\mathbf{q}(z)=\mathbf{t}(z)-\mathbf{1}z\), which is called the _dilaton shift_, we consider \(\mathcal{F}^{0}_{X}\) as a formal function on \(\mathcal{H}_{+}\) in a formal neighborhood of \(-\mathbf{1}z\in\mathcal{H}_{+}\).
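As a minimal example of the Novikov completion above (added for concreteness): for \(X=\mathbb{P}^{1}\) with \(\omega\) the hyperplane class, \(\operatorname{Eff}(X)=\mathbb{Z}_{\geq 0}[\mathbb{P}^{1}]\) and \(v(Q^{d})=d\), so that \[\mathbb{C}[\![\operatorname{Eff}(X)]\!]\cong\mathbb{C}[\![Q]\!],\] the ring of formal power series in a single Novikov variable.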
The Givental Lagrangian cone \(\mathcal{L}_{X}\) [14] is a formal germ of a Lagrangian submanifold of \(\mathcal{H}_{X}\) defined as the graph of the differential of \(\mathcal{F}^{0}_{X}\). We can give an explicit description for points on \(\mathcal{L}_{X}\). Let \(\{\phi_{i}\}_{i\in I}\) be a \(\mathbb{C}\)-basis of \(H^{*}(X)\) and \(\{\phi^{i}\}_{i\in I}\) be the dual basis with respect to the Poincaré pairing. We let \(S\) be a semigroup such that there exist a semigroup homomorphism \(i\colon\operatorname{Eff}(X)\hookrightarrow S\) and a valuation \(v_{S}\colon\mathbb{C}[S]\to\mathbb{Z}_{\geq 0}\) which extends the valuation \(v\) on \(\mathbb{C}[\operatorname{Eff}(X)]\) via the inclusion \(i\). In this situation, for any ring \(R\) we can define the completion \(R[\![S]\!]\) of \(R[S]\) by using the valuation \(v_{S}\). A \(\mathbb{C}[\![S]\!]\)-valued point on \(\mathcal{L}_{X}\) is of the form \[-\mathbf{1}z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,\,d\in\operatorname{Eff}(X)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\sum_{i\in I}\frac{Q^{d}}{n!}\left\langle\frac{\phi_{i}}{-z-\psi},\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{0,n+1,d}^{X}\phi^{i}\] with \(\mathbf{t}(z)\in\mathcal{H}_{+}\otimes_{\mathbb{C}[z][\![\operatorname{Eff}(X)]\!]}\mathbb{C}[z][\![S]\!]\) satisfying \(\mathbf{t}(z)|_{Q=0}=0\). Here we expand \(1/(-z-\psi)\) at \(z=\infty\): \[\frac{1}{-z-\psi}=\sum_{k\geq 0}(-z)^{-k-1}\psi^{k}.\] Since \(\psi_{1}\) is nilpotent, this form indeed belongs to \(\mathcal{H}_{X}\otimes_{\mathbb{C}[z][\![\operatorname{Eff}(X)]\!]}\mathbb{C}[z][\![S]\!]\). **Example 2.3**.: We give a typical example of \(S\). Set \(S=\operatorname{Eff}(X)\oplus(\mathbb{Z}_{\geq 0})^{\mathbb{N}}\) and write \(\{e_{n}\}_{n\in\mathbb{N}}\) for the standard basis of \((\mathbb{Z}_{\geq 0})^{\mathbb{N}}\). Define \(\omega_{S}\colon S\to\mathbb{Z}_{\geq 0}\) as a semigroup homomorphism satisfying that \(\omega_{S}|_{\operatorname{Eff}(X)}\) coincides with the homomorphism \(\omega\colon\operatorname{Eff}(X)\to\mathbb{Z}_{\geq 0}\) and \(\omega_{S}(e_{i})=i\). In this case, we have \[\mathbb{C}[\![S]\!]=\mathbb{C}[\![\operatorname{Eff}(X)]\!][\![t_{1},t_{2},\cdots]\!]\] where \(t_{i}\) denotes a formal variable for \(\mathbb{C}[S]\) associated with \(e_{i}\in S\). **Remark 2.4**.: The Lagrangian cone \(\mathcal{L}_{X}\) (and its variants introduced in the following subsections) can be formulated as a formal scheme [7]. However, in this paper, it is sufficient to consider a set of valued points on \(\mathcal{L}_{X}\). #### 2.2.2. \(\mathbb{T}\)-equivariant case We let \(\mathbb{T}\) be an algebraic torus, and let \(X\) be a smooth semi-projective variety endowed with a \(\mathbb{T}\)-action whose fixed point set is projective. We will define the Lagrangian cone in a similar way. We let \(\lambda_{1},\ldots,\lambda_{N}\) denote the equivariant parameters and write \(\mathbb{C}[\lambda]\) and \(\mathbb{C}(\lambda)\) for \(H_{\mathbb{T}}^{*}(\operatorname{pt})\) and \(\operatorname{Frac}(H_{\mathbb{T}}^{*}(\operatorname{pt}))\) respectively.
We set \[\mathcal{H}_{X,\mathbb{T}}:=H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}(\!(z^{-1})\!)[\![\operatorname{Eff}(X)]\!],\qquad\mathcal{H}_{+}:=H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}[z][\![\operatorname{Eff}(X)]\!],\qquad\mathcal{H}_{-}:=z^{-1}H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}[\![z^{-1}]\!][\![\operatorname{Eff}(X)]\!],\] \[\Omega(f,g):=-\operatorname{Res}_{z=\infty}\left(\int_{X}^{\mathbb{T}}f(-z)\cup g(z)\right)dz\qquad\text{for }f,g\in\mathcal{H}_{X,\mathbb{T}}\] where \(H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}:=H_{\mathbb{T}}^{*}(X)\otimes_{\mathbb{C}[\lambda]}\mathbb{C}(\lambda)\). There is a standard polarization \(\mathcal{H}_{X,\mathbb{T}}=\mathcal{H}_{+}\oplus\mathcal{H}_{-}\), and we identify \(\mathcal{H}_{X,\mathbb{T}}\) with \(T^{*}\mathcal{H}_{+}\). Let \(\mathbf{t}(z)=\sum_{i\geq 0}t_{i}z^{i}\in H_{\mathbb{T}}^{*}(X)[z]\). We define the _\(\mathbb{T}\)-equivariant genus-zero descendant potential_ \(\mathcal{F}_{X,\mathbb{T}}^{0}\) as \[\mathcal{F}_{X,\mathbb{T}}^{0}(-\mathbf{1}z+\mathbf{t}(z)):=\sum_{\begin{subarray}{c}n\geq 0,\,d\in\operatorname{Eff}(X)\\ (n,d)\neq(0,0),(1,0),(2,0)\end{subarray}}\frac{Q^{d}}{n!}\left\langle\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{0,n,d}^{X,\mathbb{T}},\] which is defined over a formal neighborhood of \(-\mathbf{1}z\in\mathcal{H}_{+}\). The equivariant Givental Lagrangian cone \(\mathcal{L}_{X,\mathbb{T}}\) is defined as the graph of the differential of \(\mathcal{F}_{X,\mathbb{T}}^{0}\). Let \(\{\phi_{i}\}_{i\in I}\) be a basis of \(H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}\) over \(\operatorname{Frac}(H_{\mathbb{T}}^{*}(\operatorname{pt}))\), \(\{\phi^{i}\}\) be the dual basis with respect to the Poincaré pairing, and let \(S\) and \(\mathbb{C}[\![S]\!]\) be as in the previous subsection. A \(\mathbb{C}(\lambda)[\![S]\!]\)-valued point on \(\mathcal{L}_{X,\mathbb{T}}\) is of the form \[-\mathbf{1}z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,\,d\in\operatorname{Eff}(X)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\sum_{i\in I}\frac{Q^{d}}{n!}\left\langle\frac{\phi_{i}}{-z-\psi},\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{0,n+1,d}^{X,\mathbb{T}}\phi^{i} \tag{2.1}\] with \(\mathbf{t}(z)\in\mathcal{H}_{+}\otimes_{\mathbb{C}(\lambda)[z][\![\operatorname{Eff}(X)]\!]}\mathbb{C}(\lambda)[z][\![S]\!]\) satisfying \(\mathbf{t}(z)|_{Q=0}=0\). We note that since \(\psi_{1}\) is not nilpotent in general, this form may not be an element of \(H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}[z,z^{-1}][\![S]\!]\). The semi-projectivity of \(X\) implies that \(\operatorname{ev}_{1}\) is proper, and it holds that \[\sum_{i\in I}\left\langle\frac{\phi_{i}}{-z-\psi},\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{0,n+1,d}^{X,\mathbb{T}}\phi^{i}=\operatorname{ev}_{1*}\left[\frac{\prod_{i=2}^{n+1}\operatorname{ev}_{i}^{*}\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cap[X_{0,n+1,d}]^{\operatorname{vir}}\right].\] From this equation we can see that the form (2.1) lies in \(H_{\mathbb{T}}^{*}(X)(\!(z^{-1})\!)[\![S]\!]\) if \(\mathbf{t}(z)\in H_{\mathbb{T}}^{*}(X)[z][\![S]\!]\). **Remark 2.5**.: By interpreting \(z\) as an equivariant parameter of \(\mathbb{C}^{\times}\) and considering the \(\mathbb{T}\times\mathbb{C}^{\times}\)-equivariant Gromov-Witten theory of \(X\) (where the second factor \(\mathbb{C}^{\times}\) acts trivially on \(X\)), we can interpret the form (2.1) as an element of \(H_{\mathbb{T}\times\mathbb{C}^{\times}}^{*}(X)_{\operatorname{loc}}[\![S]\!]\).
If we take the Laurent expansion at \(z=\infty\), we can recover (2.1). In Section 4, we will consider the Laurent expansion of the form at \(z=0\). #### 2.2.3. Twisted case We take \(X\), \(\mathbb{T}\) and the data \((\vec{W},\vec{\mathbf{c}})\) as in Subsection 2.1.4, and consider the \((\vec{W},\vec{\mathbf{c}})\)-twisted theory. The construction is almost the same as the previous ones. We set \[\mathcal{H}_{X,(\vec{W},\vec{\mathbf{c}})} := H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}(\!(z)\!)[\![\operatorname {Eff}(X)]\!],\] \[\mathcal{H}_{+} := H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}[\![z]\!][\![ \operatorname{Eff}(X)]\!],\] \[\mathcal{H}_{-} := z^{-1}H_{\mathbb{T}}^{*}(X)_{\operatorname{loc}}[z^{-1}][\![ \operatorname{Eff}(X)]\!],\] \[\Omega(f,g) := -\operatorname{Res}_{z=\infty}\left(\int_{X}^{\mathbb{T}}f(-z) \cup g(z)\cup\prod_{i=1}^{N}\mathbf{c}^{i}(W_{i})\right)dz,\] \[\mathcal{F}_{X,(\vec{W},\vec{\mathbf{c}})}^{0}(-\mathbf{1}z+ \mathbf{t}(z)) := \sum_{\begin{subarray}{c}n\geq 0,d\in\operatorname{Eff}(X)\\ (n,d)\neq(0,0),(1,0),(2,0)\end{subarray}}\frac{Q^{d}}{n!}\left\langle \mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{0,n,d}^{X,(\vec{W}, \vec{\mathbf{c}})}.\] Then there is a canonical polarization \(\mathcal{H}_{X,(\vec{W},\vec{\mathbf{c}})}=\mathcal{H}_{+}\oplus\mathcal{H}_{-}\), and we can obtain the Lagrangian cone \(\mathcal{L}_{X,(\vec{W},\vec{\mathbf{c}})}\) whose \(\mathbb{C}(\lambda)[\![S]\!]\)-valued point is of the form \[-\mathbf{1}z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,d\in\operatorname{Eff}(X) \\ (n,d)\neq(0,0),(1,0)\end{subarray}}\sum_{i\in I}\frac{Q^{d}}{n!}\left\langle \frac{\phi_{i}}{-z-\psi},\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle _{0,n+1,d}^{X,(\vec{W},\vec{\mathbf{c}})}\cdot\frac{\phi^{i}}{\prod_{i=1}^{N} \mathbf{c}^{i}(W_{i})}\] with \(\mathbf{t}(z)\in\mathcal{H}_{+}\otimes_{\mathbb{C}(\lambda)[\![z]\!][\![\operatorname {Eff}(X)]\!]}\mathbb{C}(\lambda)[\![z]\!][\![S]\!]\) satisfying \(\mathbf{t}(z)|_{Q=0}=0\). Here we use the notations in the previous subsection. Since \(\mathbb{T}\) acts trivially on \(X\), \(\psi_{1}\) is nilpotent and hence this function belongs to \(\mathcal{H}_{X,(\vec{W},\vec{\mathbf{c}})}\otimes_{\mathbb{C}(\lambda)(\!(z) \!)[\![\operatorname{Eff}(X)]\!]}\mathbb{C}(\lambda)(\!(z)\!)[\![S]\!]\). As in the untwisted case, the above function equals \[-\mathbf{1}z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,d\in\operatorname{Eff }(X)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\frac{Q^{d}}{n!}\cdot\prod_{i=1}^{N}\mathbf{c} ^{i}(W_{i})^{-1}\\ \cdot\operatorname{ev}_{1*}\left[\frac{\prod_{i=2}^{n+1} \operatorname{ev}_{i}^{*}\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cdot\prod_{i=1}^{N }\mathbf{c}^{i}((W_{i})_{0,n+1,d})\cap[X_{0,n+1,d}]^{\operatorname{vir}}\right]. \tag{2.2}\] ### Quantum Riemann-Roch theorem and twisted theory We introduce quantum Riemann-Roch theorem [9, Corollary 4], which relates twisted Givental cones via some transcendental operators. We also explain relationships between the Gromow-Witten theory of a vector bundle (resp. a subvariety) and that of a base space (resp. an ambient space) in terms of twisted theories. Note that we will use the material in this subsection only in Section 5. #### 2.3.1. Quantum Riemann-Roch operator We let \(X\) be a smooth projective variety and \(\mathbb{T}\) be a complex torus acting on \(X\) trivially. 
For \(\chi\in H^{2}_{\mathbb{T}}(\operatorname{pt})\setminus 0\), we set \[s_{k}^{\pm}(\chi)=\begin{cases}0&\text{for $k=0$},\\ \pm(-1)^{k-1}(k-1)!\chi^{-k}&\text{for $k>0$}.\end{cases}\] It is easy to see that \[\tilde{e}_{\chi}^{\pm}(\cdot)=\exp\left(\sum_{k=0}^{\infty}s_{k}^{\pm}(\chi) \cdot\operatorname{ch}_{k}(\cdot)\right).\] We take a vector bundle \(V\) over \(X\) and let \(\chi\) be a non-zero element of \(H^{2}_{\mathbb{T}}(\operatorname{pt})\). For the case \(\mathbf{c}=\tilde{e}_{\chi}\) or \(\mathbf{c}=\tilde{e}_{\chi}^{-1}\), we define the _quantum Riemann-Roch operator_\(\Delta_{(V,\mathbf{c})}(-z)\) as follows: \[\Delta_{(V,\mathbf{c})}(-z):=\exp\left[\sum_{l,m\geq 0}s_{l+m-1}^{\pm}(\chi) \cdot\frac{B_{m}}{m!}\cdot\operatorname{ch}_{l}(V)\cdot z^{m-1}\right]\] where we set \(s_{-1}=0\) and \(B_{m}\) is the Bernoulli number defined by \(\sum_{m=0}^{\infty}(B_{m}/m!)x^{m}=x/(e^{x}-1)\). Since \(\operatorname{ch}_{l}(V)\) is nilpotent for \(l>0\) and \(\operatorname{ch}_{l}(V)=0\) for \(l>\dim X\), the operator \(\Delta_{(V,\mathbf{c})}(\lambda,z)\) is well-defined and belongs to \(H^{*}(X)[\chi^{-1}](\!(z)\!)\cap H^{*}(X)[z,z^{-1}][\![\chi^{-1}]\!].\)2 For vector bundles \(V_{1},\dots,V_{N}\) and characteristic classes \(\mathbf{c}^{1},\dots,\mathbf{c}^{N}\) with each class \(\mathbf{c}^{i}\) being \(\tilde{e}_{\chi_{i}}\) or \(\tilde{e}_{\chi_{i}}^{-1}\) for some \(\chi_{i}\in H^{2}_{\mathbb{T}}(\operatorname{pt})\setminus 0\), we define the operator \(\Delta_{(\vec{V},\vec{\mathbf{c}})}(z)\) as a product \(\prod_{i=1}^{N}\Delta_{(V_{i},\mathbf{c}^{i})}(z)\). Footnote 2: For any \(\mathbb{C}\)-algebra \(R\) and a specific non-zero element \(\chi\in H^{2}_{\mathbb{T}}(\operatorname{pt})\), we set \(R[\chi^{-1}]\) to be a subring of \(R(\lambda)\) consisting of elements of finite sums \(\sum_{i=0}^{k}r_{i}\chi^{-i}\) where \(k\) is a non-negative integer and \(r_{i}\in R\) for any \(i\), and \(R[\chi^{-1}]\) to be the canonical completion of \(R[\chi^{-1}]\). **Theorem 2.6** ([9, Corollary 4],[28, Theorem 1.1]).: _Let \((\vec{V},\vec{\mathbf{c}})\) be twist data with the characteristic class \(\mathbf{c}^{i}\)\((1\leq i\leq N)\) being of the form \(\tilde{e}_{\chi_{i}}\) or \(\tilde{e}_{\chi_{i}}^{-1}\) for some non-zero element \(\chi_{i}\in H^{2}_{\mathbb{T}}(\operatorname{pt})\). Then we have_ \[\Delta_{(\vec{V},\vec{\mathbf{c}})}(-z)\mathcal{L}_{X,(\mathcal{O}_{X},1)}= \mathcal{L}_{X,(\vec{V},\vec{\mathbf{c}})}.\] _In particular, for any \(\mathbb{C}(\lambda)[\![\operatorname{Eff}(X)]\!][\![t]\!]\)-valued point \(\mathbf{f}\) on \(\mathcal{L}_{X,(\mathcal{O}_{X},1)}\), the function \(\Delta_{(\vec{V},\vec{\mathbf{c}})}(-z)\!\cdot\!\mathbf{f}\) is a \(\mathbb{C}(\lambda)[\![\operatorname{Eff}(X)]\!][\![t]\!]\)-valued point on \(\mathcal{L}_{X,(\vec{V},\vec{\mathbf{c}})}\)._ **Remark 2.7**.: The twist data \((\mathcal{O}_{X},1)\) is trivial, that is, the \((\mathcal{O}_{X},1)\)-twisted Gromov-Witten invariants are by definition the untwisted ones. However, there is a subtle difference between \(\mathcal{L}_{X,\mathbb{T}}\) and \(\mathcal{L}_{X,(\mathcal{O}_{X},1)}\). In fact, \(\mathcal{L}_{X,\mathbb{T}}\) is defined in \(\mathcal{H}_{X,\mathbb{T}}=H^{*}_{\mathbb{T}}(X)_{\mathrm{loc}}(\!(z^{-1}))[ \![\mathrm{Eff}(X)]\!]\), while \(\mathcal{L}_{X,(\mathcal{O}_{X},1)}\) is defined in \(\mathcal{H}_{X,(\mathcal{O}_{X},1)}=H^{*}_{\mathbb{T}}(X)_{\mathrm{loc}}(\!(z ))[\![\mathrm{Eff}(X)]\!]\). We now consider the inverse Euler twist. 
For any \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)]\!][\![t]\!]\)-valued point \(\mathbf{f}\) on \(\mathcal{L}_{X,(W,\vec{e}_{\lambda}^{-1})}\), \(\mathbf{f}|_{Q^{d}\to Q^{d}\chi^{-d\cdot c_{1}(W)}}\) is a \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)]\!][\![t]\!]\)-valued point on \(\mathcal{L}_{X,(W,e_{\chi}^{-1})}\): this follows from the definition of twisted cones and the fact that \(\mathrm{rank}((W)_{0,n,d})=\mathrm{rank}(W)+\int_{d}c_{1}(W)\). We define the operator \(\Delta_{W}^{\chi}\) on \(H^{*}_{\mathbb{T}}(X)_{\mathrm{loc}}(\!(z))[\![\mathrm{Eff}(X)]\!][\![t]\!]\) as follows: **Corollary 2.8**.: _Let \((\vec{V},\mathbf{\overline{c}})\) be any twist data. For any \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)]\!][\![t]\!]\)-valued point \(\mathbf{f}(-z)\) on \(\mathcal{L}_{X,(\vec{V},\vec{\mathbf{c}})}\), the form \((\Delta_{W}^{\chi}\mathbf{f})(-z)\) gives a \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)]\!][\![t]\!]\)-valued point on \(\mathcal{L}_{X,(\vec{V},\vec{\mathbf{c}}),(W,e_{\chi}^{-1})}\)._ #### 2.3.2. Gromov-Witten theory of vector bundles We let \(B\) be a smooth projective variety and \(V=\bigoplus_{i=1}^{N}V_{i}\) be a direct sum of vector bundles over \(B\). By considering the diagonal action of \(\mathbb{T}=(\mathbb{C}^{\times})^{N}\) on \(V\), we can study \(\mathbb{T}\)-equivariant Gromov-Witten theory of \(V\), which plays a key role in this paper. We assume that its dual \(V^{\vee}\) is globally generated, which implies that \(V\) is semi-projective ([21, Lemma 2.1]). In this case, a \(\mathrm{Frac}(H^{*}_{\mathbb{T}}(\mathrm{pt}))[\![\mathrm{Eff}(B)]\!][\![t]\!]\)-valued point of \(\mathcal{L}_{V,\mathbb{T}}\) is a point on \[\mathcal{H}^{\mathrm{pol}}_{V,\mathbb{T}}:=H^{*}_{\mathbb{T}}(V)[z,z^{-1}][ \![\mathrm{Eff}(B)]\!]\] if its non-negative part (as a \(z\)-series) belongs to \(-z+H^{*}_{\mathbb{T}}(V)[z][\![\mathrm{Eff}(B)]\!][\![t]\!]\), i.e., it is polynomial in the equivariant parameters \(\lambda\). This is because \(\psi_{1}\) is nilpotent which follows from the fact that an image of each \(\mathbb{T}\)-fixed stable map \(C\to V\) is in \(V^{\mathbb{T}}=B\). Applying the virtual localization formula (Corollary 2.1), we have \[\left\langle\alpha_{1}\psi^{k_{1}},\ldots,\alpha_{n}\psi^{k_{n}}\right\rangle_{ 0.n,d}^{V,\mathbb{T}}=\left\langle\alpha_{1}\psi^{k_{1}},\ldots,\alpha_{n} \psi^{k_{n}}\right\rangle_{0.n,d}^{B,(V,e_{\mathbb{T}}^{-1})}.\] Hence Gromov-Witten theory of \(V\) is equivalent to \((V,e_{\mathbb{T}}^{-1})\)-twisted Gromov-Witten theory of \(B\): \[\mathcal{L}_{V,\mathbb{T}}\cap\mathcal{H}^{\mathrm{pol}}_{V,\mathbb{T}}= \mathcal{L}_{B,(V,e_{\mathbb{T}}^{-1})}\cap\mathcal{H}^{\mathrm{pol}}_{V, \mathbb{T}}.\] For any ring \(R\), we define \(R[\lambda^{-1}]\) (resp. \(R[\![\lambda^{-1}]\!]\)) to be a ring \(R[\lambda_{1}^{-1},\ldots,\lambda_{N}^{-1}]\) (resp. \(R[\![\lambda_{1}^{-1},\ldots,\lambda_{N}^{-1}]\!]\)) where \(\lambda_{i}\) is a \(\mathbb{T}\)-equivariant parameter corresponding to the \(i\)-th projection \(\mathbb{T}\to\mathbb{C}^{\times}\). In Section 5, we will use the following lemma. **Lemma 2.9**.: _Let \(\mathbf{f}\) be a \(\mathbb{C}[\lambda][\![\mathrm{Eff}(B)]\!][\![x]\!]\)-valued point on \(\mathcal{L}_{V,\mathbb{T}}\) whose non-negative part (as a \(z\)-series) lies in \(H^{*}(B)[z][\![\mathrm{Eff}(B)]\!][\![x]\!]\), i.e., it contains no equivariant parameters. 
Then \((\Delta_{V}^{\lambda}|_{z\to-z})\mathbf{f}\) is a \(\mathbb{C}[\![\mathrm{Eff}(B)]\!][\![x,\lambda^{-1}]\!]\)-valued point on \(\mathcal{L}_{B}\)._ Proof.: It is enough to prove that \(\mathbf{f}|_{Q^{d}\to Q^{d}\lambda^{d\cdot c_{1}(V)}}\) belongs to \(H^{*}(B)[\lambda^{-1}][\![\mathrm{Eff}(B)]\!][\![x]\!]\) since \(\Delta_{(V,\vec{e}_{\lambda}^{-1})}\in H^{*}(B)[\lambda^{-1}](\!(z)\!)\). By the definition of \(\mathcal{L}_{V,\mathbb{T}}\), the function \(\mathbf{f}|_{Q^{d}\to Q^{d}\lambda^{d\cdot c_{1}(V)}}\) can be written as \[-\mathbf{1}z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,d\in\mathrm{Eff}(B)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\sum_{i\in I}\frac{Q^{d}}{n!}\left\langle \frac{\phi_{i}}{-z-\psi},\mathbf{t}(\psi),\ldots,\mathbf{t}(\psi)\right\rangle_{ 0,n+1,d}^{B,(V,\vec{e}_{\lambda}^{-1})}\phi^{i}\tilde{e}_{\lambda}(V)\] where \(\{\phi_{i}\}_{i\in I}\) is a basis of \(H^{*}(B)\), \(\{\phi^{i}\}_{i\in I}\) is a dual basis and \(\mathbf{t}(z)\in H^{*}(B)[z][\![\mathrm{Eff}(B)][\![x]\!]\) with \(\mathbf{t}(z)|_{(Q,x)=0}=0\). Since \(\tilde{e}_{\lambda}(V)\) and \(\tilde{e}_{\lambda}^{-1}(V_{0,n+1,d})\) are power series in \(\lambda_{1}^{-1},\ldots,\lambda_{N}^{-1}\), this form indeed belongs to \(H^{*}(B)[\lambda^{-1}][\![\mathrm{Eff}(B)][\![x]\!]\). #### 2.3.3. Gromov-Witten theory of subvarieties Let \(X\) be a smooth projective variety, and let \(V\) be a vector bundle over \(X\). We assume that \(V\) is globally generated,3 which implies that \(V_{0,n,d}\) is a vector bundle for any \(n\in\mathbb{Z}_{\geq 0}\) and \(d\in\mathrm{Eff}(X)\) with \((n,d)\neq(0,0),(1,0),(2,0)\). Footnote 3: This assumption implies that \(V\) is convex, i.e., for any genus zero stable map \(f\colon C\to X\), the first cohomology \(H^{1}(C,f^{*}V)\) vanishes.. We take a regular section of \(V\) and write its zero-scheme as \(\iota\colon Z\to X\). We let \(\mathbb{T}\times\mathbb{C}^{\times}\) act on \(X\) trivially, and write equivariant parameters for \(\mathbb{T}\) (resp. the second factor) as \(\lambda_{1},\ldots,\lambda_{N}\) (resp. \(\mu\)). Let \(W_{i}\), \(i=1,\ldots,N\), be a vector bundle over \(X\) and let \(\mathbf{c}^{i}\), \(i=1,\ldots,N\), be the characteristic class \(e_{\chi_{i}}^{-1}\) associated with a character \(\chi_{i}\colon\mathbb{T}\times\mathbb{C}^{\times}\to\mathbb{C}^{\times}\) which is non-trivial on \(\mathbb{T}\times\{1\}\). In this situation, we can relate \(\mathcal{L}_{X,(V,e_{\mu}),(\vec{W},\vec{e})}\) and \(\mathcal{L}_{Z,(\iota^{*}\vec{W},\vec{e}|_{\mu}=0)}\). **Theorem 2.10** (see also [26, Section 2.1], [18, Proposition 2.4]).: _For any \(\mathbb{C}(\lambda,\mu)[\![\mathrm{Eff}(X)][\![x]\!]\)-valued point \(\mathbf{f}\) on \(\mathcal{L}_{X,(V,e_{\lambda}),(\vec{W},\vec{e})}\) whose limit \(\lim_{\mu\to 0}\mathbf{f}\) exists, \(\lim_{\mu\to 0}\iota^{*}\mathbf{f}\) gives a \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)][\![x]\!]\)-valued point on \(\mathcal{L}_{Z,(\iota^{*}\vec{W},\vec{e}|_{\mu}=0)}\)._ Proof.: We consider the following diagram: where \(Z_{0,n,d}:=\amalg_{d^{\prime}\colon\iota_{*}d^{\prime}=d}Z_{0,n,d^{\prime}}\), \(\pi_{X}\) and \(\pi_{Z}\) denote the forgetful map for the last marking, \(\mathrm{ev}_{X,i}\) (resp. \(\mathrm{ev}_{Z,i}\)) denotes the \(i\)-th evaluation map for \(X_{0,n+2,d}\) (resp. \(Z_{0,n+2,d}\)), and \(\iota_{n}\colon Z_{0,n,d}\to X_{0,n,d}\) denotes the map induced by \(\iota\). We note that this diagram is commutative, and in particular the square located in the lower left is a fiber diagram. 
Therefore, we have \[(\iota^{*}W)_{0,n+1,d}:=\pi_{Z*}\,\mathrm{ev}_{Z,n+2}^{*}\,\iota^{*}W=\iota_{ n+1}^{*}W_{0,n+1,d} \tag{2.3}\] for any vector bundle \(W\) over \(X\). Let \(s_{0}^{!}\) be a refined Gysin map [12, 29] associated to the fiber diagram where \(s_{0}\) is a map induced by the zero section \(X\to V\), and \(s\) is a map induced by a regular section of \(V\) defining the subvariety \(Z\hookrightarrow X\). From the functoriality of virtual fundamental classes [24] \[s_{0}^{!}[X_{0,n,d}]^{\mathrm{vir}}=\sum_{d^{\prime}\colon\iota_{*}d^{\prime} =d}[Z_{0,n,d^{\prime}}]^{\mathrm{vir}}\] and (2.3), we have \[s_{0}^{!}\left(\prod_{j=1}^{N}\mathbf{c}^{j}((W_{j})_{0,n,d})\cap[X_{0,n,d}]^{ \mathrm{vir}}\right)=\prod_{j=1}^{N}\mathbf{c}^{j}((\iota^{*}W_{j})_{0,n,d}) \cap\left(\sum_{d^{\prime}\colon\iota_{*}d^{\prime}=d}[Z_{0,n,d^{\prime}}]^{ \mathrm{vir}}\right). \tag{2.4}\] Let \(\mathbf{f}\) be a \(\mathbb{C}(\lambda,\mu)[\![\mathrm{Eff}(X)]\!][\![x]\!]\)-valued point on \(\mathcal{L}_{X,(V,e_{\lambda}),(\widetilde{W},\mathfrak{E})}\) with well-defined limit \(\lim_{\mu\to 0}\mathbf{f}\). From (2.2), \(\mathbf{f}\) can be written in the form of \[-\mathbf{1}z+\mathbf{t}(z) +\sum_{\begin{subarray}{c}n\geq 0,d\in\mathrm{Eff}(X)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\frac{Q^{d}}{n!}\cdot e_{\lambda}(V)^{-1} \cdot\prod_{j=1}^{N}\mathbf{c}^{j}(W_{j})^{-1}\] \[\cdot\mathrm{ev}_{X,1_{*}}\left[\frac{\prod_{i=2}^{n+1}\mathrm{ ev}_{X,i}^{*}\,\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cdot e_{\lambda}(V_{0,n+1,d}) \cdot\prod_{j=1}^{N}\mathbf{c}^{j}((W_{j})_{0,n+1,d})\cap[X_{0,n+1,d}]^{ \mathrm{vir}}\right]\] for some \(\mathbf{t}(z)\in H^{*}(X)\otimes\mathbb{C}(\lambda,\mu)[\![z]\!][\![\mathrm{Eff }(X)]\!][\![x]\!]\). Arguing as the proof of [18, Proposition 2.4], for any \(\alpha\in A_{*}(X_{0,n+1,d})\), we have \[\lim_{\mu\to 0}\iota^{*}\left(e_{\mu}(V)^{-1}\cdot\mathrm{ev}_{X,1_{*} }\left[\frac{\prod_{i=2}^{n+1}\mathrm{ev}_{X,i}^{*}\,\mathbf{t}(\psi_{i})}{-z- \psi_{1}}\cdot e_{\mu}((V_{0,n+1,d}))\cap\alpha\right]\right)\\ =\mathrm{ev}_{Z,1_{*}}\left[\frac{\prod_{i=2}^{n+1}\mathrm{ev}_{Z,i}^{*}\,\iota^{*}\mathbf{t}_{0}(\psi_{i})}{-z-\psi_{1}}\cap s_{0}^{!}\alpha\right]\] where \(\mathbf{t}_{0}(z):=\mathbf{t}(z)|_{\mu=0}\). Note that \(\mathbf{t}_{0}(z)\) exists as an element of \(H^{*}(X)\otimes\mathbb{C}(\lambda)[\![z]\!][\![\mathrm{Eff}(X)]\!][\![x]\!]\) since \(\lim_{\mu\to 0}\mathbf{f}\) exists. Applying this for \(\alpha=\prod_{j=1}^{N}\mathbf{c}^{j}((W_{j})_{0,n,d})\cap[X_{0,n,d}]^{\mathrm{ vir}}\) gives that \[\lim_{\mu\to 0}\iota^{*}\mathbf{f}=-\mathbf{1}z+\iota^{*} \mathbf{t}_{0}(z) +\sum_{\begin{subarray}{c}n\geq 0,d\in\mathrm{Eff}(X)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\frac{Q^{d}}{n!}\cdot\prod_{i=1}^{N} \mathbf{c}^{i}(\iota^{*}W_{i})^{-1}\\ \cdot\mathrm{ev}_{Z,1_{*}}\left[\frac{\prod_{i=2}^{n+1}\mathrm{ev} _{Z,i}^{*}\,\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cdot\prod_{j=1}^{N}\mathbf{c}^ {j}((\iota^{*}W_{j})_{0,n+1,d})\cap[Z_{0,n+1,d}]^{\mathrm{vir}}\right],\] which is a \(\mathbb{C}(\lambda)[\![\mathrm{Eff}(X)]\!][\![x]\!]\)-valued point on \(\mathcal{L}_{Z,(\iota^{*}\widetilde{W},\mathfrak{E}|_{\mu=0})}\). Here we use (2.4). ## 3. Toric bundles In this section we introduce toric bundles. We first review toric varieties, and then define toric bundles by doing the construction of toric varieties in a relative setting. Note that they include toric bundles appearing in [5][21]. 
We then investigate geometric structures of toric bundles: \(\mathbb{T}\)-equivariant cohomology ring (3.2), effective curves (3.3), \(\mathbb{T}\)-fixed loci and one-dimensional orbits (3.4). ### Construction We start with the triple \(\mathsf{L}=(\mathbb{L}^{\vee},D,\omega)\) where * \(\mathbb{L}^{\vee}\) is a free abelian group of rank \(K\); * \(D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee}\) is a map; * \(\omega\) is a vector in \(\mathbb{L}^{\vee}\otimes\mathbb{R}\). We denote as \(D_{i}\) the image of the \(i\)-th standard basis vector of \(\mathbb{Z}^{N}\) under the map \(D\). We identify the datum \(D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee}\) with the subset \(\{D_{1},\dots,D_{N}\}\) of \(\mathbb{L}^{\vee}\). We set \[\mathcal{A}_{\mathsf{L}}:= \left\{I\subset\{1,\dots,N\}:\omega\in\sum_{i\in I}\mathbb{R}_{ \geq 0}\cdot D_{i}\right\},\] \[\mathcal{U}_{\mathsf{L}}:= \mathbb{C}^{N}\setminus\bigcup_{I\notin\mathcal{A}_{\mathsf{L}} }\mathbb{C}^{I},\qquad\mathbb{K}:=\operatorname{Hom}(\mathbb{L}^{\vee}, \mathbb{C}^{\times}),\qquad\mathbb{T}:=\operatorname{Hom}(\mathbb{Z}^{N}, \mathbb{C}^{\times})\] where \(\mathbb{C}^{I}:=\{(z_{1},\dots,z_{N})\in\mathbb{C}^{N}:z_{i}=0\text{ for }i\notin I\}\). We call elements of \(\mathcal{A}_{\mathsf{L}}\)_anti-cones_. By applying the functor \(\operatorname{Hom}(\cdot,\mathbb{C}^{\times})\) to the map \(D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee}\), we get an embedding \(\mathbb{K}\hookrightarrow\mathbb{T}\) and the torus \(\mathbb{K}\) acts on \(\mathcal{U}_{\mathsf{L}}\) via this embedding. We note that \(\mathcal{U}_{\mathsf{L}}\) is naturally endowed with \(\mathbb{T}\)-action. **Definition 3.1**.: We call the triple \(\mathsf{L}=(\mathbb{L}^{\vee},D,\omega)\) a _smooth toric data_ if it satisfies the following two conditions: 1. The vector \(\omega\) belongs to \(\sum_{i=1}^{N}\mathbb{R}_{\geq 0}\cdot D_{i}\). 2. For any \(I\in\mathcal{A}_{\mathsf{L}}\), \(\{D_{i}\}_{i\in I}\) generates \(\mathbb{L}^{\vee}\). The condition (1) ensures that \(\mathcal{U}_{\mathsf{L}}\) is nonempty and (2) ensures that the action of \(\mathbb{K}\) on \(\mathcal{U}_{\mathsf{L}}\) is free. Under these hypothesis, the quotient space \(X_{\mathsf{L}}=\mathcal{U}_{\mathsf{L}}/\mathbb{K}\) becomes a smooth semi-projective toric variety. We consider a relative version of this construction. We fix a smooth toric data \(\mathsf{L}=(\mathbb{L}^{\vee},D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee},\omega)\). Let \(V_{1},\dots,V_{N}\) be non-zero vector bundles over a smooth projective variety \(B\) and set \(r_{i}:=\operatorname{rank}V_{i}\). We let \(\mathbb{K}\) act on \(V_{i}\) fiberwise by the character \(D_{i}\colon\mathbb{K}\to\mathbb{C}^{\times}\) and define a toric bundle \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) over \(B\) as follows: \[\mathcal{U}_{\mathsf{L}}(\vec{V}):=\left(\bigoplus_{i=1}^{N}V_{i}\setminus \bigcup_{I\notin\mathcal{A}_{\mathsf{L}}}\bigoplus_{i\in I}V_{i}\right), \qquad\mathbb{X}_{\mathsf{L}}(\vec{V}):=\mathcal{U}_{\mathsf{L}}(\vec{V}) \Big{/}\,\mathbb{K}.\] Note that \(\mathcal{U}_{\mathsf{L}}(V)\) is endowed with \(\mathbb{T}\)-action coming from the \(\mathbb{T}\)-actions on \(V_{1},\dots,V_{N}\), and it induces a \(\mathbb{T}\)-action on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\). When \(B=\operatorname{pt}\), then \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is a smooth semi-projective toric variety, which we denote by \(\mathbb{X}^{\vec{r}}\). For a general base, \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is an \(\mathbb{X}^{\vec{r}}\)-bundle over \(B\). 
**Definition 3.2**.: Let \(E\to B\) be a toric bundle obtained by the above procedure. We say that \(E\) is _of split type_, or \(E\) is a _split toric bundle_ if there is a smooth toric data \(\mathsf{L}\) and line bundles \(\{L_{i}\}\) over \(B\) such that \(E\to B\) is isomorphic to \(\mathbb{X}_{\mathsf{L}}(\vec{L})\to B\). We say that \(E\) is _of non-split type_, or \(E\) is a _non-split toric bundle_ if \(E\) is not of split type. As explained in Section 1, mirror theorems for split toric bundles [5] and (non-split) projective bundles [21] are already known. We will prove a mirror theorem for (non-split) toric bundles (Theorem 6.1). **Proposition 3.3**.: 1. _The bundle_ \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) _is projective if and only if the toric variety_ \(X_{\mathsf{L}}\) _is projective._ 2. _The bundle_ \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) _is semi-projective if the vector bundles_ \(V_{1}^{\vee},\dots,V_{N}^{\vee}\) _are generated by global sections._ Proof.: We only prove (2). From the assumption, we have the exact sequences \[0\to V_{i}\to\mathcal{O}^{\oplus s_{i}}\to\mathcal{Q}_{i}\to 0\] for \(i=1,\ldots,N\). By taking the direct sum of these sequences, we have the inclusion \(\bigoplus_{i=1}^{N}V_{i}\hookrightarrow\bigoplus_{i=1}^{N}\mathcal{O}^{s_{i}}\). This induces a closed embedding \(\mathbb{X}_{\mathsf{L}}(\vec{V})\hookrightarrow B\times\mathbb{X}^{\vec{s}}\). Since \(B\times\mathbb{X}^{\vec{s}}\) is semi-projective, \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is also semi-projective. ### Cohomology ring We want to describe the ordinary \(\mathbb{T}\)-equivariant cohomology of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\). We write the element of \(H^{2}_{\mathbb{T}}(\operatorname{pt})\cong\operatorname{Hom}(\mathbb{T}, \mathbb{C}^{\times})\) corresponding to the \(i\)-th projection \(\mathbb{T}\to\mathbb{C}^{\times}\) as \(-\lambda_{i}\). For \(1\leq i\leq N\), let \(L_{i}\) be a \(\mathbb{T}\)-equivariant line bundle over \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) defined by \[L_{i}=\left.\mathcal{U}_{\mathsf{L}}(\vec{V})\times\mathbb{C}\right/\mathbb{K} \tag{3.1}\] where \(\mathbb{K}\) acts on the second factor \(\mathbb{C}\) by the character \(D_{i}\colon\mathbb{K}\to\mathbb{C}^{\times}\). We let \(\mathbb{T}\) act on \(L_{i}\) as \(t\cdot[v,w]=[t\cdot v,t_{i}w]\). We write the \(\mathbb{T}\)-equivariant first Chern class of \(L_{i}\) as \(u_{i}\). **Proposition 3.4**.: _We have the isomorphisms_ \[H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z}) \cong H^{*}_{\mathbb{T}}(B,\mathbb{Z})[u_{1},\ldots,u_{N}]/( \mathcal{I}+\mathcal{J}),\] \[H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z}) \cong H^{*}(B,\mathbb{Z})[u_{1},\ldots,u_{N}]/(\mathcal{I}|_{ \lambda=0}+\mathcal{J}|_{\lambda=0})\] _where the ideal \(\mathcal{I}\) is generated by \(\{\prod_{i\notin I}e_{\mathbb{T}}(V_{i}\otimes L_{i})\}_{I\notin\mathcal{A}_ {\mathsf{L}}}\), and the ideal \(\mathcal{J}\) is generated by \(\{\sum_{i=1}^{N}a_{i}(u_{i}+\lambda_{i})\}_{a\in\operatorname{Ker}(D\colon \mathbb{Z}^{N}\to\mathbb{L}^{\vee})}\). Here \(\mathbb{T}\) acts on \(B\) trivially and \(H^{*}_{\mathbb{T}}(B,\mathbb{Z})\cong H^{*}(B,\mathbb{Z})\otimes\mathbb{Z}[\lambda]\)._ Proof.: We first note that this proposition is known to be true when \(B\) is a point; see for example [16, Proposition 2.11]. Moreover, it is known that \(H^{*}_{\mathbb{T}}(\mathbb{X}^{\vec{r}},\mathbb{Z})\) is a free \(H^{*}_{\mathbb{T}}(\operatorname{pt},\mathbb{Z})\)-module of finite rank. 
We can choose a \(H^{*}_{\mathbb{T}}(\operatorname{pt})\)-basis \(\{\phi_{b}(u)\}_{b\in\mathcal{B}}\) of \(H^{*}_{\mathbb{T}}(\mathbb{X}^{\vec{r}},\mathbb{Z})\) where \(\phi_{b}(u)\in\mathbb{Z}[u_{1},\ldots,u_{N}]\). There is a natural ring homomorphism \(f\colon H^{*}_{\mathbb{T}}(B,\mathbb{Z})[u_{1},\ldots,u_{N}]\to H^{*}_{ \mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\). For any \(I\notin\mathcal{A}_{\mathsf{L}}\), the vector bundle \[\bigoplus_{i\notin I}(V_{i}\otimes L_{i})=\left.\left(\mathcal{U}_{\mathsf{L} }(\vec{V})\times_{B}\bigoplus_{i\notin I}V_{i}\right)\right/\mathbb{K}\] has the nowhere vanishing \(\mathbb{T}\)-equivariant section \([v_{1},\ldots,v_{N}]\mapsto[v_{1},\ldots,v_{N},\{v_{i}\}_{i\notin I}]\), which implies \(\mathcal{I}\subset\ker(f)\). In the similar way, we can see that \(\mathcal{J}\subset\ker(f)\). Therefore, we have a ring homomorphism \[\tilde{f}\colon H^{*}_{\mathbb{T}}(B,\mathbb{Z})[u_{1},\ldots,u_{N}]/( \mathcal{I}+\mathcal{J})\to H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V} ),\mathbb{Z}).\] It is easy to show that the domain of \(\tilde{f}\) is a free \(H^{*}_{\mathbb{T}}(B,\mathbb{Z})\)-module with basis \(\{\phi_{b}(u)\}_{b\in\mathcal{B}}\). We consider the following fiber bundle: where \(E\mathbb{T}\) is a contractible space on which \(\mathbb{T}\) acts freely, \(B\mathbb{T}=E\mathbb{T}/\mathbb{T}\) is a classifying space of \(\mathbb{T}\), and the vertical map is induced by the projections \(\mathbb{X}_{\mathsf{L}}(\vec{V})\to B\) and \(E\mathbb{T}\to B\mathbb{T}\). By the Leray-Hirsch theorem, \[H^{*}((\mathbb{X}_{\mathsf{L}}(\vec{V})\times E\mathbb{T})/\mathbb{T},\mathbb{ Z})=H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\] is a free \(H^{*}_{\mathbb{T}}(B,\mathbb{Z})\)-module with basis \(\{\tilde{f}(\phi_{b}(u))\}_{b\in\mathcal{B}}\). This implies that \(\tilde{f}\) is an isomorphism. This proposition provides an explicit description of the second cohomology of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\). For \(\vec{r}\in(\mathbb{Z}_{>0})^{N}\), we set \[I^{\vec{r}}_{\mathsf{L}}:=\{i\in\{1,\ldots,N\}\colon r_{i}=1\text{ and }\{1,\ldots,N\}\setminus\{i\}\notin\mathcal{A}_{\mathsf{L}}\}.\] It follows easily from Proposition 3.4 that there exists a natural exact sequence \[0\to\mathbb{Z}^{\oplus I^{\vec{r}}_{\mathsf{L}}}\to H^{2}(B,\mathbb{Z})\oplus \mathbb{L}^{\vee}\to H^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\to 0 \tag{3.2}\] where the second arrow sends a standard generator \(e_{i}\) (\(i\in I^{\vec{r}}_{\mathsf{L}}\)) to \((c_{1}(V_{i}),D_{i})\). Hence we have the isomorphism \[H^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\cong H^{2}(B,\mathbb{Z}) \oplus\mathbb{L}^{\vee}/\langle(c_{1}(V_{i}),D_{i})\colon i\in I^{\vec{r}}_{ \mathsf{L}}\rangle.\] In particular, we have \[H^{2}(\mathbb{X}^{\vec{r}},\mathbb{Z})\cong\mathbb{L}^{\vee}/\langle D_{i} \colon i\in I^{\vec{r}}_{\mathsf{L}}\rangle.\] ### Effective curve classes In this subsection we will study the effective curve classes \(\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) and introduce _extended effective classes_\(\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\). 
Taking the dual of the sequence (3.2) we can see that \[H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\cong\left\{(D,\ell)\in H_{2 }(B,\mathbb{Z})\oplus\mathbb{L}:c_{1}(V_{i})\cdot D+D_{i}(\ell)=0\ \text{ for }\ 1\leq i\leq N\right\}.\] We construct a canonical splitting of the second homology of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) into those of \(B\) and \(\mathbb{X}^{\vec{r}}\) (which depends on the choice of \(\vec{V}\)). There exists a canonical splitting [17, Section 3.1.2]: \[H_{2}(\mathbb{X}^{\vec{r}},\mathbb{Z})\oplus\mathbb{Z}^{\oplus I^ {\vec{r}}_{\mathsf{L}}}\cong\mathbb{L}, \tag{3.4}\] \[H^{2}(\mathbb{X}^{\vec{r}},\mathbb{Z})\oplus\mathbb{Z}^{\oplus I ^{\vec{r}}_{\mathsf{L}}}\cong\mathbb{L}^{\vee}. \tag{3.3}\] We define \(\phi\colon H^{2}(\mathbb{X}^{\vec{r}})\to H^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) as follows. For \(\overline{\rho}\in H^{2}(\mathbb{X}^{\vec{r}})\), we take its lift \(\rho\in\mathbb{L}^{\vee}\) corresponding to \((\overline{\rho},0)\) under the splitting (3.4). We define \(\phi(\overline{\rho})\) to be the first Chern class of the line bundle \(\mathcal{O}(\rho)\) defined as \[\mathcal{O}(\rho)=\mathcal{U}_{\mathsf{L}}(\vec{V})\times\mathbb{C}\Big{/} \,\mathbb{K}\] where \(\mathbb{K}\) acts on the second factor via the character \(\mathbb{K}\to\mathbb{C}^{\times}\) obtained by \(\rho\). This map is well-defined and gives the splitting \(H^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\cong H^{2}(B)\oplus H^{2}(\mathbb{X}^ {\vec{r}})\). By dualizing, we obtain the splitting \[H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\cong H_{2}(B,\mathbb{Z}) \oplus H_{2}(\mathbb{X}^{\vec{r}},\mathbb{Z}). \tag{3.5}\] We define \(\mathbb{L}_{\mathrm{eff}}\subset\mathbb{L}\) to be a cone which coincides with \(\operatorname{Eff}(\mathbb{X}^{\vec{r}})\oplus(\mathbb{Z}_{\geq 0})^{\oplus I^{ \vec{r}}_{\mathsf{L}}}\) via the above splitting. Note that \(\mathbb{L}_{\mathrm{eff}}\) is independent of \(\vec{r}\) and can be written as \[\mathbb{L}_{\mathrm{eff}}=\sum_{I\in\mathcal{A}_{\mathsf{L}}}\{\ell\in\mathbb{L }:D_{i}(\ell)\geq 0\ \text{ for all }\ i\in I\}.\] See [17, Section 3.1.2] for details. Combining the two splittings (3.3) and (3.5), we obtain the isomorphism \[H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\oplus\mathbb{Z}^{\mathbb{L} ^{\vec{r}}}\cong H_{2}(B,\mathbb{Z})\oplus\mathbb{L}. \tag{3.6}\] We henceforce assume that \(\bigoplus_{i=1}^{N}V_{i}\) is globally generated. We let \(\mathcal{D}\colon H_{2}(B,\mathbb{Z})\oplus\mathbb{L}\to H_{2}(\mathbb{X}_{ \mathsf{L}}(\vec{V}),\mathbb{Z})\) be the isomorphism (3.6) composed with the projection to \(H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\), and define a _semigroup of extended effective curve classes_ as follows: \[\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))= \mathcal{D}(\operatorname{Eff}(B)\oplus\mathbb{L}_{\operatorname{eff}}).\] As we can see from the following lemma, the cone \(\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) indeed extends \(\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\). **Lemma 3.5**.: \(\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V})) \supset\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\)_._ Proof.: From the assumption that \(\bigoplus_{i=1}^{N}V_{i}\) is globally generated, we can construct the embedding \(\iota\colon\mathbb{X}_{\mathsf{L}}(\vec{V})\hookrightarrow B\times\mathbb{X} ^{\vec{s}}\) for some \(\vec{s}\in(\mathbb{Z}_{\geq 2})^{N}\); see the proof of Proposition 3.3. 
We note that \(H_{2}(\mathbb{X}^{\vec{s}},\mathbb{Z})\cong\mathbb{L}\) since \(I_{\mathsf{L}}^{\vec{s}}=\emptyset\). For any \(\rho\in\mathbb{L}^{\vee}\), the line bundle \(\mathcal{O}(\rho)\) comes from \(\mathcal{O}(\rho)\) over \(B\times\mathbb{X}^{\vec{s}}\) via the embedding \(\iota\). Therefore, the map \[\iota_{*}\colon H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\to H_{2}(B \times\mathbb{X}^{\vec{s}},\mathbb{Z})=H_{2}(B,\mathbb{Z})\oplus H_{2}( \mathbb{X}^{\vec{s}},\mathbb{Z})\] coincides with the inclusion \(H_{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z})\hookrightarrow H_{2}(B, \mathbb{Z})\oplus\mathbb{L}\) induced by the splitting (3.6). This gives the inclusion \[\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\xrightarrow{\iota_{*}} \operatorname{Eff}(B\times\mathbb{X}^{\vec{s}})=\operatorname{Eff}(B)\oplus \mathbb{L}_{\operatorname{eff}}\xrightarrow{\mathcal{D}}\operatorname{Eff}^{ \operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\hookrightarrow H_{2}( \mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{Z}).\] **Remark 3.6**.: The splittings (3.5) and (3.6) depend on the choice of vector bundles \(V_{1},\dots,V_{N}\). When replacing \(\vec{V}\) by \(\vec{V}^{\prime}=(V_{1}\otimes M^{\otimes D_{1}(\ell)},\dots,V_{N}\otimes M^{ \otimes D_{N}(\ell)})\) for some \(\ell\in\mathbb{L}\) and line bundle \(M\to B\), the total space \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) does not change. However, for \(\rho\in\mathbb{L}^{\vee}\), the associated line bundles \(\mathcal{O}_{\mathbb{X}_{\mathsf{L}}(\vec{V})}(\rho)\) and \(\mathcal{O}_{\mathbb{X}_{\mathsf{L}}(\vec{V}^{\prime})}(\rho)\) may not coincide. In fact, we have \[\mathcal{O}_{\mathbb{X}_{\mathsf{L}}(\vec{V}^{\prime})}(\rho)=\mathcal{O}_{ \mathbb{X}_{\mathsf{L}}(\vec{V})}(\rho)\otimes M^{\otimes-\rho(\ell)}\] via the canonical identification of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V}^{\prime})\). The splittings (3.5), (3.6) and the semigroup \(\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) change according to this substitution. Via the splitting (3.6), the cone \(\operatorname{Eff}(B)\oplus\mathbb{L}_{\operatorname{eff}}\) is identified with \(\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V})) \oplus(\mathbb{Z}_{\geq 0})^{\oplus I_{\mathsf{L}}^{\vec{r}}}\). We choose a Kahler form \(\omega\) on \(B\times\mathbb{X}^{\vec{s}}\). Let \(v\) be the additive valuation on \(\mathbb{C}[\operatorname{Eff}(B)\oplus\mathbb{L}_{\operatorname{eff}}]=\mathbb{ C}[\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V})) \oplus(\mathbb{Z}_{\geq 0}^{\oplus I_{\mathsf{L}}^{\vec{r}}})]\) given by \(\omega\), and consider the completion with respect to \(v\). Let \(\mathcal{Q}\), \(Q\) and \(q\) denote the Novikov variables for \(\mathbb{X}_{\mathsf{L}}(\vec{V})\), \(B\) and \(\mathbb{X}^{\vec{r}}\) respectively, and \(\tilde{q}\) (resp. \(y\)) be a formal variable for \(\mathbb{C}[\![\mathbb{L}_{\operatorname{eff}}]\!]\) (resp. \(\mathbb{C}[\![\mathbb{Z}_{\geq 0}^{\oplus I_{\mathsf{L}}^{\vec{r}}}]\!]\)). Furthermore, we can identify \(\mathbb{C}[\![\operatorname{Eff}(B)\oplus\mathbb{L}_{\operatorname{eff}}]\!]\) with \(\mathbb{C}[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}( \vec{V}))]\!][\![y]\!]\) in the following way: \[Q^{D}\tilde{q}^{\ell}=\mathcal{Q}^{\mathcal{D}(D,\ell)}\prod_{i\in I_{ \mathsf{L}}^{\vec{r}}}y_{i}^{D_{i}(\ell)}. 
\tag{3.7}\] We call \(\tilde{q}\) the _extended Novikov variable for \(\mathbb{X}^{\vec{r}}\)_. ### Fixed loci and one-dimensional orbits We describe the \(\mathbb{T}\)-fixed loci on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) and introduce some varieties related to one-dimensional orbits. Let \(\mathsf{L}\) be a smooth toric data. We denote by \(F_{\mathsf{L}}\) the set of minimal anti-cones: \[F_{\mathsf{L}}=\{\alpha\in\mathcal{A}_{\mathsf{L}}:|\alpha|=k\}.\] For \([x]\in(\mathbb{X}_{\mathsf{L}}^{\vec{r}})^{\mathbb{T}}\), we take the set \(\{i:x_{i}\neq 0\}\), which belongs to \(F_{\mathsf{L}}\). This gives a one-to-one correspondence between the connected components of \((\mathbb{X}_{\mathsf{L}}^{\vec{r}})^{\mathbb{T}}\) and the set \(F_{\mathsf{L}}\). In fact, there is also a one-to-one correspondence between the connected components of \(\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}}\) and the set \(F_{\mathsf{L}}\) obtained in the same manner. More precisely, we assign to \(\alpha\in F_{\mathsf{L}}\) a subvariety \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) which is a bundle over \(B\) with fiber being the \(\mathbb{T}\)-fixed locus of \(\mathbb{X}_{\mathsf{L}}^{\vec{V}}\) corresponding to \(\alpha\in F_{\mathsf{L}}\). Note that \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) is isomorphic to the fiber product over \(B\) of the projective bundles \(\{\mathbb{P}(V_{j})\}_{j\in\alpha}\). We introduce several definitions and notations. **Definition 3.7**.: Let \(\alpha\in F_{\mathsf{L}}\). 1. We write the inclusion \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\hookrightarrow\mathbb{X}_{\mathsf{ L}}(\vec{V})\) as \(\iota_{\alpha}\), and write as \(N_{\alpha}\) the normal bundle to \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) in \(\mathbb{X}_{\mathsf{L}}(\vec{V})\): \[N_{\alpha}=\bigoplus_{i\not\in\alpha}(V_{i}\otimes L_{i}).\] 2. For \(\alpha\in 2^{\{1,\ldots,N\}}\), we write a fiber product of \(\{\mathbb{P}(V_{j})\to B\}_{j\in\beta}\) over \(B\) as \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\). (This convention is consistent with the above notation for the fixed loci.) Here we define \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}=B\) if \(\alpha=\emptyset\). If \(\alpha\supset\beta\) there is a natural projection \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\rightarrow\mathbb{X}_{\mathsf{L}}( \vec{V})_{\beta}\) which is denoted by \(p_{\alpha,\beta}\). 3. We denote by \(\{D_{\alpha,i}^{\vee}\}_{i\in\alpha}\subset\mathbb{L}\) the dual basis of \(\{D_{i}\}_{i\in\alpha}\subset\mathbb{L}^{\vee}\). 4. We say that \(\beta\in F_{\mathsf{L}}\)_is adjacent to_\(\alpha\) if \(\#(\beta\setminus\alpha)=1\). We write the unique element of \(\beta\setminus\alpha\) as \(i_{\alpha,\beta}\). Define \[\operatorname{adj}(\alpha)=\{\beta\in F_{\mathsf{L}}\colon\beta\text{ is adjacent to }\alpha\}.\] 5. Let \(\beta\in\operatorname{adj}(\alpha)\). We denote by \(d_{\alpha\beta}\) the homology class of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) given by a one-dimensional orbit joining points on \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\). 6. 
For \(\beta\in\operatorname{adj}(\alpha)\), we define (3.8) \[L_{\alpha,\beta}:=\left.\left(\bigoplus_{i\in\alpha\cup\beta}(V_{i}\setminus B )\oplus\mathcal{O}_{B}\right)\right/\left(\mathbb{K}\times\mathbb{C}^{\times}\right)\] where \(\mathbb{T}\) acts trivially on \(\mathcal{O}_{B}\) and the second factor \(t\in\mathbb{C}^{\times}\) acts on \((v,s)\in V_{i_{\alpha,\beta}}\oplus\mathcal{O}_{B}\) as \(t\cdot(v,s)=(tv,t^{-1}s)\) and acts trivially on the other components. (As we will see, \(L_{\alpha,\beta}\) is a line bundle over \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\).) 7. For \(\beta\in\operatorname{adj}(\alpha)\), we define \(\lambda_{\alpha,\beta}\in H_{\mathbb{T}}^{2}(\operatorname{pt})\) to be the image of \(c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\) under the canonical projection \(H_{\mathbb{T}}^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta})=H^{2}( \mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta})\otimes H_{\mathbb{T}}^{2} (\operatorname{pt})\to H_{\mathbb{T}}^{2}(\operatorname{pt})\). From the description of \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\) in Proposition 3.4, we can describe \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})\) and the map \(\iota_{\alpha}^{*}\colon H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})) \to H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})\). **Corollary 3.8**.: _For \(\alpha\in 2^{\{1,\ldots,N\}}\), we have the isomorphism_ \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})\cong H^{*}(B)[\{u _{i}\}_{i\in\alpha}]/\langle e_{\mathbb{T}}(V_{i}\otimes L_{i})\colon i\in \alpha\rangle.\] _If \(\alpha\in F_{\mathsf{L}}\), the map_ \[\iota_{\alpha}^{*}\colon H_{\mathbb{T}}^{*}(B)[u_{1},\ldots,u_{N}]/( \mathcal{I}+\mathcal{J}) \cong H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\] \[\to H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}) \cong H_{\mathbb{T}}^{*}(B)[\{u_{i}\}_{i\in\alpha}]/\langle e_{\mathbb{T}}(V _{i}\otimes L_{i})\colon i\in\alpha\rangle\] _is a \(H_{\mathbb{T}}^{*}(B)\)-module homomorphism which sends \(u_{i}\) to_ \[\iota_{\alpha}^{*}u_{i}=-\lambda_{i}+\sum_{j\in\alpha}D_{i}(D_{\alpha,j}^{\vee}) \cdot(u_{j}+\lambda_{j}).\] _In particular, \(\iota_{\alpha}^{*}u_{i}=u_{i}\) if \(i\in\alpha\)._ By abuse of notation, for any \(\alpha\in 2^{\{1,\dots,N\}}\) and \(j\in\alpha\), we denote the equivalence class of \(u_{j}\) in \(H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha})\) by \(u_{j}\). We remark that for any \(\alpha\in F_{\mathbb{L}}\), \(\beta\in\operatorname{adj}(\alpha)\) and \(i\in(\alpha\cap\beta)^{c}\), two pullbacks \(p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{\alpha}u_{i}\) and \(p^{*}_{\alpha\cup\beta,\beta}t^{*}_{\beta}u_{i}\) do not coincide; see in Lemma 3.9 (3). Let \(\alpha\in F_{\mathbb{L}}\). For \(\beta\in F_{\mathbb{L}}\), the condition that \(\beta\) is adjacent to \(\alpha\) is equivalent to the existence of a one-dimensional orbit joining points on \(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha}\) and \(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\beta}\). If \(\beta\in\operatorname{adj}(\alpha)\), there is a canonical isomorphism \[\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta}\cong\bigoplus_{i\in\alpha \cup\beta}(V_{i}\setminus B)\Bigg{/}\ (\mathbb{K}\times\mathbb{C}^{\times}),\] and a projection \(L_{\alpha,\beta}\to\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta}\) which makes \(L_{\alpha,\beta}\) a line bundle over \(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta}\). 
We give the description of \(d_{\alpha\beta}\) and \(c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\). **Lemma 3.9**.: _Let \(\alpha\in F_{\mathbb{L}}\) and \(\beta\in\operatorname{adj}(\alpha)\)._ 1. _Under the splitting (_3.6_), it holds that_ \(d_{\alpha\beta}=D^{\vee}_{\alpha,i_{\beta,\alpha}}=D^{\vee}_{\beta,i_{\alpha, \beta}}\)_._ 2. _It holds that_ \[c_{1}^{\mathbb{T}}(L_{\alpha,\beta}) =-u_{i_{\alpha,\beta}}-\lambda_{i_{\alpha,\beta}}+\sum_{j\in\alpha }D_{i_{\alpha,\beta}}(D^{\vee}_{\alpha,j})\cdot(u_{j}+\lambda_{j})\] \[=-u_{i_{\alpha,\beta}}+p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{ \alpha}u_{i_{\alpha,\beta}}.\] 3. _For_ \(1\leq i\leq N\)_, we have_ \[p^{*}_{\alpha\cup\beta,\beta}t^{*}_{\beta}u_{i}=p^{*}_{\alpha\cup\beta,\alpha} t^{*}_{\alpha}u_{i}-D_{i}(d_{\alpha\beta})\cdot c_{1}^{\mathbb{T}}(L_{\alpha, \beta}).\] Proof.: We first prove (1). Since \(d_{\alpha\beta}\) is represented by a curve \(C_{\alpha,\beta}\) in the fiber, we can assume that \(B=\operatorname{pt}\) and \(\mathbb{X}_{\mathbb{L}}(\vec{V})=\mathbb{X}^{\vec{r}}\) is a toric variety. Without loss of generality, we can assume that \(r_{i}=1\) for any \(i\). In this case, we can write \(X:=\mathbb{X}^{\vec{r}}\) and \(C_{\alpha,\beta}\) as follows: \[X =\left\{[v_{1},\dots,v_{N}]\in\mathbb{C}^{N}/\mathbb{K}\colon(v_{ 1},\dots,v_{N})\notin\bigcup_{I\notin\mathcal{A}_{\mathbb{L}}}\mathbb{C}^{I} \right\},\] \[C_{\alpha,\beta} =\{[v_{1},\dots,v_{N}]\in X\colon v_{i}=1\text{ for }i\in\alpha \cap\beta,\ v_{j}=0\text{ for }i\notin\alpha\cup\beta\}.\] By definition, \(u_{i}\) is the \(i\)-th toric divisor for \(X\). Hence its Poincare dual \([Z_{i}]\) can be taken as \[Z_{i}=\{[v_{1},\dots,v_{N}]\in X\colon v_{i}=0\}.\] Using these descriptions, we can show that \[u_{i}(d_{\alpha\beta})=D_{i}(d_{\alpha\beta})=\begin{cases}0&\text{if }i\in\alpha \cap\beta,\\ 1&\text{if }i=i_{\alpha,\beta},i_{\beta,\alpha}.\end{cases}\] for \(i\in\alpha\cup\beta\). This implies that \(d_{\alpha\beta}=D^{\vee}_{\alpha,i_{\beta,\alpha}}=D^{\vee}_{\beta,i_{\alpha, \beta}}\). By (3.8), it follows that \[L_{\alpha,\beta}=L^{\vee}_{i_{\alpha,\beta}}\otimes\bigotimes_{j\in\alpha}L^{ \otimes D^{\vee}_{\alpha,j}(D_{i_{\alpha,\beta}})}\] as \(\mathbb{T}\)-equivariant line bundles where \(L_{i}\to\mathbb{X}_{\mathbb{L}}(\vec{V})\) is the line bundle (3.1). By taking the first Chern class, we obtain the desired formula. 
For \(1\leq i\leq N\), we calculate \(p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{\alpha}u_{i}\) as follows: \[p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{\alpha}u_{i} =p^{*}_{\alpha\cup\beta,\alpha}\left(-\lambda_{i}+\sum_{j\in\alpha }D_{i}(D^{\vee}_{\alpha,j})\cdot(u_{j}+\lambda_{j})\right)\] \[=-\lambda_{i}+\sum_{j\in\alpha}D_{i}(D^{\vee}_{\alpha,j})\cdot(u_ {j}+\lambda_{j}).\] On the other hand, we have \[p^{*}_{\alpha\cup\beta,\beta}t^{*}_{\beta}u_{i} =p^{*}_{\alpha\cup\beta,\beta}\left(-\lambda_{i}+\sum_{j\in\beta }D_{i}(D^{\vee}_{\beta,j})\cdot(u_{j}+\lambda_{j})\right)\] \[=-\lambda_{i}+\sum_{j\in\beta}D_{i}(D^{\vee}_{\beta,j})\cdot(u_{j }+\lambda_{j}).\] Since \(D_{i_{\beta,\alpha}}=D_{i_{\alpha,\beta}}-\sum_{j\in\alpha\cap\beta}D_{i_{ \alpha,\beta}}(D^{\vee}_{\alpha,j})\cdot D_{j}\), we can see that \[D^{\vee}_{\beta,j}=\begin{cases}D^{\vee}_{\alpha,i_{\beta,\alpha}}&\text{if $ j=i_{\alpha,\beta}$},\\ D^{\vee}_{\alpha,j}-D_{i_{\alpha,\beta}}(D^{\vee}_{\alpha,j})\cdot D^{\vee}_{ \alpha,i_{\beta,\alpha}}&\text{if $j\in\alpha\cap\beta$}.\end{cases}\] Therefore, \(p^{*}_{\alpha\cup\beta,\beta}t^{*}_{\beta}u_{i}\) is equal to \[-\lambda_{i}+D_{i}(D^{\vee}_{\alpha,i_{\beta,\alpha}})\cdot(u_{i_{\alpha, \beta}}+\lambda_{i_{\alpha,\beta}})+\sum_{j\in\alpha\cap\beta}\left(D_{i}(D^{ \vee}_{\alpha,j})-D_{i_{\alpha,\beta}}(D^{\vee}_{\alpha,j})\cdot D_{i}(D^{ \vee}_{\alpha,i_{\beta,\alpha}})\right)\cdot(u_{j}+\lambda_{j})\] \[=p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{\alpha}u_{i}+D_{i}(d_{\alpha\beta}) \cdot\left(u_{i_{\alpha,\beta}}+\lambda_{i_{\alpha,\beta}}-u_{i_{\beta,\alpha }}-\lambda_{i_{\beta,\alpha}}-\sum_{j\in\alpha\cap\beta}D_{i_{\alpha,\beta}}( D^{\vee}_{\alpha,j})\cdot(u_{j}+\lambda_{j})\right)\] \[=p^{*}_{\alpha\cup\beta,\alpha}t^{*}_{\alpha}u_{i}-D_{i}(d_{\alpha\beta}) \cdot c^{\mathbb{T}}_{1}(L_{\alpha,\beta}).\] Here we use (1) and (2). For later use, we study moduli spaces parametrizing multiple one-dimensional orbits. **Definition 3.10**.: Let \(\alpha\in F_{\mathsf{L}}\), \(\beta\in\operatorname{adj}(\alpha)\) and \(k\in\mathbb{N}\). We define \(\overline{\mathcal{M}}^{\alpha,\beta}_{k}\) to be the closed substack of \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,2,k\cdot d_{\alpha\beta}}\) consisting of the stable maps which satisfy the following conditions: * its domain is non-singular and surjects onto a one-dimensional orbit joining points on \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\); * the map is \(\mathbb{T}\)-invariant; * the image of the first marking belongs to \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) and that of the second marking belongs to \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\). We can interpret \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\), \(L_{\alpha,\beta}\) and \(\overline{\mathcal{M}}^{\alpha,\beta}_{k}\) as follows. 
Firstly, since there is a canonical bijection \[\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\stackrel{{ \sim}}{{\longrightarrow}}\left\{\text{one-dimensional orbits joining points on $\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}$ and $\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}$}\right\}, \tag{3.9}\] we can understand \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\) as the moduli space of one-dimensional orbits in the variety \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha,\beta}\) which is the subvariety of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) defined as \[\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha,\beta}=\left\{[v_{1},\ldots,v_{N}] \in\mathbb{X}_{\mathsf{L}}(\vec{V})\colon v_{i}=0\text{ for any }i\notin\alpha\cup\beta\right\}.\] In other words, we have \(\overline{\mathcal{M}}_{1}^{\alpha,\beta}\cong\mathbb{X}_{\mathsf{L}}(\vec{V})_{ \alpha\cup\beta}\). Its universal curve \(\mathcal{C}_{\alpha,\beta}\) is obtained by the blow up of \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha,\beta}\) along \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\amalg\mathbb{X}_{\mathsf{L}}(\vec{V} )_{\beta}\) with the canonical maps \(\pi\colon\mathcal{C}_{\alpha,\beta}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{ \alpha\cup\beta}\) and \(\mathcal{C}_{\alpha,\beta}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha,\beta}\). There is a natural embedding of \(L_{\alpha,\beta}\) into \(\mathcal{C}_{\alpha,\beta}\): \[L_{\alpha,\beta}\cong\operatorname{Bl}_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{ \alpha}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\setminus\mathbb{X}_{\mathsf{L}}( \vec{V})_{\beta}\hookrightarrow\mathcal{C}_{\alpha,\beta},\] which gives the projection \(L_{\alpha,\beta}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\) by restricting \(\pi\) to \(L_{\alpha,\beta}\). Due to this isomorphism, \(L_{\alpha,\beta}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\) can be seen as the universal tangent line bundle associated to the section \(s_{\alpha}\colon\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\to\mathcal{ C}_{\alpha,\beta}\) defined as follows: for any one-dimensional orbit \(C\) passing through \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\) (which represents a point \([C]\in\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\) via (3.9)), set \(s_{\alpha}([C])\in\pi^{-1}([C])\cong C\) to be the intersection of \(C\subset\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha,\beta}\) and \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\). Moreover, we can see that \(\overline{\mathcal{M}}_{k}^{\alpha,\beta}\) is isomorphic to the \(k\)-th root stack [6, Section 2][1, Appendix B] of the line bundle \(L_{\alpha,\beta}\), which can be given as the quotient stack \([L_{\alpha,\beta}^{0}/\mathbb{C}^{\times}]\) where \(L_{\alpha,\beta}^{0}=L_{\alpha,\beta}\setminus\mathbb{X}_{\mathsf{L}}(\vec{ V})_{\alpha\cup\beta}\) and \(\mathbb{C}^{\times}\) acts on \(L_{\alpha,\beta}^{0}\) by the formula \(t\cdot x=t^{k}x\) for \(t\in\mathbb{C}^{\times}\). The universal map for \(\overline{\mathcal{M}}_{k}^{\alpha,\beta}\) is given by the diagram where \(F\) is the morphism sending \(([(v_{i})_{i\in\alpha\cup\beta},s],[x,y])\) to \([(F_{i}(v,s,x,y))_{i\in\alpha\cup\beta}]\): \[F_{i}(v,s,x,y)=\begin{cases}v_{i}&i\in\alpha\cap\beta,\\ x^{k}v_{i_{\beta,\alpha}}&i=i_{\beta,\alpha},\\ sy^{k}v_{i_{\alpha,\beta}}&i=i_{\alpha,\beta}.\end{cases}\] Here we use the identification (3.8) and endow \(\mathbb{P}^{1}\) with the \(\mathbb{T}\)-action such that \(t\cdot[x,y]=[tx,y]\) for \(t\in\mathbb{C}^{\times}\). ## 4. 
Lagrangian cones of toric bundles Throughout this section, we fix a smooth toric data \(\mathsf{L}\), a smooth projective variety \(B\) and a collection of vector bundles \(V_{1},\ldots,V_{N}\) whose duals are globally generated. In this section, we study the \(\mathbb{T}\)-equivariant Lagrangian cone of \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) (Theorem 4.2) and establish a characterization of points on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\) in a similar manner to [5, 8, 23, 11]. ### Characterization theorem In this subsection, we state a characterization theorem for points on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) (Theorem 4.2). The proof will be given in Section 4.5. We first introduce some notions. **Definition 4.1**.: Let \(X\) be a smooth projective variety with trivial \(\mathbb{T}\)-action and let \((S,v_{S})\) be a semigroup with a discrete grading structure \(v_{S}\colon S\to\mathbb{Z}_{\geq 0}\) which extends that on \(\operatorname{Eff}(X)\). We choose a \(\mathbb{C}\)-basis \(\{\phi_{i}\}_{i\in I}\) of \(H^{*}(X)\) and let \(\chi\in H^{*}_{\mathbb{T}}(\operatorname{pt})\). We say that a function \[\mathbf{f}=\sum_{s\in S}\sum_{i\in I}Q^{s}\phi_{i}\cdot\mathbf{f}_{s,i}\in H^{ *}_{\mathbb{T}}(X)\otimes_{H^{*}_{\mathbb{T}}(\operatorname{pt})}\operatorname {Frac}(H^{*}_{\mathbb{T}}(\operatorname{pt})[z])\llbracket S\rrbracket=H^{*}(X) \otimes\mathbb{C}(\lambda,z)\llbracket S\rrbracket,\] where \(\mathbf{f}_{s,i}\in\mathbb{C}(\lambda,z)\) for any \((s,i)\in S\times I\), has a _pole at \(z=\chi\)_ if there exists \((s,i)\in S\times I\) such that \(\mathbf{f}_{s,i}\) has a pole along \(z-\chi=0\). Define a _principle part of \(\mathbf{f}\) at \(z=\chi\)_ as \[\operatorname{Prin}_{z=\chi}\mathbf{f}=\sum_{s\in S}\sum_{i\in I}Q^{s}\phi_{i} \operatorname{Prin}_{z=\chi}\mathbf{f}_{s,i}.\] **Theorem 4.2**.: _Let \(\mathsf{L}=(\mathbb{L}^{\vee},D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee},\omega)\) be a smooth toric data, \(B\) be a smooth projective variety and \(V_{1},\dots,V_{N}\) be vector bundles over \(B\) whose duals are generated by global sections. Let \(x=(x_{1},x_{2},\dots)\) be formal vaiables and let \(\mathbf{f}\) be an element of such that \(\mathbf{f}|_{(\mathcal{Q},x)=0}=-1z\). In this situation, \(\mathbf{f}\) is a \(\operatorname{Frac}(H^{*}_{\mathbb{T}}(\operatorname{pt}))[\![\operatorname{ Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\![\![x]\!]\)-valued point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) if and only if the following three conditions hold \(:\)_ 1. _For each_ \(\alpha\in F_{\mathsf{L}}\)_, the restriction_ \(\iota^{*}_{\alpha}\mathbf{f}\) _belongs to_ \[\left(H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})\otimes_{H^ {*}_{\mathbb{T}}(\operatorname{pt})}\operatorname{Frac}(H^{*}_{\mathbb{T} \times\mathbb{C}^{\times}_{z}}(\operatorname{pt}))\right)[\![\operatorname{ Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\![\![x]\!],\] _and is regular as a function in_ \(z\) _except possibly for poles at_ \[\{0,\infty\}\cup\left\{\frac{\lambda_{\alpha,\beta}}{k}\colon\beta\in \operatorname{adj}(\alpha),k\in\mathbb{N}\right\}.\] 2. 
_The principal parts of the restrictions of_ \(\mathbf{f}\) _satisfy the following recursion relations: for any_ \(\alpha\in F_{\mathsf{L}}\)_,_ \(\beta\in\operatorname{adj}(\alpha)\) _and_ \(k\in\mathbb{N}\)_, we have_ \[\operatorname{Prin}_{z=\frac{\lambda_{\alpha,\beta}}{k}}\iota^{*}_{\alpha} \mathbf{f}=p_{\alpha\cup\beta,\alpha_{*}}\left[q^{k\cdot d_{\alpha\beta}}\cdot \frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p^{*} _{\alpha\cup\beta,\beta}\iota^{*}_{\beta}\mathbf{f}\left(z=\frac{c_{1}^{ \mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right]\] _where_ \(C_{\alpha,\beta}(k)\) _is an element of_ \(H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta})_{ \operatorname{loc}}\) _defined as_ \[C_{\alpha,\beta}(k)^{-1} =\prod_{c=1}^{k-1}\prod_{\begin{subarray}{c}\delta\colon \operatorname{Chern\ roots}\\ \text{of }V_{i_{\alpha},\beta}\end{subarray}}\left(\delta+p^{*}_{\alpha\cup \beta,\alpha}\iota^{*}_{\alpha}u_{i_{\alpha,\beta}}-\frac{c}{k}c_{1}^{\mathbb{T }}(L_{\alpha,\beta})\right)\] \[\cdot\prod_{i\notin\beta}\prod_{c=1}^{k\cdot u_{i}(d_{\alpha\beta} )}\prod_{\begin{subarray}{c}\delta\colon\operatorname{Chern\ roots}\\ \text{of }V_{i}\end{subarray}}\left(\delta+p^{*}_{\alpha\cup\beta,\alpha} \iota^{*}_{\alpha}u_{i}-\frac{c}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\right).\] 3. _The Laurent expansion of_ \(\iota^{*}_{\alpha}\mathbf{f}\) _at_ \(z=0\) _is a_ \(\operatorname{Frac}(H^{*}_{\mathbb{T}}(\operatorname{pt}))[\![\operatorname{ Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\![\![x]\!]\)_-valued point of_ \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha},(N_{\alpha},e_{\mathbb{ T}}^{-1})}\) _for any_ \(\alpha\in F_{\mathsf{L}}\)_._ The key tool for the proof is the virtual localization (Section 2.1.2). The \(\mathbb{T}\)-action on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) induces a \(\mathbb{T}\)-action on the moduli stack \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\). We will study the \(\mathbb{T}\)-fixed locus \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}^{\mathbb{T}}\subset\mathbb{ X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\) and its virtual normal bundle in order to compute the virtual localization formula. In the rest of this section, we will give a proof of Theorem 4.2. **Remark 4.3**.: This theorem can be directly generalized to general points on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\). However, since it is enough to characterize \(\mathbb{C}[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}( \vec{V}))]\![\![x]\!]\)-valued points for our purpose, we will not deal with the general case. ### Decorated graphs and fixed stable maps We introduce the notion of an \(\mathsf{L}\)_-decorated graph_ and describe the stack of \(\mathbb{T}\)-fixed stable maps. We first define a graph \(\Gamma_{\mathsf{L}}\) associated to \(\mathsf{L}\). We let the set of vertices \(V(\Gamma_{\mathsf{L}})\) equal \(F_{\mathsf{L}}\), and the edge joining \(\alpha\in F_{\mathsf{L}}\) and \(\beta\in F_{\mathsf{L}}\) exists if and only if they are adjacent (in the sense of Definition 3.7). **Definition 4.4** ([5, 25, 8, 23, 11]).: 1. 
An \((n\)-pointed) \((\mathsf{L},\vec{V})\)_-decorated tree_\(\vec{\Gamma}=(\Gamma,\vec{\alpha},\vec{k},\vec{\mathcal{D}},\vec{s})\) consists of the following data: * a connected acyclic undirected graph \(\Gamma\); * a graph homomorphism \(\vec{\alpha}\colon\Gamma\to\Gamma_{\mathsf{L}}\), called a _label map_, which sends \(v\in V(\Gamma)\) to \(\alpha_{v}\in F_{\mathsf{L}}\); * an _edge-degree map_\(\vec{k}\colon E(\Gamma)\to\mathbb{Z}_{>0}\) which sends \(e\in E(\Gamma)\) to a positive integer \(k_{e}\); * a _vertex-degree map_\(\vec{\mathcal{D}}\colon V(\Gamma)\to\operatorname{Eff}(\mathbb{X}_{ \mathsf{L}}(\vec{V}))\) which sends \(v\in V(\Gamma)\) to an effective curve class \(\mathcal{D}_{v}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{ \alpha_{v}})\); * a _marking map_\(\vec{s}\colon\{1,\dots,n\}\to V(\Gamma)\) which sends \(i\) to \(s_{i}\in V(\Gamma)\) (this datum is trivial if \(n=0\)). 2. Let \(e\in E(\Gamma)\) and \(v\in V(\Gamma)\). We call the pair \((e,v)\) a _flag_ if \(v\) incident to \(e\). The set of flags for \(\Gamma\) is denoted by \(F(\Gamma)\). 3. For any \(v\in V(\Gamma)\), we define \(\operatorname{adj}(v)\) to be the set of adjacent vertices of \(v\), and \(\operatorname{mark}(v)\) to be the set of markings on \(v\): \[\operatorname{adj}(v) =\{v^{\prime}\in V(\Gamma)\colon\text{there exists an edge between $v$ and $v^{\prime}$}\},\] \[\operatorname{mark}(v) =\vec{s}^{\,-1}(v).\] We write the valency of \(v\) as \(\operatorname{val}(v):=|\operatorname{adj}(v)|+|\operatorname{mark}(v)|\). 4. For \(e\in E(\Gamma)\), define \(d_{e}:=d_{\alpha_{v}\alpha_{v^{\prime}}}\in\operatorname{Eff}(\mathbb{X}_{ \mathsf{L}}(\vec{V}))\) where \(v,v^{\prime}\in V(\Gamma)\) denote the endpoints of \(e\). Define \[\deg(\vec{\Gamma})=\sum_{e\in E(\Gamma)}k_{e}d_{e}+\sum_{v\in V(\Gamma)} \mathcal{D}_{v},\] which we call a _degree of \(\vec{\Gamma}\)_. 5. We define \(\operatorname{DG}_{0,n,\mathcal{D}}=\operatorname{DG}_{0,n,\mathcal{D}}( \mathsf{L},\vec{V})\) to be a set of all \(n\)-pointed \((\mathsf{L},\vec{V})\)-decorated trees of of degree \(\mathcal{D}\). We can naturally associate an \(n\)-pointed decorated graph \((\Gamma,\vec{\alpha},\vec{k},\vec{\mathcal{D}},\vec{s})\) of degree \(\mathcal{D}\) to a \(\mathbb{T}\)-fixed stable map \([f\colon(C,\mathbf{p})\to\mathbb{X}_{\mathsf{L}}(\vec{V})]\in\mathbb{X}_{ \mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\) as follows. Let \(E(\Gamma)\) be a set of irreducible components of \(C\) sent by \(f\) dominantly to a one-dimensional orbit in \(\mathbb{X}_{\mathsf{L}}(\vec{V})\), and let \(V(\Gamma)\) be a set of connected components of the set \(f^{-1}(\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}})\). These sets with topological data of \(C\) give a tree \(\Gamma\). We write for \(e\in E(\Gamma)\) (resp. \(v\in V(\Gamma)\)) the corresponding subvariety of \(C\) as \(C_{e}\) (resp. \(C_{v}\)). We remark that \(C_{v}\) might be just a one point. The other data of \(\vec{\Gamma}\) are determined by the following conditions: * for \(v\in V(\Gamma)\), the image of \(C_{v}\) via \(f\) is in \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}}\); * for \(e\in E(\Gamma)\), we have \(\deg(f|_{C_{e}})=k_{e}d_{e}\); * for \(v\in V(\Gamma)\), we have \(\deg(f|_{C_{v}})=\mathcal{D}_{v}\); * for \(1\leq i\leq n\), the \(i\)-th marking point is on the component \(C_{s_{i}}\) or equal to \(C_{s_{i}}\) if \(C_{s_{i}}\) is just a one point. For convenience, we introduce the following notation. **Notation 4.5**.: Let \(X\) be a smooth projective variety. 
We set \(X_{0,1,0}=X\) and \(X_{0,2,0}=X\). When we regard \(x\in X\) as a point on \(X_{0,1,0}\) or \(X_{0,2,0}\), we sometimes denote it as follows:
\[[f\colon(C=\operatorname{pt},p_{1})\to X]\in X_{0,1,0},\qquad[f\colon(C=\operatorname{pt},p_{1},p_{2})\to X]\in X_{0,2,0}\]
where in each case the image of \(f\) is \(x\). We also define evaluation maps:
\[\operatorname{ev}_{1}=\operatorname{id}_{X}\colon X_{0,1,0}\to X,\qquad\operatorname{ev}_{1}=\operatorname{ev}_{2}=\operatorname{id}_{X}\colon X_{0,2,0}\to X.\]

We decompose the stack of fixed stable maps \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}^{\mathbb{T}}\) with respect to \((\mathsf{L},\vec{V})\)-decorated graphs. For a decorated tree \(\vec{\Gamma}\) we define a stack \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) as follows. First we fix a bijection \(\{1,\dots,A=\#V(\Gamma)\}\to V(\Gamma)\), \(i\mapsto v_{i}\), such that for any \(1\leq i\leq A\) the full subgraph \(\Gamma_{i}\) of \(\Gamma\) formed by \(\{v_{1},\dots,v_{i}\}\) is connected. This is possible since \(\Gamma\) is connected. Then we recursively define \(\overline{\mathcal{M}}_{i}\) for \(1\leq i\leq A\). We set \(\overline{\mathcal{M}}_{1}=(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v_{1}}})_{0,\operatorname{val}(v_{1}),\mathcal{D}_{v_{1}}}\). Assume that we know \(\overline{\mathcal{M}}_{i}\) for some \(i<A\). There is a unique vertex \(v\in V(\Gamma_{i})\) which is adjacent to \(v_{i+1}\) in \(\Gamma\). We write \(e\in E(\Gamma)\) for the edge joining \(v\) and \(v_{i+1}\). We first take the fiber product of \(\overline{\mathcal{M}}_{i}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}}\) and \(\operatorname{ev}_{1}\colon\overline{\mathcal{M}}_{k_{e}}^{\alpha_{v},\alpha_{v_{i+1}}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}}\). Here the first morphism is the composition
\[\overline{\mathcal{M}}_{i}\to(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}})_{0,\operatorname{val}(v),\mathcal{D}_{v}}\xrightarrow{\operatorname{ev}_{v_{i+1}}}\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}}\]
where the first map is the canonical projection. We define \(\overline{\mathcal{M}}_{i+1}\) as the fiber product with respect to
\[\operatorname{ev}_{v}\colon(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v_{i+1}}})_{0,\operatorname{val}(v_{i+1}),\mathcal{D}_{v_{i+1}}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v_{i+1}}}\]
and
\[\overline{\mathcal{M}}_{i}\times_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}}}\overline{\mathcal{M}}_{k_{e}}^{\alpha_{v},\alpha_{v_{i+1}}}\to\overline{\mathcal{M}}_{k_{e}}^{\alpha_{v},\alpha_{v_{i+1}}}\xrightarrow{\operatorname{ev}_{2}}\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v_{i+1}}}.\]
Finally, we set \(\overline{\mathcal{M}}_{\vec{\Gamma}}=\overline{\mathcal{M}}_{A}\). Note that \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) is determined independently of the choice of the total order.

There exists a natural morphism \(\overline{\mathcal{M}}_{\vec{\Gamma}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}^{\mathbb{T}}\). For each point
\[\left(\left(\left[f_{e}\colon(C_{e}=\mathbb{P}^{1},\mathbf{p}_{e})\to\mathbb{X}_{\mathsf{L}}(\vec{V})\right]\right)_{e\in E(\Gamma)},\left(\left[f_{v}\colon(C_{v},\mathbf{p}_{v})\to\mathbb{X}_{\mathsf{L}}(\vec{V})\right]\right)_{v\in V(\Gamma)}\right)\]
on \(\overline{\mathcal{M}}_{\vec{\Gamma}}\), we assign a \(\mathbb{T}\)-fixed stable map \([f\colon(C,\mathbf{p})\to\mathbb{X}_{\mathsf{L}}(\vec{V})]\) as follows.
For \(e\in E(\Gamma)\), we denote its endpoints by \(\{v_{e},v_{e}^{\prime}\}\), and set \(\mathbf{p}_{e}=\{p_{e,v_{e}},p_{e,v_{e}^{\prime}}\}\), which satisfies \(f_{e}(p_{e,v_{e}})=f_{v_{e}}(p_{v_{e},v_{e}^{\prime}})\) and \(f_{e}(p_{e,v_{e}^{\prime}})=f_{v_{e}^{\prime}}(p_{v_{e}^{\prime},v_{e}})\). We set
\[C=\left.\left(\coprod_{v\in V(\Gamma)}C_{v}\amalg\coprod_{e\in E(\Gamma)}C_{e}\right)\right/\sim\]
where the equivalence relation \(\sim\) is generated by \(p_{v_{e},v_{e}^{\prime}}\sim p_{e,v_{e}}\) and \(p_{v_{e}^{\prime},v_{e}}\sim p_{e,v_{e}^{\prime}}\) for each \(e\in E(\Gamma)\). For each \(i\) with \(1\leq i\leq n\), there is a point \([p_{i}]\in C\), which we again denote by \(p_{i}\). The maps \(f_{v}\) and \(f_{e}\) induce a morphism \(f\colon(C,\mathbf{p})\to\mathbb{X}_{\mathsf{L}}(\vec{V})\), which is stable and fixed by \(\mathbb{T}\). This gives rise to the morphism \(\overline{\mathcal{M}}_{\vec{\Gamma}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}^{\mathbb{T}}\). In order to obtain a substack of \(\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}}_{0,n,\mathcal{D}}\) from \(\overline{\mathcal{M}}_{\vec{\Gamma}}\), we need to take a quotient by \(\operatorname{Aut}(\vec{\Gamma})\), the automorphism group of the decorated graph \(\vec{\Gamma}\).

**Proposition 4.6**.: _We have an isomorphism of Deligne-Mumford stacks:_
\[\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}}_{0,n,\mathcal{D}}\cong\coprod_{\vec{\Gamma}\in\operatorname{DG}_{0,n,\mathcal{D}}}\left[\overline{\mathcal{M}}_{\vec{\Gamma}}/\operatorname{Aut}(\vec{\Gamma})\right].\]

Finally we introduce some morphisms related to \(\overline{\mathcal{M}}_{\vec{\Gamma}}\).

**Definition 4.7**.: Let \(\vec{\Gamma}\) be an \(n\)-pointed \((\mathsf{L},\vec{V})\)-decorated tree.
1. Define \(i_{\vec{\Gamma}}\colon\overline{\mathcal{M}}_{\vec{\Gamma}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\) to be the composition of the canonical morphism \(\overline{\mathcal{M}}_{\vec{\Gamma}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}}_{0,n,\mathcal{D}}\) and the embedding \(\mathbb{X}_{\mathsf{L}}(\vec{V})^{\mathbb{T}}_{0,n,\mathcal{D}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\).
2. For \(v\in V(\Gamma)\), \(\operatorname{pr}_{v}\) denotes the projection to the component corresponding to \(v\in V(\Gamma)\):
\[\operatorname{pr}_{v}\colon\overline{\mathcal{M}}_{\vec{\Gamma}}\to(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}})_{0,\operatorname{val}(v),\mathcal{D}_{v}}.\]
For \(e\in E(\Gamma)\), \(\operatorname{pr}_{e}\) denotes the canonical projection
\[\operatorname{pr}_{e}\colon\overline{\mathcal{M}}_{\vec{\Gamma}}\to\overline{\mathcal{M}}^{\alpha,\beta}_{k_{e}}\]
where \(\alpha\) and \(\beta\) are the labels of the endpoints of \(e\).
3. For \(1\leq i\leq n\), define
\[\operatorname{ev}_{\vec{\Gamma},i}\colon\overline{\mathcal{M}}_{\vec{\Gamma}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{s_{i}}}\]
to be the composition \(\operatorname{ev}_{i}\circ\operatorname{pr}_{s_{i}}\).

Let \(\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}\) be an \((\mathsf{L},\vec{V})\)-decorated graph. We write the pull-back to \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) of the virtual normal bundle of the fixed component \(\mathcal{F}_{\vec{\Gamma}}:=[\overline{\mathcal{M}}_{\vec{\Gamma}}/\operatorname{Aut}(\vec{\Gamma})]\) (see Proposition 4.6) as \(N^{\operatorname{vir}}_{\vec{\Gamma}}\).
For any point \([f\colon(C,\mathbf{p})\to\mathbb{X}_{\mathsf{L}}(\vec{V})]\) on the moduli space \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n,\mathcal{D}}\), we have the _tangent-obstruction exact sequence_
\[0\to\operatorname{Aut}(C,\mathbf{p})\to\operatorname{Def}(f)\to T^{1}_{\vec{\Gamma}}\to\operatorname{Def}(C,\mathbf{p})\to\operatorname{Ob}(f)\to T^{2}_{\vec{\Gamma}}\to 0\]
where \(T^{1}_{\vec{\Gamma}}\) is the tangent space and \(T^{2}_{\vec{\Gamma}}\) is the obstruction space at \([f]\). Collecting on \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) the spaces appearing in the above sequence gives rise to \(\mathbb{T}\)-equivariant sheaves \(\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}\), \(\operatorname{Def}(f)_{\vec{\Gamma}}\), \(\mathcal{T}^{1}_{\vec{\Gamma}}\), \(\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}\), \(\operatorname{Ob}(f)_{\vec{\Gamma}}\) and \(\mathcal{T}^{2}_{\vec{\Gamma}}\) respectively, and we have the exact sequence of sheaves:
\[0\to\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}\to\operatorname{Def}(f)_{\vec{\Gamma}}\to\mathcal{T}^{1}_{\vec{\Gamma}}\to\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}\to\operatorname{Ob}(f)_{\vec{\Gamma}}\to\mathcal{T}^{2}_{\vec{\Gamma}}\to 0.\]
All arrows in this sequence are \(\mathbb{T}\)-equivariant. By taking the moving parts we obtain the exact sequence
\[0\to\operatorname{Aut}(C,\mathbf{p})^{\operatorname{mov}}_{\vec{\Gamma}}\to\operatorname{Def}(f)^{\operatorname{mov}}_{\vec{\Gamma}}\to\mathcal{T}^{1,\operatorname{mov}}_{\vec{\Gamma}}\to\operatorname{Def}(C,\mathbf{p})^{\operatorname{mov}}_{\vec{\Gamma}}\to\operatorname{Ob}(f)^{\operatorname{mov}}_{\vec{\Gamma}}\to\mathcal{T}^{2,\operatorname{mov}}_{\vec{\Gamma}}\to 0.\]
Since \(N^{\operatorname{vir}}_{\vec{\Gamma}}=\mathcal{T}^{1,\operatorname{mov}}_{\vec{\Gamma}}\ominus\mathcal{T}^{2,\operatorname{mov}}_{\vec{\Gamma}}\), it follows that
\[e_{\mathbb{T}}(N^{\operatorname{vir}}_{\vec{\Gamma}})=\frac{e_{\mathbb{T}}(\operatorname{Def}(C,\mathbf{p})^{\operatorname{mov}}_{\vec{\Gamma}})\cdot e_{\mathbb{T}}(\operatorname{Def}(f)^{\operatorname{mov}}_{\vec{\Gamma}})}{e_{\mathbb{T}}(\operatorname{Aut}(C,\mathbf{p})^{\operatorname{mov}}_{\vec{\Gamma}})\cdot e_{\mathbb{T}}(\operatorname{Ob}(f)^{\operatorname{mov}}_{\vec{\Gamma}})}. \tag{4.1}\]
The vector bundles \(\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}\), \(\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}\) and \(\operatorname{Def}(f)_{\vec{\Gamma}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}}\) over \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) can be described as follows [25]:
\[\begin{split}\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}&=\bigoplus_{\begin{subarray}{c}(e,v)\in F(\Gamma);\\ |\operatorname{adj}(v)|=1,|\operatorname{mark}(v)|=0,\mathcal{D}_{v}=0\end{subarray}}\operatorname{pr}_{e}^{*}\mathcal{L}_{e,v}^{\vee},\\ \operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}&=\bigoplus_{\begin{subarray}{c}v\in V(\Gamma);\\ \operatorname{adj}(v)=\{e,e^{\prime}\},|\operatorname{mark}(v)|=0,\mathcal{D}_{v}=0\end{subarray}}\operatorname{pr}_{e}^{*}\left(\mathcal{L}_{e,v}^{\vee}\otimes\mathcal{L}_{e^{\prime},v}^{\vee}\right)\oplus\bigoplus_{\begin{subarray}{c}(e,v)\in F(\Gamma);\\ \operatorname{val}(v)\geq 3\text{ or }\mathcal{D}_{v}\neq 0\end{subarray}}\operatorname{pr}_{v}^{*}\mathcal{L}_{v,e}^{\vee}\otimes\operatorname{pr}_{e}^{*}\mathcal{L}_{e,v}^{\vee},\\ \operatorname{Def}(f)_{\vec{\Gamma}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}}&=\bigoplus_{\begin{subarray}{c}v\in V(\Gamma);\\ \operatorname{val}(v)\geq 3\text{ or }\mathcal{D}_{v}\neq 0\end{subarray}}\operatorname{pr}_{v}^{*}\mathbb{R}g_{v*}F_{v}^{*}\iota_{\alpha_{v}}^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\oplus\bigoplus_{e\in E(\Gamma)}\operatorname{pr}_{e}^{*}\mathbb{R}g_{e*}F_{e}^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\\ &\quad\ominus\bigoplus_{\begin{subarray}{c}v\in V(\Gamma);\\ |\operatorname{adj}(v)|=2,|\operatorname{mark}(v)|=0,\mathcal{D}_{v}=0\end{subarray}}\operatorname{pr}_{v}^{*}\iota_{\alpha_{v}}^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\ominus\bigoplus_{\begin{subarray}{c}(e,v)\in F(\Gamma);\\ \operatorname{val}(v)\geq 3\text{ or }\mathcal{D}_{v}\neq 0\end{subarray}}\operatorname{pr}_{e}^{*}\operatorname{ev}_{e,v}^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\end{split} \tag{4.2}\]
where \(\operatorname{ev}_{e,v}\colon\overline{\mathcal{M}}^{\alpha,\beta}_{k_{e}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})\) is the evaluation map associated to the flag \((e,v)\in F(\Gamma)\), with \(\alpha,\beta\) the labels of the endpoints of \(e\), and \(g_{v}\) and \(F_{v}\) (resp. \(g_{e}\) and \(F_{e}\)) denote the universal curve and the universal map over \((\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha_{v}})_{0,\operatorname{val}(v),\mathcal{D}_{v}}\) (resp. \(\overline{\mathcal{M}}_{k_{e}}^{\alpha,\beta}\)).

Let \(\mathbf{f}\) be a \(\operatorname{Frac}(H_{\mathbb{T}}^{*}(\operatorname{pt}))[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![x]\!]\)-valued point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\). By definition, \(\mathbf{f}\) is an element of \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))_{\operatorname{loc}}(\!(z^{-1})\!)[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![x]\!]\) and can be written as
\[\mathbf{f}=-1\cdot z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\cdot\operatorname{ev}_{1*}\left[\frac{\prod_{i=2}^{n+1}\operatorname{ev}_{i}^{*}\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cap[\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n+1,\mathcal{D}}]^{\operatorname{vir}}\right]\]
where \(\mathbf{t}(z)\in H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))_{\operatorname{loc}}[z][\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![x]\!]\) with \(\mathbf{t}(z)|_{(\mathcal{Q},x)=0}=0\). Using the virtual localization formula (Theorem 2.1), we have
\[\iota_{\alpha}^{*}\mathbf{f}=-1\cdot z+\iota_{\alpha}^{*}\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\sum_{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}}\iota_{\alpha}^{*}\operatorname{Cont}(\vec{\Gamma}) \tag{4.3}\]
where
\[\operatorname{Cont}(\vec{\Gamma}):=\frac{1}{|\operatorname{Aut}(\vec{\Gamma})|}\cdot\operatorname{ev}_{1*}i_{\vec{\Gamma}*}\left[i_{\vec{\Gamma}}^{*}\left(\frac{\prod_{i=2}^{n+1}\operatorname{ev}_{i}^{*}\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\right)e_{\mathbb{T}}\left(N_{\vec{\Gamma}}^{\operatorname{vir}}\right)^{-1}\cap[\overline{\mathcal{M}}_{\vec{\Gamma}}]^{\operatorname{vir}}\right].\]
In order to compute the right-hand side, we divide \(\operatorname{DG}_{0,n+1,\mathcal{D}}\) into three parts.

**Definition 4.8**.: Let \(\alpha\in F_{\mathsf{L}}\).
Define
\[\begin{split}\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,0}&:=\left\{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}\colon\alpha_{s_{1}}\neq\alpha\right\},\\ \operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}&:=\left\{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}\colon\alpha_{s_{1}}=\alpha,\operatorname{val}(s_{1})=2,\mathcal{D}_{s_{1}}=0\right\},\\ \operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,2}&:=\operatorname{DG}_{0,n+1,\mathcal{D}}\setminus\left(\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,0}\cup\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}\right),\\ \operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha}&:=\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}\amalg\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,2}.\end{split}\]
These sets give decompositions
\[\operatorname{DG}_{0,n+1,\mathcal{D}}=\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,0}\amalg\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}\amalg\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,2},\qquad\operatorname{DG}_{0,n+1,\mathcal{D}}=\coprod_{\alpha\in F_{\mathsf{L}}}\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha}.\]
We call an element of \(\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,i}\) a _graph of type \((\alpha,i)\)_ or an \((\alpha,i)\)_-type graph_.

**Proposition 4.9**.: _If \(\vec{\Gamma}\) is an \((\alpha,0)\)-type graph, its contribution \(\iota_{\alpha}^{*}\operatorname{Cont}(\vec{\Gamma})\) equals zero._

Proof.: Let \(\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,0}\) and set \(\beta=\alpha_{s_{1}}\). The evaluation map factors as \(\operatorname{ev}_{1}\circ i_{\vec{\Gamma}}=\iota_{\beta}\circ\operatorname{ev}_{\vec{\Gamma},1}\), and hence \(\iota_{\alpha}^{*}\operatorname{ev}_{1*}i_{\vec{\Gamma}*}=\iota_{\alpha}^{*}\iota_{\beta*}\operatorname{ev}_{\vec{\Gamma},1*}\). Since \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}\cap\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}=\emptyset\), we have \(\iota_{\alpha}^{*}\iota_{\beta*}=0\).

In the following two subsections, we will compute the contributions of the graphs of type \((\alpha,1)\) and type \((\alpha,2)\) separately.

### Contribution of the \((\alpha,1)\)-type graphs

Let \(\vec{\Gamma}=(\Gamma,\vec{\alpha},\vec{k},\vec{\mathcal{D}},\vec{s})\) be a decorated graph of type \((\alpha,1)\) satisfying \(|\operatorname{adj}(s_{1})|=|\operatorname{mark}(s_{1})|=1\). The graph \(\vec{\Gamma}\) is decomposed into two trees, \(\vec{\Gamma}_{1}\) and \(\vec{\Gamma}_{2}=\vec{\Gamma}_{2,k}^{\alpha,\beta}\). Here \(\beta\) denotes the label of the vertex adjacent to \(s_{1}\), \(k\) denotes the degree of the edge attached to \(s_{1}\), and \(\vec{\Gamma}_{2,k}^{\alpha,\beta}\) denotes the decorated tree determined by the following data:
* a \(2\)-marked tree consisting of two vertices \(v_{1}\) and \(v_{2}\), and one edge;
* \(\alpha_{v_{1}}=\alpha\) and \(\alpha_{v_{2}}=\beta\);
* the degree of the edge equals \(k\);
* \(\mathcal{D}_{v_{i}}=0\) for \(i=1,2\);
* \(s_{i}=v_{i}\) for \(i=1,2\).
The graph \(\vec{\Gamma}_{1}\in\operatorname{DG}_{0,n+1,\mathcal{D}-k\cdot d_{\alpha\beta}}^{\beta}\) is obtained by removing from \(\vec{\Gamma}\) the vertex \(s_{1}\) and the edge attached to it, and adding the first marking on the vertex adjacent to \(s_{1}\) in \(\vec{\Gamma}\).
Conversely, if we are given \(\beta\in\operatorname{adj}(\alpha)\), \(k\in\mathbb{N}\) and \(\vec{\Gamma}_{1}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\beta}\), we can recover \(\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}+k\cdot d_{\alpha\beta}}^{\alpha,1}\) by connecting \(\vec{\Gamma}_{2,k}^{\alpha,\beta}\) and \(\vec{\Gamma}_{1}\), clutching the second marking of \(\vec{\Gamma}_{2,k}^{\alpha,\beta}\) with the first marking of \(\vec{\Gamma}_{1}\). Summarizing, we have the following bijection.

**Lemma 4.10**.: _There is a bijection:_
\[\Phi_{1}\colon\coprod_{\beta\in\operatorname{adj}(\alpha)}\coprod_{k\in\mathbb{N}}\coprod_{n=0}^{\infty}\coprod_{\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))}\operatorname{DG}^{\beta}_{0,n+1,\mathcal{D}}\times\{\vec{\Gamma}^{\alpha,\beta}_{2,k}\}\overset{\sim}{\longrightarrow}\coprod_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\operatorname{DG}^{\alpha,1}_{0,n+1,\mathcal{D}}.\]

We let \(\vec{\Gamma}\in\operatorname{DG}^{\alpha,1}_{0,n+1,\mathcal{D}}\) and write \(\vec{\Gamma}=\Phi_{1}(\vec{\Gamma}_{1},\vec{\Gamma}_{2})\) with \(\vec{\Gamma}_{2}=\vec{\Gamma}^{\alpha,\beta}_{2,k}\). By definition we have \(\overline{\mathcal{M}}_{\vec{\Gamma}_{2}}=\overline{\mathcal{M}}^{\alpha,\beta}_{k}\) for some \(\beta\in\operatorname{adj}(\alpha)\) and \(k\in\mathbb{N}\). We denote the canonical morphism \(\overline{\mathcal{M}}^{\alpha,\beta}_{k}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta}\) by \(\pi\). There exist two natural morphisms
\[\overline{\mathcal{M}}_{\vec{\Gamma}}\to\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\qquad\text{and}\qquad\overline{\mathcal{M}}_{\vec{\Gamma}}\to\overline{\mathcal{M}}_{\vec{\Gamma}_{2}}\]
which are denoted by \(\operatorname{pr}_{1}\) and \(\operatorname{pr}_{2}\) respectively. By definition, \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) fits into the Cartesian fiber diagram
\[\overline{\mathcal{M}}_{\vec{\Gamma}}=\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\times_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}}\overline{\mathcal{M}}^{\alpha,\beta}_{k}, \tag{4.4}\]
where the fiber product is taken with respect to \(\operatorname{ev}_{\vec{\Gamma}_{1},1}\colon\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\) and \(\operatorname{ev}_{2}\colon\overline{\mathcal{M}}^{\alpha,\beta}_{k}\to\mathbb{X}_{\mathsf{L}}(\vec{V})_{\beta}\).

**Lemma 4.11**.: _Let \(\vec{\Gamma}\in\operatorname{DG}^{\alpha,1}_{0,n+1,\mathcal{D}}\) and let \((\vec{\Gamma}_{1},\vec{\Gamma}_{2}=\vec{\Gamma}^{\alpha,\beta}_{2,k})\) be as above._
1. \(|\operatorname{Aut}(\vec{\Gamma})|=|\operatorname{Aut}(\vec{\Gamma}_{1})|\)_._
2. \(i^{*}_{\vec{\Gamma}}\psi_{1}=-\operatorname{pr}^{*}_{2}\pi^{*}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})/k\)_._
3. \(i^{*}_{\vec{\Gamma}}\psi_{j}=\operatorname{pr}^{*}_{1}i^{*}_{\vec{\Gamma}_{1}}\psi_{j}\) _for_ \(2\leq j\leq n+1\)_._

Proof.: The first equality is obvious. For \(1\leq i\leq n+1\), let \(\mathcal{L}_{i}\) (resp. \(\mathcal{L}^{\prime}_{i}\), \(\mathcal{L}^{\prime\prime}_{i}\)) denote the \(i\)-th universal cotangent line bundle over \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n+1,\mathcal{D}}\) (resp. \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n+1,\mathcal{D}-k\cdot d_{\alpha\beta}}\), \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,2,k\cdot d_{\alpha\beta}}\)). There are fiber diagrams identifying \(i^{*}_{\vec{\Gamma}}\mathcal{L}_{1}\) with \(\operatorname{pr}^{*}_{2}i^{*}_{\vec{\Gamma}_{2}}\mathcal{L}^{\prime\prime}_{1}\); these give the second formula. The third one follows from the analogous fiber diagram identifying \(i^{*}_{\vec{\Gamma}}\mathcal{L}_{i}\) with \(\operatorname{pr}^{*}_{1}i^{*}_{\vec{\Gamma}_{1}}\mathcal{L}^{\prime}_{i}\) for \(i=2,\dots,n+1\).

**Lemma 4.12**.: _Let \(\vec{\Gamma}\) and \((\vec{\Gamma}_{1},\vec{\Gamma}_{2})\) be as in Lemma 4.11._
_For \(\omega\in H^{*}(\overline{\mathcal{M}}_{\vec{\Gamma}_{1}})\) and \(\eta\in H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha\cup\beta})\), it holds that_
\[\iota_{\alpha}^{*}\operatorname{ev}_{1*}i_{\vec{\Gamma}*}\left[(\operatorname{pr}_{1}^{*}\omega\cdot\operatorname{pr}_{2}^{*}\pi^{*}\eta)\cap\left[\overline{\mathcal{M}}_{\vec{\Gamma}}\right]^{\operatorname{vir}}\right]=\frac{1}{k}\cdot e_{\mathbb{T}}\left(N_{\alpha}\right)\cdot p_{\alpha\cup\beta,\alpha*}\left[\eta\cdot p_{\alpha\cup\beta,\beta}^{*}\left(e_{\mathbb{T}}\left(N_{\beta}\right)^{-1}\cdot\iota_{\beta}^{*}\operatorname{ev}_{1*}i_{\vec{\Gamma}_{1}*}\left(\omega\cap\left[\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\right]^{\operatorname{vir}}\right)\right)\right].\]

Proof.: Since \(\operatorname{ev}_{1}\circ i_{\vec{\Gamma}}=\iota_{\alpha}\circ p_{\alpha\cup\beta,\alpha}\circ\pi\circ\operatorname{pr}_{2}\) and \(\iota_{\alpha}^{*}\iota_{\alpha*}=e_{\mathbb{T}}(N_{\alpha})\), we have
\[\iota_{\alpha}^{*}\operatorname{ev}_{1*}i_{\vec{\Gamma}*}=\iota_{\alpha}^{*}\iota_{\alpha*}p_{\alpha\cup\beta,\alpha*}\pi_{*}\operatorname{pr}_{2*}=e_{\mathbb{T}}\left(N_{\alpha}\right)\cdot p_{\alpha\cup\beta,\alpha*}\pi_{*}\operatorname{pr}_{2*}.\]
From the diagram (4.4) it follows that
\[\operatorname{pr}_{2*}\operatorname{pr}_{1}^{*}=\pi^{*}p_{\alpha\cup\beta,\beta}^{*}\operatorname{ev}_{\vec{\Gamma}_{1},1*}.\]
Since \(\operatorname{ev}_{1}\circ i_{\vec{\Gamma}_{1}}=\iota_{\beta}\circ\operatorname{ev}_{\vec{\Gamma}_{1},1}\) and \(\iota_{\beta}^{*}\iota_{\beta*}=e_{\mathbb{T}}(N_{\beta})\), we have
\[\operatorname{ev}_{\vec{\Gamma}_{1},1*}=e_{\mathbb{T}}\left(N_{\beta}\right)^{-1}\cdot\iota_{\beta}^{*}\iota_{\beta*}\operatorname{ev}_{\vec{\Gamma}_{1},1*}=e_{\mathbb{T}}\left(N_{\beta}\right)^{-1}\cdot\iota_{\beta}^{*}\operatorname{ev}_{1*}i_{\vec{\Gamma}_{1}*}.\]
Finally, from the construction we have
\[\operatorname{pr}_{1}^{*}\left[\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\right]^{\operatorname{vir}}=k\cdot\left[\overline{\mathcal{M}}_{\vec{\Gamma}}\right]^{\operatorname{vir}}.\]
The desired equality follows from these equations and the projection formula.

**Lemma 4.13**.: _Let \(\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}\) and let \((\vec{\Gamma}_{1},\vec{\Gamma}_{2})\) be as in Lemma 4.11. Assume that \(\vec{\Gamma}\neq\vec{\Gamma}_{1,k}^{\alpha,\beta},\vec{\Gamma}_{2,k}^{\alpha,\beta}\)._
_Then the following equality holds:_
\[\frac{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}(N_{\vec{\Gamma}_{1}}^{\operatorname{vir}})}{e_{\mathbb{T}}(N_{\vec{\Gamma}}^{\operatorname{vir}})}=\frac{k}{-\operatorname{pr}_{2}^{*}\pi^{*}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})-k\operatorname{pr}_{1}^{*}i_{\vec{\Gamma}_{1}}^{*}\psi_{1}}\cdot\operatorname{pr}_{2}^{*}\pi^{*}\left[\frac{p_{\alpha\cup\beta,\beta}^{*}e_{\mathbb{T}}(N_{\beta})}{p_{\alpha\cup\beta,\alpha}^{*}e_{\mathbb{T}}(N_{\alpha})}\cdot C_{\alpha,\beta}(k)\right].\]
_Here we use the notation \(C_{\alpha,\beta}(k)\) introduced in Theorem 4.2._

Proof.: Thanks to the formula (4.1), we can divide the left-hand side into three parts:
\[\frac{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}(N_{\vec{\Gamma}_{1}}^{\operatorname{vir}})}{e_{\mathbb{T}}(N_{\vec{\Gamma}}^{\operatorname{vir}})}=\frac{e_{\mathbb{T}}(\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}})}{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}(\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}_{1}}^{\operatorname{mov}})}\cdot\frac{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}(\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}_{1}}^{\operatorname{mov}})}{e_{\mathbb{T}}(\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}})}\cdot\frac{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}(\operatorname{Def}(f)_{\vec{\Gamma}_{1}}^{\operatorname{mov}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}_{1}}^{\operatorname{mov}})}{e_{\mathbb{T}}(\operatorname{Def}(f)_{\vec{\Gamma}}^{\operatorname{mov}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}}^{\operatorname{mov}})}. \tag{4.5}\]
We will compute them separately. From (4.2), we can see that \(\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}\cong\operatorname{pr}_{1}^{*}\operatorname{Aut}(C,\mathbf{p})_{\vec{\Gamma}_{1}}^{\operatorname{mov}}\) and
\[\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}}^{\operatorname{mov}}\ominus\operatorname{pr}_{1}^{*}\operatorname{Def}(C,\mathbf{p})_{\vec{\Gamma}_{1}}^{\operatorname{mov}}=\operatorname{pr}_{2}^{*}i_{\vec{\Gamma}_{2}}^{*}\mathcal{L}_{2}^{\prime\prime\vee}\otimes\operatorname{pr}_{1}^{*}i_{\vec{\Gamma}_{1}}^{*}\mathcal{L}_{1}^{\prime\vee}.\]
(We use the notation in the proof of Lemma 4.11.) Hence the first part of (4.5) equals \(1\), and the second one is equal to
\[\frac{1}{-\operatorname{pr}_{2}^{*}i_{\vec{\Gamma}_{2}}^{*}\psi_{1}-\operatorname{pr}_{1}^{*}i_{\vec{\Gamma}_{1}}^{*}\psi_{1}}=\frac{k}{-\operatorname{pr}_{2}^{*}\pi^{*}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})-k\operatorname{pr}_{1}^{*}i_{\vec{\Gamma}_{1}}^{*}\psi_{1}}.\]
Here we use Lemma 4.11 (2). Let \(v\in V(\Gamma)\) be the vertex adjacent to \(s_{1}\), and write the unique edge joining \(s_{1}\) and \(v\) as \(e\in E(\Gamma)\). We write the universal curve over \(\overline{\mathcal{M}}_{\vec{\Gamma}_{2}}=\overline{\mathcal{M}}_{k}^{\alpha,\beta}\) as \(g\colon\mathcal{C}_{k}^{\alpha,\beta}\to\overline{\mathcal{M}}_{k}^{\alpha,\beta}\), with universal map \(F\colon\mathcal{C}_{k}^{\alpha,\beta}\to\mathbb{X}_{\mathsf{L}}(\vec{V})\) and universal sections \(s_{1},s_{2}\) of \(g\) corresponding to the two markings. Again from (4.2), we have
\[\left(\operatorname{Def}(f)_{\vec{\Gamma}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}}\right)\ominus\operatorname{pr}_{1}^{*}\left(\operatorname{Def}(f)_{\vec{\Gamma}_{1}}\ominus\operatorname{Ob}(f)_{\vec{\Gamma}_{1}}\right)=\operatorname{pr}_{2}^{*}\mathbb{R}g_{*}F^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\ominus\operatorname{pr}_{2}^{*}s_{2}^{*}F^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\]
where \(\mathbb{R}g_{*}\) denotes the \(K\)-theoretic pushforward.
Since \(T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\) fits into exact sequences relating it to \(\bigoplus_{i=1}^{N}V_{i}\otimes L_{i}\), \(p^{*}T_{B}\) and trivial bundles \(\mathcal{O}\), and since \(\mathbb{T}\) acts trivially on \(p^{*}T_{B}\) and \(\mathcal{O}\), we have
\[\left(\mathbb{R}g_{*}F^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\right)^{\operatorname{mov}}=\bigoplus_{i=1}^{N}\left(\mathbb{R}g_{*}F^{*}(V_{i}\otimes L_{i})\right)^{\operatorname{mov}}.\]
We set
\[\begin{split}\mathcal{U}_{\alpha}&=\mathcal{C}_{k}^{\alpha,\beta}\setminus\operatorname{Im}(s_{2}),\qquad g_{\alpha}\colon\mathcal{U}_{\alpha}\hookrightarrow\mathcal{C}_{k}^{\alpha,\beta}\xrightarrow{g}\overline{\mathcal{M}}_{k}^{\alpha,\beta},\\ \mathcal{U}_{\beta}&=\mathcal{C}_{k}^{\alpha,\beta}\setminus\operatorname{Im}(s_{1}),\qquad g_{\beta}\colon\mathcal{U}_{\beta}\hookrightarrow\mathcal{C}_{k}^{\alpha,\beta}\xrightarrow{g}\overline{\mathcal{M}}_{k}^{\alpha,\beta},\\ \mathcal{U}_{\alpha\beta}&=\mathcal{U}_{\alpha}\times_{\mathcal{C}_{k}^{\alpha,\beta}}\mathcal{U}_{\beta},\qquad g_{\alpha\beta}\colon\mathcal{U}_{\alpha\beta}\hookrightarrow\mathcal{C}_{k}^{\alpha,\beta}\xrightarrow{g}\overline{\mathcal{M}}_{k}^{\alpha,\beta}.\end{split}\]
Note that \(\mathcal{U}_{\alpha}\cong\mathcal{L}_{1}^{\vee}|_{\overline{\mathcal{M}}_{k}^{\alpha,\beta}}\) and \(\mathcal{U}_{\beta}\cong\mathcal{L}_{2}^{\vee}|_{\overline{\mathcal{M}}_{k}^{\alpha,\beta}}\), and \(\mathcal{U}=\{\mathcal{U}_{\alpha},\mathcal{U}_{\beta}\}\) is an open covering of \(\mathcal{C}_{k}^{\alpha,\beta}\). It is easy to see that
\[\begin{split}g_{\alpha*}(F|_{\mathcal{U}_{\alpha}})^{*}(V_{i}\otimes L_{i})&=\pi^{*}p_{\alpha\cup\beta}^{*}V_{i}\otimes\pi^{*}p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}L_{i}\otimes\bigoplus_{c=0}^{\infty}\left(\mathcal{L}_{1}^{\vee}|_{\overline{\mathcal{M}}_{k}^{\alpha,\beta}}\right)^{\otimes c},\\ g_{\beta*}(F|_{\mathcal{U}_{\beta}})^{*}(V_{i}\otimes L_{i})&=\pi^{*}p_{\alpha\cup\beta}^{*}V_{i}\otimes\pi^{*}p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}L_{i}\otimes\bigoplus_{c=0}^{\infty}\left(\mathcal{L}_{2}^{\vee}|_{\overline{\mathcal{M}}_{k}^{\alpha,\beta}}\right)^{\otimes c},\\ g_{\alpha\beta*}(F|_{\mathcal{U}_{\alpha\beta}})^{*}(V_{i}\otimes L_{i})&=\pi^{*}p_{\alpha\cup\beta}^{*}V_{i}\otimes\pi^{*}p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}L_{i}\otimes\bigoplus_{c=-\infty}^{\infty}\left(\mathcal{L}_{1}^{\vee}|_{\overline{\mathcal{M}}_{k}^{\alpha,\beta}}\right)^{\otimes c}.\end{split}\]
Hence we have
\[e_{\mathbb{T}}\left(\mathbb{R}g_{*}F^{*}(V_{i}\otimes L_{i})\right)=e_{\mathbb{T}}\left(g_{\alpha*}(F|_{\mathcal{U}_{\alpha}})^{*}(V_{i}\otimes L_{i})\oplus g_{\beta*}(F|_{\mathcal{U}_{\beta}})^{*}(V_{i}\otimes L_{i})\ominus g_{\alpha\beta*}(F|_{\mathcal{U}_{\alpha\beta}})^{*}(V_{i}\otimes L_{i})\right)=\pi^{*}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}\frac{\prod_{c=-\infty}^{k\cdot u_{i}(d_{\alpha\beta})}\left(\delta+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}-\frac{c}{k}c_{1}(L_{\alpha,\beta})\right)}{\prod_{c=-\infty}^{-1}\left(\delta+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}-\frac{c}{k}c_{1}(L_{\alpha,\beta})\right)}.\]
The moving part can be described as
\[e_{\mathbb{T}}\left(\bigoplus_{i=1}^{N}\left(\mathbb{R}g_{*}F^{*}(V_{i}\otimes L_{i})\right)^{\operatorname{mov}}\right)=\pi^{*}\left[\frac{p_{\alpha\cup\beta,\alpha}^{*}e_{\mathbb{T}}(N_{\alpha})}{C_{\alpha,\beta}(k)}\right].\]
On the other hand, we have
\[e_{\mathbb{T}}\left(\left(s_{2}^{*}F^{*}T_{\mathbb{X}_{\mathsf{L}}(\vec{V})}\right)^{\operatorname{mov}}\right)=e_{\mathbb{T}}\left(\bigoplus_{i=1}^{N}s_{2}^{*}F^{*}(V_{i}\otimes L_{i})^{\operatorname{mov}}\right)=\pi^{*}\prod_{i\notin\beta}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}\left(\delta+p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}u_{i}\right)=\pi^{*}p_{\alpha\cup\beta,\beta}^{*}e_{\mathbb{T}}(N_{\beta}).\]
These computations give the desired formula.

By performing calculations similar to those in the previous proof, we can establish the following formulas.

**Lemma 4.14**.: _Let \(\beta\in\operatorname{adj}(\alpha)\) and \(k\in\mathbb{N}\). We have_
\[\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}_{1,k}^{\alpha,\beta}}(z)=p_{\alpha\cup\beta,\alpha*}\left[\frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot\left(-\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right],\]
\[\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}_{2,k}^{\alpha,\beta}}(z)=p_{\alpha\cup\beta,\alpha*}\left[\frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}\mathbf{t}\left(\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right].\]

Using the above lemmas, we can compute the contributions of the graphs of type \((\alpha,1)\).

**Proposition 4.15**.:
\[\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\sum_{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}}\iota_{\alpha}^{*}\operatorname{Cont}(\vec{\Gamma})=\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}p_{\alpha\cup\beta,\alpha*}\left[q^{k\cdot d_{\alpha\beta}}\cdot\frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}\mathbf{f}\left(z=\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right].\]

Proof.: To begin with, we rewrite the left-hand side using the bijection \(\Phi_{1}\) as follows:
\[\begin{split}&\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\sum_{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}}\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}}(z)\\ &=\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}q^{k\cdot d_{\alpha\beta}}\left(\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}_{1,k}^{\alpha,\beta}}(z)+\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}_{2,k}^{\alpha,\beta}}(z)\right)\\ &\quad+\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}q^{k\cdot d_{\alpha\beta}}\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\sum_{\vec{\Gamma}_{1}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\beta}}\iota_{\alpha}^{*}\operatorname{Cont}_{\Phi_{1}(\vec{\Gamma}_{1},\vec{\Gamma}_{2,k}^{\alpha,\beta})}(z).\end{split}\]
By using Lemma 4.11, Lemma 4.12 and Lemma 4.13, we have
\[\iota_{\alpha}^{*}\operatorname{Cont}_{\vec{\Gamma}}(z)=p_{\alpha\cup\beta,\alpha*}\left[\frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}\operatorname{Cont}_{\vec{\Gamma}_{1}}\left(z=\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right]\]
where \(\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,1}\) and \(\Phi_{1}(\vec{\Gamma}_{1},\vec{\Gamma}_{2,k}^{\alpha,\beta})=\vec{\Gamma}\). By combining the above equations with Lemma 4.14, we obtain the desired equality.

### Contribution of the \((\alpha,2)\)-type graphs

The contribution of the \((\alpha,2)\)-type graphs can be computed as follows.
**Proposition 4.16** ([11]).: _It holds that_
\[\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\sum_{\vec{\Gamma}\in\operatorname{DG}_{0,n+1,\mathcal{D}}^{\alpha,2}}\iota_{\alpha}^{*}\operatorname{Cont}(\vec{\Gamma})=\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\iota_{\alpha*}\mathcal{D}}}{n!}\cdot e_{\mathbb{T}}(N_{\alpha})\cdot\operatorname{ev}_{1*}\left[\frac{\prod_{i=2}^{n+1}\operatorname{ev}_{i}^{*}\mathbf{t}^{\alpha}(\psi_{i})}{-z-\psi_{1}}\cdot e_{\mathbb{T}}((N_{\alpha})_{0,n+1,\mathcal{D}})^{-1}\cap\left[(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{0,n+1,\mathcal{D}}\right]^{\operatorname{vir}}\right]\]
_where_
\[\mathbf{t}^{\alpha}(z)=\iota_{\alpha}^{*}\mathbf{t}(z)+\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}p_{\alpha\cup\beta,\alpha*}\left[q^{k\cdot d_{\alpha\beta}}\cdot\frac{C_{\alpha,\beta}(k)}{-kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}\mathbf{f}\left(\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right].\]

Proof.: This follows from the argument in [11, Section 3.2]. We only give a sketch of the proof. Let \(\vec{\Gamma}=(\Gamma,\vec{\alpha},\vec{k},\vec{\mathcal{D}},\vec{s})\) be a decorated graph of type \((\alpha,2)\), and set \(m=\operatorname{val}(s_{1})-1\). Then \(\vec{\Gamma}\) can be decomposed into the graph \(\vec{\Gamma}_{1}\) and \(m\) \((\alpha,1)\)-type graphs \(\vec{\Gamma}_{2},\ldots,\vec{\Gamma}_{m+1}\). Here \(\vec{\Gamma}_{1}\) is the decorated graph given by the following data:
* a tree \(\Gamma_{1}\) consisting of one vertex \(v\) and \(m+1\) markings;
* \(\alpha_{v}=\alpha\) and \(\mathcal{D}_{v}\) equals the degree of the vertex \(s_{1}\in\Gamma\).
Let \(\vec{\Gamma}_{j}\in\operatorname{DG}_{0,n_{j}+1,\mathcal{D}_{j}}^{\alpha,1}\) for \(2\leq j\leq m+1\). By construction, \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) is the fiber product of \(\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\) and \(\prod_{j=2}^{m+1}\overline{\mathcal{M}}_{\vec{\Gamma}_{j}}\) over \((\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})^{m}\), where the morphism \(\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\to(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})^{m}\) is given by the evaluation maps \(\operatorname{ev}_{\vec{\Gamma}_{1},2},\ldots,\operatorname{ev}_{\vec{\Gamma}_{1},m+1}\) and the morphism \(\prod_{j=2}^{m+1}\overline{\mathcal{M}}_{\vec{\Gamma}_{j}}\to(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})^{m}\) is given by the evaluation maps \(\operatorname{ev}_{\vec{\Gamma}_{2},1},\ldots,\operatorname{ev}_{\vec{\Gamma}_{m+1},1}\). Hence the integral over \(\overline{\mathcal{M}}_{\vec{\Gamma}}\) can be computed as an integral over \(\overline{\mathcal{M}}_{\vec{\Gamma}_{1}}\) with inputs given by the integrals over \(\overline{\mathcal{M}}_{\vec{\Gamma}_{2}},\ldots,\overline{\mathcal{M}}_{\vec{\Gamma}_{m+1}}\). By taking the sum over all \((\alpha,1)\)-type decorated graphs, each input becomes the sum of the contributions of the \((\alpha,1)\)-type graphs, which equals \(\mathbf{t}^{\alpha}(\psi_{j})\); see Proposition 4.15.
The term \(e_{\mathbb{T}}((N_{\alpha})_{0,n+1,\mathcal{D}})^{-1}\) comes from the comparison of the Euler classes of the virtual normal bundles:
\[\frac{\prod_{j=2}^{n_{1}+1}\operatorname{pr}_{j}^{*}e_{\mathbb{T}}(N_{\vec{\Gamma}_{j}}^{\operatorname{vir}})}{e_{\mathbb{T}}(N_{\vec{\Gamma}}^{\operatorname{vir}})}=\left[\prod_{j}{}^{\prime}\frac{\operatorname{pr}_{j}^{*}\operatorname{ev}_{\vec{\Gamma}_{j},1}^{*}e_{\mathbb{T}}(N_{\alpha})}{-\operatorname{pr}_{1}^{*}i_{\vec{\Gamma}_{1}}^{*}\psi_{1,j}-\operatorname{pr}_{j}^{*}i_{\vec{\Gamma}_{j}}^{*}\psi_{j,1}}\right]\cdot\frac{1}{\operatorname{pr}_{1}^{*}e_{\mathbb{T}}\left((N_{\alpha})_{0,n_{1}+1,\mathcal{D}_{1}}\right)}\]
where the symbol \(\prod_{j}{}^{\prime}\) means taking the product over all \(2\leq j\leq n_{1}+1\) such that \((n_{j},\mathcal{D}_{j})\neq(1,0)\), \(\operatorname{pr}_{j}\colon\overline{\mathcal{M}}_{\vec{\Gamma}}\to\overline{\mathcal{M}}_{\vec{\Gamma}_{j}}\) denotes the canonical projection, and \(\psi_{j,i}\), \(2\leq j\leq m\) (resp. \(\psi_{1,i}\)) denotes the \(\mathbb{T}\)-equivariant first Chern class of the \(i\)-th universal cotangent line bundle for \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n_{j}+1,\mathcal{D}_{j}}\) (resp. \(\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,m+1,\mathcal{D}_{0}}\)).

### Proof of Theorem 4.2

We now prove Theorem 4.2. We write \(S=\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\). We first assume that \(\mathbf{f}\) is a \(\operatorname{Frac}(H_{\mathbb{T}}^{*}(\operatorname{pt}))[\![S]\!][\![x]\!]\)-valued point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\) and write
\[\mathbf{f}=-1\cdot z+\mathbf{t}(z)+\sum_{\begin{subarray}{c}n\geq 0,\mathcal{D}\in\operatorname{Eff}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\\ (n,\mathcal{D})\neq(0,0),(1,0)\end{subarray}}\frac{\mathcal{Q}^{\mathcal{D}}}{n!}\cdot\operatorname{ev}_{1*}\left[\frac{\prod_{i=2}^{n+1}\operatorname{ev}_{i}^{*}\mathbf{t}(\psi_{i})}{-z-\psi_{1}}\cap[\mathbb{X}_{\mathsf{L}}(\vec{V})_{0,n+1,\mathcal{D}}]^{\operatorname{vir}}\right]\]
where \(\mathbf{t}(z)\in H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))_{\operatorname{loc}}[z][\![S]\!][\![x]\!]\) with \(\mathbf{t}(z)|_{(\mathcal{Q},x)=0}=0\). By combining the equation (4.3), Proposition 4.9, Proposition 4.15 and Proposition 4.16, we can see that, via Laurent expansion at \(z=0\), \(\iota_{\alpha}^{*}\mathbf{f}\) can be interpreted as a \(\operatorname{Frac}(H_{\mathbb{T}}^{*}(\operatorname{pt}))[\![S]\!][\![x]\!]\)-valued point of the twisted cone \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha},(N_{\alpha},e_{\mathbb{T}}^{-1})}\) whose non-negative part as a \(z\)-series equals
\[\iota_{\alpha}^{*}\mathbf{f}-\operatorname{Prin}_{z=0}(\iota_{\alpha}^{*}\mathbf{f})=-1\cdot z+\mathbf{t}^{\alpha}(z), \tag{4.6}\]
which implies **(C3)**. Since the coefficients of \(\operatorname{Prin}_{z=0}(\iota_{\alpha}^{*}\mathbf{f})\) (as a formal power series in \((\mathcal{Q},x)\)) are all polynomials in \(z^{-1}\), the poles of \(\iota_{\alpha}^{*}\mathbf{f}\) other than the pole at \(z=0\) come from (4.6). This observation, together with the explicit formula for \(\mathbf{t}^{\alpha}(z)\) in Proposition 4.16, implies **(C1)** and **(C2)**.

Conversely, we assume that \(\mathbf{f}\in\mathcal{H}_{\mathbb{X}_{\mathsf{L}}(\vec{V})}^{\mathbb{T}}[\![S]\!][\![x]\!]\) satisfies \(\mathbf{f}|_{(\mathcal{Q},x)=0}=-1\cdot z\), **(C1)**, **(C2)** and **(C3)**.
It is enough to show that, under these assumptions, \(\mathbf{f}\) (or equivalently \(\{\iota_{\alpha}^{*}\mathbf{f}\}_{\alpha\in F_{\mathsf{L}}}\)) is uniquely determined by its non-negative part \(z^{-1}\operatorname{Prin}_{z=\infty}(z\mathbf{f})\). For \(\mathcal{D}\in\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\), we define the degree of \(\mathcal{Q}^{\mathcal{D}}\) by using the valuation introduced in Section 3.3, which can be assumed to be an integer, and define \(\deg(x_{i})=i\) for \(i\geq 1\). For any \((\mathcal{Q},x)\)-series \(\mathbf{g}\), we write the homogeneous part of \(\mathbf{g}\) of degree \(n\) as \((\mathbf{g})_{n}\). We proceed by induction on the degree in \((\mathcal{Q},x)\). We assume that we know \(\mathbf{f}_{\leq n}:=\sum_{m=0}^{n}\mathbf{f}_{m}\) for some integer \(n\). From **(C1)**, we can write
\[(\iota_{\alpha}^{*}\mathbf{f})_{n+1}=z^{-1}\operatorname{Prin}_{z=\infty}(z\cdot\iota_{\alpha}^{*}\mathbf{f})_{n+1}+\operatorname{Prin}_{z=0}(\iota_{\alpha}^{*}\mathbf{f})_{n+1}+\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}\operatorname{Prin}_{z=\frac{\lambda_{\alpha,\beta}}{k}}(\iota_{\alpha}^{*}\mathbf{f})_{n+1}.\]
On the right-hand side, the third term can be computed from \(\mathbf{f}_{\leq n}\) thanks to **(C2)**, while the second term is determined from
\[z^{-1}\operatorname{Prin}_{z=\infty}(z\cdot\iota_{\alpha}^{*}\mathbf{f})_{\leq n+1}+\sum_{\beta\in\operatorname{adj}(\alpha)}\sum_{k\in\mathbb{N}}\operatorname{Prin}_{z=\frac{\lambda_{\alpha,\beta}}{k}}(\iota_{\alpha}^{*}\mathbf{f})_{\leq n+1}\]
by **(C3)**. By repeating this procedure, we can completely determine \(\mathbf{f}\) from \(z^{-1}\operatorname{Prin}_{z=\infty}(z\mathbf{f})\) by using **(C1)**, **(C2)** and **(C3)**. This proves Theorem 4.2.

## 5. Mirror theorem for a product of projective bundles

In this section, we construct a twisted \(I\)-function for a product of projective bundles, each arising from a vector bundle. The proof is based on the proof of the mirror theorem for a projective bundle [21, Theorem 1.1]. This section is independent of the previous section. By combining Theorem 4.2 with the mirror theorem (Theorem 5.1), we will establish the main result in the next section.

### Statement

We begin with the following data:
* a smooth toric data \(\mathsf{L}=(\mathbb{L}^{\vee},D\colon\mathbb{Z}^{K}\to\mathbb{L}^{\vee},\omega)\) with \(\operatorname{rank}(\mathbb{L}^{\vee})=K\);
* a complex smooth projective variety \(B\);
* \(N\) vector bundles \(V_{1},\ldots,V_{K},W_{K+1},\ldots,W_{N}\) over \(B\) whose duals are globally generated;
* \(D_{K+1},\ldots,D_{N}\in\mathbb{L}^{\vee}\).
In this case \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is a fiber product of projective bundles \(\mathbb{P}(V_{1}),\ldots,\mathbb{P}(V_{K})\) over \(B\). Due to Proposition 3.4, its cohomology can be written as
\[H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\cong H^{*}_{\mathbb{T}}(B)[u_{1},\ldots,u_{K}]/\left(e_{\mathbb{T}}(V_{1}\otimes L_{1}),\ldots,e_{\mathbb{T}}(V_{K}\otimes L_{K})\right).\]
For \(K+1\leq i\leq N\), we define
\[u_{i}=-\lambda_{i}+\sum_{j=1}^{K}D_{j}^{\vee}(D_{i})\cdot(u_{j}+\lambda_{j})\]
where \(\{D_{i}^{\vee}\}_{i=1}^{K}\subset\mathbb{L}\) denotes the dual basis of \(\{D_{i}\}_{i=1}^{K}\).
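For illustration, suppose that \(K=1\) and that \(D_{2}=m\,D_{1}\) in \(\mathbb{L}^{\vee}\) for some integer \(m\) (a hypothetical special case chosen only to unwind the definition). Then \(D_{1}^{\vee}(D_{2})=m\), and the formula above gives
\[u_{2}=-\lambda_{2}+m\,(u_{1}+\lambda_{1}),\]
so the classes \(u_{i}\) for \(i>K\) are determined, up to equivariant shifts, by the classes \(u_{1},\ldots,u_{K}\).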
We construct the \(\mathbb{T}\)-equivariant vector bundle \(\mathcal{W}\to\mathbb{X}_{\mathsf{L}}(\vec{V})\) as follows:
\[\mathcal{W}:=\left.\left(\mathcal{U}_{\mathsf{L}}(\vec{V})\times_{B}\bigoplus_{i=K+1}^{N}W_{i}\right)\right/\mathbb{K}\]
where \(\mathbb{T}=(\mathbb{C}^{\times})^{N}\) acts on \(\bigoplus_{i=1}^{K}V_{i}\oplus\bigoplus_{i=K+1}^{N}W_{i}\) diagonally, and \(\mathbb{K}=\operatorname{Hom}(\mathbb{L}^{\vee},\mathbb{C}^{\times})\) acts on \(W_{i}\) via the character given by \(D_{i}\) for \(K+1\leq i\leq N\). Using the notation in Section 3.3, we can write \(\mathcal{W}=\bigoplus_{i=K+1}^{N}W_{i}(D_{i})\).

We set \(V=\bigoplus_{i=1}^{K}V_{i}\), \(W=\bigoplus_{i=K+1}^{N}W_{i}\), and let \(\mathbb{T}^{\prime}\) be a copy of \(\mathbb{T}\). We consider the diagonal action of \(\mathbb{T}^{\prime}\) on \(V\oplus W\), and write \(\mu_{1},\ldots,\mu_{N}\) for the equivariant parameters for \(\mathbb{T}^{\prime}\). We take a function \(I^{\mu}_{V\oplus W}(x,z)\in H^{*}_{\mathbb{T}^{\prime}}(V\oplus W)[z,z^{-1}][\![\operatorname{Eff}(B)]\!][\![x]\!]\) such that \((zI^{\mu}_{V\oplus W})|_{z\mapsto-z}\) is a \(H^{*}_{\mathbb{T}^{\prime}}(\operatorname{pt})[\![\operatorname{Eff}(B)]\!][\![x]\!]\)-valued point of \(\mathcal{L}_{V\oplus W,\mathbb{T}^{\prime}}\), where \(x\) is a set of formal parameters. We will prove the following.

**Theorem 5.1**.: _Let \(\mathsf{L}\), \(\vec{V}\), \(\vec{W}\), \(\mathcal{W}\) and \(I^{\mu}_{V\oplus W}\) be as above. Define the function \((I^{\mu}_{V\oplus W})_{\widehat{\operatorname{tw}}}(t,x,y,z)\) to be_
\[(I^{\mu}_{V\oplus W})_{\widehat{\operatorname{tw}}}(t,x,y,z)=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}(u_{i}+\delta+cz)}\cdot\frac{I^{u+D(\ell)z}_{V\oplus W}(x,z)}{\prod_{i=K+1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W_{i}\end{subarray}}(u_{i}+\delta+cz)}\]
_where \(\tilde{q}\) is a formal variable for \(\mathbb{C}[\![\mathbb{L}_{\mathrm{eff}}]\!]\), and \(I^{u+D(\ell)z}_{V\oplus W}\) denotes the function \(I^{\mu}_{V\oplus W}\) with \(\mu_{i}\) replaced by \(u_{i}+D_{i}(\ell)z\) for \(1\leq i\leq N\). Then \(-z(I^{\mu}_{V\oplus W})_{\widehat{\operatorname{tw}}}(t,x,y,-z)\) is a \(\operatorname{Frac}(H^{*}_{\mathbb{T}}(\operatorname{pt}))[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![t,x,y]\!]\)-valued point of the twisted Givental cone \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),(\mathcal{W},e_{\lambda}^{-1})}\)._

**Remark 5.2**.: A priori the function \((I^{\mu}_{V\oplus W})_{\widehat{\operatorname{tw}}}\) belongs to
\[H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))(\lambda)(\!(z)\!)[\![\operatorname{Eff}(B)\oplus\mathbb{L}_{\mathrm{eff}}]\!][\![t,x]\!].\]
We implicitly use the identification in (3.7) and interpret this function as an element of
\[H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V}))(\lambda)(\!(z)\!)[\![\operatorname{Eff}^{\operatorname{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![t,x,y]\!].\]

**Remark 5.3**.: For the convenience of the proof, we wish to distinguish between the torus \(\mathbb{T}\) acting on \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) and the torus acting on \(V\oplus W\), so we denote the latter by \(\mathbb{T}^{\prime}\). While this is a different setup from Theorem 1.5, the assertion remains the same.
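To see the shape of this hypergeometric modification in the simplest situation, consider the purely illustrative specialization where \(B\) is a point, \(K=N=1\), \(V_{1}=\mathcal{O}^{\oplus s}\) (so there is no twisting bundle \(\mathcal{W}\)) and \(I^{\mu}_{V\oplus W}=1\). Then \(\mathbb{X}_{\mathsf{L}}(\vec{V})=\mathbb{P}^{s-1}\), every Chern root \(\delta\) vanishes, and the modification reduces to the classical \(I\)-function of projective space:
\[e^{tu/z}\sum_{\ell\geq 0}\frac{\tilde{q}^{\ell}e^{\ell t}}{\prod_{c=1}^{\ell}(u+cz)^{s}}.\]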
### Big \(J\)-function

Before proceeding to the proof, we introduce a specific point on \(\mathcal{L}_{V\oplus W,\mathbb{T}^{\prime}}\). We take any \(\mathbb{C}\)-basis of \(H^{*}(B)\) and write it as \(\{\phi_{i}\}_{i\in I}\). Since \(H^{*}_{\mathbb{T}^{\prime}}(V\oplus W)\cong H^{*}(B)[\mu]\), we can take a \(\mathbb{C}\)-basis of \(H^{*}_{\mathbb{T}^{\prime}}(V\oplus W)[z]\) as follows:
\[\{\phi_{i}z^{n}\mu^{a}\colon i\in I,n\in\mathbb{Z}_{\geq 0},a\in\mathbb{Z}_{\geq 0}^{N}\}. \tag{5.1}\]
We write the coordinates on \(H^{*}_{\mathbb{T}^{\prime}}(V\oplus W)[z]\) associated to this basis as \(\tilde{\boldsymbol{\tau}}=\{\tau^{i}_{n,a}\}\). We also obtain the coordinate system \(\boldsymbol{\tau}=\{\tau^{i}_{n}=\tau^{i}_{n,0}\}\) on \(H^{*}(V\oplus W)[z]\).

**Definition 5.4**.: We define \(\mathbb{J}^{\mu}_{V\oplus W}(\tilde{\boldsymbol{\tau}},z)\in H^{*}_{\mathbb{T}^{\prime}}(B)[z,z^{-1}][\![\operatorname{Eff}(B)]\!][\![\tilde{\boldsymbol{\tau}}]\!]\) as follows:
\[z\mathbb{J}^{\mu}_{V\oplus W}(\tilde{\boldsymbol{\tau}},z)=z+\tilde{\boldsymbol{\tau}}(z)+\sum_{\begin{subarray}{c}n\geq 0,d\in\operatorname{Eff}(B)\\ (n,d)\neq(0,0),(1,0)\end{subarray}}\sum_{i\in I}\frac{Q^{d}}{n!}\left\langle\frac{\phi_{i}}{z-\psi},\tilde{\boldsymbol{\tau}}(\psi),\ldots,\tilde{\boldsymbol{\tau}}(\psi)\right\rangle_{0,n+1,d}^{V\oplus W,\mathbb{T}^{\prime}}\phi^{i}\]
where \(\{\phi^{i}\}_{i\in I}\) denotes the basis dual to \(\{\phi_{i}\}_{i\in I}\).
Proof.: The right-hand side is equal to
\[\Delta(t,\tilde{\boldsymbol{\tau}},z\partial_{t},z\partial_{\boldsymbol{\tau}},z)\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}(u_{i}+\delta+cz)}\cdot\frac{\mathbb{J}_{V\oplus W}^{u+D(\ell)z}(\boldsymbol{\tau},z)}{\prod_{i=K+1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W_{i}\end{subarray}}(u_{i}+\delta+cz)}\]
\[=\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}\Delta(t,\tilde{\boldsymbol{\tau}},u+D(\ell)z,z\partial_{\boldsymbol{\tau}},z)\mathbb{J}_{V\oplus W}^{u+D(\ell)z}(\boldsymbol{\tau},z)}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}(u_{i}+\delta+cz)\prod_{i=K+1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W_{i}\end{subarray}}(u_{i}+\delta+cz)}.\]
Since \(\Delta(t,\tilde{\boldsymbol{\tau}},\mu,z\partial_{\boldsymbol{\tau}},z)\) is the operator that shifts \(\boldsymbol{\tau}\) to \(\tilde{\boldsymbol{\tau}}\), we have
\[\Delta(t,\tilde{\boldsymbol{\tau}},u+D(\ell)z,z\partial_{\boldsymbol{\tau}},z)\mathbb{J}_{V\oplus W}^{u+D(\ell)z}(\boldsymbol{\tau},z)=\mathbb{J}_{V\oplus W}^{u+D(\ell)z}(\tilde{\boldsymbol{\tau}},z).\]
These computations imply the desired formula.

### Quantum Riemann-Roch operator

In this subsection, we use the notations introduced in Section 2.3.1. Let \(W\) be a vector bundle over \(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})\). We introduce the following functions:
\[\begin{split}z^{-1}G(\lambda)&=\frac{\lambda\log\lambda-\lambda}{z}+\frac{1}{2}\log\lambda+\sum_{m\geq 2}\frac{B_{m}}{m(m-1)}\left(\frac{z}{\lambda}\right)^{m-1},\\ G_{W}^{\lambda}&=\sum_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W\end{subarray}}G(\lambda+\delta),\\ H_{W}^{\lambda}&=\operatorname{rank}(W)\cdot\left(\lambda\log\lambda-\lambda+\frac{z}{2}\log\lambda\right)+(\log\lambda)z\partial_{c_{1}(W)},\\ S_{W}^{\lambda}&=(\log\lambda)\left(c_{1}(W)-z\partial_{c_{1}(W)}\right)\end{split}\]
where \(\partial_{c_{1}(W)}\) denotes the unique vector field on the \((\boldsymbol{\tau},t_{1},\ldots,t_{K})\)-space such that
\[\partial_{c_{1}(W)}\left(\boldsymbol{\tau}+\sum_{i=1}^{K}t_{i}u_{i}\right)=c_{1}(W).\]
Since \(H_{W}^{\lambda},S_{W}^{\lambda}\in H^{*}(B\times\prod_{i=1}^{K}\mathbb{P}^{s_{i}})[\lambda,\log\lambda,z,z\partial_{\boldsymbol{\tau}},z\partial_{t}]\), the exponentials \(e^{H_{W}^{\lambda}/z}\) and \(e^{S_{W}^{\lambda}/z}\) are ill-defined as operators on \(\mathcal{H}_{B\times\prod_{i=1}^{K}\mathbb{P}^{s_{i}}}\). However, it follows from the divisor equation that the operator \(e^{S_{W}^{\lambda}/z}\), restricted to Lagrangian cones, is well-defined; it replaces \(\mathcal{Q}^{\mathcal{D}}\) by \(\mathcal{Q}^{\mathcal{D}}\lambda^{-\int_{\mathcal{D}}c_{1}(W)}\). We set
\[\Delta_{W}^{\lambda}=e^{S_{W}^{\lambda}/z}\Delta_{(W,\tilde{e}^{-1})}(\lambda,z).\]

**Lemma 5.6**.: _Let \(W\) be a vector bundle over \(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})\), \(D\in\mathbb{L}^{\vee}\), \(u=c_{1}(\mathcal{O}(D))\) and \(k\in\mathbb{Z}\)._

1.
_The operator_
\[\exp\left(\frac{H_{W}^{\lambda+z\partial_{u}}-H_{W(D)}^{\lambda}}{z}\right)\]
_is well-defined and belongs to_ \(\mathbb{C}[z^{-1},z\partial_{t},z\partial_{\tau}][\![\lambda^{-1}]\!]\)_. Moreover, this operator preserves (twisted) Lagrangian cones._
2. _It holds that_
\[e^{(H^{\lambda+u+kz}_{W}-H^{\lambda}_{W(D)})/z}\cdot\frac{\Delta^{\lambda+u+kz}_{W}}{\Delta^{\lambda}_{W(D)}}=\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W\end{subarray}}\frac{\prod_{c=-\infty}^{k}(\lambda+u+\delta+cz)}{\prod_{c=-\infty}^{0}(\lambda+u+\delta+cz)}. \tag{5.2}\]
_Here the right-hand side is interpreted as an element of_ \(\lambda^{k}\cdot H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}}))[\lambda^{-1}](\!(z)\!)\)_._

Proof.: A direct computation shows that \(z^{-1}(H^{\lambda+z\partial_{u}}_{W}-H^{\lambda}_{W(D)})\) is equal to
\[\frac{\operatorname{rank}(W)}{z}\cdot\left(\left(\lambda+z\partial_{u}+\frac{z}{2}\right)\log\left(1+\frac{z\partial_{u}}{\lambda}\right)-z\partial_{u}\right)+\frac{z\partial_{c_{1}(W)}}{z}\log\left(1+\frac{z\partial_{u}}{\lambda}\right).\]
This belongs to \(\lambda^{-1}\cdot\mathbb{C}[z^{-1},z\partial_{\tau},z\partial_{t}][\![\lambda^{-1}]\!]\), and hence its exponential is a well-defined operator which preserves Lagrangian cones; see [21, Lemma 2.7].

We have the following identities:
\[\frac{G(\lambda+kz)-G(\lambda)}{z}=\sum_{c=-\infty}^{k}\log(\lambda+cz)-\sum_{c=-\infty}^{0}\log(\lambda+cz),\]
\[G^{\lambda}_{W\otimes L}=G^{\lambda+c_{1}(L)}_{W},\qquad z\log\left(\Delta^{\lambda}_{(W,\tilde{e}^{-1})}\right)=G^{\lambda}_{W}-H^{\lambda}_{W}-S^{\lambda}_{W}.\]
Using these identities, the logarithm of the left-hand side of (5.2) is computed as follows:
\[z^{-1}\left(H^{\lambda+u+kz}_{W}+S^{\lambda+u+kz}_{W}+z\log\left(\Delta^{\lambda+u+kz}_{(W,\tilde{e}^{-1})}\right)-H^{\lambda}_{W(D)}-S^{\lambda}_{W(D)}-z\log\left(\Delta^{\lambda}_{(W(D),\tilde{e}^{-1})}\right)\right)\]
\[=z^{-1}\left(G^{\lambda+u+kz}_{W}-G^{\lambda}_{W(D)}\right)=\sum_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }W\end{subarray}}\left(\sum_{c=-\infty}^{k}\log(\lambda+u+\delta+cz)-\sum_{c=-\infty}^{0}\log(\lambda+u+\delta+cz)\right).\]
By taking exponentials, we obtain (5.2).

### Proof of Theorem 5.1

As discussed at the end of Section 5.2, we prove Theorem 5.1 only for \(\mathbb{J}^{\mu}_{V\oplus W}(\boldsymbol{\tau},z)\). The proof is based on that of [21, Theorem 1.1]. Since the duals of \(V_{1},\ldots,V_{K}\) are assumed to be generated by global sections, there exist vector bundles \(\mathcal{Q}_{1},\ldots,\mathcal{Q}_{K}\) and non-negative integers \(s_{1},\ldots,s_{K}\) which fit into the following short exact sequences
\[0\to V_{i}\to\mathcal{O}^{\oplus s_{i}}\to\mathcal{Q}_{i}\to 0\]
for \(1\leq i\leq K\). Without loss of generality, we can assume that \(s_{i}\geq 2\) for any \(i\). We set \(\vec{\mathcal{O}}=(\mathcal{O}^{\oplus s_{1}},\ldots,\mathcal{O}^{\oplus s_{K}})\). By definition, we have \(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})\cong B\times\prod_{i=1}^{K}\mathbb{P}^{s_{i}-1}\). From Lemma 2.9, we can see that the function
\[\left(\prod_{i=1}^{K}\Delta^{\mu_{i}}_{\mathcal{Q}_{i}}\right)\left(\prod_{i=K+1}^{N}\left(\Delta^{\mu_{i}}_{W_{i}}\right)^{-1}\right)\mathbb{J}^{\mu}_{V\oplus W}(\boldsymbol{\tau},z)\Bigg{|}_{z\to-z}\cdot(-z) \tag{5.3}\]
is a \(\mathbb{C}[\![\operatorname{Eff}(B)]\!][\![\boldsymbol{\tau},\mu^{-1}]\!]\)-valued point of \(\mathcal{L}_{B}\). We apply the following lemma to this function.
**Lemma 5.7** ([21, Lemma 2.8]).: _Let \(X\) be a smooth projective variety, \(\{\phi_{i}\}_{0\leq i\leq s}\) be a basis of \(H^{*}(X)\) and \(\tau=\sum_{i=0}^{s}\tau^{i}\phi_{i}\in H^{*}(X)\). Let \(-zI(\tau,x,-z)\) be a \(\mathbb{C}[\![\operatorname{Eff}(X)]\!][\![\tau,x]\!]\)-valued point of \(\mathcal{L}_{X}\) such that_
\[I(\tau,0,z)|_{Q=0}=1+\frac{\tau}{z}+O(z^{-2}).\]
_Then there exists a differential operator \(F\in\sum_{i=0}^{s}\mathbb{C}[\![\operatorname{Eff}(X)]\!][\![\tau,x]\!]z\partial_{\tau^{i}}\) satisfying \(e^{F/z}J_{X}(\tau,z)=I(\tau,x,z)\)._

We can write (5.3) as \(\exp(F(\boldsymbol{\tau},\mu,z\partial_{\tau})/z)J_{B}(\tau,z)\) for some \(F(\boldsymbol{\tau},\mu,z\partial_{\tau})\in\sum_{i\in I}\mathbb{C}[\![\operatorname{Eff}(B)]\!][\![\boldsymbol{\tau},\mu^{-1}]\!]z\partial_{\tau_{0}^{i}}\). We introduce the following function:
\[I^{\mu}:=\left(\prod_{i=1}^{K}\left(\Delta_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}}\right)^{-1}\cdot e^{(H_{\mathcal{Q}_{i}}^{\mu_{i}+z\partial_{t_{i}}}-H_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}})/z}\right)\cdot\left(\prod_{i=K+1}^{N}\Delta_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}}\cdot e^{(H_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}}-H_{W_{i}}^{\mu_{i}+z\partial_{t_{i}}})/z}\right)\cdot e^{F(\boldsymbol{\tau},\mu+z\partial_{t},z\partial_{\tau})/z}J_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}(t,\tau,z)\]
where \(u_{i}^{0}:=-\lambda_{i}+\sum_{j=1}^{K}D_{j}^{\vee}(D_{i})\lambda_{j}\) is the image of \(u_{i}\) under the projection \(H_{\mathbb{T}}^{2}(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}}))\to H_{\mathbb{T}}^{2}(\operatorname{pt})\), and \(J_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}(t,\tau,z)\) is the \(J\)-function at the parameter \(\tau+\sum_{i=1}^{N}t_{i}u_{i}\):
\[J_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}(t,\tau,z)=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}}\frac{q^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}(u_{i}+cz)^{s_{i}}}J_{B}(\tau,z).\]
From the mirror theorem for split toric bundles [5], it can be seen that the function \(-zJ_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}(t,\tau,-z)\) is a point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}\).
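The computations below rest on the observation that the exponential prefactor of \(J_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})}\) is a simultaneous eigenfunction of the operators \(z\partial_{t_{i}}\): for each \(\ell\in\mathbb{L}\) and \(1\leq i\leq N\),
\[z\partial_{t_{i}}\,e^{\sum_{j=1}^{N}\left(D_{j}(\ell)\cdot t_{j}+\frac{t_{j}u_{j}}{z}\right)}=\left(u_{i}+D_{i}(\ell)z\right)e^{\sum_{j=1}^{N}\left(D_{j}(\ell)\cdot t_{j}+\frac{t_{j}u_{j}}{z}\right)},\]
which is what allows the operator arguments to be replaced by \(u_{i}+D_{i}(\ell)z\) term by term.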
For any \(f\in H^{*}(\mathbb{X}_{\mathbb{L}}(\vec{\mathcal{O}}))[z\partial_{t}][\![\mathrm{Eff}(B)]\!][\![\boldsymbol{\tau},\mu^{-1}]\!]\), we have \[f(z\partial_{t})J_{\mathbb{X}_{\mathbb{L}}(\vec{\mathcal{O}})}(t,\tau,z)=\sum_{\ell\in\mathbb{L}}\frac{q^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}(u_{i}+cz)^{s_{i}}}f(u+D(\ell)z)J_{B}.\] Therefore, \(I^{\mu}\) can be computed as \[I^{\mu}= \sum_{\ell\in\mathbb{L}}\frac{q^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}(u_{i}+cz)^{s_{i}}}\left(\prod_{i=1}^{K}\left(\Delta_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}}\right)^{-1}\cdot e^{(H_{\mathcal{Q}_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z}-H_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}})/z}\right)\] \[\cdot\left(\prod_{i=K+1}^{N}\Delta_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}}\cdot e^{(H_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}}-H_{W_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z})/z}\right)\cdot e^{F(\boldsymbol{\tau},\mu+u+D(\ell)z,z\partial_{\tau})/z}J_{B}(\tau,z)\] \[= \sum_{\ell\in\mathbb{L}}\frac{q^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}}{\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}(u_{i}+cz)^{s_{i}}}\left(\prod_{i=1}^{K}\frac{\Delta_{\mathcal{Q}_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z}}{\Delta_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}}}\cdot e^{(H_{\mathcal{Q}_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z}-H_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}})/z}\right)\] \[\cdot\left(\prod_{i=K+1}^{N}\frac{\Delta_{W_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z}}{\Delta_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}}}\cdot e^{(H_{W_{i}}^{\mu_{i}+u_{i}+D_{i}(\ell)z}-H_{W_{i}(D_{i})}^{\mu_{i}+u_{i}^{0}})/z}\right)^{-1}\cdot\mathbb{J}_{V\oplus W}^{\mu+u+D(\ell)z}\] \[= \sum_{\ell\in\mathbb{L}}q^{\ell}e^{\sum_{i=1}^{N}\left(D_{i}(\ell)\cdot t_{i}+\frac{t_{i}u_{i}}{z}\right)}\left(\prod_{i=1}^{K}\prod_{c=1}^{D_{i}(\ell)}\frac{\prod_{\begin{subarray}{c}\delta\colon\text{ Chern roots}\\ \text{ of }\mathcal{Q}_{i}\end{subarray}}(\mu_{i}+u_{i}+\delta+cz)}{(u_{i}+cz)^{s_{i}}}\right)\] \[\cdot\frac{1}{\prod_{i=K+1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{ Chern roots}\\ \text{ of }W_{i}\end{subarray}}(\mu_{i}+u_{i}+\delta+cz)}\cdot\mathbb{J}_{V\oplus W}^{\mu+u+D(\ell)z}.\] Here Lemma 5.6 (2) is used for the last equality. Since \(\exp((H_{\mathcal{Q}_{i}}^{\mu_{i}+z\partial_{t_{i}}}-H_{\mathcal{Q}_{i}(D_{i})}^{\mu_{i}})/z)\) and \(\exp((H_{W_{i}(D_{i})}^{u_{i}^{0}}-H_{W_{i}}^{u_{i}^{0}+z\partial_{t_{i}}})/z)\) preserve Lagrangian cones (Lemma 5.6 (1)), \(-z\cdot I^{\mu}|_{z\to-z}\) lies in the Givental cone for \((\vec{E},\vec{\mathbf{c}})\)-twisted Gromov-Witten theory of \(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}})\) where \((\vec{E},\vec{\mathbf{c}})\) denotes the following collection: \[(E_{i},\mathbf{c}^{i})=\begin{cases}(\mathcal{Q}_{i}(D_{i}),e_{\mu_{i}})&\text{for $1\leq i\leq K$},\\ (W_{i}(D_{i}),e_{\mu_{i}+u_{i}^{0}}^{-1})&\text{for $K+1\leq i\leq N$}.\end{cases}\] Since the non-equivariant limit \(\lim_{\mu\to 0}I^{\mu}\) with respect to \(\mathbb{T}^{\prime}\) exists, we can apply Theorem 2.10 to \(-zI^{\mu}(-z)\) and obtain a point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{V}}),(\mathcal{W},e_{\lambda}^{-1})}\), which coincides with \(-z(I^{\mu}_{V\oplus W})^{\frown}_{\mathrm{tw}}(-z)\). This proves Theorem 5.1.

**Remark 5.8**.: For convenience, we list the rings to which the functions introduced in this section belong.
\[\mathbb{J}^{\mu}_{V\oplus W}(\boldsymbol{\tau},z) \in H^{*}(B)[\mu,z,z^{-1}][\![\mathsf{Eff}(B)]\!][\![\boldsymbol{\tau}]\!],\] \[\frac{\prod_{i=1}^{K}\Delta_{\mathcal{Q}_{i}}^{\mu_{i}}}{\prod_{i=K+1}^{N}\Delta_{W_{i}}^{\mu_{i}}}\mathbb{J}^{\mu}_{V\oplus W}(\boldsymbol{\tau},z) \in H^{*}(B)[\mu^{-1},z,z^{-1}][\![\mathsf{Eff}(B)]\!][\![\boldsymbol{\tau}]\!],\] \[I^{\mu}(t,\boldsymbol{\tau},z) \in H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}}))[\mu](\lambda)(z)[\![\mathsf{Eff}(B)\oplus\mathbb{L}_{\mathrm{eff}}]\!][\![t,\boldsymbol{\tau}]\!]\] \[\big(\subset H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{O}}))(\mu,\lambda)(z)[\![\mathsf{Eff}(B)\oplus\mathbb{L}_{\mathrm{eff}}]\!][\![t,\boldsymbol{\tau}]\!]\big),\] \[(\mathbb{J}^{\mu}_{V\oplus W}(\boldsymbol{\tau},z))^{\frown}_{\mathrm{tw}} \in H^{*}(\mathbb{X}_{\mathsf{L}}(\vec{\mathcal{V}}))(\lambda)(z)[\![\mathsf{Eff}(B)\oplus\mathbb{L}_{\mathrm{eff}}]\!][\![t,\boldsymbol{\tau}]\!].\]

## 6 Mirror theorem for toric bundles

In this section, we will prove the mirror theorem (Theorem 6.1) for (possibly non-split) toric bundles. Throughout this section, we fix the following data:

* smooth toric data \(\mathsf{L}=(\mathbb{L}^{\vee},D\colon\mathbb{Z}^{N}\to\mathbb{L}^{\vee},\omega)\);
* a smooth projective variety \(B\);
* vector bundles \(V_{1},\dots,V_{N}\) over \(B\) whose duals are generated by global sections.

We let \(-\lambda_{i}\in H^{2}_{\mathbb{T}}(\mathrm{pt})\) be the equivariant parameter corresponding to the \(i\)-th projection \(\mathbb{T}\to\mathbb{C}^{\times}\).

### Main theorem

We consider the diagonal \(\mathbb{T}\)-action on \(V\). Let \(I^{\lambda}_{V}(x,z)\) be a function lying in \[H^{*}_{\mathbb{T}}(V)[z,z^{-1}][\![\mathrm{Eff}(B)]\!][\![x]\!]=H^{*}(B)[\mu,z,z^{-1}][\![\mathrm{Eff}(B)]\!][\![x]\!]\] such that \(-zI^{\lambda}_{V}(x,-z)\) is an \(H^{*}_{\mathbb{T}}(\mathrm{pt})[\![\mathrm{Eff}(B)]\!][\![x]\!]\)-valued point of \(\mathcal{L}_{V,\mathbb{T}}\). We define \[(I^{\lambda}_{V})^{\frown}(t,x,y,z)=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell)}\prod_{\delta\colon\begin{subarray}{c}\mathrm{Chern}\,\mathrm{roots}\\ \mathrm{of}\,\,V_{i}\end{subarray}}(u_{i}+\delta+cz)}\cdot I^{u+D(\ell)z}_{V}\] where \(I^{u+D(\ell)z}_{V}:=(I^{\lambda}_{V})|_{\lambda_{i}\to u_{i}+D_{i}(\ell)z}\) and \(\tilde{q}\) denotes a formal variable for \(\mathbb{C}[\mathbb{L}_{\mathrm{eff}}]\). Note that \((I^{\lambda}_{V})^{\frown}\) can be interpreted as an element of \[H^{*}_{\mathbb{T}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))_{\mathrm{loc}}(\!(z^{-1})\!)[\![\mathrm{Eff}^{\mathrm{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![t,x,y]\!]\] via the identification (3.7). We can now state our main result.

**Theorem 6.1**.: _The function \(-z\cdot(I^{\lambda}_{V})^{\frown}(-z)\) is a \(\mathrm{Frac}(H^{*}_{\mathbb{T}}(\mathrm{pt}))[\![\mathrm{Eff}^{\mathrm{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![x,y]\!]\)-valued point on \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V}),\mathbb{T}}\)._

Thanks to Theorem 4.2, it is enough to confirm that \((I_{V}^{\lambda})^{\frown}\) satisfies the three conditions appearing there. In the following three subsections, we will sequentially verify the conditions **(C3)**, **(C1)** and **(C2)**.
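As a quick sanity check of Theorem 6.1 (our illustration, not part of the paper's argument), take \(B=\mathrm{pt}\) and \(V_{i}=\mathcal{O}\) for all \(i\), so that \(\mathbb{X}_{\mathsf{L}}(\vec{V})\) is the toric variety associated with \(\mathsf{L}\) and \(I_{V}^{\lambda}=1\) is an admissible point. Each \(V_{i}\) then has the single Chern root \(\delta=0\), and the transform reduces to \[(I^{\lambda}_{V})^{\frown}=e^{\sum_{i=1}^{N}t_{i}u_{i}/z}\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell)}(u_{i}+cz)},\] only effective classes contributing. For \(\mathbb{P}^{n-1}\) (where \(\mathbb{L}=\mathbb{Z}\) and \(D_{i}(\ell)=\ell\) for all \(i\)) this is the classical \(I\)-function \(e^{tu/z}\sum_{d\geq 0}\tilde{q}^{d}e^{dt}/\prod_{c=1}^{d}(u+cz)^{n}\); so in this degenerate case the theorem recovers Givental's mirror theorem for toric varieties.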
### Restrictions of \((I_{V}^{\lambda})^{\frown}\)

In this subsection, we describe the restriction \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}\) for \(\alpha\in F_{\mathsf{L}}\). Recall from Sections 3.2 and 3.4 that we have the isomorphisms \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})) \cong H_{\mathbb{T}}^{*}(B)[u_{1},\ldots,u_{N}]/(\mathcal{I}+\mathcal{J}),\] \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha}) \cong H_{\mathbb{T}}^{*}(B)[\{u_{i}\}_{i\in\alpha}]/\langle e_{\mathbb{T}}(V_{i}\otimes L_{i})\colon i\in\alpha\rangle,\] and the map \[\iota_{\alpha}^{*}\colon H_{\mathbb{T}}^{*}(B)[u_{1},\ldots,u_{N}]/(\mathcal{I}+\mathcal{J})\to H_{\mathbb{T}}^{*}(B)[\{u_{i}\}_{i\in\alpha}]/\langle e_{\mathbb{T}}(V_{i}\otimes L_{i})\colon i\in\alpha\rangle\] is an \(H_{\mathbb{T}}^{*}(B)\)-module morphism with \[\iota_{\alpha}^{*}u_{i}=\begin{cases}u_{i}&\text{if }i\in\alpha,\\ -\lambda_{i}+\sum_{j\in\alpha}D_{\alpha,j}^{\vee}(D_{i})\cdot(u_{j}+\lambda_{j})&\text{if }i\notin\alpha\end{cases}\] where \(\{D_{\alpha,i}^{\vee}\}_{i\in\alpha}\subset\mathbb{L}\) is the dual basis of \(\{D_{i}\}_{i\in\alpha}\). Hence the function \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}(t,x,y,z)\) can be written as \[\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}(z)=e^{\sum_{i=1}^{N}t_{i}\cdot\iota_{\alpha}^{*}u_{i}/z}\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}}{\prod_{i\in\alpha}\prod_{c=1}^{D_{i}(\ell)}R_{V_{i}}(u_{i}+cz)}\cdot\frac{I_{V}^{\iota_{\alpha}^{*}u+D(\ell)z}}{\prod_{i\notin\alpha}\prod_{c=1}^{D_{i}(\ell)}R_{V_{i}}(\iota_{\alpha}^{*}u_{i}+cz)} \tag{6.1}\] where, for a vector bundle \(V\to B\), the form \(R_{V}(w)\in H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})[w]\) is defined as \[R_{V}(w):=\prod_{\begin{subarray}{c}\delta\colon\text{\scriptsize Chern roots}\\ \text{\scriptsize of }V\end{subarray}}(w+\delta)=w^{\text{rank}(V)}+c_{1}(V)\cdot w^{\text{rank}(V)-1}+\cdots+c_{\text{rank}(V)}(V).\] This coincides with \((I_{V}^{\lambda})_{\text{tw}}^{\frown}\), which is introduced in Section 5.1, associated to the following data:

* smooth toric data \((\mathbb{L}^{\vee},D_{\alpha}\colon\mathbb{Z}^{\alpha}\to\mathbb{L}^{\vee},\omega)\);
* vector bundles \(\{V_{i}\}_{i\in\alpha}\) and \(\{V_{i}\}_{i\notin\alpha}\);
* \(\{D_{i}\}_{i\notin\alpha}\subset\mathbb{L}^{\vee}\).

Considering that \(N_{\alpha}=\bigoplus_{i\notin\alpha}(V_{i}\otimes L_{i})\), Theorem 5.1 shows that \(-z\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}\,(-z)\) is a \(\text{Frac}(H_{\mathbb{T}}^{*}(\text{pt}))[\![\mathrm{Eff}^{\mathrm{ext}}(\mathbb{X}_{\mathsf{L}}(\vec{V}))]\!][\![t,x,y]\!]\)-valued point of \(\mathcal{L}_{\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha},(N_{\alpha},e_{\mathbb{T}}^{-1})}\), which implies that \(\mathbf{f}\) satisfies **(C3)**.

### Poles of \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}\)

In order to verify **(C1)**, we examine the set of poles of \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\frown}(z)\).
Since the function \(I_{V}^{\iota_{\alpha}^{*}u+D(\ell)z}\) belongs to \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})[z,z^{-1}][\![\text{Eff}(B)]\!][\![x]\!],\] all poles except for those at \(z=0,\infty\) arise from the denominators in (6.1): \[\{\iota_{\alpha}^{*}u_{i}+\delta+cz\colon 1\leq i\leq N,\,c\geq 1,\,\delta\colon\text{Chern roots of }V_{i}\}.\] Note that the image of \(\iota_{\alpha}^{*}u_{i}+\delta\) under the projection \(H_{\mathbb{T}}^{2}(\mathbb{X}_{\mathsf{L}}(\vec{V}))\to H_{\mathbb{T}}^{2}(\text{pt})\) equals \[-\lambda_{i}+\sum_{j\in\alpha}D_{\alpha,j}^{\vee}(D_{i})\cdot\lambda_{j},\] which coincides with \(\lambda_{\alpha,\beta}\) if there exists \(\beta\in F_{\mathsf{L}}\) such that \(i_{\alpha,\beta}=i\). The condition **(C1)** for \(\mathsf{f}\) follows from the fact that, if \(D_{i}(\ell)>0\) for some \(\ell\in\mathbb{L}_{\mathrm{eff}}\) and \(i\notin\alpha\), there exists \(\beta\in\mathrm{adj}(\alpha)\) such that \(i_{\alpha,\beta}=i\).

### Recursion formula of \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}\)

In this subsection, we establish the recursion formula (6.2) for the principal parts of \(\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}\), which implies that \(-z\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}(-z)\) satisfies **(C2)**. Note that since we already know that \(-z\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}(-z)\) satisfies **(C1)** and **(C3)**, the following proposition finishes the proof of Theorem 6.1.

**Proposition 6.2**.: _For any \(\alpha\in F_{\mathsf{L}}\), \(\beta\in\mathrm{adj}(\alpha)\) and \(k\in\mathbb{N}\), it holds that_ \[\mathrm{Prin}_{z=-\frac{\lambda_{\alpha,\beta}}{k}}\,\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}(z)\\ =p_{\alpha\cup\beta,\alpha_{*}}\left[q^{k\cdot d_{\alpha\beta}}\cdot\frac{C_{\alpha,\beta}(k)}{kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}(I_{V}^{\lambda})^{\sim}\left(z=-\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right] \tag{6.2}\] _where \(C_{\alpha,\beta}(k)\) is the element of \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}\) introduced in Theorem 4.2._

Proof.: Since the function \(R_{V_{i_{\alpha,\beta}}}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz)\cdot\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}(z)\) is regular at \(z=-\lambda_{\alpha,\beta}/k\), we have \[\mathrm{Prin}_{z=-\frac{\lambda_{\alpha,\beta}}{k}}\,\iota_{\alpha}^{*}(I_{V}^{\lambda})^{\sim}(z)=\frac{q^{k\cdot d_{\alpha\beta}}}{R_{V_{i_{\alpha,\beta}}}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz)}\cdot\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}\cdot\bar{A}_{\alpha,\beta,k;\ell}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz) \tag{6.3}\] where \(\bar{A}_{\alpha,\beta,k;\ell}(w)\in H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}[\![\mathrm{Eff}(B)]\!][\![t,x,y]\!][w]\) is the unique element satisfying:

* the function lying in \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}[w][\![\mathrm{Eff}(B)]\!][\![t,x,y]\!]\) \[A_{\alpha,\beta,k;\ell}(w):=\exp\left(\frac{k\sum_{i=1}^{N}t_{i}\cdot\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}}{w-\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}}\right)\cdot\frac{e^{\sum_{i=1}^{N}D_{i}(\ell+k\cdot d_{\alpha\beta})\cdot t_{i}}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell+k\cdot d_{\alpha\beta})}R_{V_{i}}(\iota_{\alpha}^{*}u_{i}-\frac{c}{k}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+\frac{c}{k}w)}\\ \cdot R_{V_{i_{\alpha,\beta}}}(w)\cdot
I_{V}^{\iota_{\alpha}^{*}u-\frac{D(\ell+k\cdot d_{\alpha\beta})}{k}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+\frac{D(\ell+k\cdot d_{\alpha\beta})}{k}w}\] where \[I_{V}^{\iota_{\alpha}^{*}u-\frac{D(\ell+k\cdot d_{\alpha\beta})}{k}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+\frac{D(\ell+k\cdot d_{\alpha\beta})}{k}w}\] denotes the function \(I_{V}^{\lambda}\) with \(\lambda_{i}\) replaced with \(\iota_{\alpha}^{*}u_{i}-\frac{D_{i}(\ell+k\cdot d_{\alpha\beta})}{k}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+\frac{D_{i}(\ell+k\cdot d_{\alpha\beta})}{k}w\), coincides with the function \(\bar{A}_{\alpha,\beta,k;\ell}(w)\) in the quotient ring \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}[w][\![\mathrm{Eff}(B)]\!][\![t,x,y]\!]/(R_{V_{i_{\alpha,\beta}}}(w))\);
* as a \((Q,t,x,y)\)-series, all coefficients of \(\bar{A}_{\alpha,\beta,k;\ell}(w)\) are polynomials in \(w\) of degree less than \(\mathrm{rank}(V_{i_{\alpha,\beta}})\) with coefficients in \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}\).

By definition, there exist functions \(\bar{A}_{\alpha,\beta,k;\ell}^{n}\in H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathsf{L}}(\vec{V})_{\alpha})_{\mathrm{loc}}[\![\mathrm{Eff}(B)]\!][\![t,x,y]\!]\) \((0\leq n\leq\mathrm{rank}(V_{i_{\alpha,\beta}})-1)\) such that \[\bar{A}_{\alpha,\beta,k;\ell}(w)=\sum_{n=0}^{\mathrm{rank}(V_{i_{\alpha,\beta}})-1}\bar{A}_{\alpha,\beta,k;\ell}^{n}\cdot w^{n}.\] We now proceed with the computation of the right-hand side. From Corollary 3.8, it follows that \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta}) \cong H_{\mathbb{T}}^{*}(B)[\{u_{i}\}_{i\in\alpha\cup\beta}]/\langle e(V_{i}\otimes L_{i})\colon i\in\alpha\cup\beta\rangle,\] \[H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\beta}) \cong H_{\mathbb{T}}^{*}(B)[\{u_{i}\}_{i\in\beta}]/\langle e(V_{i}\otimes L_{i})\colon i\in\beta\rangle\] and the map \(p_{\alpha\cup\beta,\beta}^{*}\colon H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\beta})\to H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta})\) is the \(H_{\mathbb{T}}^{*}(B)\)-module morphism sending \(u_{i}\) to \(u_{i}\) for \(i\in\beta\).
By direct calculation, we can see that \[p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}(I_{V}^{\lambda})^{\sim}\left(z=-\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\\ =\exp\left(-\frac{k\sum_{i=1}^{N}t_{i}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}u_{i}}{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\right)\cdot\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}I_{V}^{\iota_{\beta}^{*}u-\frac{D(\ell)}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}}{\prod_{i=1}^{N}\prod_{c=1}^{D_{i}(\ell)}R_{V_{i}}(p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}u_{i}-\frac{c}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta}))}.\] From Lemma 3.9 (3), we have \[\exp\left(-\frac{k\sum_{i=1}^{N}t_{i}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}u_{i}}{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\right) =\exp\left(-k\sum_{i=1}^{N}t_{i}\cdot\left(\frac{p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}}{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}-D_{i}(d_{\alpha\beta})\right)\right),\] \[\prod_{c=1}^{D_{i}(\ell)}R_{V_{i}}\left(p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}u_{i}-\frac{c}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\right) =\prod_{c=k\cdot D_{i}(d_{\alpha\beta})+1}^{k\cdot D_{i}(\ell+d_{\alpha\beta})}R_{V_{i}}\left(p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}-\frac{c}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\right).\] Hence we have \[q^{k\cdot d_{\alpha\beta}}\cdot\frac{C_{\alpha,\beta}(k)}{kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}(I_{V}^{\lambda})^{\sim}\left(z=-\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\] \[= \frac{q^{k\cdot d_{\alpha\beta}}}{kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot\exp\left(-k\sum_{i=1}^{N}t_{i}\cdot\left(\frac{p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}}{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}-D_{i}(d_{\alpha\beta})\right)\right)\] \[\cdot R_{V_{i_{\alpha,\beta}}}\left(p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}-c_{1}^{\mathbb{T}}(L_{\alpha,\beta})\right)\cdot\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\frac{\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}I_{V}^{\iota_{\beta}^{*}u-\frac{D(\ell)}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}}{\prod_{i=1}^{N}\prod_{c=k\cdot D_{i}(d_{\alpha\beta})+1}^{k\cdot D_{i}(\ell+d_{\alpha\beta})}R_{V_{i}}(p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i}-\frac{c}{k}c_{1}^{\mathbb{T}}(L_{\alpha,\beta}))}\] \[= \frac{q^{k\cdot d_{\alpha\beta}}}{kz+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}-u_{i_{\alpha,\beta}}}\cdot\sum_{\ell\in\mathbb{L}_{\mathrm{eff}}}\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}\cdot p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}\left(u_{i_{\alpha,\beta}}\right).\] For the last equality, we use Lemma 3.9 (2) and the equality \[p_{\alpha\cup\beta,\alpha}^{*}A_{\alpha,\beta,k;\ell}\left(u_{i_{\alpha,\beta}}\right)=p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}\left(u_{i_{\alpha,\beta}}\right)\] in \(H_{\mathbb{T}}^{*}(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta})_{\mathrm{loc}}[\![\mathrm{Eff}(B)]\!][\![t,x,y]\!]\). Since \(\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha\cup\beta}\) is isomorphic to the projectivization of the vector bundle \(V_{i_{\alpha,\beta}}\to\mathbb{X}_{\mathbb{L}}(\vec{V})_{\alpha}\), it holds that \[p_{\alpha\cup\beta,\alpha_{*}}\left[u_{i_{\alpha,\beta}}^{n}\right]=s_{n-\mathrm{rank}(V_{i_{\alpha,\beta}})+1}(V_{i_{\alpha,\beta}})\] where \(s_{i}\) denotes the \(i\)-th Segre class [12]. Here we define \(s_{i}(V)=0\) if \(i<0\).
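For orientation, here is a minimal unpacking of this push-forward formula (our illustration, not part of the proof). If \(\mathrm{rank}(V_{i_{\alpha,\beta}})=2\), its first instances read \[p_{\alpha\cup\beta,\alpha_{*}}[1]=s_{-1}=0,\qquad p_{\alpha\cup\beta,\alpha_{*}}\left[u_{i_{\alpha,\beta}}\right]=s_{0}=1,\qquad p_{\alpha\cup\beta,\alpha_{*}}\left[u_{i_{\alpha,\beta}}^{2}\right]=s_{1}(V_{i_{\alpha,\beta}}),\] where, by the generating-function identity recorded below, \(s_{0}(V)=1\), \(s_{1}(V)=-c_{1}(V)\) and \(s_{2}(V)=c_{1}(V)^{2}-c_{2}(V)\), as one sees by expanding \(R_{V}(x)^{-1}=x^{-2}\bigl(1+c_{1}(V)/x+c_{2}(V)/x^{2}\bigr)^{-1}\) in powers of \(x^{-1}\).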
Note that the Segre classes satisfy the following formula \[\sum_{i\geq 0}x^{-i-\mathrm{rank}(V)}s_{i}(V)=R_{V}(x)^{-1}\] for any vector bundle \(V\). Using these formulas and the projection formula, it holds that \[p_{\alpha\cup\beta,\alpha_{*}}\left[\frac{p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}(u_{i_{\alpha,\beta}})}{kz+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}-u_{i_{\alpha,\beta}}}\right]\] \[= \sum_{m\geq 0}\left(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz\right)^{-m-1}\cdot p_{\alpha\cup\beta,\alpha_{*}}\left[u_{i_{\alpha,\beta}}^{m}\cdot\sum_{n=0}^{\operatorname{rank}(V_{i_{\alpha,\beta}})-1}p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}^{n}\cdot u_{i_{\alpha,\beta}}^{n}\right]\] \[= \sum_{n=0}^{\operatorname{rank}(V_{i_{\alpha,\beta}})-1}\bar{A}_{\alpha,\beta,k;\ell}^{n}\cdot\sum_{m\geq 0}\left(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz\right)^{-m-1}\cdot s_{m+n-\operatorname{rank}(V_{i_{\alpha,\beta}})+1}(V_{i_{\alpha,\beta}})\] \[= \sum_{n=0}^{\operatorname{rank}(V_{i_{\alpha,\beta}})-1}\bar{A}_{\alpha,\beta,k;\ell}^{n}\cdot\left(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz\right)^{n}\cdot R_{V_{i_{\alpha,\beta}}}\left(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz\right)^{-1}\] \[= \frac{\bar{A}_{\alpha,\beta,k;\ell}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz)}{R_{V_{i_{\alpha,\beta}}}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz)}.\] From the calculations above, the right-hand side of (6.2) can be computed as follows: \[p_{\alpha\cup\beta,\alpha_{*}}\left[q^{k\cdot d_{\alpha\beta}}\cdot\frac{C_{\alpha,\beta}(k)}{kz+c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}\cdot p_{\alpha\cup\beta,\beta}^{*}\iota_{\beta}^{*}(I_{V}^{\lambda})^{\sim}\left(z=-\frac{c_{1}^{\mathbb{T}}(L_{\alpha,\beta})}{k}\right)\right]\] \[= \,p_{\alpha\cup\beta,\alpha_{*}}\left[\frac{q^{k\cdot d_{\alpha\beta}}}{kz+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}-u_{i_{\alpha,\beta}}}\cdot\sum_{\ell\in\mathbb{L}_{\operatorname{eff}}}\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}\cdot p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}\left(u_{i_{\alpha,\beta}}\right)\right]\] \[= \,q^{k\cdot d_{\alpha\beta}}\cdot\sum_{\ell\in\mathbb{L}_{\operatorname{eff}}}\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}\cdot p_{\alpha\cup\beta,\alpha_{*}}\left[\frac{p_{\alpha\cup\beta,\alpha}^{*}\bar{A}_{\alpha,\beta,k;\ell}(u_{i_{\alpha,\beta}})}{kz+p_{\alpha\cup\beta,\alpha}^{*}\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}-u_{i_{\alpha,\beta}}}\right]\] \[= \,\frac{q^{k\cdot d_{\alpha\beta}}}{R_{V_{i_{\alpha,\beta}}}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz)}\cdot\sum_{\ell\in\mathbb{L}_{\operatorname{eff}}}\tilde{q}^{\ell}e^{\sum_{i=1}^{N}D_{i}(\ell)\cdot t_{i}}\cdot\bar{A}_{\alpha,\beta,k;\ell}(\iota_{\alpha}^{*}u_{i_{\alpha,\beta}}+kz),\] which coincides with (6.3).

## Appendix A Equivariant Fourier transformation

In this appendix, we introduce a Fourier transform of Givental cones, which we learned from [22], and check that the function \((I_{V}^{\lambda})^{\sim}\) coincides with a Fourier transform of \(I_{V}^{\lambda}\). See Remark 1.3 for the background. Let \(\mathbb{T}=(\mathbb{C}^{\times})^{N}\) be an algebraic torus, and let \(X\) be a smooth semi-projective variety with a \(\mathbb{T}\)-action whose fixed point set is projective.
For such a \(\mathbb{T}\)-variety, we can consider the _extended shift operators_ [20] \(\widehat{\mathbb{S}}^{\beta}\) and \(\widehat{\mathcal{S}}^{\beta}\) for each \(\beta\in H_{2}^{\mathbb{T}}(X)\). Via the fundamental solution \(M_{X}(\tau)\) for the \(\mathbb{T}\)-equivariant theory, the operators are related in the following way [20, Proposition 2.7]: \[\widehat{\mathcal{S}}^{\beta}\circ M_{X}(\tau)=M_{X}(\tau)\circ\widehat{\mathbb{S}}^{\beta}.\] The extended shift operator \(\widehat{\mathbb{S}}^{\beta}\) satisfies \[\widehat{\mathbb{S}}^{\beta}(f(\lambda,z)\alpha)=f(\lambda-\overline{\beta}z,z)\widehat{\mathbb{S}}^{\beta}\alpha\] for any \(f(\lambda,z)\in H^{*}_{\mathbb{T}}(\mathrm{pt})[z]\) and any \(\alpha\in H^{*}_{\mathbb{T}}(X)\), and is defined by using Gromov-Witten invariants of the Seidel space \(E_{-\overline{\beta}}\), which is defined as the quotient \[E_{-\overline{\beta}}=X\times(\mathbb{C}^{2}\setminus 0)/\mathbb{C}^{\times}\] where \(\mathbb{C}^{\times}\) acts on \(X\times(\mathbb{C}^{2}\setminus 0)\) by the formula \(t\cdot(x,(v_{1},v_{2}))=(t^{-\overline{\beta}}x,(tv_{1},tv_{2}))\). We omit the precise definition here; see [19, 20] for details. Let \(F\) be a connected component of the \(\mathbb{T}\)-fixed point set \(X^{\mathbb{T}}\). Let \(N_{F}\) be the normal bundle to \(F\) in \(X\), and consider its \(\mathbb{T}\)-weight decomposition \(N_{F}=\bigoplus_{\alpha\in\mathrm{Hom}(\mathbb{T},\mathbb{C}^{\times})}N_{F,\alpha}\). Let \(\{\rho_{F,\alpha,j}\}_{j=1}^{\operatorname{rank}N_{F,\alpha}}\) be the Chern roots of \(N_{F,\alpha}\). For \(\beta\in H^{\mathbb{T}}_{2}(X)\), the shift operator \(\widehat{\mathcal{S}}^{\beta}\) on the Givental space \(\mathcal{H}_{X,\mathbb{T}}\) satisfies \[\left(\widehat{\mathcal{S}}^{\beta}\mathbf{f}\right)\Big{|}_{F}=\frac{Q^{\beta+\sigma_{F}(-\overline{\beta})}e^{-z\overline{\beta}\partial_{\lambda}}(\mathbf{f}|_{F})}{\prod_{\alpha\in\mathrm{Hom}(\mathbb{T},\mathbb{C}^{\times})}\prod_{j=1}^{\operatorname{rank}N_{F,\alpha}}\prod_{c=1}^{-\alpha\cdot\overline{\beta}}(\rho_{F,\alpha,j}+\alpha+cz)}\] for any point \(\mathbf{f}\) on \(\mathcal{H}_{X,\mathbb{T}}\) and any connected component \(F\) of \(X^{\mathbb{T}}\). Here \(\sigma_{F}(-\overline{\beta})\in H^{\mathbb{T}}_{2}(X,\mathbb{Z})\) denotes the image of the section class of \(E_{-\overline{\beta}}\to\mathbb{P}^{1}\) associated with \(F\subset X\) under the natural map \(H_{2}(E_{-\overline{\beta}},\mathbb{Z})\to H^{\mathbb{T}}_{2}(X,\mathbb{Z})\), and \(e^{-z\overline{\beta}\partial_{\lambda}}\) denotes the operator sending \(f(\lambda,z)\) to \(f(\lambda-(\lambda\cdot\overline{\beta})z,z)\). One can see that this formula uniquely determines \(\widehat{\mathcal{S}}\) by using the localization formula. Now, we assume that there is a smooth GIT quotient \(Y=X/\!/\mathbb{K}\) for a subtorus \(\mathbb{K}\) of \(\mathbb{T}\), that is, \(X^{s}=X^{ss}\) and \(\mathbb{K}\) acts freely on \(X^{s}\). We denote the equivariant Kirwan map by \(\kappa_{Y}\colon H^{*}_{\mathbb{T}}(X)\to H^{*}_{\mathbb{T}/\mathbb{K}}(Y)\). We define the _Fourier transform_ of \(\mathbf{f}\in\mathcal{H}^{\mathrm{pol}}_{X,\mathbb{T}}\) to be \[\hat{\mathbf{f}}=\sum_{[\beta]\in H^{\mathbb{K}}_{2}(X)/H_{2}(X)}\kappa_{Y}\left(\widehat{\mathcal{S}}^{-\beta}e^{\sum_{i=1}^{N}t_{i}\lambda_{i}/z}\mathbf{f}\right)\widehat{S}^{\beta}\] where \(\widehat{S}^{\beta}\) denotes a formal variable associated to \(\beta\in H^{\mathbb{K}}_{2}(X,\mathbb{Z})\).
It is easy to see that, for any \(\lambda\in H^{2}_{\mathbb{T}}(\mathrm{pt},\mathbb{Z})\) and \(\beta\in H^{\mathbb{K}}_{2}(X)\), \[\left(\lambda\mathbf{f}\right)^{\widehat{\phantom{\lambda}}}=\left(\kappa_{Y}(\lambda)+z\cdot\lambda\widehat{S}\frac{\partial}{\partial\widehat{S}}\right)\hat{\mathbf{f}},\quad\left(\widehat{\mathcal{S}}^{\beta}\mathbf{f}\right)^{\widehat{\phantom{\lambda}}}=\widehat{S}^{\beta}\hat{\mathbf{f}}\] where \(\lambda\widehat{S}\frac{\partial}{\partial\widehat{S}}\) is the derivation sending \(\widehat{S}^{\beta}\) to \((\lambda\cdot\beta)\widehat{S}^{\beta}\). Moreover, the following is expected [22]:

1. The summation \(\hat{\mathbf{f}}\) is, as a power series in \(\widehat{S}\), supported on a strictly convex cone in \(H^{\mathbb{K}}_{2}(X)\) (described by the GIT data).
2. The transform \(\mathbf{f}\mapsto\hat{\mathbf{f}}\) gives a map from the \(\mathbb{T}\)-equivariant Givental cone of \(X\) to the \(\mathbb{T}/\mathbb{K}\)-equivariant Givental cone of \(Y\).

Note that this is a straightforward generalization of [20, Conjecture 1.7]. We now consider the case that \(X=V=\bigoplus_{i=1}^{N}V_{i}\) with the \(\mathbb{T}\)-action described in Section 3. Using a very ample line bundle over \(B\), one can easily check that \(Y=\mathbb{X}_{\mathbb{L}}(\vec{V})\) can be realized as a smooth GIT quotient of \(V\) by \(\mathbb{K}\). Via the canonical splitting \(H^{\mathbb{K}}_{2}(V)=H_{2}(B)\oplus H^{\mathbb{K}}_{2}(\mathrm{pt})\), \(H^{\mathbb{K}}_{2}(\mathrm{pt})=\mathbb{L}\) can be interpreted as a subgroup of \(H^{\mathbb{K}}_{2}(V)\). We can explicitly compute the shift operator and the equivariant Kirwan map. For any cocharacter \(\ell\in\mathrm{Hom}(\mathbb{C}^{\times},\mathbb{K})\cong\mathbb{L}\) and any point \(\mathbf{f}(\lambda,z)\) on \(\mathcal{H}_{V,\mathbb{T}}\), we have \[\widehat{\mathcal{S}}^{\ell}\mathbf{f}(\lambda,z)=\frac{\mathbf{f}(\lambda-D(\ell)z,z)}{\prod_{i=1}^{N}\prod_{c=1}^{-D_{i}(\ell)}\prod_{\begin{subarray}{c}\delta\colon\text{Chern roots}\\ \text{of }V_{i}\end{subarray}}(u_{i}+\delta+cz)}.\] The equivariant Kirwan map \(\kappa_{\mathbb{X}_{\mathbb{L}}(\vec{V})}\colon H^{*}_{\mathbb{T}}(V)=H^{*}(B)[\lambda]\to H^{*}_{\mathbb{T}/\mathbb{K}}(\mathbb{X}_{\mathbb{L}}(\vec{V}))\) sends \(\phi\in H^{*}(B)\) to its pull-back along the projection \(\mathbb{X}_{\mathbb{L}}(\vec{V})\to B\), and sends \(\lambda_{i}\) to \(c_{1}^{\mathbb{T}/\mathbb{K}}(L_{i})\), where \(L_{i}\) is the \(\mathbb{T}/\mathbb{K}\)-line bundle (3.1) over \(\mathbb{X}_{\mathbb{L}}(\vec{V})\). By a direct computation, we can see that, for any point \(I_{V}^{\lambda}\) on \(\mathcal{L}_{V,\mathbb{T}}\cap\mathcal{H}_{V,\mathbb{T}}^{\text{pol}}\), the Fourier transform of \(I_{V}^{\lambda}\) actually coincides with the function \((I_{V}^{\lambda})^{\sim}\) introduced in Theorem 1.1.
2308.12984
Green ILC Concept: Scenarios toward 2050 Carbon Neutrality in Japan and ILC
This paper describes Japan's scenario for achieving carbon neutrality by 2050 and the policy that should be adopted by the ILC in Japan in line with that policy. This paper only discusses CO2 emissions during operation, not the lifecycle CO2 emissions of the ILC.
Masakazu Yoshioka
2023-08-24T01:24:38Z
http://arxiv.org/abs/2308.12984v1
# Green ILC Concept: Scenarios toward 2050 Carbon Neutrality in Japan and ILC

###### Abstract

This paper describes Japan's scenario for achieving carbon neutrality by 2050 and the policy that should be adopted by the ILC in Japan in line with that policy. This paper only discusses CO\({}_{2}\) emissions during operation, not the lifecycle CO\({}_{2}\) emissions of the ILC.

## 1 Introduction: CO\({}_{2}\) emissions from accelerator facilities

The earth is currently in a warming trend due to natural cycles (Milankovitch cycle). Although some researchers question the warming caused by anthropogenic factors, I believe that rapid changes are occurring that cannot be explained by natural cycles alone, and I also believe that the policies of the world's governments and the Japanese government, which set the goal of achieving carbon neutrality by 2050, are correct. Accelerators are powered by electricity, and CO\({}_{2}\) is emitted during the production of electricity. In the case of the ILC, peak electricity consumption is about 130 megawatts, resulting in an annual consumption of around 700 million kWh, depending on the operating hours. In Japan, the CO\({}_{2}\) emissions per kWh are reported annually by the local electric power company. Electricity is produced from three energy sources: (1) fossil fuels, (2) renewable energy, and (3) nuclear power. For instance, in the region of Japan where the ILC candidate site is located, the proportion of renewable energy, including hydroelectric power, is 21%, nuclear power accounts for zero, and the remainder comes from fossil fuels. As a result, the emission factor is 480 grams per kWh (in FY2021). This value is quite large compared to Europe and the United States. Multiplying this factor by the ILC's electricity consumption yields annual CO\({}_{2}\) emissions of 336 kilotons. Annually, Japan's Ministry of the Environment conducts a quantitative assessment of each municipality in Japan to ascertain its CO\({}_{2}\) emissions. According to this assessment, Ichinoseki City, where the ILC candidate site is located, had a value of 871 kilotons in 2018. Based on the CO\({}_{2}\) emission factor from 2021, the emissions attributed to the ILC would constitute 40% of Ichinoseki's total emissions. However, it is worth noting that the CO\({}_{2}\) emissions per kWh of electricity are anticipated to decrease to less than 50% of the current level by the time of ILC operation. The emissions of Ichinoseki City at that time will be considerably smaller, so the discussion here is for reference only.

Figure 1 was given to us by Benno List of DESY. The left side of the figure is the group of
2310.19443
Asymptotically accurate and locking-free finite element implementation of first order shear deformation theory for plates
A formulation of the asymptotically exact first-order shear deformation theory for linear-elastic homogeneous plates in the rescaled coordinates and rotation angles is considered. This allows the development of its asymptotically accurate and shear-locking-free finite element implementation. As applications, numerical simulations are performed for circular and rectangular plates, showing complete agreement between the analytical solution and the numerical solutions based on two-dimensional theory and three-dimensional elasticity theory.
Khanh Chau Le, Hoang Giang Bui
2023-10-30T11:12:23Z
http://arxiv.org/abs/2310.19443v2
# Asymptotically accurate and locking-free finite element implementation of first order shear deformation theory for plates

###### Abstract

A formulation of the asymptotically exact first-order shear deformation theory for linear-elastic homogeneous plates in the rescaled coordinates and angles of rotation is considered. This allows the development of its asymptotically accurate and shear-locking-free finite element implementation. As applications, numerical simulations are performed for circular and rectangular plates, showing complete agreement between the analytical solution and the numerical solutions based on two-dimensional theory and three-dimensional elasticity theory.

keywords: first-order shear deformation theory, plates, finite element, asymptotic accuracy, shear-locking

## 1 Introduction

The first-order shear deformation theory (FSDT) for plates, originally proposed by Reissner [1], has since attracted the attention of both theorists and practitioners alike, mainly because of its applicability to moderately thick plates and the development of numerical methods [3]-[10], but also because of the logic behind its derivation, which can be applied to other problems [11]-[13]. The first asymptotically exact version of FSDT for plates was derived by Berdichevsky [14] using the variational-asymptotic method he himself developed. The extension of his result to laminated plates using the same method was considered by Sutyrin [15] and Yu [16]. However, since in the general case of laminated plates the dimension reduction does not lead to an FSDT, these authors tried to optimize the parameters so that the derived theory is as close as possible to asymptotic correctness while remaining an FSDT. Le [17] has recently shown that the construction of an asymptotically exact FSDT for functionally graded (FG) plates is possible when the mass density and elastic moduli vary across the thickness such that their distributions are even functions of the transverse coordinate. Similar to Berdichevsky's FSDT for homogeneous plates, his theory for FG-plates is asymptotically exact up to the order of \(h^{2}/l^{2}\), where \(h\) is the plate thickness and \(l\) is the characteristic scale of change of the deformation state in the longitudinal directions. Also worth mentioning are some recent applications of the variational-asymptotic method to dimension reduction, homogenization, nonlinear vibrations, and wave propagation in [18]-[23]. In numerical simulations of the bending deformation of plates under a transverse load within FSDT based on the finite element method, the so-called shear-locking (SL) effect often occurs, especially when low-order finite plate elements are used (see the above-cited papers [3]-[10]). Physically, this effect is due to the fact that the shear stiffness, in terms of the small plate thickness \(h\), is two orders of magnitude larger than the bending stiffness. On the other hand, the rotation angles, which are obviously of the order of the characteristic strain (the angles caused by pure shear being even smaller), are much less than the bending measures (or curvature changes) of the plate, since the latter are of the order of the characteristic strain divided by the thickness. This is also evident from the variational-asymptotic analysis of the energy functional [24, 25].
Therefore, the numerical instability with respect to the shear energy and also to the constitutive equations must occur in the limit \(h\to 0\) due to the multiplication of the extremely small and large numbers when the standard low-order finite element calculation is used. There are several sophisticated methods that make it possible to alleviate this shear-locking effect but at the expense of computational efficiency. The reduced and selective integration method, using two different integration rules for bending and shear energies [26]-[29], has long been the preferred technique for the numerical treatment of SL. The mathematical justification of this method, based on the equivalence between the reduced integration approach and certain mixed models, was later given by Malkus and Hughes [30]. Unfortunately, the reduced integration technique often leads to instability due to rank deficiency and to zero-energy modes [31]-[33]. Therefore, several alternative formulations and numerical techniques have been developed to mitigate SL and increase the accuracy and stability of the solution. These include the modified shear strain method [34, 35], the hybrid and mixed method [36]-[39], the extended assumed strain method [40, 41], the assumed natural strain method [34, 42], and the shear gap method [5] (see also the recent paper [6] discussing relevant publications). As far as the authors of this paper are aware, none of the existing studies on this topic has found a simple formulation of the FSDT that is inherently free of the shear-locking effect, regardless of the discretization scheme and integration technique used. The goal of this paper is therefore twofold. First, we give the formulation of the FSDT for plates in the rescaled coordinates and rotation angles. This formulation occurs naturally when the coordinates in the mid-plane are scaled by the plate thickness \(h\) (thus becoming dimensionless), while the rotation angles are multiplied by \(h\), resulting in equal and finite orders of the bending and shear stiffnesses as well as the scaled rotation angles and bending measures.3 Since this formulation is independent of the plate thickness and inherently shear-locking-free, no high-order interpolation scheme and/or sophisticated integration technique is required for the discretization and FE-implementation, so the computational efficiency can be significantly improved. However, our second goal is to ensure that the FE-implementation is asymptotically accurate. According to [14, 17], the asymptotic accuracy requires that both the transverse displacement and the rotation angles belong to the \(C^{1}\)-function space. We will show by numerical simulations of the circular and square plates and comparison with the analytical solutions of the FSDT and the numerical solution of the three-dimensional elasticity theory that the asymptotic accuracy is indeed achieved, provided that the isogeometric elements guaranteeing the \(C^{1}\)-continuity for the primary variables are used. Footnote 3: Compare with [43], where the intrinsically shear-locking-free formulation was obtained by reparametrizing the kinematic equations for bending measures and shear. The paper is organized as follows. After this brief introduction, Section 2 gives the inherently shear-locking-free rescaled variational formulation of the FSDT for plates. Section 3 is devoted to its FE-implementation. 
In Section 4, we consider two numerical examples where the developed FE-code is applied: (i) clamped circular plates subjected to uniform loading, and (ii) rectangular plates with one clamped edge and three free edges subjected to uniform loading. Finally, Section 5 concludes the paper.

## 2 Rescaled variational formulation of FSDT for plates

Let \(\Omega\) be a two-dimensional domain in the \((x_{1},x_{2})\)-plane bounded by a piecewise smooth closed curve \(\partial\Omega\). We consider a linear elastic homogeneous plate which in the undeformed stress-free state occupies the 3-D region \(\mathcal{V}=\Omega\times(-h/2,h/2)\). Its cross section in the plane \((x_{1},x_{3})\) is shown in Fig. 1. We call \(\Omega\) the plate mid-plane and \(h\) its thickness.

Figure 1: Cross section of a plate.

The variational principle of the asymptotically exact two-dimensional first-order shear deformation theory (FSDT) for linearly elastic homogeneous plates [14, 25] states that the true deflection and rotation angles of the plate minimize the 2-D average energy functional \[J[u,\varphi_{\alpha}]=\int_{\Omega}\Bigl{\{}\frac{\mu h^{3}}{12}[\sigma(\rho_{\alpha\alpha})^{2}+\rho_{\alpha\beta}\rho_{\alpha\beta}]+\frac{5\mu h}{12}\varphi_{\alpha}\varphi_{\alpha}\Bigr{\}}\,\mathrm{d}a-\int_{\Omega}\Bigl{(}fu+\frac{\sigma h^{2}}{10}f\rho_{\alpha\alpha}\Bigr{)}\,\mathrm{d}a\] among all deflections and rotation angles satisfying the kinematic boundary conditions, with \(\mathrm{d}a=\mathrm{d}x_{1}\mathrm{d}x_{2}\) and \(\sigma=\frac{\lambda}{\lambda+2\mu}=\frac{\nu}{1-\nu}\). Note that \(\varphi_{\alpha}\) are the rotation angles due to pure shear deformation. Here \(\rho_{\alpha\beta}\) are the measures of bending and \(f\) is the external transverse load. They are given by \[\rho_{\alpha\beta}=u_{,\alpha\beta}-\varphi_{\alpha,\beta},\] \[f=\tau_{3}|_{x_{3}=h/2}+\tau_{3}|_{x_{3}=-h/2},\] with \(\tau_{3}\) being the normal traction. In the following, we use Greek indices, running from 1 to 2, to refer to the plane coordinates \(x_{1}\) and \(x_{2}\). The comma before an index denotes differentiation with respect to the corresponding coordinate, the parentheses surrounding a pair of indices denote the symmetrization operation, and summation over repeated indices is understood. The FE-implementation of this variational problem is not straightforward due to the different magnitudes of the bending and shear stiffnesses. Since the shear stiffness \(5\mu h/12\) is two orders of magnitude larger than the bending stiffness \(\mu h^{3}/12\) with respect to \(h\), its multiplication by very small rotation angles leads to a numerical instability called the shear-locking effect when \(h\) becomes small. To solve this problem, we first try to get rid of the second derivatives in this variational problem.
We introduce the new unknown functions \[\psi_{\alpha}=-u_{,\alpha}+\varphi_{\alpha}\] which have the meaning of the total angles of rotation of the transverse fibers due to both bending and shear deformation so that \[\rho_{\alpha\beta}=u_{,\alpha\beta}-\varphi_{\alpha,\beta}=-\psi _{(\alpha,\beta)},\] \[\varphi_{\alpha}=u_{,\alpha}+\psi_{\alpha}.\] In terms of these new unknown functions, the functional becomes \[J[u,\psi_{\alpha}]=\int_{\Omega}\Bigl{\{}\frac{\mu h^{3}}{12}[ \sigma(\psi_{\alpha,\alpha})^{2}+\psi_{(\alpha,\beta)}\psi_{(\alpha,\beta)}]+ \frac{5\mu h}{12}(u_{,\alpha}+\psi_{\alpha})(u_{,\alpha}+\psi_{\alpha}) \Bigr{\}}\,\mathrm{d}a\\ -\int_{\Omega}\Bigl{(}fu-\frac{\sigma h^{2}}{10}f\psi_{\alpha, \alpha}\Bigr{)}\,\mathrm{d}a\,.\] To avoid the shear-locking effect, we try to make the problem independent of \(h\) so that the orders of magnitude of the bending and shear stiffnesses become the same. For this purpose, we introduce the rescaled coordinates and rotation angles as follows \[\bar{x}_{\alpha}=\frac{x_{\alpha}}{h},\quad\bar{\psi}_{\bar{\alpha}}=h\psi_{ \alpha}.\] Note that \(\bar{x}_{\alpha}\) are dimensionless, while \(\bar{\psi}_{\bar{\alpha}}\) have the dimension of length and can be interpreted as the longitudinal displacements of the positive face surface of the plate [14, 17]. Since \[\rho_{\alpha\beta}=-\psi_{(\alpha,\beta)}=-\frac{1}{h^{2}}\bar{ \psi}_{(\bar{\alpha},\bar{\beta})},\] \[u_{,\alpha}+\psi_{\alpha}=\frac{1}{h}(u_{,\bar{\alpha}}+\bar{ \psi}_{\bar{\alpha}}),\] \[\mathrm{d}a=\mathrm{d}x_{1}\,\mathrm{d}x_{2}=h^{2}\,\mathrm{d} \bar{x}_{1}\,\mathrm{d}\bar{x}_{2}=h^{2}\,\mathrm{d}\bar{a}\,,\] we reduce the energy functional to \[J[u,\bar{\psi}_{\bar{\alpha}}]=h\int_{\bar{\Omega}}\Bigl{\{}\frac{ \mu}{12}[\sigma(\bar{\psi}_{\bar{\alpha},\bar{\alpha}})^{2}+\bar{\psi}_{(\bar{ \alpha},\bar{\beta})}\bar{\psi}_{(\bar{\alpha},\bar{\beta})}]+\frac{5\mu}{12}(u _{,\bar{\alpha}}+\bar{\psi}_{\bar{\alpha}})(u_{,\bar{\alpha}}+\bar{\psi}_{ \bar{\alpha}})\Bigr{\}}\,\mathrm{d}\bar{a}\\ -\int_{\bar{\Omega}}h^{2}\Bigl{(}fu-\frac{\sigma}{10}f\bar{\psi} _{\bar{\alpha},\bar{\alpha}}\Bigr{)}\,\mathrm{d}\bar{a}\,.\] Here \(\bar{\Omega}=\{(\bar{x}_{1},\bar{x}_{2})\,|\,(x_{1},x_{2})\in\Omega\}\) denotes the rescaled 2-D domain, and components of vectors and their derivatives having Greek indices with bar are related to the rescaled coordinates \(\bar{x}_{\alpha}\). We can further simplify this functional by dividing it by \(\mu h\). Then the minimization problem reduces to \[\bar{J}[u,\bar{\psi}_{\bar{\alpha}}]=\int_{\bar{\Omega}}\Bigl{\{} \frac{1}{12}[\sigma(\bar{\psi}_{\bar{\alpha},\bar{\alpha}})^{2}+\bar{\psi}_{( \bar{\alpha},\bar{\beta})}\bar{\psi}_{(\bar{\alpha},\bar{\beta})}]+\frac{5}{1 2}(u_{,\bar{\alpha}}+\bar{\psi}_{\bar{\alpha}})(u_{,\bar{\alpha}}+\bar{\psi}_ {\bar{\alpha}})\Bigr{\}}\,\mathrm{d}\bar{a}\\ -\int_{\bar{\Omega}}\Bigl{(}\bar{f}u-\frac{\sigma}{10}\bar{f}\bar {\psi}_{\bar{\alpha},\bar{\alpha}}\Bigr{)}\,\mathrm{d}\bar{a}\to\min_{u,\bar{ \psi}_{\bar{\alpha}}}, \tag{1}\] where \[\bar{f}=\frac{hf}{\mu}.\] Thus, \(\bar{f}\) is equal to the plate thickness \(h\) times the characteristic strain \(\varepsilon=f/\mu\), and since both the bending and shear stiffness in the rescaled functional (1) have the order of unity, the minimizer must be of the same order as \(\bar{f}\). 
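To make this order-of-magnitude argument tangible, here is a minimal numerical sketch (ours, not part of the original text; the values are illustrative):

```python
# Minimal sketch (illustrative, not from the paper): compare the bending and
# shear stiffness coefficients of the original functional with those of the
# rescaled functional (1).
mu = 1.0  # shear modulus; any positive value, it cancels in the ratio

for h in (1e-1, 1e-2, 1e-3):
    bending = mu * h**3 / 12.0     # coefficient mu*h^3/12 of the bending terms
    shear = 5.0 * mu * h / 12.0    # coefficient 5*mu*h/12 of the shear terms
    print(f"h = {h:.0e}: shear/bending = {shear / bending:.3e} (= 5/h^2)")

# After the rescaling x -> x/h, psi -> h*psi and division by mu*h, the
# coefficients in (1) are 1/12 and 5/12, so the ratio is 5 for every h:
print("rescaled: shear/bending =", (5.0 / 12.0) / (1.0 / 12.0))
```

The ill-conditioning of the original formulation thus grows like \(h^{-2}\), while the rescaled problem is uniformly well-scaled in \(h\); this is precisely why no special integration rule is needed.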
Returning to the original functions, we see that the rotation angles \(\psi_{\alpha}\) have the order of the characteristic strain (\(\varphi_{\alpha}\) are even much smaller [24, 25]), while the bending measures \(\rho_{\alpha\beta}\) have the order of the characteristic strain divided by \(h\), as discussed in the introduction. In this way, the rescaled problem (1) provides an elegant and effective way to avoid the shear-locking effect. To solve problem (1) we need to pose the boundary conditions. If the part of the plate's edge, \(\bar{\partial}_{k}\), is clamped, the admissible functions must satisfy the following kinematical conditions \[u=0,\quad\bar{\psi}_{\bar{\alpha}}=0\quad\text{at }\bar{\partial}_{k}. \tag{2}\] If the remaining part of the plate's edge is simply supported, then only the kinematical condition \(u=0\) should be fulfilled, while \(\bar{\psi}_{\bar{\alpha}}\) can have arbitrary variation at that part. Finally, if the remaining part \(\bar{\partial}_{s}\) of the plate's edge is free, then no constraints are imposed on \(u\) and \(\bar{\psi}_{\bar{\alpha}}\) there. After finding the solution, we want also to check the asymptotic accuracy of the theory. For this purpose, we need to compute the true average displacement of the plate by adding the correction term to the previously determined deflection [14] \[\check{u}=u+\frac{h^{2}\sigma}{60}\rho_{\alpha\alpha}=u-\frac{\sigma}{60}\bar{ \psi}_{\bar{\alpha},\bar{\alpha}}. \tag{3}\] The determination of \(\varphi_{\alpha}\) also requires the first derivatives of \(u\). Therefore we would like to have the solution of (1) such that \(u\) and \(\bar{\psi}_{\bar{\alpha}}\) are both \(C^{1}\)-functions. To check the asymptotic accuracy we must compare functions \(\check{u}\) and \(\check{\psi}_{\alpha}\) found by the 2-D FSDT with \[\langle w_{3}(x_{\alpha},x_{3})\rangle\equiv\frac{1}{h}\int_{-h/2}^{h/2}w_{3}( x_{\alpha},x_{3})\,\mathrm{d}x_{3}\quad\text{and}\quad\langle w_{\alpha}(x_{ \alpha},x_{3})x_{3}\rangle/(h^{3}/12), \tag{4}\] where \(w_{i}(x_{\alpha},x_{3})\) are the displacements computed by the 3-D exact theory of elasticity. If they agree with the accuracy up to \(h^{2}/l^{2}\), then the asymptotic accuracy of our FE-implementation is guaranteed. ## 3 Finite element implementation ### Weak and strong formulations Since we will only deal with rescaled coordinates and rotation angles in this Section and in the analytical parts of the next Section, we briefly omit all bars in the functional (1). Calculating the first variation of (1) and equating it to zero, we obtain the following necessary condition for the minimum \[\delta J=\int_{\Omega}\Bigl{\{}\frac{1}{6}[\sigma\psi_{\alpha, \alpha}\,\delta\psi_{\beta,\beta}+\psi_{(\alpha,\beta)}\,\delta\psi_{(\alpha, \beta)}]+\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})(\delta u_{,\alpha}+\delta\psi _{\alpha})\Bigr{\}}\,\mathrm{d}a\\ -\int_{\Omega}\Bigl{(}f\,\delta u-\frac{\sigma}{10}f\,\delta\psi_{ \alpha,\alpha}\Bigr{)}\,\mathrm{d}a=0. 
\tag{5}\] Introducing the notations \[\delta W^{b} =\int_{\Omega}\frac{1}{6}[\sigma\psi_{\alpha,\alpha}\,\delta\psi_ {\beta,\beta}+\psi_{(\alpha,\beta)}\,\delta\psi_{(\alpha,\beta)}]\,\mathrm{d}a\,,\] \[\delta W^{s} =\int_{\Omega}\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})(\delta u_{, \alpha}+\delta\psi_{\alpha})\,\mathrm{d}a\,, \tag{6}\] \[\delta W^{e} =\int_{\Omega}(f\,\delta u-\frac{\sigma}{10}f\,\delta\psi_{ \alpha,\alpha})\,\mathrm{d}a\,,\] for the variations of the bending energy, shear energy, and the work of external forces, respectively, we recast Eq. (5) in the form \[\delta W^{b}+\delta W^{s}=\delta W^{e}\,. \tag{7}\] Denote the space of admissible functions as \(\mathcal{K}=\{(v,\chi_{\alpha})\,|\,(v,\chi_{\alpha})|_{\partial_{k}}=0\}\). The weak formulation of the problem is stated as follows: Given \(f\), find \((u,\psi_{\alpha})\in\mathcal{K}\) such that Eq. (7) is satisfied for all \((\delta u\,,\delta\psi_{\alpha})\in\mathcal{K}\). For the integrals in (6) to be meaningful, \(v\) and \(\chi_{\alpha}\) must belong at least to the Sobolev's space of square integrable functions with the square integrable first derivatives, \(H^{1}(\Omega)\). However, since the desired asymptotic accuracy of FSDT may require higher smoothness of \(u\) and \(\psi_{\alpha}\), the continuity assumption in \(\mathcal{K}\) still remains unspecified. Although not relevant for the FE-implementation, the strong formulation of the problem is also stated for the sake of completeness. Under the assumptions that \(u\) and \(\psi_{\alpha}\) are doubly differentiable, we apply the partial integration to Eq. (5) and put it in the form \[\delta J=\int_{\Omega}\Bigl{\{}\Bigl{[}-\frac{\sigma}{6}\psi_{ \beta,\beta\alpha}-\frac{1}{6}\psi_{(\alpha,\beta)\beta}\Bigr{]}\,\delta\psi_ {\alpha}-\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})_{,\alpha}\,\delta u+\frac{5}{ 6}(u_{,\alpha}+\psi_{\alpha})\,\delta\psi_{\alpha}\Bigr{\}}\,\mathrm{d}a\\ -\int_{\Omega}\Bigl{(}f\,\delta u+\frac{\sigma}{10}f_{,\alpha}\, \delta\psi_{\alpha}\Bigr{)}\,\mathrm{d}a+\int_{\partial_{s}}\Bigl{[}\Bigl{(} \frac{\sigma}{6}\psi_{\beta,\beta}n_{\alpha}+\frac{1}{6}\psi_{(\alpha,\beta)} n_{\beta}\Bigr{)}\,\delta\psi_{\alpha}\\ +\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})n_{\alpha}\,\delta u \Bigr{]}\,\mathrm{d}s+\int_{\partial_{s}}\frac{\sigma}{10}fn_{\alpha}\,\delta \psi_{\alpha}\,\mathrm{d}s=0, \tag{8}\] where \(n_{\alpha}\) is the unit normal outward to the boundary and \(\mathrm{d}s\) is the length element. It is assumed here that the remaining part \(\partial_{s}\) of the plate edge is free. Because of the arbitrariness of the variations \(\delta\psi_{\alpha}\) and \(\delta u\) in \(\Omega\) and on \(\partial_{s}\), from (8) follow the second order partial differential equations \[\begin{split}&-\frac{\sigma+1}{6}\psi_{\beta,\beta\alpha}-\frac{1} {6}\psi_{\alpha,\beta\beta}+\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})=\frac{ \sigma}{10}f_{,\alpha},\\ &-\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})_{,\alpha}=f,\end{split} \tag{9}\] the kinematic boundary conditions (2) on \(\partial_{k}\), and the natural boundary conditions \[\begin{split}&\frac{\sigma}{6}\psi_{\beta,\beta}n_{\alpha}+\frac{ 1}{6}\psi_{(\alpha,\beta)}n_{\beta}+\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})=- \frac{\sigma}{10}fn_{\alpha},\\ &\frac{5}{6}(u_{,\alpha}+\psi_{\alpha})n_{\alpha}=0\end{split} \tag{10}\] on \(\partial_{s}\). Eqs. (9), (2), and (10) constitute the strong formulation of the problem. ### Discretization To discretize Eq. 
(7), interpolation spaces for displacement and rotation angles are required. In the following, a general interpolation scheme is used without regard to specific requirements on the order and continuity of the function space as mentioned above. In this sense, \(u\) and \(\psi_{\alpha}\) (\(\alpha=1,2\)) are considered as primary variables. Their interpolation is \[u=(\mathbf{N}^{u})^{T}\mathbf{u},\quad\psi_{\alpha}=(\mathbf{N}^{\psi})^{T} \boldsymbol{\psi}_{\alpha}, \tag{11}\] where \(\mathbf{N}^{u}\) and \(\mathbf{N}^{\psi}\) are column vectors representing the shape functions for \(u\) and \(\psi_{\alpha}\), respectively, while \(\mathbf{u}\) and \(\boldsymbol{\psi}_{\alpha}\) are deflection and rotation angle vectors, respectively. The vector notation is in column ordering unless otherwise stated. Based on Eq. (11) we find the variation of the primary variables \[\delta u=\delta\mathbf{u}^{T}\,\mathbf{N}^{u},\quad\delta\psi_{\alpha}= \delta\boldsymbol{\psi}_{\alpha}^{T}\,\mathbf{N}^{\psi}, \tag{12}\] and their derivatives \[\delta u_{,\alpha}=\delta\mathbf{u}^{T}\,\mathbf{N}^{u}_{,\alpha},\] \[\delta\psi_{\alpha,\alpha}=\delta\boldsymbol{\psi}_{\alpha}^{T} \,\mathbf{N}^{\psi}_{,\alpha}=\delta\boldsymbol{\psi}_{1}^{T}\,\mathbf{N}^{ \psi}_{,1}+\delta\boldsymbol{\psi}_{2}^{T}\,\mathbf{N}^{\psi}_{,2},\] \[\delta\psi_{(\alpha,\beta)}=\frac{1}{2}(\delta\boldsymbol{\psi}_ {\alpha}^{T}\,\mathbf{N}^{\psi}_{,\beta}+\delta\boldsymbol{\psi}_{\beta}^{T} \,\mathbf{N}^{\psi}_{,\alpha}).\] Substituting Eqs. (12) into Eqs. (6), we obtain \[\delta W^{b} =\frac{1}{6}\int_{\Omega}[\sigma(\delta\boldsymbol{\psi}_{1}^{T} \,\mathbf{N}^{\psi}_{,1}+\delta\boldsymbol{\psi}_{2}^{T}\,\mathbf{N}^{\psi}_{, 2})(\psi_{1,1}+\psi_{2,2})+\delta\boldsymbol{\psi}_{1}^{T}\,\mathbf{N}^{\psi}_ {,1}\psi_{1,1}\] \[+\delta\boldsymbol{\psi}_{2}^{T}\,\mathbf{N}^{\psi}_{,2}\psi_{2, 2}+(\delta\boldsymbol{\psi}_{1}^{T}\,\mathbf{N}^{\psi}_{,2}+\delta\boldsymbol {\psi}_{2}^{T}\,\mathbf{N}^{\psi}_{,1})\psi_{(1,2)}]\,\mathrm{d}a\,,\] \[\delta W^{s} =\frac{5}{6}\int_{\Omega}[(\delta\mathbf{u}^{T}\,\mathbf{N}^{u}_ {,1}+\delta\boldsymbol{\psi}_{1}^{T}\,\mathbf{N}^{\psi})(u_{,1}+\psi_{1})\] \[+(\delta\mathbf{u}^{T}\,\mathbf{N}^{u}_{,2}+\delta\boldsymbol{ \psi}_{2}^{T}\,\mathbf{N}^{\psi})(u_{,2}+\psi_{2})]\,\mathrm{d}a\,,\] \[\delta W^{e} =\int_{\Omega}[\delta\mathbf{u}^{T}\,\mathbf{N}^{u}f-\frac{\sigma }{10}(\delta\boldsymbol{\psi}_{1}^{T}\,\mathbf{N}^{\psi}_{,1}+\delta \boldsymbol{\psi}_{2}^{T}\,\mathbf{N}^{\psi}_{,2})f]\,\mathrm{d}a\,.\] Consequently, the residual forces with respect to \(u\) and \(\psi_{\alpha}\) read \[\mathbf{R}^{u} =\int_{\Omega}\mathbf{N}^{u}f\,\mathrm{d}a-\frac{5}{6}\int_{\Omega }[\mathbf{N}^{u}_{,1}(u_{,1}+\psi_{1})+\mathbf{N}^{u}_{,2}(u_{,2}+\psi_{2})]\, \mathrm{d}a\,,\] \[\mathbf{R}^{\psi_{1}} =-\int_{\Omega}\frac{\sigma}{10}\mathbf{N}^{\psi}_{,1}f\,\mathrm{ d}a-\frac{5}{6}\int_{\Omega}\mathbf{N}^{\psi}(u_{,1}+\psi_{1})\,\mathrm{d}a\] \[-\frac{1}{6}\int_{\Omega}[\sigma\mathbf{N}^{\psi}_{,1}(\psi_{1,1} +\psi_{2,2})+\mathbf{N}^{\psi}_{,1}\psi_{1,1}+\mathbf{N}^{\psi}_{,2}\psi_{(1,2 )}]\,\mathrm{d}a\,, \tag{13}\] \[\mathbf{R}^{\psi_{2}} =-\int_{\Omega}\frac{\sigma}{10}\mathbf{N}^{\psi}_{,2}f\,\mathrm{ d}a-\frac{5}{6}\int_{\Omega}\mathbf{N}^{\psi}(u_{,2}+\psi_{2})\,\mathrm{d}a\] \[-\frac{1}{6}\int_{\Omega}[\sigma\mathbf{N}^{\psi}_{,2}(\psi_{1,1} +\psi_{2,2})+\mathbf{N}^{\psi}_{,2}\psi_{2,2}+\mathbf{N}^{\psi}_{,1}\psi_{(1,2 )}]\,\mathrm{d}a\,.\] The use of Eq. (11) in Eq. 
(13) allows us to derive the following blocks of the stiffness matrix \[\mathbf{K}^{uu} =\frac{5}{6}\int_{\Omega}\left[\mathbf{N}^{u}_{,1}(\mathbf{N}^{u}_{,1})^{T}+\mathbf{N}^{u}_{,2}(\mathbf{N}^{u}_{,2})^{T}\right]\mathrm{d}a\,,\] \[\mathbf{K}^{u\psi_{1}} =\frac{5}{6}\int_{\Omega}\mathbf{N}^{u}_{,1}(\mathbf{N}^{\psi})^{T}\,\mathrm{d}a\,,\] \[\mathbf{K}^{u\psi_{2}} =\frac{5}{6}\int_{\Omega}\mathbf{N}^{u}_{,2}(\mathbf{N}^{\psi})^{T}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{1}u} =\frac{5}{6}\int_{\Omega}\mathbf{N}^{\psi}(\mathbf{N}^{u}_{,1})^{T}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{1}\psi_{1}} =\int_{\Omega}\Bigl{[}\frac{5}{6}\mathbf{N}^{\psi}(\mathbf{N}^{\psi})^{T}+\frac{\sigma+1}{6}\mathbf{N}^{\psi}_{,1}(\mathbf{N}^{\psi}_{,1})^{T}+\frac{1}{12}\mathbf{N}^{\psi}_{,2}(\mathbf{N}^{\psi}_{,2})^{T}\Bigr{]}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{1}\psi_{2}} =\int_{\Omega}\Bigl{[}\frac{\sigma}{6}\mathbf{N}^{\psi}_{,1}(\mathbf{N}^{\psi}_{,2})^{T}+\frac{1}{12}\mathbf{N}^{\psi}_{,2}(\mathbf{N}^{\psi}_{,1})^{T}\Bigr{]}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{2}u} =\frac{5}{6}\int_{\Omega}\mathbf{N}^{\psi}(\mathbf{N}^{u}_{,2})^{T}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{2}\psi_{1}} =\int_{\Omega}\Bigl{[}\frac{\sigma}{6}\mathbf{N}^{\psi}_{,2}(\mathbf{N}^{\psi}_{,1})^{T}+\frac{1}{12}\mathbf{N}^{\psi}_{,1}(\mathbf{N}^{\psi}_{,2})^{T}\Bigr{]}\,\mathrm{d}a\,,\] \[\mathbf{K}^{\psi_{2}\psi_{2}} =\int_{\Omega}\Bigl{[}\frac{5}{6}\mathbf{N}^{\psi}(\mathbf{N}^{\psi})^{T}+\frac{\sigma+1}{6}\mathbf{N}^{\psi}_{,2}(\mathbf{N}^{\psi}_{,2})^{T}+\frac{1}{12}\mathbf{N}^{\psi}_{,1}(\mathbf{N}^{\psi}_{,1})^{T}\Bigr{]}\,\mathrm{d}a\,.\]

### Isogeometric analysis

The weak form (8) contains the first order derivatives of deflection and rotation angles and their variations and is therefore meaningful only if \((u,\psi_{\alpha})\) and \((\delta u\,,\delta\psi_{\alpha})\) belong to the Sobolev space \(H^{1}(\Omega)\). However, as mentioned in Section 2, for the desired asymptotic accuracy, the discretization space must be at least \(C^{1}\) to ensure continuity and smoothness of the solution. In this paper, we use isogeometric analysis with the non-uniform rational B-splines (NURBS) shape functions [44]. This approach also facilitates the refinement of the mesh and the elevation of the discretization order if necessary. In the context of geometric modeling, a plate can be represented by NURBS surface patches, in which each patch can be described as a tensor product of two univariate B-splines such as \[\mathbf{S}\left(\xi_{1},\xi_{2}\right)=\sum_{i=1}^{m}\sum_{j=1}^{n}N_{i}^{p}\left(\xi_{1}\right)N_{j}^{q}\left(\xi_{2}\right)\mathbf{P}_{ij}. \tag{14}\] In Eq. (14), the univariate B-spline basis functions of order \(p\) and \(q\), denoted as \(N_{i}^{p}\) and \(N_{j}^{q}\), are computed via the recursive Cox-de-Boor formula \[N_{i}^{p}(\xi)=\frac{\xi-\xi_{i}}{\xi_{i+p}-\xi_{i}}N_{i}^{p-1}(\xi)+\frac{\xi_{i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}N_{i+1}^{p-1}(\xi),\ N_{i}^{0}(\xi)=\begin{cases}1&\xi_{i}\leq\xi\leq\xi_{i+1}\\ 0&\text{otherwise}\end{cases}.\] We use \(\{\mathbf{P}_{ij}\}_{0\leq i\leq m,0\leq j\leq n}\) to denote the control point grid. The definition of the control points in homogeneous coordinates facilitates the construction of a NURBS patch. A surface can comprise a single patch or multiple patches that are connected at the patch interfaces. In the latter case, the concept is called multipatch analysis. In the typical scenario, the patch information matches at the interface, i.e.
A surface can comprise a single patch or multiple patches that are connected at the patch interfaces. In the latter case, the approach is called multipatch analysis. In the typical scenario, the patch information, including the parametric (knot) information and the location of the control points, matches at the interface (strong coupling); the multipatch is then naturally connected, and \(C^{0}\) continuity is guaranteed on the patch interfaces. However, patch-wise parametric matching is not a necessary condition. There are various methods to maintain patch continuity in a NURBS multipatch, such as the penalty method [45], the Nitsche method [46] or the bending strip method [47]. These methods impose weak coupling conditions and are more involved to implement.

The design of the B-spline surface allows different interpolation orders in each parametric direction. This feature is particularly useful for analyses of materials with anisotropic behavior, where the accuracy needs to be improved in a particular direction. In addition, rescaling of the mesh can be conveniently performed by rescaling the control point coordinates, which eliminates the need for remeshing.

The NURBS patches in a multipatch structure are macro elements whose shape functions require the full knot vectors for the evaluation of values and derivatives. On the one hand, this prevents the application of parallelization algorithms to the numerical code; on the other hand, it makes the element matrices very dense, which reduces the performance of a sparse direct solver. The Bézier decomposition algorithm [48] was proposed to retain the local character of the finite element. In essence, the Bézier decomposition strategy constructs a local operator for each knot span (in 1-D) or knot cell (in 2-D and 3-D) so that the shape functions within a cell can be evaluated independently of the knot vectors. The support domain of each cell, i.e. the set of control points influencing the cell, is thereby reduced to the minimum, along with the number of shape functions required for each cell. This eliminates the global access to patch-wise data and allows a typical finite element code to work seamlessly with NURBS elements.

### Structure of the finite element code

The computational code for simulating the proposed FSDT plate element is implemented as an extension of the Kratos Multiphysics framework [49]. A brief overview of the structure of the code is visualized in Fig. 2.

Figure 2: Structure of the computational code.

The plate element is implemented within the extension PlateAndShellApplication. By combining it with the existing extension IsogeometricApplication, which supports NURBS patch modeling and the evaluation of the Bézier extraction operator, a new extension IsogeometricPlateAndShellApplication is created. This module enables isogeometric analysis with the FSDT plate element. In addition, mesh refinement and order elevation are supported via IsogeometricApplication.

## 4 Numerical examples

### Clamped circular plate under uniform transverse load

As a first test problem, we apply the developed finite element code to the numerical simulation of the axisymmetric bending of a circular plate clamped at the edge and subjected to a constant transverse load \(f\). We chose this 2-D problem because it admits an analytical solution, so we can directly compare the numerical solution with it to test the convergence of the former and to investigate the optimal choice of the interpolation order. As the bending is axisymmetric, in the polar coordinates \((r,\theta)\) the rotation angle \(\psi_{\theta}\) must vanish, while \(u\) and \(\psi_{r}\) are functions of \(r\) only.
Consequently,
\[\rho_{rr}=-\psi_{r,r},\quad\rho_{\theta\theta}=-\frac{1}{r}\psi_{r},\]
while all other components of \(\rho_{\alpha\beta}\) vanish. It is easy to show that problem (1) reduces to minimizing the functional
\[I[u,\psi_{r}]=\int_{0}^{R}\Bigl\{\frac{1}{12}[\sigma(\psi_{r,r}+\frac{1}{r}\psi_{r})^{2}+(\psi_{r,r})^{2}+\frac{1}{r^{2}}(\psi_{r})^{2}]+\frac{5}{12}(u_{,r}+\psi_{r})^{2}\Bigr\}r\,\mathrm{d}r-\int_{0}^{R}\Bigl[fu-\frac{\sigma}{10}f\Bigl(\psi_{r,r}+\frac{1}{r}\psi_{r}\Bigr)\Bigr]r\,\mathrm{d}r\to\min_{u(R)=\psi_{r}(R)=0},\]
where \(R\) is the dimensionless radius of the plate and all bars are again dropped for brevity. The standard calculus of variations shows that the minimizer must satisfy the equations
\[-\frac{\sigma}{6}\Bigl[(\psi_{r,r}+\frac{1}{r}\psi_{r})r\Bigr]_{,r}+\frac{\sigma}{6}(\psi_{r,r}+\frac{1}{r}\psi_{r})-\frac{1}{6}(\psi_{r,r}r)_{,r}+\frac{1}{6r}\psi_{r}+\frac{5}{6}(u_{,r}+\psi_{r})r=0, \tag{15}\]
\[-\frac{5}{6}\left[(u_{,r}+\psi_{r})r\right]_{,r}=fr,\]
subjected to the clamped boundary conditions \(u(R)=\psi_{r}(R)=0\). Integrating the second equation of (15) and using the fact that \(\varphi=u_{,r}+\psi_{r}\) cannot be singular at \(r=0\), we find that
\[\varphi(r)=u_{,r}+\psi_{r}=-\frac{3}{5}fr. \tag{16}\]
Substituting (16) into the first equation of (15), we reduce it to
\[\psi_{r,rr}+\frac{1}{r}\psi_{r,r}-\frac{1}{r^{2}}\psi_{r}=-\frac{3}{\sigma+1}fr.\]
Integrating this equation and using the boundary condition \(\psi_{r}(R)=0\) as well as the non-singularity of \(\psi_{r}\) at \(r=0\), we get
\[\psi_{r}(r)=\frac{3f}{8(\sigma+1)}(R^{2}r-r^{3})=\frac{3(1-\nu)f}{8}(R^{2}r-r^{3}). \tag{17}\]
Now the deflection \(u\) can be found from
\[u_{,r}=-\frac{3}{5}fr-\psi_{r}=-\frac{3}{5}fr-\frac{3f}{8(\sigma+1)}(R^{2}r-r^{3}).\]
Integrating this equation and using the boundary condition \(u(R)=0\), we finally obtain (cf. [50, 51])
\[u(r)=\frac{3}{10}f(R^{2}-r^{2})+\frac{3f}{32(\sigma+1)}(R^{2}-r^{2})^{2}=\frac{3}{10}f(R^{2}-r^{2})+\frac{3(1-\nu)f}{32}(R^{2}-r^{2})^{2}. \tag{18}\]
Note that the deflection according to Kirchhoff's plate theory reads [52]
\[u_{K}(r)=\frac{3(1-\nu)f}{32}(R^{2}-r^{2})^{2},\]
while the angle of rotation, \(-u_{K,r}\), coincides with (17).
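As a sanity check, the closed-form solution can be verified symbolically. The following sympy snippet is a verification script written for this exposition (it is not part of the FE-code); it confirms that (17) and (18) satisfy the reduced ODE above, the shear relation (16) and the clamped boundary conditions.

```python
import sympy as sp

r, R, f, nu = sp.symbols('r R f nu', positive=True)
s1 = 1/(1 - nu)                                   # sigma + 1 = 1/(1 - nu)

psi = 3*f/(8*s1)*(R**2*r - r**3)                  # Eq. (17)
u = sp.Rational(3, 10)*f*(R**2 - r**2) + 3*f/(32*s1)*(R**2 - r**2)**2  # Eq. (18)

ode = sp.diff(psi, r, 2) + sp.diff(psi, r)/r - psi/r**2 + 3*f*r/s1
shear = sp.diff(u, r) + psi + sp.Rational(3, 5)*f*r    # Eq. (16)

assert sp.simplify(ode) == 0 and sp.simplify(shear) == 0
assert sp.simplify(psi.subs(r, R)) == 0 and sp.simplify(u.subs(r, R)) == 0
print("circular plate solution verified")
```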
For the numerical simulations of the real plate, let us return to the original notation with bars. In the rescaled formulation, the problem for the circular plate contains only three parameters: \(\bar{R}=R/h\) (geometry), \(\nu\) (material), and \(\bar{f}=hf/\mu\) (load). The material parameter is chosen as \(\nu=0.3\). Since the problem is linear, the solution depends linearly on \(\bar{f}\), so by setting \(\bar{f}=1\) we calculate the normalized solution. To get the real deflection, we just need to multiply the normalized \(u\) by \(h\varepsilon\) (where \(\varepsilon=f/\mu\)). Similarly, to calculate the real rotation angles, we need to multiply the normalized \(\bar{\psi}_{\bar{\alpha}}\) (or \(\bar{\varphi}_{\bar{\alpha}}\)) by \(\varepsilon\). To investigate the efficiency of the proposed plate element, we perform the analysis for three dimensionless radii: \(\bar{R}=10\), \(\bar{R}=100\) and \(\bar{R}=1000\). The circular plate is constructed with 5 NURBS patches, as shown in Fig. 3 (left). The deflection agrees very well with the analytical solution (18) (with the bars restored over the rescaled quantities), as shown in Table 1. The numerical values of the normalized deflection (for \(\bar{f}=1\)) are large. However, considering that the real deflection is obtained after multiplication by \(h\varepsilon\), where \(\varepsilon\) is much smaller than \(10^{-3}\), these values become reasonable. The \(L_{2}\) error shown in the right column of this Table is computed according to
\[\|e\|_{L_{2}}=\frac{\int_{\Omega}\left(u_{\mathrm{computed}}-u_{\mathrm{ana}}\right)^{2}\mathrm{d}a}{\int_{\Omega}u_{\mathrm{ana}}^{2}\,\mathrm{d}a}.\]
It can be seen that the cubic order discretization gives very good results even on a coarse mesh.

Figure 3: Numerical example 4.1: Multipatch geometry (left) and contour plot of the scaled deflection \(\left(\frac{u}{h\varepsilon\bar{R}^{4}}\right)\) at \(\bar{R}=10\) (right).

Fig. 4 visualizes the convergence rate of the analyses with second order (\(p=2,q=2\)) and cubic order (\(p=3,q=3\)) discretizations. We observe that the error increases with increasing \(\bar{R}\), yet the convergence rate is close to optimal, showing that the proposed formulation does not exhibit shear-locking behavior. Although the rescaling inflates the domain, the number of elements and degrees of freedom required to obtain a small error is not significantly affected by it. A slight reduction of the convergence rate is observed for \(\bar{R}=10\) in the cubic order analysis. This can be attributed to the loss of accuracy when the \(L_{2}\) error is very small (\(\sim 10^{-9}\)) and the machine precision starts to affect the computation.

Figure 4: Numerical example 4.1: Convergence rate of the displacement (\(L_{2}\) error) for \(\bar{R}=10\) (left), \(\bar{R}=100\) (middle) and \(\bar{R}=1000\) (right).

Due to the high accuracy of the solution, the distribution of the deflection is almost radially symmetric, as expected (see Fig. 3 (right) for the solution of the cubic discretization with 39786 d.o.f.s for \(\bar{R}=10\)). The normalized deflection \(u/(h\varepsilon\bar{R}^{4})\), together with the normalized rotation angles \(\psi_{r}/(\varepsilon\bar{R}^{3})\) and \(\varphi_{r}/(\varepsilon\bar{R})\), for this particular discretization is plotted along the radial direction in Fig. 5. The plots show perfect agreement with the analytical formulas (16)-(18) (where the bars are recovered over the scaled quantities) for both \(\bar{R}=10\) and \(\bar{R}=1000\). It is worth noting that, to evaluate \(\varphi\), the derivatives of \(u\) at the integration points are calculated and then transferred to the control points using an \(L_{2}\)-global projection algorithm [53].

Figure 5: Numerical example 4.1: Normalized deflection and rotation angle along the radial direction for \(\bar{R}=10\) (left) and \(\bar{R}=1000\) (right).

### Rectangular plate under uniform transverse load

For the second test problem, we apply the developed finite element code to the numerical simulation of the bending of a rectangular plate subjected to a constant load \(f\). We assume that \(\Omega\) occupies the domain \((0,L)\times(-D/2,D/2)\) of the \((x_{1},x_{2})\)-plane; the left edge at \(x_{1}=0\) is clamped, while the three remaining edges of the plate are free (see Fig. 6 (left)).

Figure 6: Numerical example 4.2: Geometry and boundary conditions for (left) the analysis using the FSDT plate element and (right) the 2-D analysis using solid elements.

When the depth of the rectangular plate, \(D\), becomes large, the solution must exhibit the plane strain state due to the almost translational invariance in the \(x_{2}\)-direction.
For FSDT in the rescaled formulation (1), this means that the 2-D problem reduces to the following 1-D variational problem: minimize the energy functional of the beam-like model
\[I[u,\psi]=\int_{0}^{L}\Bigl[\frac{1}{2}E_{b}(\psi_{,x})^{2}+\frac{1}{2}E_{s}(u_{,x}+\psi)^{2}\Bigr]\,\mathrm{d}x-\int_{0}^{L}\Bigl(fu-\frac{\sigma}{10}f\psi_{,x}\Bigr)\,\mathrm{d}x\rightarrow\min_{u(0)=\psi(0)=0}, \tag{19}\]
where the bars in this theoretical part are again dropped for brevity, \(x\equiv x_{1}\), and
\[E_{b}=\frac{\sigma+1}{6}=\frac{1}{6(1-\nu)},\quad E_{s}=\frac{5}{6}.\]
The vanishing first variation leads to the differential equations
\[-E_{b}\psi_{,xx}+E_{s}(u_{,x}+\psi)=0,\qquad-E_{s}(u_{,x}+\psi)_{,x}=f, \tag{20}\]
subjected to the boundary conditions
\[u(0)=\psi(0)=0,\qquad E_{b}\psi_{,x}(L)=-\frac{\sigma}{10}f,\quad E_{s}(u_{,x}+\psi)|_{x=L}=0.\]
From the first equation of (20) it follows that
\[u_{,x}+\psi=\varphi=\frac{E_{b}}{E_{s}}\psi_{,xx}.\]
Substituting this into the second equation yields
\[E_{b}\psi_{,xxx}=-f. \tag{21}\]
Integrating Eq. (21) and using the boundary conditions, we obtain
\[\psi(x)=\frac{f}{E_{b}}\Bigl[-\frac{1}{6}x^{3}+\frac{1}{2}Lx^{2}-(\frac{1}{2}L^{2}+\frac{\sigma}{10})x\Bigr]. \tag{22}\]
The deflection can be found from the equation
\[u_{,x}=\varphi-\psi=-\psi+\frac{E_{b}}{E_{s}}\psi_{,xx},\]
which, together with the boundary condition \(u(0)=0\), yields
\[u(x)=\frac{f}{E_{b}}\Bigl[\frac{1}{24}x^{4}-\frac{1}{6}Lx^{3}+(\frac{1}{4}L^{2}+\frac{\sigma}{20})x^{2}\Bigr]+\frac{f}{E_{s}}(-\frac{x^{2}}{2}+Lx).\]
However, this is still not the true average transverse displacement of the plate. The latter should be computed in accordance with Eq. (3), giving
\[\check{u}(x)=\frac{f}{E_{b}}\Bigl[\frac{1}{24}x^{4}-\frac{1}{6}Lx^{3}+(\frac{1}{4}L^{2}+\frac{\sigma}{20})x^{2}\Bigr]+\frac{f}{E_{s}}(-\frac{x^{2}}{2}+Lx)-\frac{\sigma}{60}\frac{f}{E_{b}}\Bigl(-\frac{1}{2}x^{2}+Lx-\frac{1}{2}L^{2}-\frac{\sigma}{10}\Bigr). \tag{23}\]
Note that the average transverse displacement according to the classical Kirchhoff plate theory in this beam-like model is given by
\[u_{K}(x)=\frac{f}{E_{b}}\Bigl(\frac{1}{24}x^{4}-\frac{1}{6}Lx^{3}+\frac{1}{4}L^{2}x^{2}\Bigr),\]
while the angle of rotation reads
\[\psi_{K}(x)=-u_{K,x}=-\frac{f}{E_{b}}\Bigl(\frac{1}{6}x^{3}-\frac{1}{2}Lx^{2}+\frac{1}{2}L^{2}x\Bigr).\]
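The 1-D closed-form solution can again be checked symbolically. The sympy snippet below is a verification script of ours (not part of the FE-code); it confirms that (22) and the resulting deflection satisfy Eq. (21), the first equation of (20) and the boundary conditions at \(x=L\), while the conditions at \(x=0\) hold by construction.

```python
import sympy as sp

x, L, f, sigma = sp.symbols('x L f sigma', positive=True)
Eb, Es = (sigma + 1)/6, sp.Rational(5, 6)

psi = f/Eb*(-x**3/6 + L*x**2/2 - (L**2/2 + sigma/10)*x)          # Eq. (22)
u = f/Eb*(x**4/24 - L*x**3/6 + (L**2/4 + sigma/20)*x**2) + f/Es*(-x**2/2 + L*x)

checks = [
    Eb*sp.diff(psi, x, 3) + f,                          # Eq. (21)
    Eb*sp.diff(psi, x).subs(x, L) + sigma*f/10,         # moment condition at x = L
    sp.diff(u, x) + psi - Eb/Es*sp.diff(psi, x, 2),     # first equation of (20)
    (sp.diff(u, x) + psi).subs(x, L),                   # shear condition at x = L
]
assert all(sp.simplify(c) == 0 for c in checks)
print("beam-like solution verified")
```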
The availability of the analytical solution (22)-(23) for this particular case allows a comparison with the numerical solution of the 2-D FSDT and with the solution of the exact elasticity theory. On the one hand, the comparison with the solution of the 2-D FSDT can show that our FE-code is free of shear locking and allows us to check its convergence and efficiency as \(D\) becomes large. On the other hand, the comparison with the solution of the exact elasticity theory allows us to verify the asymptotic accuracy of our FE-code. To solve this problem within the exact elasticity theory, we use the geometry and boundary conditions shown in Fig. 6 (right). The normal traction is applied to the top and bottom of the plate, as shown in that Figure, to avoid its elongation and at the same time maintain bending combined with pure shear [14]. Under the plane strain condition the problem becomes 2-D, so we can employ 2-D solid elements. Having the numerical solution within the plane strain elasticity theory, the average transverse displacement and rotation angles over the thickness are evaluated using Eq. (4). Within the 2-D FSDT, the deflection and rotation angles are evaluated along the centerline \(x_{2}=0\). Then the true average transverse displacement \(\check{u}\) is computed in accordance with (3). Both the plate analysis and the elastic solid analysis employ the cubic order NURBS discretization with a sufficiently fine mesh.

Using again the bars for the rescaled quantities, we show the results of the numerical simulation in Fig. 7, in which the two cases \(\bar{L}=3\) (left) and \(\bar{L}=10\) (right) are analyzed (with \(\bar{L}=L/h\)).

Figure 7: Numerical example 4.2: Deflection and rotation angle for \(\bar{L}=3\) (left) and \(\bar{L}=10\) (right).

In Fig. 7, two trends are clearly visible. First, the numerical solution using the 2-D FSDT plate element converges to the analytical solution of the FSDT given by Eqs. (22)-(23) (with the bars recovered over the scaled quantities) as \(\bar{D}=D/h\) increases, as shown in all plots for the three cases \(\bar{D}=1\), \(\bar{D}=10\) and \(\bar{D}=100\). This is due to the fact that the boundary effect becomes negligible for large \(\bar{D}\), and it agrees well with the assumption of almost translational invariance discussed at the beginning of this subsection. This also confirms that our rescaled formulation of the FSDT is inherently shear-locking-free and that the FE-code is efficient. Second, the solution of the 2-D FSDT approximates the solution of the exact elasticity theory much better than the classical Kirchhoff plate theory does. For large \(\bar{L}\) (thin plate), the difference between the solutions of the FSDT and the exact elasticity theory is negligibly small. Even for moderate \(\bar{L}\) (moderately thick plate), the error is only noticeable near the end point, where the deflection reaches its maximum. This also confirms the asymptotic accuracy of our FE-implementation of the FSDT. Since the difference between the analytical solution of the FSDT (red solid line) and the solution resulting from the classical Kirchhoff (beam-like) theory (blue dashed line) becomes significant for moderate \(\bar{L}\), the latter is not applicable to moderately thick plates.

A detail worth mentioning is that the true average displacement calculated according to (3) does not exactly satisfy the kinematic boundary condition: the last correction term in (3), although small, does not vanish at \(\bar{x}_{1}=0\) (cf. Eq. (23)). The reason for this is well known: the FSDT correction might not work in a thin boundary layer near the edge of the plate [14]. Far from the edge, this correction term, of order \(h^{2}/l^{2}\) compared to unity, is essential for ensuring the asymptotic accuracy of the FSDT up to that order. Another detail worth mentioning is that the elastic 2-D analysis does not rely on any plate approximation but evaluates the solution fields by brute force with a very fine mesh. The average transverse displacements and rotation angles obtained by calculating the integrals (4) over the thickness in a post-processing step represent only the integral characteristics of the FSDT. The comparison of the detailed distributions of the displacements across the thickness is left for future investigations.

## 5 Conclusion

In this work, an inherently shear-locking-free formulation of the asymptotically exact FSDT for homogeneous elastic plates in the rescaled coordinates and rotation angles is found. This leads to an asymptotically accurate and computationally efficient finite element code.
The requirement that the deflection and rotation angles belong to the \(C^{1}\) function space is only necessary to achieve asymptotic accuracy; it is not related to the shear-locking effect, which is absent in this formulation. Since our main focus was on the inherently shear-locking-free formulation, the simplest linear FSDT for homogeneous elastic plates was chosen intentionally. Once understood, the results obtained in this paper can be extended in many directions. Among others, we will develop FE-implementations based on the inherently shear-locking-free formulation of the asymptotically exact: (i) linear FSDT for homogeneous elastic shells, (ii) linear FSDT for FG-plates and shells, (iii) dynamic FSDT for FG-plates and shells, and (iv) nonlinear FSDT for FG-plates and shells with application to buckling analysis. The asymptotically accurate and shear-locking-free finite element implementation of the linear FSDT for homogeneous elastic shells will be addressed in our next publication.